The Clausewitz Engine: A Major Scientific Advance

Wittingly or no, the folks over at Paradox Development are making an amazing scientific instrument. It's called the Clausewitz Engine. I know almost nothing about it, other than having been its victim over and over again.

The Clausewitz Engine is apparently a gigantic autonomous agent model. Tens of thousands of agents act according to a sophisticated set of instructions. These instructions are apparently very flexible. In Hearts of Iron, they drive divisions, armies, and countries to war against one another, whereas in Crusader Kings, the instructions drive men and women to seek marriage partners. The information on which the agents act is incredible. The agents respond to grand strategic considerations, political concerns, territory control, economic resources, personality traits, and past actions of other agents. Yes, they have memory. They also have variable goals and strategies. Not everyone is offended that you executed your brother's children. Some generals want to capture ground, others want to avoid looking bad.

In playing against these teeming worlds of code-people, I find myself feeling immersed in a genuine society. The whole thing responds in a way that feels right; my reputation and prospects rise and fall in a very natural way.

One interesting aspect of the Engine is that it allows you to take the place of almost any of these actors. You can be Stalin, or a commander of a single division outside Stalingrad. You can be the Holy Roman Emperor, or the Earl of Argyll. It doesn't matter who you choose, because every other agent in the game is handled by an AI. If you want to play as America but don't want to bother with politics, the AI that would handle American political actors if you had chosen Germany just takes over. Of course, if you do that, you might end up working for President Lindbergh, fighting for the Master Race. But you can do it if you want.

Why is this scientifically relevant? The brilliant mathematician Stephen Wolfram has written about the need for a revolution in scientific practice, away from experiment and toward computational modeling. Do I know if he's right? Not a chance. I have no idea whether the natural sciences are moving in this direction or not. I do know that the social sciences are not doing much with this idea.

Yet over in games, we find a computational model of human society that is orders of magnitude more advanced than anything I have seen done on campus. The Clausewitz Engine could be modded to study how norms propagate through society, how political factions rise and fall, how crowds try to get out of disasters, how diseases spread, how religions influence shopping. I say that, not knowing anything about the guts of it. I have only been smacked around again and again by the darn thing. But my sense of it, as a machine of the human world, is that it is vast, flexible, accurate, comprehensive, and infinitely malleable.

What does the Clausewitz Engine reveal about us? I do not know. Much, I imagine. I suspect that the devs could have made all agents independent of one another, little islands, self-reliant. I wonder what would happen to total economic product then? What happens if all agents slavishly follow the commands of superior agents? What if resources are redistributed equally across the agent population, what happens to economic growth? Wow.

To Paradox Development: Bravo!

Comments on The Clausewitz Engine: A Major Scientific Advance:

Richard Bartle says:

If you wanted to use something like the Clausewitz engine to study how norms propagate, you could do so. However, you'd have to be careful.


"I have this real data about norm propagation but I don't know the underlying mechanism, so I'll create a simulation and see if I can find something that replicates the real-world observations." You can do this.

"I don't know how norms propagate but I have some ideas. I'll test them in this simulation and if they look promising then those are the ones I'll go try find real data to support." You can do this.

"I've built a simulation that shows how norms propagate, therefore this is how norms propagate." You can't do this, although its bamboozle-with-science underpinnings might persuade a fair few people you can.

The kind of AI used by the Clausewitz Engine looks to me to be what's now called "good old-fashioned AI", i.e. it uses symbolic logic at a "mind" level rather than fuzzy logic at a "brain" level. Having looked at their events code (I'm a big fan of Paradox games myself, flawed though they always are with their one-idea-too-far approach), it seems to be what in the olde dayes would have been called a "production system". You have a set of production rules that you search until you find one that fires, then you fire it. I haven't looked at the engine code itself, though, so I don't know how individual agents actually behave.
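The first-match production loop Richard describes can be sketched in a few lines. This is purely illustrative: the rules, state keys, and actions below are invented for the example and have nothing to do with Paradox's actual data files.

```python
# A minimal sketch of a first-match production system: scan the rules
# in order and fire the first one whose condition holds. Every name
# here is hypothetical, not taken from the Clausewitz Engine.

RULES = [
    (lambda s: s["at_war"] and s["strength"] < 0.3, "sue_for_peace"),
    (lambda s: s["at_war"],                         "reinforce_front"),
    (lambda s: s["treasury"] > 100,                 "build_improvement"),
]

def step(state):
    """Return the action of the first rule that fires, or a default."""
    for condition, action in RULES:
        if condition(state):
            return action
    return "idle"

print(step({"at_war": True, "strength": 0.2, "treasury": 50}))    # sue_for_peace
print(step({"at_war": False, "strength": 0.9, "treasury": 150}))  # build_improvement
```

Because the rules are ordered, the search itself is the conflict-resolution policy: earlier rules simply shadow later ones.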


Posted May 10, 2013 3:47:56 AM | link

Edward Castronova says:

Richard, I agree with your comment. Approaches 1 and 2 seem valid, 3 is not. I wish I understood AI well enough to have a feel for the code. I can say that the computational agent modeling that is done in academic work does not involve anything fuzzy or sophisticated at the level of mind. The programmed agents generally have trivially simple AI, like "walk randomly; if you bump into sugar, eat it."
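That "walk randomly; if you bump into sugar, eat it" agent, familiar from Sugarscape-style academic models, fits in a dozen lines. This is a toy illustration of how simple such agents typically are, not code from any actual study.

```python
import random

# A toy random-walk agent on a grid of sugar counts: wander one cell
# at a time (wrapping at the edges) and eat whatever sugar it lands on.

def run_agent(grid, x, y, steps, rng):
    """Random-walk for `steps` moves, eating sugar; return total eaten."""
    eaten = 0
    size = len(grid)
    for _ in range(steps):
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = (x + dx) % size, (y + dy) % size   # wrap at the edges
        if grid[y][x] > 0:
            eaten += grid[y][x]                   # eat the sugar here
            grid[y][x] = 0
    return eaten

rng = random.Random(0)
grid = [[rng.randint(0, 3) for _ in range(5)] for _ in range(5)]
print(run_agent(grid, 2, 2, steps=20, rng=rng))
```

The whole "mind" is one conditional, which is exactly the contrast with the Clausewitz agents and their goals, memories, and personalities.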

Posted May 14, 2013 10:35:18 AM | link

Richard Bartle says:

The "walk around until you find sugar then eat it" approach is a Finite State Machine, which is a simple, manageable way of representing systems where the decisions are clear-cut. You can represent such systems declaratively as a production system, but you may as well code them procedurally because there's no ambiguity.
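As a hedged sketch of Richard's point, the sugar-seeker can be written as an explicit finite state machine: each (state, observation) pair maps to exactly one next state and action, with nothing to arbitrate. The state and action names are invented for the example.

```python
# A finite state machine for the sugar-seeker: every transition is
# clear-cut, so the behavior can just be coded procedurally.

def fsm_step(state, observation):
    """One transition: (state, observation) -> (next_state, action)."""
    if state == "wandering":
        if observation == "sugar":
            return "eating", "eat"
        return "wandering", "walk"
    if state == "eating":
        if observation == "sugar":
            return "eating", "eat"
        return "wandering", "walk"
    raise ValueError(f"unknown state: {state}")

state = "wandering"
for obs in ["empty", "sugar", "sugar", "empty"]:
    state, action = fsm_step(state, obs)
    print(state, action)
```

Since each transition is unambiguous, there is no rule search at all, which is Richard's reason for coding such systems procedurally rather than declaratively.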

A production system would be used when you have competing rules. "If I find sugar then I eat it" is one rule, but "if I see an enemy I run away" is another, and "if I see an injured friend I take them back home" is a third. Several such rules could fire at once, in which case you need a method for deciding which one to act upon. The agent is still represented as a state, but what they do next isn't bound up implicitly within the state.
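The competing-rules case can be sketched by collecting every rule that fires and then applying an explicit conflict-resolution step. The priorities and rule names below are invented; a real production system might instead resolve by specificity, recency, or randomness.

```python
# Richard's three competing rules, with priority-based conflict
# resolution: gather all firing rules, then pick the highest priority.

RULES = [
    # (priority, condition, action)
    (3, lambda s: s["enemy_near"],  "run_away"),
    (2, lambda s: s["friend_hurt"], "carry_friend_home"),
    (1, lambda s: s["sugar_here"],  "eat_sugar"),
]

def decide(state):
    """Collect every rule that fires, then resolve by priority."""
    firing = [(priority, action) for priority, cond, action in RULES
              if cond(state)]
    if not firing:
        return "wander"
    return max(firing)[1]   # conflict resolution: highest priority wins

s = {"enemy_near": True, "friend_hurt": True, "sugar_here": True}
print(decide(s))  # run_away: fleeing outranks helping or eating
```

The key difference from the FSM is that the decision is made over the whole rule set at once, so what the agent does next is no longer implicit in its state.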


Posted May 15, 2013 2:41:49 AM | link