« Hector Postigo's: The Digital Rights Movement | Main | Diablo III Hyperinflation »

May 08, 2013


Comments

1.

If you wanted to use something like the Clausewitz engine to study how norms propagate, you could do so. However, you'd have to be careful.

Examples:

"I have this real data about norm propagation but I don't know the underlying mechanism, so I'll create a simulation and see if I can find something that replicates the real-world observations." You can do this.

"I don't know how norms propagate but I have some ideas. I'll test them in this simulation, and if they look promising then those are the ones I'll go and find real data to support." You can do this.

"I've built a simulation that shows how norms propagate, therefore this is how norms propagate." You can't do this, although its bamboozle-with-science underpinnings might persuade a fair few people that you can.

The kind of AI used by the Clausewitz Engine looks to me like what's now called "good old-fashioned AI", i.e. it uses symbolic logic at a "mind" level rather than fuzzy logic at a "brain" level. Having looked at their events code (I'm a big fan of Paradox games myself, flawed though they always are with their one-idea-too-far approach), it seems as if they're using what in the olde dayes would have been called a "production system": you have a set of production rules that you search until you find one whose condition matches, then you fire it. I haven't looked at the agent code, though, so I don't know how individual agents actually behave.
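To make the "search the rules until one fires" idea concrete, here's a minimal sketch of a production system in Python. This is not actual Paradox or Clausewitz Engine code; the rules, state fields, and event names are invented for illustration. The rule base is scanned in order, and the first rule whose condition matches the state fires its action.

```python
# Hypothetical production system sketch: a rule base is scanned until one
# rule's condition matches the current state, then that rule's action fires.
# All names (stability, treasury, event strings) are invented examples.

RULES = [
    # (condition, action) pairs, checked in order
    (lambda state: state["stability"] < -2,
     lambda state: state.update(event="peasant_revolt")),
    (lambda state: state["treasury"] < 0,
     lambda state: state.update(event="bankruptcy")),
    (lambda state: True,                          # default rule always matches
     lambda state: state.update(event="nothing_happens")),
]

def step(state):
    """Search the rule list until one fires, then apply its action."""
    for condition, action in RULES:
        if condition(state):
            action(state)
            return state
    return state

country = {"stability": -3, "treasury": 100}
step(country)
print(country["event"])   # peasant_revolt — the first matching rule fired
```

Note that because the rules are checked in a fixed order, earlier rules implicitly take precedence; once one fires, the rest are never examined that step.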

Richard

2.

Richard, I agree with your comment. Approaches 1 and 2 seem valid; 3 is not. I wish I understood AI well enough to have a feel for the code. I can say that the computational agent modeling done in academic work doesn't involve anything fuzzy or sophisticated at the level of mind. The programmed agents generally have trivially simple AI, like "walk randomly; if you bump into sugar, eat it."

3.

The "walk around until you find sugar then eat it" approach is a Finite State Machine, which is a simple, manageable way of representing systems where the decisions are clear-cut. You can represent such systems declaratively as a production system, but you may as well code them procedurally because there's no ambiguity.
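A minimal sketch of such a finite state machine, in the "walk around until you find sugar, then eat it" mould. The states and transitions here are hypothetical examples; the point is that each state implicitly fixes what the agent does next, with no competing choices to resolve.

```python
# Hypothetical FSM agent: behaviour is bound up in the current state,
# and each transition is clear-cut. States and fields are invented.

class SugarAgent:
    def __init__(self):
        self.state = "wandering"
        self.energy = 0

    def step(self, cell_has_sugar):
        if self.state == "wandering":
            if cell_has_sugar:
                self.state = "eating"      # clear-cut transition
            # else: keep wandering (move randomly in a real grid model)
        elif self.state == "eating":
            self.energy += 1
            self.state = "wandering"       # sugar consumed, resume the search

agent = SugarAgent()
agent.step(cell_has_sugar=False)   # still wandering
agent.step(cell_has_sugar=True)    # found sugar -> switches to eating
agent.step(cell_has_sugar=False)   # eats, returns to wandering
print(agent.state, agent.energy)   # wandering 1
```

Since each state has exactly one thing to do, the `if`/`elif` chain above is the procedural coding the comment mentions; writing it declaratively as rules would add machinery without adding expressiveness.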

A production system would be used when you have competing rules. "If I find sugar then I eat it" is one rule, but "if I see an enemy I run away" is another, and "if I see an injured friend I take them back home" is a third. Several such rules could fire at once, in which case you need a method for deciding which one to act upon. The agent is still represented as a state, but what they do next isn't bound up implicitly within the state.
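The competing-rules case can be sketched like this. Several productions may match the agent's percepts at once, so the system needs a conflict-resolution strategy; here it's a fixed priority order, though real production systems also use recency, rule specificity, and so on. All rule names and percept fields are invented for illustration.

```python
# Hypothetical production system with conflict resolution: every matching
# rule is collected, then the highest-priority one wins. Names are invented.

RULES = [
    # (priority, action_name, condition) — higher priority wins conflicts
    (3, "flee",      lambda p: p["enemy_in_sight"]),
    (2, "rescue",    lambda p: p["injured_friend_in_sight"]),
    (1, "eat_sugar", lambda p: p["on_sugar"]),
]

def choose_action(percepts):
    """Collect every rule that fires, then resolve the conflict by priority."""
    matched = [(prio, name) for prio, name, cond in RULES if cond(percepts)]
    if not matched:
        return "wander"                # fallback when nothing fires
    return max(matched)[1]             # highest-priority matching rule wins

percepts = {"enemy_in_sight": True,
            "injured_friend_in_sight": False,
            "on_sugar": True}
print(choose_action(percepts))   # flee — beats eat_sugar though both matched
```

This separation is the point of the comment: the agent's percepts and state are one thing, but the decision about what to do next lives in the rule base and the conflict-resolution step, not implicitly in the state itself.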

Richard

The comments to this entry are closed.