
Oct 14, 2005



As one fortunate enough to have worked a lot with Stackless Python (it is used at a fundamental level in EVE Online), I can certainly agree that this coding pattern has many blessings. This is especially true for any system that relies on asynchronous calls, which is often the case in client/server and server/server communications and agent-based systems. One of the biggest benefits is that you can have coders who may have a good knowledge of scripting methods, but perhaps not of low-level state machines and multi-threading issues, write things that in any other language would have been completely obfuscated by the complexity of the coding pattern.
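A minimal sketch of the idea, using standard CPython generators to stand in for Stackless tasklets (the `scheduler` and `npc` names are invented for illustration): each "microthread" is written as straight-line code, and a tiny round-robin scheduler interleaves them on a single OS thread.

```python
from collections import deque

log = []  # records the interleaved steps, so the ordering is visible

def scheduler(tasks):
    """Round-robin over generator-based microthreads on one OS thread."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run until the task yields (a "blocking" point)
            queue.append(task)  # still alive: put it back in the rotation
        except StopIteration:
            pass                # task finished

def npc(name, steps):
    """A game-object behaviour written as plain sequential code."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield  # cooperative switch, like a tasklet blocking on a channel

scheduler([npc("guard", 2), npc("merchant", 3)])
print(log)
```

The point is that `npc` reads as ordinary scripting code; the scheduling machinery is hidden from the gameplay programmer.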

On the other hand, we should not forget that Stackless is not a magical solution to elementary multi-threading problems such as deadlocks and race conditions, so if your design is not sound in that respect, you will end up in trouble regardless. Also, you often have to interact with external components such as the graphics card and the operating system, where you will need to serialize your code.
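To illustrate that caveat with a hedged sketch (again using plain generators as a stand-in for tasklets): even cooperative microthreads can race if a read-modify-write sequence is split across a yield point, producing the classic lost update.

```python
from collections import deque

balance = 0  # shared state touched by two microthreads

def deposit():
    global balance
    tmp = balance       # read
    yield               # cooperative switch in the middle of read-modify-write
    balance = tmp + 10  # write back a value that may now be stale

def run(tasks):
    """Tiny round-robin scheduler for generator-based microthreads."""
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)
            queue.append(task)
        except StopIteration:
            pass

run([deposit(), deposit()])
print(balance)  # one deposit is lost: 10, not 20
```

Both deposits read `balance` as 0 before either writes, so the second write clobbers the first; microthreading narrows the windows for this but does not close them.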


The new Mono-based scripting engine I'm developing for Second Life works in a very similar way to Stackless Python, building partial continuations on the heap to allow switching between different microthreads within the same operating-system thread. Having developed Warhammer Online using the more traditional event-based paradigm, I can confirm that microthreading makes developing virtual worlds much easier. The manual stack tearing you have to do in an event-based system, to turn a conceptually procedural algorithm into a sequence of event handlers split on blocking calls, is very painful. Automated stack management makes things much easier, but operating systems typically don't support the tens of thousands of threads you'd need to give each game object its own OS thread, so microthreading on a single operating-system thread makes a lot of sense.
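The "stack tearing" contrast can be sketched in plain Python (all the names here are invented for the example, and the `Events` stub completes every request immediately). The event-based version tears one conceptual procedure into three handlers with its state hoisted onto an object; the microthreaded version keeps the same logic as straight-line code, yielding at the blocking calls.

```python
log = []  # records what happened, in order

class Events:
    """Stand-in engine API (invented for this sketch); requests complete at once."""
    def request(self, action, callback=None):
        log.append(action)
        if callback:
            callback(action)
        return action

# Event-based: one procedure torn into three handlers, state lives on the object.
class DoorOpener:
    def __init__(self, events):
        self.events = events
    def start(self):
        self.events.request("walk_to_door", self.on_arrived)
    def on_arrived(self, _):
        self.events.request("open_door", self.on_opened)
    def on_opened(self, _):
        log.append("entered")

# Microthreaded: the same logic reads top to bottom, yielding at blocking calls.
def door_opener(events):
    yield events.request("walk_to_door")
    yield events.request("open_door")
    log.append("entered")

DoorOpener(Events()).start()
for _ in door_opener(Events()):  # a scheduler would drive this; a loop suffices here
    pass
print(log)
```

With only two blocking calls the handler version is merely awkward; with branches and loops spanning many blocking calls, the torn-apart control flow becomes the pain the comment describes.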


I should point out that this goes hand in hand with the latest wave of thinking around Web 2.0. A good write-up on Web 2.0 is...



If you think of game design and the game engine as separate things, it opens some interesting possibilities. Think of HTML as the “scripting language” for a variety of competing browsers. When you browse the web, you can go from website to website transparently with no one company controlling that entire experience. When you create a website, you can make certain assumptions about the browser and be assured that visitors with all sorts of operating systems can enjoy your creation. So ultimately, separating the game engine builders from the game designers may not be a bad thing.

I know this leads us back towards VRML (a dirty word to some of us); however, with Microsoft introducing XAML/Avalon in Windows Vista and its plans for .NET-based web application delivery, maybe the beginnings of a 3D scripting format are emerging that could eventually be used to distribute virtual worlds as if they were websites.

Ok, that's a bit of a stretch; however, XAML could be the first step in a long evolution that takes us in that direction. Whether or not we like the fact that it's coming from Microsoft is another thing.


MUD1 was (what we'd now call) threaded. Each player had their own peer process, communicating with other processes through shared memory. The operating system's timesharing mechanism switched between processes, although since the CPU was dual-processor it was, in theory, possible for two processes to be executing in parallel. In terms of programming, the way to do it was to take the abstract view and treat each process as if it was indeed running on its own, separate machine.

This being the case, we had to have a lot of signal/wait locking code to prevent memory from being accessed simultaneously by two processes. Not only was this a pain (we had to put the locks in manually, as MUDDL was too tied to its interpreter for us to generate them automatically), but it was slower overall than if we'd just locked the entire shared memory each time we had a command to execute.

MUD2 switched to a non-threaded model. MUDDLE code is written with the assumption that there won't be any problems with shared memory: as a programmer, you can mess about with objects without worrying that someone else might mess about with them at the same time. This was much faster overall (remember, the clocking of PCs was measured in kHz in those days!), but it did mean that if one player's command took inordinately long to execute, then everyone else's commands would have to wait, and there could be noticeable server lag as a result. In other words, the game itself was faster, but the players occasionally perceived it to be slower.
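The non-threaded trade-off can be sketched as a single command loop (a hedged toy, not MUDDLE; the commands and world state are invented). Each command runs to completion before the next starts, so shared state needs no locks at all, but a slow command stalls every player queued behind it.

```python
import time
from collections import deque

world = {"gold": 100}  # shared state: safe to mutate freely, one command at a time

def slow_spell(player):
    time.sleep(0.05)  # a long-running command; everyone behind it waits
    return f"{player} casts a slow spell"

def take_gold(player):
    world["gold"] -= 10  # no lock needed: nothing else can run concurrently
    return f"{player} takes 10 gold"

commands = deque([("A", slow_spell), ("B", take_gold)])
results = []
while commands:
    player, cmd = commands.popleft()
    results.append(cmd(player))  # run to completion before the next command

print(results)
```

Player B's quick command is correct and lock-free, but it cannot even start until A's slow spell finishes; that queueing delay is the perceived lag described above.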

As it happens, MUDDLE doesn't really care how it's implemented, and I've toyed in the past with switching to a threaded model using automatically generated locks. I wouldn't need to change the MUDDLE, just the interpreter (or RTS, as I have a program that converts MUDDLE into executable C).

From my point of view, the biggest paradigm shift we may see in programming is the relationship between speed of execution and speed of access to memory. With execution speeds getting so much faster and memory access times not keeping up, it's getting to the stage where it's faster to recalculate data items on the fly than to store them in memory and retrieve them. There used to be a space/time trade-off: if you wanted your code to run faster, you had to use more memory. Now, though, we're nearly at the stage where you'll only need to use memory if you must. It really will be faster to calculate a sine every time you want one than to look it up in a table. Once that happens, programming will be flipped on its head.
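The sine example can be tried directly. This is a rough sketch (the table size is arbitrary, and in CPython the interpreter overhead dominates, so it only gestures at the cache-versus-ALU trade-off the comment is really about); it times recomputation against a precomputed lookup table without presuming a winner.

```python
import math
import timeit

# The classic space-for-time trade: precompute one period of sine.
TABLE_SIZE = 4096
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def sin_lookup(x):
    """Nearest-table-entry sine: trades accuracy and memory for (maybe) speed."""
    i = int(x / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[i]

# Time both approaches; which wins depends on the machine and runtime.
recompute = timeit.timeit("math.sin(1.234)", globals=globals(), number=100_000)
lookup = timeit.timeit("sin_lookup(1.234)", globals=globals(), number=100_000)
print(f"recompute: {recompute:.4f}s  lookup: {lookup:.4f}s")
```

The lookup is also only approximate (the table quantizes the angle), which is the other cost that disappears once recomputing wins outright.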



The internal architecture of MMOG code certainly has an impact on the ease of development, as discussed above, but remember that programming paradigms that start off protecting developers from muddle end up protecting against malice. This is certainly true of operating systems. I wonder what sorts of game architectures make it easiest to express security policies: e.g. preventing item duping, chat snooping, or the spread of an infectious disease outside a 'firewalled' instance?

I agree with Harry Kalogirou's suggestion that a game engine could be conveniently expressed as a code library (i.e., an API), and I reckon it is probably the best approach for reasoning about security, and thus, in the long term, for building an MMOG platform that can contain the work of many different developers (potentially with conflicting visions of how a virtual world should turn out).

I pick holes in RL banking transaction systems for a living, and I've been thinking for a while about how to properly investigate MMOG transaction systems, which are (ironically) much more complicated than their RL equivalents. However, security doesn't seem to be such a hot issue in the MMOG world while specific companies reign over worlds, as they have two powerful weapons with little parallel in real life:

1. Banning people. You don't need to construct a watertight court case to evict someone from a virtual world. So if there are troublemakers, deport them.

2. Rollback. In the very worst case, if everything has gone to hell, your characters wake up to find that tomorrow is today, Groundhog Day style.

Until we see the first genuinely communally owned MMOGs (perhaps built on an MMOG gaming operating system sourced and licensed by SOE or Microsoft or whoever), where the above sanctions are harder to deploy, I guess security won't be important. Which is a pity, because by then the architecture will be set in stone, and we will have to make do with the good ol' security retrofit.

ps. hello everyone! long time reader, first time poster :-)


BTW, MischiefBlog extends many of these thoughts deftly. My earlier reference to declarative models anticipates (though unwittingly) his more complete thought:


I have more to add, but will save for another post...
