Earlier in the year we discussed Tim Sweeney's POPL talk (Troubles with Tribbles) and the challenge that parallel computing poses to the games industry. Leaky Faucets suggested the difficulties of managing all those software objects. Perhaps there is another question, one that has a little less to do with algorithms and a little more to do with how you perceive and relate to your virtual world.
This week saw publicity for a new draft paper in the rarefied space of Software Transactional Memory (STM). Earlier, Tim introduced STMs as one possible solution to the problem of concurrency for a games industry striving to exploit hardware advances. The gist was this: instead of relying upon a hand-crafted solution (software engineering) to manage game simulation objects and how they interact across multiple threads of execution ("shared-state concurrency"), why not push some of that responsibility down to a programming language (and its infrastructure)?
Let us start with the view that game objects need to share bits of information. Interestingly, Philip Wadler pointed out the importance of transactions to the semantics of game play; regarding Tim's talk, he noted:
...What I found most surprising is that communicating processes... would not work well for objects in a game: one needs transactions to ensure that when one character transfers hit points or momentum to another that nothing is lost (just as one needs transactions to ensure that when one bank account transfers money to another that nothing is lost)...
Yes, consistency is important to our view of the world, and C(onsistency) is but one letter in ACID - a set of qualities database geeks take as ideal. At the end of the day we want our Auction House ledger to tally, and we too would be miffed if damage we dished out went astray...
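To make Wadler's point concrete, here is a minimal sketch in Haskell, whose Control.Concurrent.STM library is one well-known STM implementation; the hit-point representation and names are invented for illustration, not taken from Tim's paper:

```haskell
import Control.Concurrent.STM

-- Hypothetical representation: each character's hit points live in a
-- TVar, a mutable cell managed by the STM runtime.
type HitPoints = TVar Int

-- Move points from one character to another. When run inside
-- 'atomically', either both writes commit or neither does, so nothing
-- is lost mid-transfer.
transfer :: Int -> HitPoints -> HitPoints -> STM ()
transfer amount from to = do
  f <- readTVar from
  writeTVar from (f - amount)
  t <- readTVar to
  writeTVar to (t + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 100
  atomically (transfer 25 a b)
  readTVarIO a >>= print   -- 75
  readTVarIO b >>= print   -- 125
```

The shape is the same as the textbook bank-transfer example: the transaction, not the programmer, guarantees that no intermediate state is ever visible.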
However, Software Transactional Memory does not work like a database - it is about detecting conflicting operations rather than deconflicting shared memory up front. Detection is key, prevention is not. STMs embody an *optimistic* view of the world: they don't so much prevent the operational equivalent of a "Leeroy Jenkins!" from happening as correct for one after it has been unleashed (e.g. by rolling back to a previous checkpoint).
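A small sketch of what that optimism looks like in practice, again using Haskell's STM library with made-up numbers: the conflict detection, rollback, and re-execution all happen inside `atomically`, out of the calling code's sight.

```haskell
import Control.Concurrent
import Control.Concurrent.STM
import Control.Monad

-- Hypothetical sketch: ten threads each bump a shared counter a
-- thousand times. Every 'atomically' block runs optimistically; when
-- two commits collide, the runtime discards the loser's work and
-- re-runs it, invisibly to the code that called it.
main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  done    <- newEmptyMVar
  forM_ [1 .. 10 :: Int] $ \_ -> forkIO $ do
    replicateM_ 1000 (atomically (modifyTVar' counter (+ 1)))
    putMVar done ()
  replicateM_ 10 (takeMVar done)
  readTVarIO counter >>= print   -- always 10000, despite conflicts
```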
Contrast this with an approach that seeks to lock down all the details before making a move. That would be safer, but it would require greater control in execution and cunning in implementation. While STMs are conceived to work optimistically at a micro level, their exuberance is intended to be transparent to higher processes: I may have a wild mail clerk working for me, but if the mail eventually gets to the right place before I notice, so what?
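For contrast, here is a hypothetical lock-based counterpart to the transfer sketched above, where the safety burden shifts from the runtime to the programmer:

```haskell
import Control.Concurrent.MVar

-- A pessimistic, lock-based version of the same transfer: acquire both
-- locks before touching either value. Here it is the programmer, not
-- the runtime, who must ensure every caller takes the locks in the
-- same global order, or risk deadlock.
type Guarded = MVar Int

lockedTransfer :: Int -> Guarded -> Guarded -> IO ()
lockedTransfer amount from to = do
  f <- takeMVar from          -- lock and read "from"
  t <- takeMVar to            -- lock and read "to"
  putMVar to   (t + amount)   -- write back and release
  putMVar from (f - amount)
```

Nothing ever needs to be rolled back, but the coordination discipline has to be designed in by hand - exactly the "hand-crafted solution" the STM approach wants to push down into the language.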
From the perspective of the user experience, inconsistencies in the virtual world (arising especially from latency in all its flavors) may represent deliberate trade-offs. UDP packets and dead-reckoning algorithms, for example, are often used to improve the perception of responsiveness at the expense of accuracy. Yet even optimists must occasionally confront a reality not quite what they expected. Lo, even vehicles and avatars can sometimes jump around when the networks are wild.
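As a rough illustration of the dead-reckoning idea (the types and fields here are invented for the sketch): between network updates, a client simply extrapolates an entity's position from the last state it received, trading accuracy for the feel of smooth motion.

```haskell
-- Hypothetical last-known state received from the server.
data EntityState = EntityState
  { position :: (Double, Double)   -- last reported position
  , velocity :: (Double, Double)   -- last reported velocity
  } deriving Show

-- Predicted position dt seconds after the last update. When the next
-- real update arrives and disagrees, the entity "jumps" to reconcile.
deadReckon :: Double -> EntityState -> (Double, Double)
deadReckon dt (EntityState (px, py) (vx, vy)) =
  (px + vx * dt, py + vy * dt)
```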
Yet perhaps inconsistency in moderation is not such a hobgoblin. After all, we're used to at least one imperfect world - so much of our neural circuitry exists to organize our perception of a messy place. The deeper question may lie in how, when, and to what degree virtual worlds can be cavalier with their participants, and when they should strive to be more careful.
This is very interesting. Design patterns to the same end, such as the problematic use of decorators or broader Design by Contract techniques, suffer from scalability problems (not to mention developer overhead). Pushing the responsibility to the language would allow for optimizations, as pointed out in the paper. I'll be interested to see a full implementation of this.
My knee-jerk question is one of purity of A(tomicity) for objects participating in STM. This imposes an implied constraint on reliance upon Reflective design techniques because polymorphism of objects could violate optimistic invariants unless all dynamic behavior is consistently understood at design time. Am I correct about this, or would the language-level usage of invariants detect and deal with this?
Posted by: randolfe | Apr 03, 2006 at 01:44
Recent related ACM article: "Multicore programming with transactional memory," ACM Queue, vol. 4, no. 10, December 2006/January 2007.
Posted by: nate combs | Dec 10, 2006 at 19:40