
Oct 16, 2003



It seems like a standard network security book targeted specifically at the MMOG designer audience (and, fwiw, it would probably fit the online gambling audience pretty well too...) I can't find any fault with selling advice for robust security systems in games. Certainly the vast majority of hacks don't create any social benefits. I think you're right Ted, though, that making MMOG security 100% airtight is probably going to be either impossible or impossibly expensive. But that's life...

What I find most interesting is that he assumes (like Julian, maybe?) that if you don't break any rules at the code level, that means you're playing fair.

In his words, the book "explores how the right equations can let people roll dice, shuffle cards, pay bills, and trade things like swords without worrying about fakes, frauds, counterfeits, or tricks." Seems to me that "fakes, frauds, counterfeits, and tricks" are all very possible without the need for breaching a game's security.


The problem with his system is summed up in this line:

"The new owner accepts the bill and checks every digital signature in the chain. If they're valid, then the new owner closes the transaction."

Verifying a single digital signature is a non-trivial computing task that can take anywhere from several hundredths to tenths of a second on a good PC (depending on the algorithm; the context here indicates a public-key scheme, which tends to be on the slow side). Verifying an indefinite number of them (under his scheme, one for every player who has touched the digital currency unit since it left the central bank) is correspondingly more processor-intensive. Verifying them for a virtual stack of potentially thousands or millions of units....
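To make the linear-growth worry concrete, here is a toy sketch of the chain-of-custody scheme as described. A real system would use public-key signatures (RSA, DSA, or similar); a hash over the previous link plus the owner's name stands in for a signature here purely to show that the new owner must re-check every link back to the bank, so verification cost grows with the token's trading history. All names are illustrative, not from the book.

```python
import hashlib

def sign(prev_digest: str, owner: str) -> str:
    """Stand-in 'signature': a hash binding this owner to the prior link."""
    return hashlib.sha256((prev_digest + owner).encode()).hexdigest()

def transfer(chain: list, new_owner: str) -> list:
    """Append the new owner's 'signature' when the token changes hands."""
    prev = chain[-1][1] if chain else "bank-issue"
    return chain + [(new_owner, sign(prev, new_owner))]

def verify(chain: list) -> bool:
    """The accepting party must check EVERY link back to the central bank."""
    prev = "bank-issue"
    for owner, sig in chain:
        if sig != sign(prev, owner):
            return False
        prev = sig
    return True

# A token that has changed hands 1,000 times requires 1,000 checks
# on every subsequent trade -- and that is for a single token.
chain = []
for i in range(1000):
    chain = transfer(chain, f"player{i}")
assert verify(chain)
```

Swapping the hash stand-in for real public-key verification multiplies the per-link cost by orders of magnitude; the linear growth itself is the structural point.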

Because they routinely work with problems that involve calculations that cannot be solved by all the computing power on the planet, cryptographic security experts get into the habit of treating every problem that can be solved by a single computer in finite time as trivial. But there's a big gap between "finite time", and "real time", and online games operate in real time.



I don't buy the processor load argument. It's factually correct, but it ignores the rate of change in processing power. Who would have thunk twenty years ago we'd be where we are today in that respect? The PC I play games on has more processing power than ALL the computers on earth 20 years ago.

And the issue with security isn't perfect, airtight protection, it's "raising the bar": raising the cost of cracking that security high enough that most won't bother trying. Fort Knox is not impenetrable. It's just not a challenge many consider worth the risk when they subconsciously or otherwise do a risk/reward analysis. A recent study in Europe, for example, showed that if a company had a firewall, 90% of hackers automatically moved on to the next potential target. The darned thing could have a gaping hole, but if the hacker saw "firewall," he moved on. And yet a lot of companies don't even have a firewall (which is why the hackers move on; they know there's "low-hanging fruit" just up the virtual row of trees).

Who would have thought such a thing as an asymmetric-key algorithm was even possible? I suspect if game companies, and others, got together and got serious about security, the problems would be either overcome or gotten around with good old Yankee ingenuity.

But the industry in general is not viscerally convinced of the risks it takes day to day. Until disaster strikes, they keep believing it can't happen to them.

It's just a matter of time, guys.

Yeah, I'm one of those "security experts" too.

The first step to all that verification of digital signatures is to decentralize it. Sure, it takes measurable time, but that way it's measurable time on each user system that has reason to be interested in a transaction (namely, two per transaction), not on one central computer. It's a peer-to-peer thing, not a client-server thing. That's the design meme behind PUBLIC key infrastructure. Tasks of seemingly overwhelming complexity become near-trivial once distributed across enough systems.

Game companies are knee-jerk fearful of delegating that sort of task, however, due to bad experiences with client hacking in the past. And they should be, when they deploy programs not designed with security as a very high priority (they've gotten better here, but still, a lot of game hacks involve the client or unsecured data streams). But the way digital signatures work, the very nature of public-key systems, makes them about as immune to this sort of attack as a system can be. Again, it's not a perfect system we seek; it's one that makes the attack harder than the alternatives. The con-gamer can still try social engineering... and will. ("This letter is to inform you that someone attempted to hack your account last night. To assure that your account remains intact, please go to this url and login using your normal credentials. If something appears wrong, please email us. If not, all is well. Have a nice day. --your customer service rep at XYZ Game.")

(I hope I don't have to explain how that one works... :)

The object of the kind of security Wayner is promoting isn't to protect us from our own stupidity; it's to protect us from the obscure workings of code we can't, or don't, understand. Software has been operating under fairy-tale rosy-pink rules: we write it, but we aren't responsible if it doesn't work as advertised! That is about to end.

Get in front of the curve, or be crushed against the curb when that big bus turns. Rumblings of new laws are in the distance and getting closer.


No, I got what he was saying about decentralizing it into a P2P scenario. Here's the thing:

1) There must be some discrete currency token, at some point issued by the central bank.

2) These tokens don't have to all be of the same value (they don't even have to be currency, they could be a sword or an animal hide or raw material unit), they can be of different denominations like real currency, but once issued by the central bank they must be inviolate. You can't cut a $20 in half to get two $10's. Only the central server can merge or split a token.

3) Every transaction between two clients requires that every token involved in the transaction be authenticated; in the P2P scenario, the entire chain each token has passed through since issue by the central bank must be authenticated. In the case of currency, this may be thousands or millions of tokens for a single transaction, and in the P2P scenario they must all be authenticated by *both* systems and new signatures attached (if strictly centralized, we're talking billions of tokens, but only one signature each). Hence the problem of finite vs. real time.
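A back-of-envelope model makes point 3 concrete. The per-verification time below is an assumption (2003-era PCs took on the order of milliseconds per public-key verify), and the trade size and chain length are illustrative guesses, not figures from the book.

```python
# Assumed inputs -- all three numbers are hypothetical for illustration.
per_verify_s = 0.001          # ~1 ms per signature verification
tokens_in_trade = 10_000      # e.g. paying 10,000 one-unit coins
avg_chain_length = 100        # owners since bank issue, per token

# Each party must check every signature on every token in the trade.
checks = tokens_in_trade * avg_chain_length   # 1,000,000 verifications
seconds = checks * per_verify_s

print(f"{checks:,} checks ~ {seconds:,.0f} s per party")
```

Under these (modest) assumptions each side of the trade spends on the order of a thousand seconds verifying, which is the "finite time vs. real time" gap in a nutshell.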

As far as Moore's Law goes, it cuts both ways: more processing power to use keys is also more processing power to break them, so the processor time to handle any given authentication will remain roughly constant for the usefully foreseeable future.

And I do see the bus coming, you're preaching to the converted. My objection is strictly technical, the suits just hear technobabble about a problem they don't believe will ever actually occur.



I am as yet unconvinced a token is necessary. No, don't ask me how I would do it without tokens; I don't have that answer. But I am not at all convinced that that answer does not exist. That is my point.

You're trapped in a paradigm based on historical currency, and we're talking about something that goes outside the bounds of what yet exists. Who would have thought that today most "currency" would be nothing more than entries in computers? It is. What is the token for that money? There isn't one, as far as I know. But that is a centralized system, and we're talking about peer-to-peer.

In past systems (to resort to historical paradigms myself...) though, there was no centralized authority. "Banks" issued their own currency. An analogous system, where we managed our own trust relationships, might work fine. This is something we already do in quotidian life. We do business with those we have chosen to trust, and not with others. We don't know that X brand is as good as Y brand until we ourselves test it, but we often take it on trust.

If such a system were created, we might have innumerable issuers of "tokens" in your token scenario, not one central one. This would do much to dilute the processing power problem a central system would suffer under. It wouldn't yet be true peer-to-peer.
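The "manage your own trust relationships" idea above can be sketched very simply: each player keeps a personal list of issuers whose tokens they will accept, rather than deferring to one central bank. The names and token structure here are entirely hypothetical; this is a sketch of the trust model, not of any real system.

```python
# Each player maintains their own set of trusted issuers --
# the decentralized analogue of "banks issued their own currency."
my_trusted_issuers = {"GuildBankA", "GuildBankB"}

def accept_token(token: dict) -> bool:
    """Accept a token only if it was issued by someone we've chosen to trust."""
    return token["issuer"] in my_trusted_issuers

assert accept_token({"issuer": "GuildBankA", "value": 20})
assert not accept_token({"issuer": "UnknownMint", "value": 20})
```

Verification load then scales with the number of issuers a given player actually trusts, rather than concentrating on a single central server, which is the dilution effect described above.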

I haven't yet read the book in question, so I can't speak to the possibilities of any specific system he may be advancing. I myself have, in the past, put some thought into how such a system might evolve (I worked for two PKI companies, among others). There are problems to overcome, no question. But the way to overcome them is not to assume what is today will be what is tomorrow.
