Virtual world paradigms for managing identity are tethered to real lives: subscribers with credit cards pay for accounts that correspond to characters whose abilities may evolve. Beyond the impoverishment (or not) of the avatar-as-metaphor, mapping a real identity onto a virtual one generally seems to work well from a user's perspective.
However, from computer security come ideas about a different way of thinking about identity. Do these ideas translate into virtual worlds?
While a great deal of Mark Miller's Google TechTalk (7/12/2006, "Paradigm Regained: Abstraction Mechanisms for Access Control," video feed here) is beyond us, it is interesting because it illuminates a provocative schism among computer security paradigms. The question starts with: should access to a system and its resources be identity-based or authorization-based?
Simplistically, an identity-based scheme might have Joe Public validate his identity (register/password) and grant him privileges/roles based on confirmation of that identity (to the limit of the risk imposed by hacking, malware, etc.). An authorization-based scheme might instead focus on managing Joe Public's actions: what is he allowed to do, and when? This sort of management sounds simple but becomes complicated in environments where a big permission (Joe can or can't...) is a composition of many small, changing permissions (Did Mary, Betty, and Sam say Joe was allowed to...).
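The contrast can be sketched in a few lines of Python (all names and data here are hypothetical, purely to illustrate the two schemes):

```python
import secrets

# Identity-based: look up WHO the caller is, then consult an access list.
ACL = {"delete_object": {"joe", "mary"}}   # action -> identities allowed

def acl_check(identity: str, action: str) -> bool:
    return identity in ACL.get(action, set())

# Authorization-based: possession of an unforgeable token IS the permission;
# the server never needs to know who is holding it.
capabilities = set()

def mint_capability(action: str) -> str:
    token = f"{action}:{secrets.token_hex(16)}"
    capabilities.add(token)
    return token

def cap_check(token: str) -> bool:
    return token in capabilities

cap = mint_capability("delete_object")
assert cap_check(cap)                           # whoever holds the token may act
assert not acl_check("anon", "delete_object")   # the ACL needs a known identity
```

Note where the bookkeeping lives: the ACL grows with every identity-action pair, while the capability scheme only tracks outstanding tokens, whoever may be holding them.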
In the end, however, were this debate extended beyond design and engineering practicalities (e.g. scalability), it would likely come to this: is it more important to worry about who you are versus what you are allowed to do? On the surface this sounds enigmatic: what if who you are determines what you are allowed to do? But there is a distinction: if I am better able to control what you are able to do, I may be less interested in who you are. Or, by way of an extreme example: if you can't trash my world, sure, I'll let you log in anonymously.
If this sounds a bit abstract, consider this thought experiment. The industry view toward aggregating and centralizing identity management might be given as follows (as exemplified by Microsoft's identity metasystem; also the ZDNet overview):
Many of the problems on the Internet today, from phishing attacks to inconsistent user experiences, stem from the patchwork nature of digital identity solutions that software makers have built in the absence of a unifying and architected system of digital identity. An identity metasystem, as defined by the Laws of Identity, would supply a unifying fabric of digital identity, utilizing existing and future identity systems, providing interoperability between them, and enabling the creation of a consistent and straightforward user interface to them all.
We have discussed on Terra Nova security circumstances in virtual worlds that seem more dependent upon a player's access to functionality (see Hot Blooded Objects) than their virtual identity. In the age of Web 2.0 and Virtual World UI add-ons and user-scripted content, is identity-based access a bottleneck waiting to happen?
Like so many things, the basic reasons why virtual worlds want capability-based security rather than access lists were covered by Chip and Randy several hundred years ago: http://www.fudco.com/chip/lessons.html
While many have commented that capabilities may forever be the future of security, I happen to agree with Chip and Randy's arguments. It is quite convenient that capabilities map really well onto REST APIs.
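A minimal, hypothetical sketch of that REST mapping: the capability is just an unguessable URL, and holding the URL is the authorization (the route names and handler here are invented for illustration):

```python
import secrets

granted = {}  # secret path -> the action it authorizes

def grant(action: str) -> str:
    path = f"/cap/{secrets.token_urlsafe(16)}"
    granted[path] = action
    return path  # hand this URL to whoever should have the authority

def handle_request(path: str):
    # No login, no session: the URL itself carries the authorization.
    action = granted.get(path)
    if action is None:
        return 404
    return f"200 performed {action}"

url = grant("edit_parcel")
assert handle_request(url) == "200 performed edit_parcel"
assert handle_request("/cap/guess") == 404
```

Revocation is just deleting the entry; delegation is just passing the URL along.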
Posted by: Cory Ondrejka | Jul 16, 2006 at 19:53
This is a fairly simplistic view of a service. While you might not care if the person logged on with a particular account is the owner of that account, the *owner* probably cares. While that account might not have any ability to destroy or grief *other* accounts, if the wrong person gains access to an account it can destroy *that account*.
Or, to use one of your examples: Mary, Betty and Sam probably care that Joe *is* Joe, particularly if they've allowed him access to something of theirs.
Posted by: Krisjohn | Jul 17, 2006 at 04:25
I'm not sure I understand the distinction you are trying to draw here. Privileges and roles are an authorization-based scheme. Authorization and identity (authentication) are inextricably linked.
Posted by: Thabor | Jul 17, 2006 at 10:37
http://www.erights.org/elib/capability/3parts.html
The following is adapted from a message by Bill Frantz, originally posted on the SPKI list, and then reposted by Jonathan Shapiro on the EROS list. This web page adaptation and the title is by Mark Miller, and is posted with Bill's permission.
I [Bill] was asked in private mail:
>I'd like the longer rant. I'm particularly interested in getting
>practical examples of where distributed capabilities can be used
>to solve enterprise/IS security problems. I'd like to see an
>alternative to strong authentication and NT ACLs.
What we intuitively call security is really made up of three things: keeping objects secret, protecting objects from modification, and preventing the misuse of objects. We can really only algorithmically control two of these things.
We can keep an object secret by withholding the authority to access the object, or confining the computations which use the object. ACLs are a way of withholding access.
We can protect objects from modification by withholding the authority to modify them. ACLs also are a way of withholding modification authority.
We can't prevent something which has legitimate access to an object from misusing that access. A simple example is a program which has the authority to erase files from a directory. There is no way we can prevent it from erasing a specific file in that directory. We must trust it not to erase that file whenever we run it.
....
Posted by: no | Jul 17, 2006 at 12:40
http://www.identity20.com/media/OSCON2005/
Posted by: Michael Chui | Jul 17, 2006 at 12:57
We can't prevent something which has legitimate access to an object from misusing that access. A simple example is a program which has the authority to erase files from a directory. There is no way we can prevent it from erasing a specific file in that directory. We must trust it not to erase that file whenever we run it.
Uh, "rm" deletes files in unix/linux, but if you don't have the authority to delete that file, you can't delete it. I think some of the basic assumptions in this discussion are faulty.
Posted by: Krisjohn | Jul 17, 2006 at 20:27
Authorization and identity are linked. However, I think the distinction might be best made by asking where an ACL/permissions authorization scheme would break down. Possibly the worst circumstance would be a fine-grained permissions environment (say, on a per-object basis) where access was dynamic (constantly invoked and revoked). Think of all those nested and intersecting sets.
As for the 'rm' case, consider how to scale that up to delegating permission to delete that file to someone else, once.
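One way to sketch that scaled-up 'rm' case in Python (all names here are hypothetical): the capability is a single-use token that can be handed to a delegate, with no identity record kept at all:

```python
import os
import secrets
import tempfile

one_shot = {}  # token -> the one file path it may delete

def delegate_delete(path: str) -> str:
    token = secrets.token_hex(16)
    one_shot[token] = path
    return token  # hand the token to the delegate; no identity is recorded

def use_delete(token: str) -> bool:
    path = one_shot.pop(token, None)  # pop => the capability is single-use
    if path is None:
        return False
    os.remove(path)
    return True

fd, tmp = tempfile.mkstemp()
os.close(fd)
t = delegate_delete(tmp)
assert use_delete(t) is True    # works exactly once
assert use_delete(t) is False   # spent; the authority is gone
```

The delegate can delete that one file, once, and nothing else; no ACL on the directory ever had to change.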
As for the underlying analogy: fine-grained permissions (say capabilities, per Cory's ref above) might be thought of less as a property fixed to an identity and more as a token of trust, bounded (it can be revoked).
Consider a scenario in a world of user-created content where the full API available to players can be misused (even innocently). Imagine selective exposure to functionality based on performance in that world: demonstrate you know the 'laws of aerodynamics' in that world and you can create objects that fly... do something suspect and you lose that capability. Now throw into the mix: I sell you a flying car, and now you are allowed to fly only that car. Etc.
Sure, one can do this using ACLs. Sounds a mess. Go one step further: what if I want to transfer/translate just that capability on a limited-use basis to another world/server? Sounds a bigger mess.
Posted by: nate combs | Jul 17, 2006 at 20:57
I appreciate the discussion on a technical level, but the question I have is this: "Isn't there a special need, given the social nature of virtual worlds, for players to be able to reasonably infer that their interactions with the same avatar constitute interactions with the same person?"
There's also the issue of account selling, which the authorization approach seems even less capable of resolving than authentication.
Posted by: monkeysan | Jul 18, 2006 at 18:24
Monkeysan>
"Isn't there a special need, given the social nature of virtual worlds, for players to be able to reasonably infer that their interactions with the same avatar constitute interactions with the same person?"
-------
Well, let me turn this around. Why should I *have to* reveal even this to another player unless I choose to? Part of the current answer (why not) involves accountability and the subscriber business model.
But to speculate on other models, take this extreme possibility: consider a world where players can anonymously log in and play random permanent avatars. Let's say that abilities are then unlocked through in-world play (authorization) and last for the session. Log out, poof. Next player.
A thought experiment.
In this model, of course, there are no accounts to sell.
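A speculative sketch of that session model (all names invented): abilities attach to the anonymous session object and evaporate with it:

```python
import secrets

class Session:
    def __init__(self):
        self.id = secrets.token_hex(8)   # anonymous: no account behind it
        self.caps = set()                # abilities unlocked this session

    def unlock(self, ability: str):
        self.caps.add(ability)           # earned in-world, e.g. by a test

    def can(self, ability: str) -> bool:
        return ability in self.caps

s = Session()
assert not s.can("fly")
s.unlock("fly")        # demonstrated the 'laws of aerodynamics'
assert s.can("fly")
del s                  # log out: poof, the abilities go with the session
```

Nothing here is keyed to a persistent identity, so there is nothing to steal and nothing to sell.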
Posted by: nate combs | Jul 18, 2006 at 21:05
A way to look at this thought experiment is to reframe the concept using a maze metaphor, with pathways (authorization-based) and gates (identity-based).
I don't know where the fine-grained combination of this will break down with scale and complexity, but because there are diminishing returns for these two methods, Nate suggested a type of use-based system, proficiency-based:
"Imagine selective exposure to functionality based on performance in that world: demonstrate you know the 'laws of aerodynamics' in that world you can create objects that fly... do something suspect lose that capability. " - nate
Nate then asks the question about possession (as in by a ghost or the demonic variety -hehe-), which I see as a car- (or spaceship-) borrowing issue. Another tool in the belt is collateral: put some real money in an escrow account, and I'll let you roam freely; but I'll probably fix the collateral needed based on the risk and cost involved.
So in an EVE-type environment, anyone can borrow the basic spaceship and fly in zone 1 (gated in based on identity) on certain routes (authorized pathways). If they want to fly special spaceships, they have to demonstrate their proficiency, and if they want to take those out of zone 1 they'll have to put up collateral.
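That layered scheme could be sketched as follows (all names, zones, and costs are invented for illustration):

```python
# Collateral required to operate beyond zone 1, keyed by zone.
RISK_COST = {2: 100, 3: 500}

def may_fly(pilot: dict, ship: str, zone: int) -> bool:
    if not pilot.get("registered"):                 # identity gate into zone 1
        return False
    if ship != "basic" and ship not in pilot.get("proficiencies", set()):
        return False                                # special ships need proficiency
    if zone > 1 and pilot.get("collateral", 0) < RISK_COST[zone]:
        return False                                # leaving zone 1 needs collateral
    return True

joe = {"registered": True, "proficiencies": {"interceptor"}, "collateral": 100}
assert may_fly(joe, "basic", 1)
assert may_fly(joe, "interceptor", 2)
assert not may_fly(joe, "interceptor", 3)   # not enough collateral for zone 3
```

Each check is a different tool: one identity gate at the door, then authorization everywhere past it.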
There are probably more methods... like a GPS tracker, black box recorder, autopilot override, a nagging computer voice, etc. All of these RL-based methods can be applied in VWs.
Frank
Posted by: magicback (Frank) | Jul 18, 2006 at 22:02
magicback>
Another tool in the belt is collateral. Put some real money in an escrow account, and I'll roam freely; but I'll probably fix the collateral needed based on the risk and cost involved.
Hmmm. I suppose one advantage of attaching economic constraints to authorizations is that even if you turn out to be a real bad apple, you can only go so far before grinding to a halt.
Posted by: nate combs | Jul 19, 2006 at 21:35
I had a similar idea to the Microsoft "Identity Metasystem" a few years ago while working for Verizon. I called it the "virtual_id": a play on words, as it's "ID" as in "identification," and "id," as in "not ego or superego." Har-de-har-har. I do crack me up.
The problem, at the time, seemed to me that there were (and are) legitimate uses and reasons for anonymity and "measured identification" (levels of identifiability) online. For example, the bank from whom I get my mortgage and insurance needs to know all kinds of creepy, personal, private things about me. I do my banking with that bank online. Does that bother me? Not in the least. I trust their security, and my own personal security measures (that may be foolish, but, if so, it's my own foolishness, not a "system" issue). Let's call my avatar/self with my bank the "top of the PRIVACY food chain," in that MyBank knows bloody well everything about me from a "that would be scary if the bad guys got ahold of it" standpoint. On the other hand, they know almost nothing about me PERSONALLY. They don't care about my gaming, my poetry, my sexual habits, my taste in music, etc. So they're at the bottom of the PERSONAL food chain. MyBank cares about whether or not I make payments or move out of my house, not about whether or not I ninja loot noobs and cross-dress.
At the bottom of the "privacy" food chain are people that I meet randomly in games, chat rooms, blogs, etc. I know nothing about them from a financial, geographic, educational, criminal, etc., standpoint. All I have are words on a screen. These may establish some PERSONAL ties, which are wonderful, give us joy, fun, seasons-in-the-sun, etc. And can lead to the sharing of PRIVATE info, when appropriate. BlkGothGRRRRL27 cares about whether or not I can help her improve her dark poetry, not about my credit card payment history.
Probably.
Which is why a link between PRIVATE and PERSONAL information can be helpful, even if it is only a binary one; a link that says, "Yes, this is a real, registered person."
Which was the basic idea behind "virtual_id." Some trusted institution -- and my thought was that, regardless of where you fall on the political spectrum, big banks and banking-type gloms like Visa or American Express are pretty "trusted" -- would do a background check at the level of at least what is done to provide a store credit card. Something that says, "You am who you say you am." If you wanted to make it harder, and have a bronze, silver and gold... Fine. Provide a physical address, phone, SSN for a gold. Whatever.
But the bank/institution keeps the "root info" and issues you a user/avatar name and/or link that basically says, "This Internet 'person' is a real person; we know who they are, and in the event of illegal behavior, you can track them back to here. We will comply with duly issued subpoenas, but nothing less."
You are "anonymous" in a game, chat room, etc., the same way you are in a bar or crowd. Nobody knows your real name, or what you do for a living, etc. But you are also "real" a bit more the same way. If I come up to you in real life and pick your pocket or assault you or (if you're a kid) make a sexual suggestion, you can yell for a cop or put me in a headlock or ask the manager to give me the bum's rush. Not so much in many online establishments... And many owners are forced to choose between "all or nothing" when it comes to giving their customers privacy.
No information need be shared beyond, as I said, the "Oh, yeah... This guy has a virtual_id" level. It basically says I'm willing to "show you mine" at some point. I'm not seeking to do harm. I may want to stay anonymous... but only to protect my private info that is inappropriate to the situation, not to do damage to yours.
I'm still not sure about this. My friends in the programming ends of things, at the time, told me it would be too easy to fake. To which I said, "OK. Fix it." The breakability of an idea doesn't make it a bad idea, just impractical ; )
Once something like this was established, you could certainly have levels of "connectedness" to your virtual_id that responded to accomplishments, scores in games, if producers/players so chose. Because, again, the level of personal/private info shared beyond the "root binary" would be totally up to the user.
Posted by: Andy Havens | Jul 23, 2006 at 09:04