In her NY Times essay, "When E-Mail Points the Way Down the Rabbit Hole," Sherry Turkle asked whether the (spam) filters we choose and construct could become our alter-egos. Her point may seem exotic for spam filters, but is it any less so for virtual world avatars...
Professor Turkle's point was:
(While w)e might tell our filters, for example, that we don't like certain kinds of e-mail, ...the filters could eventually start to challenge us by observing that what we say is inconsistent with what we do - that sometimes we actually look at the messages or come-ons that we profess so strongly to condemn. In such a future world, the supple intelligence of the super filter could become a kind of alter ego, knowing us better, perhaps, than we know ourselves.
Moving to virtual worlds...
Could avatars one day soon become intelligent filters of our virtual worlds? Could they then, shortly after that, become our synthetic alter-egos, knowing us better than we know ourselves? Should this come to pass, it may be a good thing - we may need all the help we can get to reinforce and manage our own identity in these worlds as they become larger and more complex: guild stuff, personal stuff, trade stuff, team stuff... Might they help us express ourselves effectively in these worlds?
Alter-ego avatars may also enable a richer virtual world expression of ourselves. What if we could customize our avatar's behavior to exhibit our nuanced tastes and choices: a sort of behavioral graffiti. That's a start. What if along the way they became our synthetic conscience? Lawful good, behave it! Chaotic evil, do it! Or maybe it's just griefing they care about - deep genetic codes imprinted by developers. Imagine a world where avatars become the AI-mediated expression of ourselves within our new social context. If Code is Law, why not bond some of it, especially the social bits, more directly with ourselves... our avatars.
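As a toy sketch of what Turkle's "super filter" might do once grafted onto an avatar - purely hypothetical, every name below is invented and this is nobody's shipping product - the bookkeeping is simple: remember what we claim to dislike, notice what we actually engage with, and challenge us on the gap.

```python
from collections import Counter

class AlterEgoFilter:
    """Toy alter-ego filter: compares declared dislikes with observed behavior."""

    def __init__(self):
        self.declared_dislikes = set()   # topics we tell the filter to block
        self.engagements = Counter()     # topics we actually open anyway

    def declare_dislike(self, topic):
        self.declared_dislikes.add(topic)

    def observe_open(self, topic):
        """Record that we opened a message about `topic` despite our stated preference."""
        self.engagements[topic] += 1

    def challenges(self, threshold=3):
        """Topics where what we say and what we do have come apart."""
        return [t for t in self.declared_dislikes
                if self.engagements[t] >= threshold]

# The filter as alter ego, knowing us better (perhaps) than we admit.
me = AlterEgoFilter()
me.declare_dislike("get-rich-quick schemes")
for _ in range(4):
    me.observe_open("get-rich-quick schemes")
print(me.challenges())   # ['get-rich-quick schemes']
```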
Then, what about the spaces between ourselves and the avatar, and between the avatar and its synthetic world?
Isaac Asimov, through his robot fiction (e.g. the iconic "I, Robot"), gave us the Laws of Robotics. These were the fictional principles shaping the relationship of robot to man. After years of storytelling, the revised Laws of Robotics (1985) stood as:
Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
(from Roger Clarke's excellent documentation, "Asimov's Laws of Robotics").
Are you ready for a partnership with your avatar? Under what laws?
This is reminiscent of the concept of "agents" as applied to software. http://en.wikipedia.org/wiki/Multi-Agent_System
http://en.wikipedia.org/wiki/Software_agent
A thought-provoking scenario...
When an AI agent stands between us and the information we consume or the experiences we participate in (for example, by leading us to a particular movie or setting us up with a particular date), and that agent were (today) subverted or (tomorrow) endowed with an understanding of our responses to different pieces of information and of our overall psychology, could we become puppets manipulated by the agent?
Anything that tailors and reshapes information (example: radio, TV, Google) has a vast potential for abuse.
Posted by: Andres Ferraro | Sep 03, 2004 at 02:29
I sure hope agents and filtering will be at least as accurate as TiVo (http://dollar.biz.uiowa.edu/~street/6k275s04/tivo.html) ;-).
Posted by: Cory Ondrejka | Sep 03, 2004 at 08:31
Ha! This post really gets the imagination rolling. I almost picture our avatars being our virtual defenders (or attackers, if you're that kind of person) while we aren't online...protecting (or spamming) inboxes, fighting (or inflicting) spyware and viruses, etc. It's almost like thinking of anti-virus programs as 3D representations of yourself, fending off incoming garbage while you go about your business. It would be interesting to have a customizable avatar to protect your machine and your online identity, which you'd pre-program with how you'd like it to act. I'm thinking the ROMs from Neuromancer would be along these lines.
-B
Posted by: Bart | Sep 03, 2004 at 14:58
Some thoughts –
In my wilder writings I do posit the idea that digital identity should be seen as an extension of identity and thus should enjoy protections derived from rights of the person rather than rights of property.
There are a number of challenges to this notion. One is the distance between digital identity and, let's say, 'self-identified identity' – the person that we tell ourselves we are (I'm leaning towards narrative / ludic theories of self here, of course).
To me this distance simply means that we have to be more fine grained about the way we conceptualise and formalise the relationship with this digital self.
I’m sorry to take Second Life as an example again, but it does give one freedoms that most other spaces don’t. In SL I certainly have an ongoing relationship with my SL identity, and this identity, or at least its projection into the world, is acting as a social filter – just as visual markers of identity do in the physical world, though possibly more so because of the lack of other clues within SL.
What I mean is that one’s avatar has so many tribal associations that in a crowd people are going to filter whether they talk to you or not. I guess the Furries are a great example of this – any two Furries in SL know each other instantly by the fact that they will probably have a fox or cat or dog avatar – they are also likely to be displaying group membership and to have a furry name, e.g. ‘Foxy’.
So sure, this is just the same as being identified by the clothes that one is wearing, but I guess the next logical step is for the system to start enabling avatars to pick each other out in the crowd, or automatically start to interact with each other by virtue of these properties, e.g. if there is another one in the same room and we have more than two groups in common, the system might automatically swap calling cards – or identify their text differently so they stand out more. Of course, if you’re a Furry in secret and the system starts to highlight you to everyone else in the room, then things start to get interesting.
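Just to make that rule concrete – a back-of-the-envelope sketch, with made-up names and nothing resembling Second Life’s actual machinery – the matching could be as simple as counting shared groups:

```python
def maybe_swap_calling_cards(avatar_a, avatar_b, min_shared_groups=3):
    """If two co-present avatars share more than two groups, report a card swap.

    avatar_a and avatar_b are plain dicts like {'name': ..., 'groups': set(...)};
    this is an illustrative sketch, not any real virtual-world API.
    """
    shared = avatar_a["groups"] & avatar_b["groups"]
    if len(shared) >= min_shared_groups:
        return (avatar_a["name"], avatar_b["name"], sorted(shared))
    return None  # not enough in common; no automatic introduction

# Two hypothetical Furries in the same room.
foxy = {"name": "Foxy", "groups": {"Furries", "Builders", "Tail Designers"}}
rex = {"name": "Rex", "groups": {"Furries", "Builders", "Tail Designers", "DJs"}}
print(maybe_swap_calling_cards(foxy, rex))
# ('Foxy', 'Rex', ['Builders', 'Furries', 'Tail Designers'])
```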
But ultimately it’s not really avatars that interest me. It’s a node of characteristic data, held in a structure that allows it to interact with other data, that I see as real digital identity – bank records, loyalty card records, this sort of thing, but let loose onto a wider scale. Here things suddenly seem to get problematic: with an avatar I have a large degree of control; with these data nodes and the data shadows that they cast, control falls away and ownership, other people’s ownership, starts to kick in. Thus I like the avatar model in the sense that I have more control over the digital me – just so long as that control is real and legally protected.
Posted by: ren | Sep 03, 2004 at 16:13
Ren>In my wilder writings I do posit the idea that digital identity should be seen as an extension of identity and thus should enjoy protections derived from rights of the person rather than rights of property.
But much of the power in extending identity comes from the fact that rights of the person do not extend into the virtual world. When you play a virtual world, you can "be who you want to be". If you can't do that (because over-imposition of identity rights means that you're always grounded to be who you really are) then it hacks away at a key reason people find virtual worlds attractive.
There may be some partial extensions of identity that can be protected, but I don't think we yet know enough about it to barge in willy-nilly and impose them. Different virtual worlds are more or less amenable to identity protection, and it's always possible to conceive of a virtual world where a digital identity should be kept very separate from real-world identity as it's "part of the game". A virtual world based on The Prisoner might be like that, for example.
Painting all virtual worlds with the same broad sweeps of law without allowing for different fidelities is, I suspect, going to do more harm than good.
Richard
Posted by: Richard Bartle | Sep 04, 2004 at 08:02
Richard > Painting all virtual worlds with the same broad sweeps of law without allowing for different fidelities is, I suspect, going to do more harm than good.
Yes, that’s what I was getting at when I said “To me this distance simply means that we have to be more fine grained about the way we conceptualise and formalise the relationship with this digital self”. I’m not sure that the VWs that we have now fall into the category that I worry about, though I’ve not yet set out the full set of characteristics that I think do tip things over into that category. For the moment I don’t think that the interaction between the data set that might be considered to represent some projection of personhood / persona and other things in the world is quite strong enough.
Moreover I agree that any formalisation would need to take into account role play in all its manifestations, and it’s not clear to me that things could be framed or interpreted well enough yet. It’s probably a generational thing; we need a set of High Court judges who have grown up digital and for whom multiple digital expressions of identity are not a strange thing but a part of their everyday life.
Posted by: ren | Sep 05, 2004 at 08:45
ren>we need a set of High Court judges who have grown up digital and for whom multiple digital expressions of identity are not a strange thing but a part of their everyday life.
Let's hope that current law-makers and law-interpreters don't mess things up so that we never get to this stage!
Richard
Posted by: Richard Bartle | Sep 06, 2004 at 03:01
Maybe there's an aftermarket for PDA software that lets your avatar "carry on" without you, alerting you via your Blackberry when important events occur or decisions have to be made (reeks a bit of Tamagotchis, but still, for those who can't leave their online world behind...)
Posted by: Orland Outland | Sep 07, 2004 at 21:18