This past weekend there was a road race in the deserts of Southern California. It featured 15 robotic vehicles careening for a million-dollar prize: the DARPA Grand Challenge. Wired magazine extensively covered the year-long build-up; their last article summarized last weekend's result. For all that effort, however, the best that human ingenuity could muster was a pair of autonomous vehicles that managed a bit over seven miles of a 142-mile course. Yet it was considered a success, and there is a hint of a 2006 repeat.
On the one hand, this event could be seen by Virtual World denizens as an orcish irrelevance: this industry, these machines, their wheels, that torn sod, and those hell-bent contraptions impaled one by one by barbed wire, confusion, and fence posts in a dusty place outside of Barstow, California... what does this stuff matter to the beauty of Elves and Sim parties? But on the other hand, in the spirit of Philip K. Dick (sci-fi, "Do Androids Dream of Electric Sheep?"), are these machines and their descendants to be the future conundrum of Virtual Worlds (VWs)? Do these robots, with their virtualized minds of sensor and effector interfaces and web of intelligent instincts, suggest a future taboo and threshold VWs must cross? Is there a massive convergence waiting in the wings between the real and the virtual?
Sherry Turkle is quoted at the Evocative Objects seminar as claiming:
The line between real and simulation continues to blur, too. "Authenticity is to our generation what sex was to the Victorians," she says... there is a sense of taboo and a fascination with the fake versus the genuine article. Like a virtual personality in a computer game like EverQuest.
And the line separating us from machines has moved. Emotions alone no longer do the job.
Are we really that negotiable? Is there a moveable line separating us and the fake? In an earlier thread, Ed Castronova speculated on one way we might cross over and embrace fakery: we could stoop to some linguistic and behavioral pidgin that panders to an Artificial Intelligence of our Virtual Worlds:
...the Turing test will first be passed in a VW. Not because AI has risen to human level, but because human language will have adapted to AI scripts... The humans have innate sensitivity to social norms and will try to conform to the language patterns of the autonomous agents... The nature of the gameworld encourages humans to adapt whatever AI patterns are beneficial.
But consider another possibility. Might we wake up one day, nonplussed, to share a chat over a cup of coffee with our AIBO about the gossip we missed last night in some MMOG? Gasp, "You don't say?" Our AIBO would be an ambassador of some virtual existence... a real foothold, a liaison whose shared virtual experience panders to us. We may be ripe for this sort of thing. After all, quoting Sherry Turkle again:
"We are very vulnerable to technology that knows how to push our buttons in a human way," she says. "We're a cheap date. Something makes eye contact with us and we cave hard. We'd better accept that about ourselves."
One view is pessimistic: once we start the dance (it's already started) and all the lines move, who knows where we will end up? Another view is optimistic: are we so well grounded in ourselves that we, like the pigeon, just know where magnetic north is? We can bid our cars off to races in some desert somewhere in the morning, without us. And when they come home wanting to talk about it, we offer 'em a cup of coffee. And then we go off and play NASCAR Thunder on the computer before bed.
Are we that balanced? Do we want to be that level-headed?
We, software consumers, are cheap dates.
We buy buggy software. Period.
I wouldn't be surprised if DARPA awarded the prize to the best finisher in spite of the fact that they didn't reach the finish line.
Frank
Posted by: magicback | Mar 17, 2004 at 23:42
GrandTextAuto has an earlier discussion on the Boston Globe article, "Turkle and Emotional Agents" (thanks Ian Wilson!):
http://steel.lcc.gatech.edu/cgi-bin/mt/mt-comments.cgi?entry_id=262
Posted by: Nathan Combs | Mar 18, 2004 at 07:48
We can bid our cars off to races in some desert somewhere in the morning, without us. And when they come home wanting to talk about it, we offer 'em a cup of coffee. And then we go off and play NASCAR Thunder on the computer before bed.
What an optimistic view. As much as I like the idea that humans will develop best-friend relationships with AI buddies, I think the more likely scenario is that we'll use them as maids while we sit in our virtual hot tubs.
I could be wrong. Recently on AOL UK News there was an article that claimed:
A study conducted by poll experts MORI showed that 44 per cent of youngsters regard their PC as a "trusted friend" - and pine when they have to switch off. It also found that 37 per cent of children and 34 per cent of adults think that by 2020 computers will be as important as family and friends. The poll also found that 11 per cent of children and 13 per cent of adults regularly talk to their computers.
What's really interesting about this statement is that it implies the Turing Test may be irrelevant. After all, this poll did not ask how people felt about NPCs or robot pets. It asked them how they felt about their PCs. So perhaps it is possible to understand that the PC is a machine...and still regard it as a trusted friend. This makes sense for kids. They are experts at anthropomorphizing inanimate objects. But for adults? Maybe Turkle's right.
Posted by: Betsy Book | Mar 18, 2004 at 07:50
We actually kicked some of this around before in the Robot Love thread, and in previous AI discussions.
One thing I've brought up before is the work being done by Peter Kahn and Batya Friedman, which goes directly to the question of whether we'll be drinking coffee with our AIBOs in the future -- and what we should think about it. You can find most of it through links here:
http://www.ischool.washington.edu/robotpets/preschool/
And here are some specific reports:
http://www.idemployee.id.tue.nl/c.bartneck/chi2004/Kahn.pdf
http://www.ischool.washington.edu/robotpets/Articles/CHI2002_Pal.pdf
A good crash course of readings for the philosophical implications is here:
http://ls.poly.edu/~jbain/mind/mindsyll.htm
I think the gist of the reports and the popular (and correct) stance is that while the majority of children understand that AIBOs are not real dogs, the increasing number of people who claim to have emotional "relationships" with AI "friends" is somewhat disturbing. There's evidence of this in the GTA thread...
But like I've said before, some kids (and adults) have deep emotional attachments to particular teddy bears.
Posted by: gregolas | Mar 18, 2004 at 09:53
I just realized that two of the final reading assignments on Jonathan Bain's syllabus are from Michael Mateas of GTA.
No doubt they are the ones most worth reading for the VW crowd. :-)
Posted by: gregolas | Mar 18, 2004 at 10:20
The Turing Test, as originally put forth by Turing, is indeed irrelevant except to the dwindling devotees of mid-century behaviorism (though Katherine Hayles' gender reading of it is fascinating).
But in colloquial use, when someone mentions "the Turing Test," what they often seem to be referring to is something rather different: namely, that a virtual agent will become a good enough receptor of our own behavior -- in whatever context: MMOG, robot pet, etc. -- that we *choose* to treat it as human(like). This seems not so much like ascribing intentional cognition or even complex emotions to the agent as something more like a modern magic mirror: affirming our own behavior (this can even be done antagonistically), giving us someone to talk to, or just "making eye contact".
The way people talk about AI makes me think that most people aren't really interested in a truly sovereign, intelligent, autonomous agent -- not that they are against it, but the personal interest lies in something like a conversant AIBO. Something we can fool ourselves into treating as person-like, but without all the complications of free will. We don't want our AIBOs to choose another owner. In the case of Edward's VW Turing Test, we aren't rigorously examining the other avatar's text chat for signs of a computer masquerade -- we just want someone to play with. You don't have to convince me that you are human to join my EQ party; you just have to avoid making it obvious that you aren't.
And not to pick on Nathan, but this seems representative:
"We can bid our cars off to races in some desert somewhere in the morning, without us. And when they come home wanting to talk about it, we offer 'em a cup of coffee."
It's implied that the cars choose to go off racing, and explicit that they want to talk about it... yet somehow it is a given that they come home at the end of the day. This site isn't primarily concerned with AI, but it's certainly worth examining what we really want from artificial others.
So to answer "Are we really that negotiable?", I'd say yes. But we aren't negotiating between binary states of real/fake. Humans are both more varied and more versatile than the past few thousand years of history have allowed expression for, and if someone would rather sip coffee with their AIBO than with a human, where's the loss? The only "disturbing" thing about that is its deviation from the current norm -- and calling it disturbing pretends that human relationships are either normal or perverse, rather than distributed along multiple axes of some rather flattened bell curves. Victorians would have been aghast at our public acceptance of homosexuality, and ancient Greeks might be bemused at our prudishness about the topic. Middle-aged men marrying 12-year-old girls is frowned upon today, but was par for the course in the 19th century. People speaking to their car for sympathy after a bad day at work was a sign of insanity 50 years ago, and will be a marker of status and wealth not far from now...
Posted by: Euphrosyne | Mar 18, 2004 at 13:28
An interesting quote:
"We are very vulnerable to technology that knows how to push our buttons in a human way," she says. "We're a cheap date. Something makes eye contact with us and we cave hard. We'd better accept that about ourselves"
Never underestimate the ability of human cynicism to rise to the challenge. By definition, when eye contact begins to be overused to trick us into caving, WE WILL NO LONGER FALL FOR IT.
Well, maybe us old fogeys will be bilked out of our life savings in 2020 when some "kind-hearted" computer cons us. Our grandkids will shake their heads in wonder at how we could be so easily suckered, however.
The problem is that each generation tries to model human nature off its favorite toy of the time. That is currently the computer. If you learn what buttons to press on a computer, you can keep pressing them billions of times and get the same results. This is a very poor model of humans.
With a computer, you can have the smallest hole in security and thereby gain complete access. I am unconvinced the same is true with humans. You can very easily trick me into believing one false fact. You can then build a logical structure around it that demands I do something. Even if I am unable to refute any step of the process, I may still refute your conclusion. We are, thankfully, beholden to neither emotion nor reason.
Saying we are susceptible to technology is no different than saying we are susceptible to con artists. Saying we "cave hard" at eye contact is just laughable. Any city-slicker has no problem walking by the homeless of their city, easily overcoming any eye contact. Likewise, telemarketers are fast teaching the world how to hang up on conversations without obeying the natural niceties required. "thankyoubutimnotinterestedbye" is usually the extent of the conversation they get from me. I don't wait for a break in the conversation, or an acknowledgment, etc. If I can be conditioned to be so rude and disrespectful to something I *know* is a fellow human being, why do you think we'll have any trouble adapting to the emotional-button-hitting robots?
- Brask Mumei
Posted by: Brask Mumei | Mar 18, 2004 at 14:32
Euphrosyne wrote:
This site isn't primarily concerned with AI, but it's certainly worth examining what we really want from artificial others.
Possibly one way of looking at the "AI" in these cases is as a "contract" (with the entity) that the person (player) enters into as part of the interactive experience. Per some of the earlier points, that contract is flexible over time in the sense that we may adjust to it, and similarly it may adapt to us (tuned, improved). I wonder whether anyone has seriously looked at whether there is some abstract model/theory that transcends both real robotic and VW entities in this space. If such a model/theory is not possible, I would speculate that there are some big implications: do VWs imply a "mode-shift" in how we perceive the world, etc.? And alternatively, if both spaces are more or less interchangeable, why not an AIBO as an interface to your EQ avatar -- "hey aibo-joe, see if you can find me a good party to join up around Crushbone, get back to me..."
Brask Mumei wrote:
With a computer, you can have the smallest hole in security, and thereby gain complete access. I am unconvinced the same is true with humans. You can very easily trick me into believing one false fact. You can then build a logical structure around it that demands I do something. Even if I am unable to refute any step of the process, I may still refute your conclusion. We are, thankfully, neither beholden to emotion nor reason.
I wonder if it's less trickery than it is a desire to believe -- and that may have a whole set of different dynamics involved. Been there, done that :)
Posted by: Nathan Combs | Mar 19, 2004 at 07:51
Nate> Possibly one way of looking at the "AI" in these cases is as a "contract" (with the entity) that the person (player) enters into...
Yes, but we're free to breach "contracts" we make with bricks, lampshades, and AIBOs.
Nate> ...why not an AIBO as an interface...
One of MS Clippy's incarnations is a dog, y'know. I'm a cat person. I usually use the cat -- well, I never "use" it -- it just falls asleep in the corner of my screen. It can't talk or drink coffee. (Yet.)
Euph> ...if someone would rather sip coffe with their AIBO than with a human, where's the loss?
So you're asking: if people prefer to have relationships with simulacra rather than other people, why should we find that disturbing? I think I'm gaining a new-found respect for Baudrillard...
Posted by: gregolas | Mar 19, 2004 at 11:58
Nathan> "alternatively, if both spaces are more or less interchangeable, why not an AIBOO as an interface to your EQ avatar"
Having a smart agent (AIBO or whatever) be able to control a VW avatar would be handy. But what if they were actually the same entity? Right now we are still in a discrete, embodied mode with our technology, but with the rise of wireless data communications, by the time we create a robust and capable general-purpose artificial agent, there will be no reason to bind it into a single object. Your AIBO, car, and refrigerator will all be communicative in various ways, but there's no reason they need to be separate. By extension, your avatar can be productive in your absence, and IM or call you on your cell when something unusual happens (VWs should be really interesting when the avatars, and not just the worlds, are persistent).
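As a crude sketch of the plumbing (every name here -- WorldEvent, PersistentAvatar, the channel functions -- is hypothetical, not drawn from any real VW or messaging API), the agent logic can live apart from any one embodiment and simply pick an escalation channel per event:

    # Toy sketch of a persistent avatar agent -- hypothetical names only.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class WorldEvent:
        kind: str      # e.g. "guild_invite", "rare_spawn", "trade_offer"
        detail: str
        urgency: int   # 0 = journal it, 1 = IM the owner, 2 = call the cell

    class PersistentAvatar:
        """Keeps acting in-world while the owner is away, escalating
        unusual events to whichever real-world channel fits."""

        def __init__(self, owner: str, channels: List[Callable[[str, str], None]]):
            self.owner = owner
            self.channels = channels  # index 0 = journal, 1 = IM, 2 = phone

        def on_event(self, event: WorldEvent) -> None:
            # The same logic could sit behind an AIBO, a car, or an avatar;
            # only the notification channel differs.
            channel = self.channels[min(event.urgency, len(self.channels) - 1)]
            channel(self.owner, f"{event.kind}: {event.detail}")

    def journal(owner: str, msg: str) -> None:
        print(f"[journal for {owner}] {msg}")

    def im(owner: str, msg: str) -> None:
        print(f"[IM to {owner}] {msg}")

    def call(owner: str, msg: str) -> None:
        print(f"[calling {owner}'s cell] {msg}")

    avatar = PersistentAvatar("euphrosyne", [journal, im, call])
    avatar.on_event(WorldEvent("rare_spawn", "dragon sighted near Crushbone", 1))

The point being that the agent, not the object, owns the behavior; the AIBO, the car, and the avatar are just channels.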
But back to Turkle: the real/simulation dichotomy doesn't begin to handle this sort of evolution. The "human" qualities of speech, emotion, etc., make for effective communication, but we may regard our future agents as something akin to guardian angels or djinni rather than puppies. Of course, they can be both if we wish.
Greg> "So you're asking if people prefer to have relationships with simulacrae rather than other people, why should we find that disturbing?"
Yes, that is pretty much what I'm asking. I'm not suggesting a full substitution, but what is so disturbing about someone having breakfast with (a more advanced, near-future model) AIBO if their husband has already left for work? If an artifact can evoke a full range of emotions from its owner, how is that more distasteful than TV or literature? This isn't a zero-sum game. I'm not going to debate the broad philosophical issues here for lack of time, but just addressing the narrow issue of whether we should feel revulsion at someone who gets genuine enjoyment from their artificial pet, I'd say the evidence is far from conclusive. And of course as an endgame, if and when simulacra are indistinguishable from the "real" thing, making a distinction is irrelevant. We premise our moral distaste for simulations not on the quality itself, but on our ability to identify it as such, and much ink has been spilled obfuscating this fact.
Posted by: Euphrosyne | Mar 19, 2004 at 13:20
Euph> ...if their husband has already left for work?
Why do you add that qualifier? I assume if the husband has *not* left for work and is presently imploring, "Please stop talking to the AIBO and talk to me!" that would constitute a slightly different situation? ;-)
Posted by: gregolas | Mar 19, 2004 at 13:30
How will interaction with lifelike simulacra affect our lives? Will we have a cybernetic chip in our brain first, before machines can talk to us in our language?
What about relationships with imaginary friends? Should we find this disturbing?
Take this scenario: I'm walking down the street with a friend and suddenly I stop. My eyes move rapidly. My friend asks, "What's going on?" "Talking with my experimental brain-chip," says I.
Is a physical simulacrum of life important in building a relationship, or is a smarter MS Clippy sufficient to elicit the same response?
Frank
Posted by: Magicback | Mar 19, 2004 at 13:34
Frank> What about relationships with imaginary friends? Should we find this disturbing?
Well, beyond a certain stage of childhood, we usually call it a psychosis.
Diverting one's attention from other members of society in response to stimuli isn't something new. It happens with primates. So, e.g., many husbands and wives will today say "Please stop watching the television and talk to me!"
But the AIBO, unlike the television, might respond "Excuse me, but will you please allow me to finish my sentence?"
Posted by: gregolas | Mar 19, 2004 at 13:53
Since it seems on-topic, Nicolas Nova just pointed to this by Mike Kuniavsky:
http://www.adaptivepath.com/publications/essays/archives/000272.php
Posted by: gregolas | Mar 19, 2004 at 15:47
Interesting,
So I assume that your AIBO is different from a TV in that it can respond in kind and that the interaction is received by all members of the party involved.
So let's modify my scenario: both husband and wife have brain-chips. These implanted future-AIBOs-on-a-chip pass all tests of consciousness and can communicate as effectively as any other person. Interaction with this brain-chip version of the future-AIBO is the same as with the physical AIBO, except for the physical aspects.
These brain-chips, therefore, are sentient "invisible" agents that do our or their own bidding in interacting with other agents and principals.
Will we then move closer to the imaginary/invisible friend issue?
I am currently in the pessimistic camp, but am optimistic that clear lines will be drawn to force clear distinctions. This leads to something like Asimov's laws of robotics and musings like I, Robot.
"No, you can't have an AI Chucky."
"And, no you can't have an AI Buzz Lightyear"
"Aww, but I see them in Chucky 13 and Toy Story 12. I want AI Chucky with the long knife!"
Dementally yours,
Frank
Posted by: Magicback | Mar 19, 2004 at 16:07
gregolas wrote:
So you're asking: if people prefer to have relationships with simulacra rather than other people, why should we find that disturbing? I think I'm gaining a new-found respect for Baudrillard...
Well, Baudrillard seemed to find the concept of privileging simulacra over the real deeply disturbing, even as he called the trend to our collective attention. I always thought it ironic that Simulacra and Simulation, one of the most valued of the digerati's ur-texts, is actually a rather condescending critique of this phenomenon. I wonder how he feels about unwittingly becoming the poster boy for all of us who are infatuated with the concept of simulacra (myself included).
But anyway....
Euphrosyne wrote:
Right now we are still in a discrete, embodied mode with our technology, but with the rise of wireless data communications, by the time we create a robust and capable general-purpose artificial agent, there will be no reason to bind it into a single object. Your AIBO, car, and refrigerator will all be communicative in various ways, but there's no reason they need to be separate.
Now you're talking! I want a pervasive digital connection in my metaverse. BTW, you wouldn't happen to be associated with MIT, would you? Every time I read about people talking about putting AI in refrigerators or toasters, it inevitably winds up being someone who spent time at the Media Lab.
Posted by: Betsy Book | Mar 19, 2004 at 17:53
Greg> Why do you add that qualifier? I assume if the husband has *not* left for work and is presently imploring, "Please stop talking to the AIBO and talk to me!" that would constitute a slightly different situation? ;-)
Certainly it's a different situation for the husband, but he would be similarly frustrated if his wife were talking to her old high school boyfriend instead of him...But I don't see how it makes a difference to society as a whole, so long as she continues more or less in whatever social role she (would have) filled sans AIBO. Or sans boyfriend.
When this subject is brought up, some people project themselves into the role of the ignored husband, and others take an observer's view. The former tend to strongly disapprove of treating artifacts the same way we treat people (i.e., them). But this seems over-reactive to me. We don't yet have hordes of borderline-autistic schoolchildren who only speak to their AIBO, nor is that a likely future. Decades ago, people had cataclysmic visions of robots replacing human workers. What actually happened was that, yes, automation has replaced a great many manufacturing jobs -- but new, generally better, jobs were created at the same time, and the process was gradual and stable overall. The brave new world is nothing like what was feared, even though the mechanism was correctly foreseen.
So the conventional debate seems like a straw man: it's not a choice between treating an AIBO as human or as a rock. It's possible to treat our future artifacts in a far more human fashion than anything non-human we've ever had before. That doesn't imply the downfall of society. Rather, an expansion.
Betsy, I'm not at the Media Lab--though I will say they appear to have had a positive effect on the AI department at MIT. Actually, I almost deleted the refrigerator example because you're right, it seems to have been theoretically done to death. :) But such is the fate of the king of appliances.
Frank, my 2 cents: I don't think that physical embodiment makes any difference to the ethical questions. But I do notice that in your example situation, the "strangeness" (from which others might perceive a threat) comes, as in the wife-AIBO case, from another human who might feel that his social superiority is at stake. Just pointing out a theme I see...
Posted by: Euphrosyne | Mar 19, 2004 at 21:57
Euphrosyne,
I also don't think that physical embodiment will make a difference on the ethical front. There will be adjustment periods, but I do agree that this will not cause the downfall of mankind.
However, it will be interesting to see how the adjustments play out. Literature and film offer some interesting POVs: A.I., Blade Runner, and Bicentennial Man, to name a few.
I do foresee a plateau of consciousness population density. We'll start talking like machines so that we can get more productivity out of our communication efforts (assuming that machines do talk in a more efficient manner). Ted is on the right track on this front.
Soon, we'll have a brain-chip so that we can talk TCP/IP or Win OS :)
Frank
Posted by: Magicback | Mar 20, 2004 at 00:55
As a coda to this discussion of AI in VWs, Mythic just announced that they're going to use the EMotion FX 2 engine for their "future characters".
This is all very well, but it does imply that their future characters will have something to seem emotional about.
Richard
Posted by: Richard Bartle | Mar 25, 2004 at 03:34
"This is all very well, but it does imply that their future characters will have something to seem emotional about."
Judging by the picture that they place beside that link, the only emotion they really need to code is lust.
- Brask Mumei
Posted by: Brask Mumei | Mar 25, 2004 at 10:38
Why are we considering technology as the Other here? I consider my laptop another organ of my own body, an auxiliary brain. Doesn't that make sense in a distributed cognition model? If my laptop were stolen, I would feel inclined to try to get the perp charged with mental damages on top of theft -- until I got the laptop back, or painstakingly set up my preferences again on a new machine, I would feel disoriented, frustrated by the ineffectiveness of my reflexes, and depressed at my loss of function. I am having a hard time conceiving of the line where an intelligent agent in a machine would come to feel like an "other"... maybe when it has an avatar? When it takes on more complicated functions than extending my memory and ability to filter information?
While developing a database for classroom field observations the other day, I did notice that the issue that came up most was whether, and to what extent, we would have to adapt our collection model and behaviors to what the software was able to do...
Posted by: gus | Aug 25, 2004 at 14:43
I am having a hard time conceiving of the line where an intelligent agent in a machine would come to feel like an "other"... maybe when it has an avatar?
Consider the (relatively) obvious case: engaging fiction, and an agent that turns adversarial, e.g., HAL. Now move closer... what if the agent were merely influential... and you resented some of its manipulations? ... Then, what if the agent were merely bookish and without any cunning... no tension, no identity, just the facts... sounds like software? And that feels organic, not other...
The case of avatars would appear to introduce another dimension: emotional closeness that may or may not be impacted by uncanny valley considerations, but that is impacted by depth of storytelling (engagement).
Posted by: Nathan Combs | Aug 25, 2004 at 21:55
euphrosyne wrote:
"The Turing Test, as originally put forth by Turing, is indeed irrelevant except to the dwindling devotees of mid-century behaviorism (though Katherine Hayes' gender reading of it is fascinating)."
Can you please direct me to Hayles' writing on the Turing Test (either online or a citation for a journal somewhere)? I'm interested, but haven't been able to turn anything up through Google or Vivisimo. Thanks, and sorry to get off the topic at hand.
Darga
Posted by: darga | Aug 26, 2004 at 17:08
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics.
There is a fascinating discussion on this here.
My lay(person's) take is that a posthuman view would be more sympathetic to the Turing Test than, say, an "enlightened" view of human intelligence: intelligence is an emergent quality that can be superficially appraised.
Posted by: Nathan Combs | Aug 26, 2004 at 22:39
I just remembered to come back and check this thread to see if anyone had posted the info. Thanks so much, Nathan! This looks to be a great read indeed.
Posted by: mike darga | Sep 09, 2004 at 20:03
I do believe that there will come a point in time, somewhere in the first half of this century, when so-called 'artificial' entities gain sentience. By this I mean that they will be alive by any meaningful definition of the word, apart from being biological in nature. It would seem that this is an almost inevitable progression of evolution.
If you can give a technology every appearance of life, it may be considered alive, without regard to the mechanism behind such appearance. Any robot that can think for itself will have outgrown its intended purpose as a 'forced labourer'. Free will is not known to engender productivity. It may become necessary to limit some functionality in order to preserve others.
For the entire history of human technological advancement, a 'quickening' of sorts has been occurring. This is an exponential curve of accelerating advancement which, if it continues much longer, means the entirety of our collective past will be outdone in smaller and smaller timeframes until we hit a brick wall! What this means is that more change will occur in the coming decades than has occurred in the past ten thousand years. We are in for a ride we can scarcely imagine...
Steven Young
http://tryptamind.com
Posted by: Steve Young | Dec 26, 2004 at 00:34