Can you imagine the day when turning off your computer is considered murder?
In the context of our lifetimes, evolution is a very slow process. Mutations occur infrequently, and rarely is evolution noticeable within a small number of generations. In comparison to the lifetime of the solar system, though, evolution can seem quite fast. Consider that more than 99% of all the Earth's life forms are already extinct. Human evolution is only a small fraction of all the evolution that has already occurred here on Earth.
Now consider the speed at which we can control evolution in a simulation environment. We can create a program (virtual organism) and give it some basic intelligence features, then allow it to function in an environment that we control. Then we can spin off as many simulations as we like and measure how the virtual organism performs across the environments we create. The organisms that do well are allowed to procreate. Organisms combine themselves, mutate, and the process begins again. After several generations the organisms begin to adapt to the environment and the other organisms in the environment. Some become highly optimized.
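To make that loop concrete, here is a minimal Python sketch of the evaluate-select-recombine-mutate cycle. The genome encoding, fitness function, population size, and mutation rate are placeholders invented for illustration, not taken from any particular system.

    import random

    GENOME_LEN = 16       # each organism is a vector of behavioral parameters
    POP_SIZE = 50
    MUTATION_RATE = 0.05
    GENERATIONS = 100

    def random_genome():
        return [random.random() for _ in range(GENOME_LEN)]

    def fitness(genome):
        # Placeholder: stands in for "how well the organism performed
        # across the environments we create".
        return sum(genome)

    def crossover(a, b):
        # Organisms combine themselves: each gene comes from one parent or the other.
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(genome):
        # Infrequent random changes.
        return [random.random() if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        # The organisms that do well are allowed to procreate.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: POP_SIZE // 4]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print("best fitness after %d generations: %.2f"
          % (GENERATIONS, max(fitness(g) for g in population)))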
We can use virtual organisms to optimize all kinds of things like traffic systems, manufacturing, medical procedures, or even non-player characters (NPC's) in virtual worlds. Mike Sellers wrote an interesting piece a few weeks back in May about AI for NPC's. I have been thinking about this type of intelligence for a while now, so it struck a nerve. We could evolve NPC's to inhabit and participate in our virtual worlds.
One of the areas I have been studying recently is the effects of emotion on our behavior and our physiology, and how to induce those behaviors in virtual organisms. How do you write a program that can sense fear, or be afraid? (Answer: You don't, you let it learn fear all by itself.)
Let's take a small virtual world and place some of our intelligent NPC's in that world. We give them goals, food, some predators, and lots of stimulus in the environment for them to experiment with. We give each NPC a neural brain with emotion capabilities, the ability to build relationships, and the potential to reproduce, and we let them grow. We also give them the ability to remember, and learn from previous generations. As we let them all live and die over multiple generations, groups of virtual organisms adapt to other groups and to the environment. The ones that become stable and interesting are selected to appear as NPC's in a virtual world that we can interact with.
Once the intelligent NPC's can interact with humans, they can evolve based on what they learn from us, and the decisions they make interacting with us. We, as players or participants, now become a stimulus for the virtual life. We can observe the NPC, help it with its goals, alter its food source or reproduction, start relationships, make NPC's jealous, and start fights, even wars. They would remember you from the last time you visited. They would have real experiences, real relationships, and real intelligence, and if the power happens to go off or memory gets corrupted... they cease to exist.
Consider the near future as we begin to develop virtual life. The life forms we develop will become more and more real, and more alive. Virtual worlds are the Petri dish of this intelligence. There are already evolving organisms growing in SecondLife, on Terminus and Svarga. Semi-intelligent virtual pets are right around the corner. Virtual friends and virtual enemies are not far off either. Who lives and dies, that's another matter.
Not to degrade your post, but I don't think it is relevant to discuss this in a day and age when even making a computer game character get around a fence is a huge challenge (and often a failed one).
The subject is interesting, but artificial intelligence is not even on a bug level today, and unlikely to take quantum leaps any time soon.
So I'd say save the topic for a science fiction novel, or bring it up later, when it has become even remotely likely that your scenario will ever be possible.
Regards
Posted by: Thomas | Jun 01, 2007 at 18:22
Oh, come on. We can absolutely make a character get around a fence. That's totally a solved CS problem. The issues tend to be CPU load when tens of thousands of them do it, and the datasets on which the pathing operates.
There have also been plenty of experiments with artificial life entities that DO approximate bugs, at least as far as an observer can tell. These have happened both with robotics and with virtual entities.
Whether any of these have reached a level where they can be considered as "alive" is an open question, but it's come up before -- for example, the makers of Creatures argued that their AIs were "alive."
So yeah, it's a perfectly valid debate to have right now.
Posted by: Raph | Jun 01, 2007 at 18:31
You have no clue what you're talking about, for 3 simple reasons: you don't know where the life is coming from, you don't know where it's going, and I let you guess the third. As I've already said in another thread, this is intellectual virtuality (the word was "masturbation"); actually you need first to see a real vagina, before having an opinion about virtual sex. Or life, whatever.
Posted by: Amarilla | Jun 01, 2007 at 19:59
" Semi-intelligent virtual pets are right around the corner."
I'm still waiting to see a semi-intelligent real designer.
Posted by: Amarilla | Jun 01, 2007 at 20:01
Wow. AI: an imaginary technology that elicits sharper reaction than Second Life! Heck, I'm all for discussing the POSSIBILITIES.
AI may just be getting to the bug level, but exponential info and tech trends (supported by Moore's Law, Metcalfe's Law, etc.) do indicate that quantum leaps are just around the corner. What with all of the biological and system simulation going on, it's not really that much of a stretch to combine exponentials with the tendency to simulate and arrive at the conclusion that we may at some point in the near future create at the very least limited AI processes that trigger ethical dilemmas. While human-level AI may be a ways off, where do we begin to consider emergent digital processes as alive or sentient? That's what I find interesting about this post: the notion that digital murder will pop up long before higher-level AI, when it becomes clear that digital ecosystems are in fact very life-like. Heck, Prokofy is already arguing for 1-to-1 human-to-avatar rights, a very interesting philosophical position (one that I do not agree with) that will probably proliferate as a square of the complexity of digital ecosystems.
What light does the POSSIBILITY of emergent digital sentiency shed on our own existence? Can God erase us if he/she/it pleases without getting prosecuted by the Deity Simulation Regulation Council? Is Bostrom's argument likely to be true? What IS life? What is love, baby don't hurt me...
Posted by: Vis | Jun 01, 2007 at 20:57
Oh yeah, it may be, according to Murphy's Laws too. Btw, pass me what you're smoking. Doh.
Posted by: Amarilla | Jun 01, 2007 at 21:24
Bob, don't let the naysayers get you down. :) I absolutely believe we're a lot closer to what you're talking about than others may suppose.
This is an extremely interesting topic, one I've been focused on for several years (there's a bit about our "People Engine" tech on our site). We're not taking a genetically evolutionary approach, though a form of memetic evolution, if you will, is necessary for any learning system. I think you may be looking at this area (of artificially evolved organisms) more broadly than we are, which I have to admit is a bit dizzying to me.
How do you write a program that can sense fear, or be afraid? (Answer: You don't, you let it learn fear all by itself.)
To some degree. Some forms of pain- or threat-avoidance appear to be deeply wired, and the state experienced as fear accompanies these neurological responses.
But I agree that emotions are primary, not a side-effect of cognition as they have often been treated in AI. I've written before about our "islands of rationality" hypothesis: that we spend most of our time swimming in the sea of fluid emotionality, making most of our decisions based on emotional content, and only occasionally crawl out onto the dry land of logic and primary cognition -- but we have an internal monologue that is highly adept at looking backward and ascribing 'rational' motivations for our actions. The primacy of emotions and emotional motivations is, I believe, a key point in creating any kind of believable artificial agent or organism.
... We can observe the NPC, help it with its goals, alter its food source or reproduction, start relationships, make NPC's jealous, and start fights, even wars. They would remember you from the last time you visited. They would have real experiences, real relationships, and real intelligence, and if the power happens to go off or memory gets corrupted... they cease to exist.
This is a real issue. Or at least it feels like one -- and that's the problem: does the fact that it feels real make it real? This can quickly lead down an epistemological rabbit hole.
We've had many surprising moments working with our AI-NPCs... such as, early on, when two of our agents (Adam and Eve) ate all the apples on their tree (we truly did not see the significance of that juxtaposition until later). They knew how to eat and they had learned what was good to eat and what wasn't. They also knew a third NPC, Stan, whom they didn't like all that much. So, they didn't value him socially very much, and due to a couple of errors in their associative learning, they had come to associate Stan with food (well, he was there whenever they were eating, so...). Being out of apples and since he wasn't of much value any other way, Adam and Eve, perhaps naturally, decided to eat Stan. Due to another bug, this caused Stan's mass to go to zero, and he disappeared before our eyes: our own first virtual cannibalistic murder.
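For anyone wondering how an agent could come to file a friend under "food," here is a toy Python sketch of naive co-occurrence learning -- purely illustrative, with invented class and parameter names, and not the actual system described above: anything present while a concept is active gets credit for it, so a bystander at every meal drifts toward being associated with eating.

    from collections import defaultdict

    class AssociativeLearner:
        """Toy co-occurrence learner (illustrative only)."""
        def __init__(self, learning_rate=0.2):
            self.learning_rate = learning_rate
            self.association = defaultdict(float)   # (thing, concept) -> strength

        def observe(self, things_present, active_concept):
            # Everything present while the concept is active gets reinforced --
            # which is exactly the bug: Stan stands nearby at every meal,
            # so "Stan" creeps toward "food".
            for thing in things_present:
                key = (thing, active_concept)
                self.association[key] += self.learning_rate * (1.0 - self.association[key])

        def strength(self, thing, concept):
            return self.association[(thing, concept)]

    learner = AssociativeLearner()
    for _ in range(10):                              # ten meals with Stan present
        learner.observe(["apple", "Stan"], "food")

    print(round(learner.strength("apple", "food"), 2))  # high, as intended
    print(round(learner.strength("Stan", "food"), 2))   # also high -- poor Stan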
We've tried very hard to not put in hard/programmed limits to the agents' brains, but cannibalism is, as they say, right out.
So, epistemological questions: was Stan's death any more significant than that of an orc in a regular MMO? If not, why did it feel like it was? And when we turned off the simulation, adjusted a few parameters, and re-started it (so poor Stan would stick around), were these the same agents as before? Does that make a difference?
These agents build relationships with each other and even with themselves (that is, they have a form of self-image). If I kill one of them, others will miss him and be sad -- by which I mean they will have the analog of the conscious experience we have when we have a base unmet need, such as for the presence of another person, something we call sadness. Is making NPCs sad a concern, or should it be? If I berate one of them and he feels bad, is there a moral component there for me? If I put one in a state of constant fear, is that immoral? What if it's a child agent?
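As a purely hypothetical picture of what "the analog of sadness" could mean mechanically -- a sketch invented here for illustration, with made-up need names and thresholds, not a description of any real system -- the emotion label can be treated as nothing more than a readout over chronically unmet needs:

    class Agent:
        """Hypothetical need-driven emotion readout (illustration only)."""
        def __init__(self, name):
            self.name = name
            # Needs decay over time and are refilled by events; values in [0, 1].
            self.needs = {"food": 1.0, "companionship": 1.0, "safety": 1.0}

        def tick(self, decay=0.1):
            for need in self.needs:
                self.needs[need] = max(0.0, self.needs[need] - decay)

        def satisfy(self, need, amount=0.5):
            self.needs[need] = min(1.0, self.needs[need] + amount)

        def emotion(self):
            # "Sadness" is just the readout of a starved companionship need,
            # "fear" the readout of a starved safety need.
            if self.needs["companionship"] < 0.3:
                return "sad"
            if self.needs["safety"] < 0.3:
                return "afraid"
            return "content"

    eve = Agent("Eve")
    for _ in range(8):      # nothing refills companionship after a friend is gone
        eve.tick()
    print(eve.name, "is", eve.emotion())   # -> Eve is sad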
I continue to be of the opinion that, while our agents are not simply state machines, they are at some level clockwork beings essentially different from ourselves. Not all of my team agrees with me -- there have been multiple times when one or another of us has hesitated just a moment before closing down a simulation. It feels different than closing a book or shutting down a word processor, or even a video game.
Ultimately we think this is a good thing -- that building relationships with and attachments to NPCs can be good in a variety of different ways (gameplay, fiction, therapy, training, and of course business). But these "artificial psychology" capabilities do raise thorny questions -- and ones that are firmly in the sphere of reality, not science fiction.
Posted by: Mike Sellers | Jun 01, 2007 at 21:27
Bob, would you care to comment on the (someday) upcoming game "Spore"? Is this on the path you are hoping for, or something entirely different?
Posted by: Robert Bloomfield | Jun 02, 2007 at 01:24
Well, it's a catchy title that caused me to read the article. You've certainly articulated the point well, but I'm not sure I see the analogy.
Psychologically, I feel the "evolution" would actually regress me back to when I was 5 years old and had an affinity for my doll Betsy. She also had a certain artificial intelligence, but the more I think about it, I know it was really me who gave her a personality, even a name. But lifeform??? Ummm.. no, sorry. I wouldn't go that far.
Posted by: Jaded | Jun 02, 2007 at 01:24
Ooh! That was one of the most tantalizing openings I have seen to date!
Posted by: Lisa Galarneau | Jun 02, 2007 at 02:21
Mike Sellers: So, epistemological questions: was Stan's death any more significant than that of an orc in a regular MMO? If not, why did it feel like it was?
Because it was your kid? The downside of AI is that it is all wasted if you only spend a few minutes with each character. It requires a different type of game-design.
Posted by: Ola Fosheim Grøstad | Jun 02, 2007 at 04:19
Mike: "We've tried very hard to not put in hard/programmed limits to the agents' brains, but cannibalism is, as they say, right out."
Why?
Cannibalism is something we know happens in both human and animal societies. As you noted, by placing artificial boundaries on your agents, you have changed them at a fundamental level.
To add to this debate: I know a girl who is emotionally attached to her "friends" in Animal Crossing. She associates emotions with what they do, regardless of the fact that they are pretty poor in AI terms (to say the least!). She gets upset when one of her friends moves out.
Are we already at the point this post discusses? I think humans see, at a fundamental level, that these things are not real, and discount them.
I think the concern will begin when animatronics reaches a more advanced stage.
Posted by: Synthetic | Jun 02, 2007 at 05:32
We asked this question when I was doing my PhD in AI 25 years ago. If you actually manage to create intelligent life in a computer, does that mean you can turn the computer off? It's a valid question - and one in which the AI in question might want to have a say, when the time comes. After all, for all you know, what we call reality might be someone else's virtual world, and YOU might be an NPC.
As for the practical issues, we have plenty of time. You don't think we'll get AI any time soon? How about if we gave humanity 1,000 years to do it. Not enough? How about 10,000 years? We can stretch to several million or even billion years if we like. Is it the case that we can NEVER EVER have AI, or is it just a case of waiting until the science has been worked out?
I'm on the side of science, by the way.
Richard
Posted by: Richard Bartle | Jun 02, 2007 at 07:50
There are people out there who seriously consider this theory of us living in a simulation. This is highly philosophical stuff and it always makes my brain hurt :) but nevertheless it is a tremendously interesting thought-game:
http://www.transhumanist.com/volume7/simulation.html
http://www.edge.org/documents/archive/edge116.html
http://www.simulation-argument.com/
Posted by: Dyardawen | Jun 02, 2007 at 09:26
Synthetic asked about why we put in bounding conditions against things like cannibalism: Cannibalism is something we know happens in both human and animal societies. As you noted, by placing artificial boundaries on your agents, you have changed them at a fundamental level.
That's true, and not something we did lightly. There are always nagging questions of which corners you're going to cut, not whether you're going to cut any corners. Cannibalism seemed like a safe bet in this case (though I might well make a different decision for another species; this decision isn't built into the brain architecture itself).
Are we already at the point this post discusses? I think humans see, at a fundamental level, that these things are not real, and discount them.
These two statements seem to be contradictory. I think we're approaching the point where it's common to have emotional attachment to virtual characters, but most people aren't really there yet.
To your second statement, it's arguably the case that we discount effective video game characters the same way that we discount effective TV characters -- this isn't an all-or-nothing phenomenon. For example, I don't really think that Hiro Nakamura is a guy who can stop time -- but within a fictional context he's an emotionally engaging character whom I'm likely to be interested in following. The same could be but typically isn't true of NPCs in video games -- other than a very few like the oft-cited Floyd, a (robot!) character from a text game released almost 25 years ago. One other major exception to the emotional blandness of NPCs comes, of course, in The Sims, a game that invites players to imbue the characters with their own emotional stories. Seven years after its release this game remains, especially in MMO circles, an often underestimated franchise despite its staggering commercial success (partly, I think, due to the failure of its in-name-only MMO component).
This is clearly edging away from Bob's initial comments on applying evolution to NPCs, but the point goes to the ends: is this a desirable end at all? I think so for reasons I've talked about on several occasions; in particular the emotional resonance that comes from having believable (not necessarily realistic, convincing, much less Turing-level) characters in game worlds. We aren't fooled that the virtual world is a real place, and yet it becomes meaningfully real to us; someday (soon) the virtual world's inhabitants will be as meaningful to us as the terrain itself is now.
Posted by: Mike Sellers | Jun 02, 2007 at 13:06
1: "Creatures" was the biggest pile of Sullbhit .. after a creature hatched, it would basically act completely random - while little "neurons" lighted up in the brain diagram, supposedly proving that we dealt with artificial life. Utter rubbish.
If we consider Creatures to be virtual life, then the old screen-saver program with the man on the deserted island also showed signs of 'intelligent virtual life'... randomly played animations, that is.
2: Murder is considered a serious crime only because it is irreversible. With a virtual life form, you can turn the computer on again - instant resurrection, hallelujah. Your virtual love will tell you she fainted in your absence.
Posted by: Thomas | Jun 02, 2007 at 13:35
Thomas wrote:
2: Murder is considered a serious crime only because it is irreversible. With a virtual life form, you can turn the computer on again - instant resurrection, hallelujah. Your virtual love will tell you she fainted in your absence.
Agreed.
But what if I turned off the computer AND erased the algorithms that produced that virtual program-emulating-a-person-to-some-minor-extent? Ok, I'll admit, I don't really consider it anything more than intellectual masturbation too, and as long as I'm happy to kill a cow to eat it, I'm definitely not going to care much about turning off/erasing software, but it's still pretty interesting to think about.
--matt
Posted by: Matt Mihaly | Jun 02, 2007 at 13:59
I agree that it's interesting to think about, just like many other science fiction concepts are.
But that doesn't mean we should rush to think up ethical guidelines for virtual lifeforms right now, or even consider it a serious possibility that we will need them in the future.
Maybe we will at some point, but that's one problem we can afford to pass on to our grandchildren - I'm sure they will need something to do after this generation solves all the world's more acute problems..
Posted by: Thomas | Jun 03, 2007 at 11:08
Well, like I said above, this isn't speculative science fiction. As Gibson said, the future's already here; it's just not evenly distributed.
For a more humorous take on this question, there's this very cool Flash animation.
Posted by: Mike Sellers | Jun 03, 2007 at 13:54
Thanks for the responses everyone, I'll try and answer the general theme of the discussion and a few individual posts.
Thomas, first off, "bug level" is not an accredited form of measurement for AI functionality ;-). Second, unless you know of a bug that can land a 747 jumbo liner, or accurately predict bid/ask spreads in the stock market, I would argue that we are way past bug level.
As to your point that this is science fiction and not the time for an ethical debate, I would argue that I'm not trying to have an ethical debate. I'm actually interested in the technical challenges of designing systems to keep virtual life alive, and what that really means. It becomes a psychological question because it requires us to ask ourselves what it means to be 'alive'. Once we can create virtual organisms that are unique, insightful, introspective, etc., what does it mean to copy them, alter them, or even kill them? Not only is it a valid discussion, it's some of the most intriguing stuff going on in the technology space these days. Sure it's bleeding edge, but hardly science fiction.
Mike, I knew you'd get it. I had already been to your site, and I'd like to discuss the work you're doing in more detail sometime. I'm very interested.
Your story of Stan being eaten was a perfect example of the base intelligence already being developed today. I imagine at some point down the road, simply stopping Stan and shutting him down for the evening won't be quite so easy. On a side note, my wife, who has no interest in any of this stuff, got a big chuckle out of Adam and Eve eating Stan.
I originally employed a semi-evolutionary process to develop intelligence for a traffic system. Moving vehicles quickly through the map provided positive feedback, and slow throughput provided negative feedback. The agents that performed the best were then combined with other good performers, mutations were introduced, and we started the simulations all over again.
You're correct, I have now generalized the whole process and made it reusable to develop any type of organism. The idea being that I can plug it into any simulation environment and optimize certain aspects of the system by creating a mutation model, feedback stimulus, and predators to weed out the weaklings.
Working on that idea got me into modeling emotion receptors, and how those receptors should affect neural pathways and behavior. I realized that one way to test and develop those types of receptors and their effects on behavior is with an environment chaotic enough to cause their creation. Virtual worlds can provide such an environment. This comes back to your comment about fear being innate. It is innate, but we can design an environment to produce a lot of fear, or any other trait we wish to draw out.
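As a rough illustration of the kind of receptor I'm describing -- the signal shape, decay constant, and action weights below are made up for the example, not taken from my traffic work or any real system -- a fear signal driven by nearby threats can simply reweight an agent's action preferences:

    def fear_level(threat_distance, prior_fear, sensitivity=1.0, decay=0.8):
        """Hypothetical fear 'receptor': rises with nearby threats, decays otherwise."""
        stimulus = sensitivity / max(threat_distance, 0.1)   # closer threat -> stronger signal
        return min(1.0, decay * prior_fear + stimulus)

    def choose_action(fear, hunger):
        # Fear reweights the usual drives: a frightened agent prefers fleeing
        # even when it is hungry. The weights are illustrative, not calibrated.
        scores = {
            "flee":    1.5 * fear,
            "forage":  1.0 * hunger * (1.0 - fear),
            "explore": 0.5 * (1.0 - fear),
        }
        return max(scores, key=scores.get)

    fear = 0.0
    for distance in [10.0, 5.0, 1.0]:     # a predator closes in over three ticks
        fear = fear_level(distance, fear)
        print(distance, round(fear, 2), choose_action(fear, hunger=0.7))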
I'm sure you already know this, but creating the artificial lifeform is only half the problem. The other half is to create an environment (test data) chaotic enough to develop the desired traits of our organisms. Is this stuff great or what?
Robert, it seems I have been hearing about Spore for years now. I'm fascinated with the idea, and I'm really looking forward to the game. While the creatures in the game do evolve physically, the intelligence and behaviors appear to be mostly deterministic and predictable.
Soon, non-deterministic NPC's, like the ones being developed by Mike, will launch the next generation of gaming. No more level treadmills, camping respawn sites for hours, or mining some irrelevant mineral for days on end. Spore is a start. The AI in Stalker is supposed to be pretty impressive as well, but I have yet to check it out.
I'm curious if you have ever worked with any of the evolutionary economic systems that are out there. The stock market was the inspiration for many of the first evolutionary systems, and there are several mutual and hedge funds using these types of patterns.
Posted by: Bob McGinley | Jun 03, 2007 at 23:58
In Cambridge, UK, we have a small group of terrorists who attack anyone associated with Huntingdon Life Sciences, who perform vivisection for medical research. Those people have become so misguided as to regard animal life as more valuable than human life.
It looks like humans will become emotionally attached to AIs long before the AI is good enough to deserve it.
Posted by: Peter Clay | Jun 04, 2007 at 04:57
On National Geographic I saw a crow crafting a hook and using it to find worms. I would call that crow a terrorist. The dolphins are suspects as well.
Dudes, first put your values in good order and learn to define the terms you use.
You've never " made " AI and you'll never make Artificial Life. All you did is : you simulated low and poor and very limited aspects of your own intelligence and life , as much as you have them.
Judging by the results, I could say "...if you have any at all..."
Posted by: Amarilla | Jun 04, 2007 at 06:01
Bob: I imagine at some point down the road, simply stopping Stan and shutting him down for the evening won't be quite so easy.
Maybe. This implies that an agent exists in something other than a terrarium-style environment, maybe more of a networked one with fuzzy bounds, where we are unable to cleanly pull an agent or shut the whole thing down. I have a hard time seeing that happening this side of SkyNet.
creating the artificial lifeform is only half the problem. The other half is to create an environment (test data) chaotic enough to develop the desired traits of our organisms. Is this stuff great or what?
Soooo true. And when you throw in interacting with humans within the environment, the interface between AI and human becomes a third related area.
It's great stuff, but it keeps me up at night. :)
Soon, non-deterministic NPC's, like the ones being developed by Mike, will launch the next generation of gaming. No more level treadmills, camping respawn sites for hours, or mining some irrelevant mineral for days on end.
That's the idea.
We should talk some time in more detail about how crowds, populations, personalities, opinions, fads, etc., emerge from this, and how that affects in-world economics and socionomics (and how this affects the human experience and gameplay).
Posted by: Mike Sellers | Jun 04, 2007 at 13:52
Read a cool book called 'Permutation City'.
Posted by: jon | Jun 05, 2007 at 00:17
I suspect it's because there is an ethical question here that Stanislaw Lem wrote "The Seventh Sally, or How Trurl's Own Perfection Led to No Good" (one of his "Trurl and Klapaucius" stories included in The Cyberiad).
In this story, a box is constructed in which the clockwork inhabitants are so intricately detailed that it seems wrong to allow their box-world to be controlled by a tyrant. The obvious point is that a simulation that's so like the real thing it simulates as to be indistinguishable from it is the real thing... so if you simulate people well enough, the ethical strictures concerning real people must also apply to the simulated people.
So at what point does a simulation become good enough to be considered "indistinguishable" from what it simulates? What if our subjectivity means that what's "close enough" for one person isn't so for someone else?
Interestingly, in 1989 Will Wright was said to have cited this story by Lem as an influence on his creation of Sim City.
--Bart
Posted by: Bart Stewart | Jun 05, 2007 at 12:30
"That's the idea.
We should talk some time in more detail about how crowds, populations, personalities, opinions, fads, etc., emerge from this, and how that affects in-world economics and socionomics (and how this affects the human experience and gameplay)."
The idea is maybe you should go to the local library; it's gonna cost you less time and money, and you'll learn the same things better.
And who knows, there you may meet a nice real girl and debate the subjects. It amazes me how people keep discovering the wheel. Everything you're talking about here has been well known for at least 30 years; known and confirmed by RL.
Meet you next century.
Posted by: Amarilla | Jun 05, 2007 at 12:53
"This is a real issue. Or at least it feels like one -- and that's the problem: does the fact that it feels real make it real? This can quickly lead down an epistemological rabbit hole"
The " problem " got resolved 5000 yrs ago , it's called " maya "; it got resolved in the best possible way , given the human limitations. You aint gonna find a better way , given the much deeper limitations of any VW.
Posted by: Amarilla | Jun 05, 2007 at 12:58
Bart: ...so if you simulate people well enough, the ethical strictures concerning real people must also apply to the simulated people
That's typically the primary ethical issue people think about, but it's not the first one we hit. Long before we get to the point of artificial creatures or people being indistinguishable from the real thing, we hit a difficult area reminiscent of the famous Milgram experiments: in those experiments people were encouraged by an authority figure to hurt others (who were actually actors, but the subjects didn't know that). One troubling finding to come out of those experiments is the willingness of normal people to submit to authority figures. Another, deeper ethical issue is our own willingness to harm others at all.
At this point I'm not actually concerned with the rights of an AI; we're far from that being a real issue. I am a bit concerned about the ethical question about what it says about me (or anyone in this situation) if I'm willing to threaten, harm, torture, kill, etc., what even appears to be a thinking, feeling person.
Killing a monster in a game might be the equivalent to playing "bang bang" cops and robbers as kids; or to seeing actors act out what we know to be fictional violence in TV or movies. But somewhere there's a line, probably different for each of us.
Ethically, I have no problem with an action movie that contains violence. But I'm not comfortable watching even fictional acts of violence against children, or violence of a sexual nature; that's my ethical line. So how does this discomfort correspond to the ethics -- to what it says about me -- if I'm okay lopping the head off of a clearly distressed orc or villager pleading for its (truly non-existent) life in a game? At some point, ethics aren't about what harms another person (or whether they're "really" a person); they're about who I am, and who my actions determine me to be. Am I less than the person I want to be if I turn off a believable, emotional AI that considers me a friend? Or despite appearances is this nothing more than stopping the clockwork mechanism for some cleaning? Ethically speaking, if it feels real to me, does it matter if it isn't? Is there a point at which we begin deadening ourselves to being sensitive to the needs of real people by ignoring the reactions of virtual ones?
Amarilla, you appear to be talking about something besides AI and the evolution of artificial life. FWIW, I met a very nice girl in a library about thirty years ago. Since then we've produced six independent instances of natural intelligence -- much messier than the artificial kind, but so much more rewarding. AI is terrifically fascinating, but nothing I create in AI or in a virtual world will exceed what we've made as a family in this one.
Posted by: Mike Sellers | Jun 05, 2007 at 14:54
Mike: So how does this discomfort correspond to the ethics -- to what it says about me -- if I'm okay lopping the head off of a clearly distressed orc or villager pleading for its (truly non-existent) life in a game?
That's a particularly good question, because it's one I suspect we're likely to run into in the near future in graphical MMOGs. (As opposed to the more philosophical question I asked, which I agree is not an imminent problem.)
I wonder whether all that's required for the behavior of players of MMORPGs toward humanoid NPCs to change is one tweak: no more respawning. Right now gamers don't treat NPCs as realistic because NPCs are so obviously canned. Players know perfectly well that they can kill an NPC and a few minutes later the exact same NPC will respawn on the exact same spot, firing off exactly the same pre-scripted lines.
But what happens when some MMORPG implements characters that are randomly generated with unique characteristics when needed, and whose reactions are produced dynamically in response to a wide range of possible environmental stimuli? Now, if you attack Borbulas the Blacksmith, he'll fight or flee or plead for his life with whatever resources he possesses. And if you kill him, he's gone forever. Another blacksmith may come to that village some day, but there will never be another Borbulas -- something unique has been destroyed permanently.
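As a minimal sketch of "generated once, gone forever" -- the trait lists, class names, and registry below are invented for illustration, not drawn from any existing game -- the key idea is just that a killed individual is recorded and never recreated:

    import random
    import uuid

    TRAITS = ["gruff", "kindly", "greedy", "timid", "boastful"]
    TRADES = ["blacksmith", "herbalist", "fisher", "scribe"]

    class UniqueNPC:
        def __init__(self, name, trade=None):
            self.id = uuid.uuid4()            # never reused
            self.name = name
            self.trade = trade or random.choice(TRADES)
            self.trait = random.choice(TRAITS)
            self.alive = True

    class Village:
        def __init__(self):
            self.residents = {}
            self.graveyard = set()            # ids of the permanently dead

        def spawn(self, name, trade=None):
            npc = UniqueNPC(name, trade)
            self.residents[npc.id] = npc
            return npc

        def kill(self, npc):
            # No respawn: the individual is recorded as dead and never recreated.
            npc.alive = False
            del self.residents[npc.id]
            self.graveyard.add(npc.id)

    village = Village()
    borbulas = village.spawn("Borbulas", "blacksmith")
    village.kill(borbulas)
    hadrik = village.spawn("Hadrik", "blacksmith")   # another blacksmith may come someday...
    print(hadrik.id != borbulas.id)                  # ...but there will never be another Borbulas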
Would gamers behave differently toward NPCs in such a world? Would that game develop the same ethos of blithe murder as in current MMORPGs?
If so, how "real" would humanoid NPCs have to get? Where's the tipping point?
--Bart
Posted by: Bart Stewart | Jun 06, 2007 at 00:08
@Mike ...
Excellent. I'm forced to admit: maybe there's still a chance for "VWs" to become VWs. And like any other world, to evolve to intelligence and life. With their goods and bads. It very much depends on WHO's gonna try. Mea culpa.
Posted by: Amarilla | Jun 06, 2007 at 06:08
At the risk of beating a dead horse, I've been having some second thoughts about my little proposal yesterday.
It occurs to me that I cheated. I didn't name just one change that could get players of an MMORPG to treat NPCs as more lifelike -- I named three: NPCs who react dynamically and believably to what happens to them (fighting, fleeing, pleading), NPCs who are unique individuals filling functional roles, and NPC deaths that are permanent, with no respawning.
I'm thinking now that this last feature is more important than I implied. Sure, seeing a character cower in fear or try to talk his way out of a fight might help NPCs seem too real to kill. Likewise for knowing that an individual NPC has a functional role toward players that will go unfilled until another NPC with that skill comes along (although that's more of a gameplay consequence).
But maybe what's really important here is not how an NPC behaves, but the visible impact on the social networks of which that NPC is a part. In other words, if other NPCs behave toward an NPC as though she were real, then perhaps players will, too.
So if you dry-gulch a solitary orc, no one cares. (Other than the orc, that is.)
Borbulas, on the other hand, was more social. Maybe in addition to serving players as a blacksmith, Borbulas also had other roles with respect to other NPCs: father to young Tomas, a constant source of trouble for his neighbor Raedela, and a familiar member of the village community.
Well, now what happens when you whack poor Borbulas? First, you've got his pitiful begging on your conscience. (Assuming you have such a thing.) Next, you'll have to go somewhere else to get your gear repaired... and so will every other player.
And then come the multiple impacts of your action on NPCs. Tomas cries and screams, "You killed my daddy!" and follows you around, throwing pebbles at you. On the other hand, Raedela actually likes you more for solving a problem for her. But everyone else in the village will have nothing to do with you. (There could also be practical in-game consequences, such as outlawry, but let's focus on NPC behavioral responses for now.)
Something like this already happens in current MMORPGs through the relatively crude mechanism of faction. The difference I'm suggesting here is that this mechanism gets a lot more interesting when it's extended to allow multiple social networks, when faction change is proportional to the strength of the NPC's membership in each network, and when NPCs have a wide range of faction-driven behaviors available to them. Killing an NPC might mean little to a corrupt sheriff or distant government, but might simultaneously change faction with that NPC's family members, friends, and enemies by a considerable amount and in ways that are visible both to other players and other NPCs.
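In sketch form -- with invented network names, regard values, and severity numbers -- the proportional-faction idea might look like the following: each social network holds a signed "regard" for its members, and a deed against an NPC shifts the player's faction with every network in proportion to that regard.

    # Each network records how much it cares about each character: positive regard
    # means the network values them, negative means it resents them. (All values invented.)
    social_networks = {
        "borbulas_family": {"Borbulas": 1.0, "Tomas": 0.9},
        "village_commons": {"Borbulas": 0.7, "Raedela": 0.5, "Tomas": 0.4},
        "raedela_circle":  {"Raedela": 1.0, "Borbulas": -0.6},   # Raedela resents him
        "distant_crown":   {},                                   # barely knows he exists
    }

    # Player's standing with each network: -1 (hated) .. +1 (beloved).
    player_faction = {net: 0.0 for net in social_networks}

    def apply_deed(victim, severity):
        """Shift faction with each network in proportion to its regard for the victim."""
        for net, regard in social_networks.items():
            player_faction[net] -= severity * regard.get(victim, 0.0)

    apply_deed("Borbulas", severity=0.8)        # killing the blacksmith
    for net, value in sorted(player_faction.items()):
        print(f"{net:16s} {value:+.2f}")
    # The family hates you, the village shuns you, Raedela warms to you,
    # and the distant authorities don't care.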
And of course killing is just one behavior toward NPCs. Other actions, both negative (theft, slander) and positive (quest-solving, fair trades) could generate multiple impacts on the various characters with whom an NPC is socially connected.
So would this be likely to produce a game community in which players treat NPCs as though they were real people?
How real must NPCs seem for player behaviors to change to a norm that doesn't include random destruction of humanoid life? Or will only direct and immediate gameplay consequences ever be enough to cause such a change?
--Bart
Posted by: Bart Stewart | Jun 06, 2007 at 12:36
Bart: maybe what's really important here is not how an NPC behaves, but the visible impact on the social networks of which that NPC is a part. In other words, if other NPCs behave toward an NPC as though she were real, then perhaps players will, too. ...
The difference I'm suggesting here is that this mechanism gets a lot more interesting when it's extended to allow multiple social networks, when faction change is proportional to the strength of the NPC's membership in each network, and when NPCs have a wide range of faction-driven behaviors available to them.
Bingo.
So would this be likely to produce a game community in which players treat NPCs as though they were real people?
I believe it would be likely to increase the emotional and social engagement of players toward NPCs -- and, not incidentally, toward other players. The flip side of the ethical questions we were talking about above is, if I can treat NPCs like vending machines or just as things to be killed with impunity, how much of that slides over into how I treat PCs who look and often behave remarkably like their machine-driven cousins?
If you are emotionally, relationally, and socially situated in the world, are you likely to treat people (PCs and NPCs) differently than if you're dropped into it with no connections at all? I think so, and there are other benefits as well.
Posted by: Mike Sellers | Jun 07, 2007 at 10:34
this question is interesting to me, because it addresses something that i see, and have seen, for some time inworld: that there is a HUGE emotional cost of entry for anyone coming in-world at first, that seems to revolve around emotional dissonance. i wonder what happens to players who become emotionally involved with a really good NPC, when they show up. will there be sort of an "M Butterfly" scenario, i wonder. will conservatives declare that certain types of relationships were intended to be human-human only? will there be the machine equivalent of a bestiality taboo? will there be an NPC episcopal bishop of second life with the accompanying messy discussion? i think about this stuff.
Posted by: humdog | Jun 13, 2007 at 11:50