
Feb 09, 2004

Comments

1.

There are some well-rehearsed moral arguments in the AI community about what happens if you create an actual artificial intelligence (for example, would it be murder to switch off the computer it ran on?). There are also some well-rehearsed religious arguments (for example, could an artificial intelligence exist without a soul?).

The arguments in favour of not mistreating a virtual kitten tend to be of the kind: "If people get pleasure from hurting something they have anthropomorphised, they should not be encouraged to stoke the flames of this pleasure; otherwise, before you know it they could be hurting real creatures".

Richard

2.

Righto... And I think there's something to that argument about not mistreating virtual kittens.

But in the end, that line of thinking leads us inevitably to the "ban GTA" (the game not the blog) and "Doom is responsible for Columbine" camp, doesn't it?

So what's the right answer? When Mary Flanagan was talking at the State of Play conference, she was emphasizing her own interest in creating havoc in virtual play-spaces -- and pointed to a long history of that kind of play. Paper here. I wonder what her feelings are on the abuse of robotic dogs and virtual kittens?

3.

Er, yeah. I hope I don't sound too judgmental when I say that if you've often wished and dreamed you could kick a puppy without breaking any laws and finally, FINALLY there's a commercial on late-night TV that really catches your attention with an announcer stepping on stage with a booming voice saying "WELL NOW YOU CAN!" then you have "issues".

Not that it's wrong to hurt something kitten-like, but that there's something freaky wrong with you for wanting to.

My word.

4.

You may want to check into the Creatures community, where there have often been assertions (fomented in part by the developers themselves, is my impression) that the Creatures are in fact alive by the standard dictionary definitions of the word. It's led to interesting debates over deliberate torture and other such behaviors among players.

5.

Thanks, Raph!

For those interested -- some posts here, here, and here.

6.

My daughter, at 12, quickly figured out how to hack Catz voxel specifications (then stored in text files) to create what may be described in anthropomorphic terms as mutant kitties with stretched bodies and distorted colors and features. She also figured out how to change the sounds. If I’m not mistaken, it was this very practice that led to OddBallz. At 17 she still likes to show any new friends her creations, though the software is now antique.

She also owns probably 100 stuffed animals and nearly that many anime figurines. None of which she’s ever deformed in any way. You’ll never meet a more gentle soul. She’d be disgusted and appalled at the suggestion that her ‘hacking’ was in any way immoral or indicative of a warped personality. Software is not alive to her; it’s a tool/toy.

Many (most?) in her generation have no problem separating the virtual from the real.

Neopets.com’s economy is so broken that 99% of the pets are starving and have to get their food from the virtual soup-kitchen. I don’t see the SPCA calling the cops on the pet owners anytime soon.

Why do we keep bringing this up? Do we (designers) have some sort of ego/god-complex that makes us want our creations to become real? I’m in no hurry to go there. Thar be dragons.

Randy

7.

Jeff Freeman>Not that it's wrong to hurt something kitten-like, but that there's something freaky wrong with you for wanting to.

Those who enjoy tormenting virtual creatures might respond that they can't help it if there's something freaky wrong with them, and at least this way they can address their issues without being tempted to harm real creatures.

Of course, this same argument could be used to support a proposition that paedophilia in games is OK.

Personally, I have a strong suspicion that the "it'll lead you to do it in the real world" assertion is not the fundamental reason that people object to tormenting virtual pets. Rather, it's that they find the very idea of tormenting virtual pets intrinsically distasteful, and they don't want other people enjoying something that they find distasteful.

Put another way, if you're freaky wrong you don't get to practice your freaky wrongness because it freaks people out.

Richard

8.

Bartle wrote:
>Personally, I have a strong suspicion that
>the "it'll lead you to do it in the real world"
>assertion is not the fundamental reason that
>people object to tormenting virtual pets.

That's not really what I'm saying though.

What I'm saying is that if you want to torture kittens (and let's say this desire manifests as torturing things that look and act like kittens, but aren't really), then you don't really need to be "led" anywhere: You are already there.

More to the point:
> Those who enjoy tormenting virtual creatures
>might respond that they can't help it if
>there's something freaky wrong with them, and at
>least this way they can address their issues
>without being tempted to harm real creatures.

If one enjoys tormenting virtual creatures, more power to 'em. But if they torment virtual creatures because what they'd really like to do is to torture real ones, then, er... well let's just say there's a red flag there, for me.

As for doing it in order to address their issues, that sounds like therapy of questionable benefit.

9.

If harming a robot kitten would be bad because it is a reasonably good representation of the real thing, then where does that leave us with harming avatars? It seems to me that a well-crafted human avatar with a real player controlling it is much more closely representative of the real thing than are robot kittens. Or maybe kittens symbolize innocence in a way that adult human avatars cannot? If you make the avatar a 5-year-old, the moral freighting changes a bit, doesn't it?

--Phin

10.

Phin> If harming a robot kitten would be bad because it is a reasonably good representation of the real thing, then where does that leave us with harming avatars?

Well, same place, I guess: It depends on the context.

'Which isn't as satisfying as zero-tolerance laws and such, but what can ya do?

11.

Mind you, I'm just thinking out loud and trying to probe along the edges of my own rationale, but war-type games where you kill player-controlled avatars seem like a pretty common context. Is someone playing a Nazi soldier killing American troops in Wolfenstein: Enemy Territory less objectionable than harming a robot kitten?

As I think about it, it seems to me that *why* the player enjoys what they are doing is very important to the question of how objectionable it is. If I shove lit firecrackers into the joints of my robot kitten to try to blow its legs off, I may be savoring the thought of torturing kittens or I may be simply having fun testing the structural integrity of a piece of plastic. Either way, I think I'd be uncomfortable with a company that specifically advertised blowing the legs off of their robot kittens. In a similar manner, I am uncomfortable with video games that glorify their gory treatment of human avatars or that reward violence for its own sake. While I'm not promoting zero-tolerance laws, I do think that some game designers are being incredibly irresponsible with their designs and that more responsible designers should probably speak up more often in telling them we think so.

--Phin

12.

To throw in a utilitarian argument (I'm not a utilitarian but I play one on TV): We should be kind to AI agents because we're going to become intimate with them fairly soon. 'Intimate:' close, together, inseparable, mutually vulnerable to, private with, soulmating with, or at least feeling like it.

Designing a very intelligent, very intimate program so that it will always and everywhere be good to us is a very tough problem. The first challenge: we have only a very weak sense of what's really good for us on a deep personal level. We spend a lot of time and effort doing things whose value, from a deathbed perspective, seems very limited. Self-awareness is a hard thing to achieve. Without it, how do we tell AI what to do?

Second challenge: if we knew what we wanted AI to do for us, how can we make sure it does that - especially as its scope of decision-making widens?

From both perspectives, treating AI in ethical terms makes a lot of sense. Teach AI the Golden Rule. Then be good to it, and trust that if it learns how to help us, it will act on that knowledge.

Now for a non-utilitarian take: many religions teach that animals and humans are different, that animals have no soul, have none of the moral agency of humans, and don't go to heaven. But if you wander by a Catholic Church on October 4, you might see an odd gathering of people on the lawn, with their dogs and birds and snakes and whatever other beasts they have brought. It's the Feast Day of St. Francis of Assisi, and the people come to get blessings for the animals they love. St. Francis taught that all things are God's creatures, and if we happen to love them, well, that is a good thing. And if the teaching of humanity's exclusive hold on moral agency somehow conflicts with the teaching that all things in the cosmos can be loved by humans, and therefore are worthy of blessings and some share of grace, well, then that is a mystery. Sometimes there's no logic to it.

13.

I'm going to briefly summarize the various points made here and at the parallel discussion at GTxA:

Nick says: There are two separate questions here: 1) Does the person believe the virtual character is real or fake? and 2) Independent of the answer to question 1, a person's experience with virtual characters can influence how they treat real people/animals -- for good or ill.

Brad says: Is there a difference between artificial and real anyway? Once it seems real, we should treat it as real.

JJ86 says: You can't empathize with an artificial thing; it feels no real pain.

Richard says: It's a slippery slope from hurting fake creatures to real ones. Also: It's distasteful to torture virtual creatures, hence the opposition to it.

Jeff says: It's sick to want to hurt virtual characters, because it might mean you want to hurt real things.

Randy says: Why do we keep bringing this up? Everyone knows they're not real.

Phin says: Doesn't this extend to harming avatars? (i.e., first-person shooters) Also: The reasons the player enjoys what they're doing (e.g., why am I shooting a virtual Nazi, vs. why am I torturing a virtual cat) are very important to the question of how objectionable it is.

Edward says: We should be kind to AI agents because we're going to become intimate with them fairly soon. We should teach an AI the Golden Rule, and be good to it, so that it will be kind to us in return (which is important if we are to trust the AI to do things and make decisions for us). Also: some think it's a good thing to love those things that may or may not have feelings or moral agency, e.g., the way real animals used to be perceived, and sometimes still are.

I'll throw in my 2 cents...

I'm not going to bother looking far forward into the future when AI's become orders-of-magnitude closer to the complexity of real living things, and hence theoretically more deserving of our true respect. Based on what I understand about the state of the art, and how slowly progress is being made, I don't think we'll have to deal with that issue anytime in the next decade or two, probably longer. So I think that's my answer to Brad's question, and part I of Edward's. (I'd be happily proven wrong about this time prediction though.)

In the short term, e.g. the next 10+ years, I'll venture to predict that virtual / robot characters will exhibit stronger and stronger illusions of life, but internally, in truth, will still have relatively simple brains. It will become easier and easier to suspend your disbelief in the lifelikeness of these characters, but intellectually we'll all know they're fake, because the moment you try to have a real conversation with them, they'll probably break. But they'll stimulate us in ways that make them feel pretty alive to us if we don't think too hard about it, and we can immerse ourselves in that fantasy easily as long as we don't push on them too hard.

I think the interesting issue here is how people treat such characters relative to whatever lifelikeness they ascribe to them. That is, if I'm willingly suspending my disbelief, in the moment when the character really feels alive, it does matter how I treat them. It matters because if I act immorally at the moment the character feels alive, I will feel immoral in that moment. It's all safe in the end of course — no actual true immoral act was committed — but nonetheless we'll feel like we behaved badly, and that feeling will suck for any well-adjusted person.

To reply to Phin's point: Once a game immerses you to the point that you get pretty damn close to the actual feeling that you're killing someone, then I would think it would feel immoral, for the above reason. Even in their present form, as thrilling as those games are, I tend to shy away from them, because they allow me to too easily imagine what it would be like to do this for real. And it's not a pleasant feeling. (I'm sure if I just played them enough I'd get desensitized, but I'm trying to stay sensitized.)

All that said, sometimes I'll want to put myself in some painful situations, to see how I react. I'll want to mistreat these characters from time to time, to test my own personal limits, to remind myself how it would feel if I did that in real life. This is for similar reasons that I seek out difficult films, plays and books; they're not always pleasurable, but they're enriching and perspective-broadening.

Similarly, as Edward suggests in his second point, I'll sometimes want to act lovingly to these characters, partly as an act of kindness (or a virtual act of kindness), as a way to remind myself of the rewards of that kind of behavior in real life. But I seriously doubt it'll be a real or lasting substitute for real friendships, just as pornography is a poor substitute for sex.

14.

Ted, I think we have ethical obligations with regard to the treatment of animals, and I hope that point is not much contested -- even among the more carnivorous of us. As a matter of fact, we even have legal obligations to animals.

Phin, we do have ethical and legal obligations with respect to avatars insofar as they are agents of real people. It currently isn't clear, though, how far (if at all) we have obligations with respect to the avatar qua avatar, which is just a mask of sorts.

However, I don't think there should be any debate on this point: we have *no* ethical obligations to inanimate subroutines (e.g. Norns). I didn't mean to suggest with the post that AI had rights. When I was asking about the "ethics" of dealing with bots, I was really wondering exclusively about the actions of the person relating to the bot rather than the (non-existent) plight of the bot being loved or tormented.

"Bot love" and "bot killing" are really just two facets of the same question -- do we have any ethical obligations in structuring our interactions with lifeless representations? I think the answer has to be "no." E.g., in arcade games, I get rewarded for shooting Space Invaders efficiently. If I don't shoot them, the game ends. So is it wrong to shoot them?

However, I'm with Phin and Jeff that in more open-ended games, one's treatment of AI bots may reveal something about the person engaging in that treatment. It is clear that individuals can practice, experiment, and learn real behaviors through interacting with simulations. So while I think we are free to torment stuffed animals, I would seriously worry about anyone who did it regularly.

That said, I think I would be equally worried about anyone who considered his closest friends to be stuffed animals. As the article points out (echoed by some of Andrew's letters and the comments about Norns), some people seem to easily establish emotional ties to more advanced simulacra. As the simulacra become more and more realistic, I'm afraid we'll see more people loving bots. And I find that disturbing.

15.

I'm not sure whether AI quality is the issue, though it influences the subtlety of the cruelty I'm comfortable with.

While I don't hesitate to swat a fly, I certainly wouldn't be comfortable pulling off wings and watching it suffer. But I wouldn't consider squishing a kitten, unless it was suffering terribly.

What I find common in my squeamishness index is awareness of the entity's suffering, and a certain calculation of how survivable an injury is. (Witness the relatively accepted practice of putting down a horse with a broken leg.)

Maybe the real issue will arise when robots/avatars are constructed to feel pain. Or at least act as if they do.

[I have to note that I'm fairly reluctant to make this post -- I'm certainly not condoning squishing or torturing of real *or* virtual entities!]

16.

Andrew,

I typed my post before reading yours -- but I agree with you. The trouble, I think, is that with advances in technology, suspension of disbelief becomes easier and the bots become uncomfortably realistic. Where the simulation persists and we interact with a particular AI bot for an extended period of time, almost any type of relationship with the bot is a little problematic from the standpoint of practicing real behaviors -- even *ignoring* a bot that seems real can be a bit uncomfortable. (I imagine marketing agencies have realized that.)

I guess Randy would say we should just get more comfortable recognizing bots as code and treating them instrumentally. (???)

17.

Greglas> Righto... And I think there's something to that argument about not mistreating virtual kittens.

Greglas> But in the end, that line of thinking leads us inevitably to the "ban GTA" (the game not the blog) and "Doom is responsible for Columbine" camp, doesn't it?

I’d say the line of thinking that leads to “Doom is responsible for Columbine” is thinking of complex multidimensional issues as simple yes/no questions. Thinking real dogs merit 100% respect, and robot dogs, 0% respect. I’d rather live in a world in which all cohesive entities with some history got some respect, and just how much is a continual act of balance in the moment.

Mistreating your possessions seems inherently to assume possession is a one-way street. Having spent a considerable time keeping my car and my house in the style to which they would like to be accustomed, my sense is that this street runs both ways. I’d be wary of people who mistreat the things they “own”. I’m strongly of the opinion that while my actions shape the world, they also shape me.

My experience is that creative works take on a life of their own after a while, whether it be the character in a story, the avatar in a VW, the clay in the hands of a sculptor, or a complex computer program. I’d rather read a novel by an author with some respect for the lives of his characters. Or play in a VW with people who had some respect for the history of their avatar. For me, it’s when you start listening and responding to the work in progress that creation gets really interesting. I’d see the whole history of the child and her teddy bear as the work in progress. And it’s that complex combination that merits my respect, not the particular pieces of plastic fur.

I’m not sure we are going to be able to deal with the complex world we are building though. Not while people are coming up with wild simplifications like “Doom was responsible for Columbine” or “it really doesn't matter what you do to a tin object.”

18.

I love this thread -- and have been loving it for over a decade. It just gets weirder with every passing year, and yes, it's plenty weird enough without Turing-perfect AI even on the horizon.

But here's a connection I never noticed before just now: Many of the questions people are raising here about AI seem to be also, implicitly, questions about games.

Remember, early on in TN's life, the debates we had about theft in virtual worlds? The question was whether stealing virtual objects, even in character and in the context of the game, was the moral and/or legal equivalent of stealing real-world goods. Greg, Dan, and Ted argued the case for equivalence, as I recall, and I know I argued the opposite. For me, the bottom line was that if the rules of the game explicitly include role-played theft, then players have no recourse to out-of-game moral and legal complaints when they are stolen from.

Yet even as I made that argument, it felt just slightly hollow. Not to concede any ethical equivalence between in-game and out-of-game acts -- or even any in-game "ethics" per se at all -- but I do think that as games grow more complex, the way we treat other players within them grows somehow more meaningful.

This echoes what people are saying here about the way we treat AI, I think, and maybe blurs the line some would draw between bots and avatars.

19.

When Peter Molyneux was demoing Black & White, one of the stock laughs during the presentation was when he demonstrated that you could smack your monkey around. Perhaps there is a social benefit if people find the idea of harming an AI distressing, but in a presentation to 7,500 people at GDC it sounded like everyone was laughing.

I'm very much in the AI-as-code camp here -- I've written AI's and know that they are still closer to your average toaster than to a real animal. When testing AI's, it's pretty common to abuse the hell out of them -- like testing any other code -- and I don't think this raises any moral questions. Actually, failing to test an AI used in, say, air traffic control or train routing, would raise the moral questions, not testing the code to failure.

In the future when AI's start meeting the definition of "conscious" I'll be the first to march for silicon (or carbon nano-tube or whatever) rights, but I don't think that current technology exhibits nearly enough "life" to be categorized as such.

20.

FWIW, related to this: in Ultima, "Tamers", those who fight and do damage primarily through the high-end creatures they tame, can form pretty strong attachments to their pets. One guy I knew insisted each of his dragons had a distinct personality (not true; it's the same AI for all, I believe). I've also seen non-tamers freak out over losing a horse they've ridden for years. OSI added the ability to resurrect pets a year or so back, though.

21.

Cory, I agree it's silly to argue that game AI invokes the ethical questions attendant on consciousness. But I don't think that's the interesting part of this debate anyway.

I think, actually, that Ted was dead-on bringing the mysteries of religion into it, because I think the issues brought up by people's love for their Aibos border more on questions of faith than of ethics. Go to a Catholic mass in a modern American suburb and ask how many people there believe the wine has literally, chemically been turned to blood. Their answers will be complicated, but the essence will be: no, it's not that kind of belief.

And yet it's belief, and belief of a kind that matters deeply to the people that hold it. Now, I'm not suggesting that the bonds people feel with their robots come close to the profundity of religious faith. But I am saying that religion is closely related to play (Johan Huizinga, in Homo Ludens, is almost obsessively insistent on the point). And play, again, is what I think is really at issue here -- not the cognition or sentience of the bots themselves, but the sophistication of the make-believe they make possible.

The mystery of transubstantiation is an extremely sophisticated form of make-believe, with extremely interesting cultural and, if you buy that sort of thing, spiritual effects. Aibo-love may not approach that level of richness, but I'd say it plays on the same field.

22.

Julian, yes -- and that religious impulse is called animism. With respect to bot animism, this assertion is interesting:

In contrast to the Christian view of the world, in which God made the earth and people were created in God's image, the Japanese have traditionally believed in a primitive animism that endows all things in nature—like water, mountains, and rocks—with spirit. It's not unusual for them to have a sense of affinity with machines and to transfer human emotions to them. As a result, the Japanese have almost no aversion to humanoid robots.

Cultural differences are certainly not that easy, but there may be some truth in it. Btw, some interesting studies of human/Aibo relationships are available online. E.g., http://www.ischool.washington.edu/robotpets/Articles/CHI2003_Hardware_Companions.pdf

Based on our four overarching categories, the most striking result was that while AIBO evoked conceptions of life-like essences, mental states, and social rapport, it seldom evoked conceptions of moral standing. Members seldom thought that AIBO had rights (e.g., the right not to be harmed or abused), or that AIBO merited respect, deserved attention, or could be held accountable for its actions (e.g., knocking over a glass of water). In this way, the relationship members had with their AIBOs was remarkably one-sided. They could lavish affection on AIBO, feel companionship, and potentially garner some of the other psychological benefits of being in the company of a pet. But since the owners also knew that AIBO was a technological artifact, they could ignore it whenever it was convenient or desirable.

If this position is correct, and if, in the coming years, children come of age with fewer interactions with live pets and more interactions with robotic pets, then our concern is clear. People in general, and children in particular, may fall prey to accepting robotic pets without the moral responsibilities (and moral developmental outcomes) that real, reciprocal companionship and cooperation involves.

Sounds right. Interesting thoughts relevant to the bot love problem as an aesthetic issue in this pdf.

23.

[Apology in advance: I didn't have time to go through everyone's comments, so maybe someone's already said the same thing.]

I'd say it's wrong based on the idea that if you kick a robotic dog, chances are, you'll kick a live dog, too. With the increasing similarity between reality and simulation (of all kinds), the human mind (I would think) makes less and less distinction.

I'm guessing there's a logical fallacy somewhere in what I just said, all of a sudden... =P

24.

I glanced at the first comment and now I feel stupid. =) No more of this posting without reading comments anymore...

25.

This is a fascinating thread, really! :)

I write software robots for MMOG's and part of that involves emulating convincing human behaviors. I had a few scattered thoughts on the matter, so I apologize if this is unorganized.

[edits] I decided not to bore everyone with anecdotes regarding my anthropomorphic bot AI in TSO, so to the point:

I've always wondered what would be necessary to bring AI to 'life', and the jury is still out (for me) on whether the human experience is unique in nature, and on how we are qualitatively different in our experience from animals, or for that matter, from robots or rocks on the ground. Complicate this further with notions of the soul if you wish. I experience my world subjectively, and I feel pain and sensation: these things are very real to ME. However, it's not clear to me that there is such an objective thing as pain. I cannot directly experience the pain of other people; I simply find it reasonable to assume they are as self-aware as myself, but their pain must remain untestable and unobservable to me, except as minimalistically expressed by biological and neurological definitions, and of course, by their outwardly apparent expressions of anguish.

If this is the case, and we cannot directly experience the pain of other human beings, how then can we know for sure whether any of our robots experiences pain, assuming we emulate it in their programming, any less qualitatively than we do? I think this is where we get into the moral/ethical issue. Can robots experience True Misery? If a robot appears outwardly to BE miserable, is it actually miserable, or is it merely stagecraft? Personally I think there is a definitive difference between a mere display of pain (the stagecraft) and the personal, subjective experience of pain.

This is tricky, but I think some of you are allowing that convincing stagecraft may somehow impart to an automaton an experience of its own, by virtue of which we as human beings would owe it some sort of social consideration, which I really think is inappropriate. Just because something *appears* to be suffering does not mean that it is. I would offer that a virtual kitten is no more capable of suffering than a kitten in a photograph, a pencil drawing, a kitten made of ceramic, or what-have-you. Everything is representation at this point - we aren't yet at the point, technologically, where we should worry whether virtual entities require the same consideration as biological ones.

But one day the technology will arrive. Is there some threshold of technological sophistication which must be crossed before a robot may 'feel'? Perhaps there are degrees of experience, and if so, should we accord robots the same consideration as we offer lesser natural organisms, such as ladybugs?

I tend to regard human beings as superb and incomprehensibly complex machines, but nothing more. Living organisms are merely functional systems, mechanical and chemical in nature, composed of the same basic matter as everything else in our world, be it a man, a dog, a leaf or a rock. What I cannot personally reconcile with this description is the fact that I seem to experience a fine self-awareness, a unique point of consciousness which *I believe* separates MY matter qualitatively from all other matter. The problem is that, although my awareness seems to be a real thing for ME, I cannot prove to anyone else that I possess this 'thing', nor can I prove that anyone else possesses it - I cannot even accurately define it. If this is the case, how can we ever know with any certainty when our AI creations join our 'club', per se, and possess bona fide sentience?

On another thought:

I assume most of us have seen the first Star Trek movie, where the Enterprise crew encounter a magnificent alien, which ultimately turns out to be a machine-being constructed around our first Voyager probe. Some alien machine race enhanced Voyager so that it might ultimately find its way home, in search of its Creator, wanting to know if there is anything 'more'. In the end, 'V-ger' physically incorporates itself with a human, thus evolving and acquiring for itself a share in the human experience - becoming one with 'God', per se.

I'm going somewhere with this..

I think that a lot of our questions will be answered when cybernetic medicine approaches a level that allows us to 'feel' with replacement limbs, see with electronic eyes, augment our brain function with memory devices and so forth. At that point we may more easily allow that pure machines may feel and experience also, and we may come to more profound perspectives about our own nature as well.

On one more thought:

I frequently lament to my friends over the pitiful state of NPC AI in most MMOG's nowadays, citing my own robot as being 100 times smarter than typical game-owned NPC's. NPC's are usually dumbed down so the AI won't tax the server CPU, but it occurred to me, why not offload some of this to another computer farm somewhere? I have 30 computers in my home, most of which keep my avatars toiling away all day at some mundane task for financial benefit, BUT what if all my computers were tasked to enhance game CONTENT? Wouldn't that be cool, to have a computer-controlled NPC that actually had needs, personality, made assessments, travelled, made trades, etc.? Just a thought.
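Roughly what I have in mind, as a toy sketch (the host name, wire protocol and needs model below are all made up for illustration, not any real game's API): the NPC "brain" runs on a spare machine, tracks a few simple needs, and just streams high-level actions back to the game server.

```python
# Hypothetical sketch: an NPC "brain" running on a spare box, sending
# high-level actions to a game server over a plain TCP socket.
import json
import socket
import time

class NpcBrain:
    def __init__(self, name):
        self.name = name
        # Invented needs model: values decay toward 0, lowest need wins.
        self.needs = {"food": 1.0, "money": 0.5, "social": 0.8}

    def tick(self):
        # Needs decay over time, so behavior shifts without scripting.
        for k in self.needs:
            self.needs[k] = max(0.0, self.needs[k] - 0.05)

    def decide(self):
        # Pick the most pressing need and map it to a high-level action.
        need = min(self.needs, key=self.needs.get)
        actions = {"food": "buy_meal", "money": "trade_goods", "social": "visit_tavern"}
        return {"npc": self.name, "action": actions[need]}

def run(host="gameserver.example", port=9999):
    # All the heavy thinking happens on this machine; the game server
    # only ever receives small JSON action messages.
    brain = NpcBrain("innkeeper_01")
    with socket.create_connection((host, port)) as conn:
        while True:
            brain.tick()
            conn.sendall((json.dumps(brain.decide()) + "\n").encode())
            time.sleep(5)
```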

26.

It seems to me that everyone has their own unique perspective on this issue. We can categorize these perspectives into cultural norms, religious norms, etc. Question: should everyone follow her own compass, follow one particular norm, or follow the norm of her choice?


One group may treat cats the same regardless of physical composition. Another group may use physical form as the primary guide regardless of mental capacity.

The slippery slope previously mentioned applies to situations where someone or something is deemed less because of some discriminating reason.

In a world where people are starving in one locale while there is a surplus of food in another locale, the human race as a group is not sufficiently mature to...

Great philosophical discussion.

Frank

27.

Put me in the currently unconcerned camp.

I might destroy a robotic kitten with a shotgun for laughs, or because it creates the appearance of doing something forbidden (killing a kitten) without actually doing it. I am sure there are other motives as well. My three-year-old might torture the kitten out of a desire to change its shape or color, or out of some three-year-old desire that I can't explain.

There are just too many possible motivations for me to say that the behavior is bad by its very nature.

I do think it's an interesting thread, and some of the questions about real vs. artificial pets and what will happen as technology advances are thought-provoking.

28.

"Jeff says: It's sick to want to hurt virtual characters, because it might mean you want to hurt real things."

No, that's not what I said. Try again.

29.

What the heck, I’ll clarify:

Consider the person shopping for a realistic robokitten because he wants to torment it, and when he torments it, wants it to look like it is suffering. It’s going to take a lot more arguing than I care to listen to in order to convince me that person isn’t just using the robokitten as a proxy for a real kitten.

Torturing kittens is bad. Torturing robokittens is not bad, but someone who does it all the time...?

I’d keep them away from my cats.

30.

For me, the line gets drawn at self awareness. It is wrong to hurt a kitten because the kitten knows it exists. If the robokitten knows it exists, then you shouldn't hurt it.

If the robokitten doesn't know it exists, then it's no different than a punching bag... it's an object with a purpose that may vary by individual.

I don't see a problem making a robot of any shape or size for any particular purpose, as long as it is just an object with a purpose. And market forces will probably dictate just that... we'll see robots designed for whatever purposes people will pay for.

31.

Andrew> "All that said, sometimes I'll want to put myself in some painful situations, to see how I react. I'll want to mistreat these characters from time to time, to test my own personal limits, to remind myself how it would feel if I did that in real life. This is for similar reasons that I seek out difficult films, plays and books; they're not always pleasurable, but they're enriching and perspective-broadening."

Yes, but how the art treats this sort of exploration is very important to how I perceive its benefits or detriment to society. (I suppose I should interject that I think it is naive to imagine that art can benefit society but can never cause it harm.) When mistreatment is characterized as fun or cool or is otherwise rewarded by the game design, I think it starts to shade toward the reprehensible. I imagine that Schindler's List is the sort of difficult film to which you refer, but IMHO, it would not take many artistic changes to turn it from a responsible and educational film into something utterly despicable. That's the power of art, and we may yet find that interactive art can be the most powerful and influential of all art forms. But it's just a game, right? And Schindler's List is just a movie.

--Phin

32.

To Dan and Julian's (somewhat) related points:

Sure, people mistakenly apply meaning, patterns and consciousness all over the place. We are highly evolved pattern matchers, so that is what we do. And it is an excellent point that why people do this is very worthy of study.

Games provide many examples of this. For example, when Road Rash was in Alpha testing, the sports bikes and Harley-style bikes had exactly the same tuning underneath but had different artwork associated with them. The testers on the game (as one might predict) had coalesced into two almost equal groups, loudly arguing that either the sport bikes or the low-riders were unfairly tuned!

33.

In the beta for Wish, some testers objected to there being corrals of horses in the starting town, that players could kill for experience. (I never killed one, though I attacked one by misclick, and it stomped me to death.)

34.

mm> If the robokitten doesn't know it exists, then it's no different than a punching bag... it's an object with a purpose that may vary by individual.


Being punched is the function of the punching bag. It’s designed for it. As are skittles to be knocked over, or Mobs in EQ to be “killed”. But if you trash the Mona Lisa painting, is “I consider it to be a punching bag” a good defense? Context and history play a significant part in my view. For a designer to “stress” a robot cat to check it is functioning as intended is one thing. For a child to “torture” his robot cat is another. If you think you are torturing something, I’d say you are a torturer. In my book, torture is wrong. And the wrongness comes not just from the effect on the entity being tortured, but also from the effect on the torturer.

The thing that is missing in the “kicking the robot dog is OK” stance is an acknowledgement that the kicker is affected, as well as the target. Kicking a heap of plastic out of the way is a different act, even if the physical object is the same. I’d say that moral responsibility has much to do with the world as you see it. For a lab tech, who sees an animal as a test system for a drug, one set of actions may be permissible. For a child with a family pet, another set applies. In a VW, what your avatar represents to you sets some limits to appropriate action. For someone who sees their avatar as a collection of numbers to be optimized, one set of actions is permissible. For someone who sees their avatar as a character with a story of its own, another set kicks in. Which is one reason the two playstyles don’t get on too well together. Like the sports bike and the Harley bike, two avatars with the same internal representation can be very different entities to their owners. And I’d argue that it is at the level of representation for the observer that the moral choices apply.

35.

I would agree that our actions with respect to an avatar/automaton/animal might reveal something about our own personalities, but I don't think our perceptions about that object should effectively determine what our moral obligations are to it. A real kitten should be accorded a certain level of consideration, whereas a cardboard kitten need not. It may well be that I can be fooled into believing a cardboard kitten is real, but that's MY issue - my moral obligation to the kitten doesn't extend beyond the actual cardboard. People may (and do) perceive things erroneously at times, so I'm personally quite leery of the 'reality is how you perceive it' thinking, especially when the task involves perceiving other actors who are potentially self-aware as well.

36.

If you buy the Mona Lisa, and you wish to destroy it, it is not illegal or immoral. Foolish? Perhaps, but it is your painting. There are millions of other paintings that people will not have the opportunity to enjoy either, so the Mona Lisa is only different from the millions of others in that you've heard of it.

I don't believe you can logically impose a moral judgement on an act if no entity can claim injury from that act. Sure, this goes on, but I don't think it's a good idea.

As for the effect on the user who may be 'torturing' their robokitten, all I can say is that the robokitten is not aware (again, assuming that it is, in fact, not self-aware), and it's not suffering, and it's certainly better for that purpose than a real cat. Being at the top of the food chain, one thing I can be sure of is that that robokitten, which is not aware, can exist only to please me.

In the absence of some sort of intelligence, what we are talking about is legislating how we play with our toys. This is neither a wise nor a productive way for our legislators to be spending their time.

37.

M.M > Being at the top of the food chain, one thing I can be sure of is that that robokitten, which is not aware, can exist only to please me.

That sounds like a far stronger sense of possession than I would lay claim to. I suspect that difference would push us towards quite different views on a number of topics. I wonder how such a difference maps to the powergamer/roleplayer axis in VWs? Or to the famous four Bartle types? Could “Is it OK to kick a robot dog?” be a useful predictor of whether you would enjoy a particular VW?

38.

Hellinar> That sounds like a far stronger sense of possession than I would lay claim to.

If a man built a robot that is truly not aware, it's not much different than a wristwatch. Is it immoral for me to smash my wristwatch? I don't really see a difference.... even if I feel like I'm torturing my wristwatch. That's my business, isn't it?

No harm, no foul. And according to Mithra, I've got a pretty liberal definition of harm.

After all, we are intelligent beings. One of the advantages of being human is that we can improvise and make an object more useful than it seems at first. In my opinion, however you spin it, regulation seems unreasonable where no injury is taking place.

39.

The Mind's I, edited by Douglas Hofstadter (of GEB fame) and Daniel Dennett. No one should have this discussion without reading it, and no one who would care to have this discussion should miss out on it.

Re: the whole 'but we're self aware and they aren't' line - as alluded to previously, I have no way of knowing you're self aware - I'm taking your word for it. The Turing test makes more and more sense, and torturing simulations makes less and less - if you think it's a cat you're torturing, then there's no difference.

40.

Fascinating discussion, and amusingly Western. Amusing, at least, until the unreformed passive deconstruction rears its myopic head:

"If you buy a The Mona Lisa, and you wish to destroy it, it is not illegal or immoral. Foolish? Perhaps, but it is your painting...the Mona Lisa is only different from the millions of others in that you've heard of it."

Well, no, at this point in history, the Mona Lisa belongs culturally to at least a billion people, regardless of who 'owns' it. The fact that we've heard of it isn't just a trivial difference--it's the defining difference. Point being, ethics are at least as public as they are private.

"The trouble, I think, is that with advances in technology, suspension of disbelief becomes easier and the bots become uncomfortably realistic" -greglas

In some religions today, a photograph or even a decent pencil rendering of another human is forbidden--largely because it is "uncomfortably realistic". Meanwhile, the rest of us spend hours each day viewing media, watching movies in theaters--precisely because it is realistic in a way we've adapted to. We don't argue the ethical nature of moving pictures, and they add enormously to the collective social experience. I think that the "uncomfortable" preceding "realistic" is an implicit acknowledgement of the current set of mores' inability to address the issue.

But to get back to the Western/Eastern angle...look at Japanese popular media. You'll find that they've been anticipating and working through some of these 'academic' questions for years. And who makes Aibo, after all? Sony. And Sony will most likely be the first to offer a consumer humanoid robot companion (hair, skin, walking, conversation, etc). We think the idea is humorous, but they're already working on it in earnest.

"My experience is that creative works take on a life of their own after a while...For me, its when you start listening and responding to the work in progress that creation gets really interesting." -Hellinar

"I think that alot of our questions will be answered when cybernetic medicine approaches a level that allows us to 'feel' with replacement limbs, see with electronic eyes, augment our brain function with memory devices and so forth. At that point we may more easily allow that pure machines may feel and experience also, and we may come to more profound perspectives about our own nature as well." -Mithra

Taken together, these sentiments show the way. The future isn't a world of humans and robots as per pulp manga. The future is humans as robots. Cybernetics and bionic extensions are beginning to blur the line. The Elizabethan populace, and most of its intelligentsia, would have been horrified by the "dehumanizing" implications of the Internet. We're bemused at the thought of treating a construct as a peer, but one way or another it's certain to happen. Intelligence has an affinity for intelligence, and whether we build what we now clumsily refer to as 'AI' into humanoid robots, or the architecture of our homes (most likely both), we'll need to end up with an amiable relationship, because in a hyperconnected world, most everything we relate to relates back to us...

41.

Staarkhand wrote:

Re: the whole 'but we're self aware and they aren't' line - as alluded to previously, I have no way of knowing you're self aware - I'm taking your word for it. The Turing test makes more and more sense, and torturing simulations makes less and less - if you think it's a cat you're torturing, then there's no difference.
--------------------------------------------------

I haven't read the book you reference, but I'll look it up shortly. I've just read an introduction to the Turing test... while I have to wonder if a computer really has to be able to convince a human that it (the computer) is a person in order to be considered intelligent, I'll submit that it's as good a starting place as any. In my mind, however, artificial intelligence that's as good as a cat's brain would still qualify as AI.

Taking one's word for self awareness should be sufficient, as long as one's word is a product of some sort of thought, and not just a pre-programmed response. If it's good enough that you have to ask whether it's real or pre-programmed, then it has passed the Turing test, especially if I'm in the same room with it. And perhaps, in that case, kicking it would be wrong.

As to whether I think it's a cat I'm torturing, I suppose that knowledge (or lack thereof) must affect intention. But if I KNOW it's not a real cat, and I kick what I KNOW is an artificial cat BECAUSE I want to kick a real cat, am I (or rather, should I be) breaking the law? Or am I just engaging in a fantasy? And is it important that my fantasy make sense to you in order that I should be able to engage in it? Remember, we are assuming no injury is taking place.

To put it another way, if I stab a mannequin 57 times with a butcher knife because I think it's a person looking at me funny, have I committed murder? Attempted murder? Assault? Assault with a deadly weapon?

Now, on the other hand, if you don't want to hang out with me because I exhibit strange behavior, that's perfectly understandable. Some things are better kept to one's self.

42.

MM, if you've read about the Turing test, you should take a look at Searle next.

Stabbing a mannequin with the belief that it is a person might constitute attempted murder.

43.

As far as stabbing mannequins or destroying art, there's a difference between morality and crime. In the vein of this discussion, if you stab a mannequin thinking it's a human you haven't strictly murdered anyone, but the moral implication is unchanged.

Heck, although no one can ever agree on morality in itself, figuring out how to enforce morality is more complicated by orders of magnitude. All we can try to do is punish actual behavior, while skirting the slippery slope down to Pre-Crime and ThinkPol.

The original topic of this thread was the treatment of animals, viz. virtual ones. Animals and automata share one thing - their legal rights are assigned to them by humans. So far the secret, if you're an animal, is being fuzzy and having a propensity for sitting in laps - scales, antennae and general aloof behavior will get you exterminated or eaten. Expect this discussion to have a wider audience in a few years when voice technology and adaptive neural nets result in AI's that defend their own rights to life and happiness using human language and "emotion". Since humans define intelligence as "things that are like me", realistic skin and a good voice synthesizer will most likely be more convincing than a construct's ability to debate, say, the rights of virtual animals.

44.

Although stabbing a mannequin could be considered attempted murder, any sanctions brought against the perpetrator would be on behalf of the intended victim ... NOT the mannequin. The mannequin physically proxied for the intended victim, but most importantly revealed the perp's real criminal intent toward an actual living person. I don't think this scenario can be used in support of any argument to the effect that "we have moral obligations to inanimates because we perceive them to be real". The criminal in this case simply made a perceptual error, and it's neither here nor there whether the mannequin was destroyed. Furthermore, there is no consideration for whether the mannequin was stabbed ineffectually or completely obliterated. The latter would seem to constitute a 'successful' murder, if this was really about the mannequin at all.

Conversely, I don't think that a person who accidentally killed someone else would be absolved of the crime because they believed they were stabbing a mannequin. The law seems to hold us responsible for all of our actions, even those with unintended outcomes; those who err in that regard are guilty of manslaughter, criminally negligent, etc.

45.

Staarkhand,

I recall a time in my criminology class when I told the professor "you cannot legislate morality" - to which he responded, "why not? we do it every day". He was right. I would say most (if not ALL) of our laws are founded in some moral principle. Laws reflect our obligations to each other, both to individuals and to society as a whole, covering everything from murder, to speeding, to paying taxes or protecting the environment.

46.

Re the morality of stabbing mannequins thinking they are people or vice versa, you might find it interesting to look at the concept of moral luck:

http://www.iep.utm.edu/m/moralluc.htm

47.

Mithra,

I agree. We do this all the time, and with fairly limited success. It's happening as I type in Massachusetts, so how could I deny it.

My point was that although crime has a sort of "majority vote" basis in morality, there's still a difference - saying "I won't get tried for murder" and concluding "it's ok" is pretty shortsighted. I don't want to discuss whether I can go to jail for hurting a virtual cat, only if it's right or wrong.

48.

Well we all have to agree that it is wrong to want to be hurtful. The golden rule should always apply in our dealings with people and other living things.

But these dealings aren't with people, they are with robots. And not every robot can claim an injury.

You can say that it's wrong to kick robotkitten if you want. Much of what is moral or not is completely arbitrary. The real question is what are you going to do about it? If there's no injury, I don't think you should do anything.

49.

greglas>might find it interesting to look at the concept of moral luck:

I find this an interesting topic (although I don't know what it has to do with tormenting virtual animals).

If you were driving, fell asleep at the wheel, ran your car into a ditch and were hospitalised, would you get jailed? Probably not.

If you were driving, fell asleep at the wheel, careered off the road down a railway embankment onto a track, where you got out, phoned the police to say your car was on a railway track, then a minute later a passenger train hit your car, derailed, skidded some way along the line into the path of an oncoming goods train resulting in a collision that killed 10 people, would you get jailed? Yes, for five years.

I guess this is a warning for other drivers who fall asleep at the wheel, career down a railway embankment onto a track where a minute later a passenger train hits their vehicle and derails into the path of an oncoming goods train, killing 10 people.

Richard

50.

Staarkhand, MM :

I think we are in the same camp (more or less). I tend to use crime and immorality interchangeably in my examples since they map quite closely. I don't think there is anything in the public interest that might support a law limiting virtual animal abuse, but then again there are those that say video games caused Columbine (which I disagree with). Again, the concern is with respect to the real people who were injured/killed, not the well-being of the Doom mobs that purportedly brought those kids to the killing state of mind. Supporters of legislation might offer slippery-slope arguments against avatar abuse, but nothing more profound. I guess it depends on the particular context of the activity and whether virtual killing is actually a gateway exercise to more sinister behavior. That would be a hard position to defend given the number of people who play these games and are 'normal'.

As far as the question of whether tormenting virtual animals is intrinsically wrong... shooting from the hip, I would probably say No. I'd say it's closer to a Symptom of something *already* wrong. If someone violently butchered photos of women in an adult magazine, he's not committing a crime per se, although it may be a predictor of crimes to come. Clearly this person has a problem, but tearing up a magazine itself is not a moral transgression. He's BAD on his own merit, but not for having punched holes in paper. The Columbine boys likely enjoyed playing Doom levels modelled after their school because they were already on the killing path, not the other way around. Going now back to the issue of robots, I think they are generally understood to be unfeeling inanimates, at least in the West, much like photos or dolls, etc. I don't think we really have any moral obligation to them at this point in time, save for their usefulness at predicting future delinquency, which isn't clearly effective to my mind either. Probably generalized edicts are not appropriate for the phenomenon; we should examine each person's motivations case-by-case before we determine concern.

51.

Perhaps this is a little off-topic, but I started thinking about the differences between hypothetically self-aware entities that exist in robot form (those that perceive our world) and those which exist totally within simulated environments (VW's). Does our anthropocentrism lend MORE consideration to the physical robot as a peer than a purely virtual one (say of equal mental sophistication), simply because it exists in the natural world?

I generally don't find Jim Carrey's works particularly enlightening; however, in The Truman Show, the director states that "we accept the reality that presents itself" as the reason Truman is not expected to discover the false nature of his universe (a large-scale TV studio). After thinking on this a bit, it struck me as intensely profound. It would seem, then, that a virtual self-aware entity, existing solely within the context of a VW, would never ever learn the 'true' nature of his universe, save for what insight 'God' allowed (read: VW admins and devs). (Let's call this being a VSAE for short.) The VSAE could only ever acquire knowledge about his world to the extent that the VW mechanics allowed. The VSAE could never know or understand the levels of dependency his existence required (i.e. servers, programmers, IT people, internet, electricity, etc).

It raises the question: can our own reality be just as dependent on an as-yet incomprehensible parent system to which we have no window? If so, how many iterations are there? How many 'child' realities are nested within each other? If we as human beings are capable of creating intelligences that rival, nay, exceed our own, given a long enough timeline, along with VWs of ever-increasing complexity, how then can we be sure our own world is not merely the last construct in a series of constructs? Could our Creator be an absentee administrator? The notion almost lends a sense of possibility/hope to someone like myself, who typically disregards the notion of God for lack of evidence and untestability. It never occurred to me until the last few years that 'God' need not exist in the same physical universe as we do. In fact, if we take the VW for example, it doesn't make sense that he would exist in 'our' world at all, any more than we would exist wholly within our games. Consider for a moment that we may actually live in something akin to The Matrix, except for one critical difference: there are no biological humans in the real world to offer us insight into that parent reality.

Weird stuff. Thoughts anyone?

52.

Let me add one more thing:

If we, as the 'gods' of VWs, prefer to manifest ourselves virtually as we perceive ourselves actually, and furthermore strive technologically to create our own peers, does this not echo certain biblical pronouncements, namely that God created man in his own image? Assuming there is a Creator, either in the traditional religious sense or in the 'administrator' sense I offered above, can we now know something about God, not merely on the merits of some biblical text, but based on observations of what we ourselves do when we create child realities? I guess I'm asking whether it's reasonable to assume God (or gods) may have created our reality to mirror Their reality in some way, much as we were purportedly created in His image, for the exact same reasons we currently strive to mirror our own reality in VWs. It's very interesting to consider that there could be infinitely branching levels of existence, with multiple child realities blossoming at every level. :)

53.

Mithra, you might want to look at Descartes and Borges with regard to your 5:10 post.

Richard, I've always thought that moral luck is an evidentiary issue. The best evidence of culpable behavior is the forbidden result. You're right, though: we're no longer talking about abuse of robokittens.

Back on topic, this seems relevant:

http://www.ischool.washington.edu/robotpets/preschool/

Results here.

First, what does it mean to morally care about an entity that (as the majority of the children recognized) is not alive? In this sense, a person can “care” very deeply about a car they have owned for decades, and cry when it is finally towed to the junkyard; but that would seem to us a derivative form of caring, supported only by the person’s projection of animacy and personality onto the artifact, concepts which may first have to be developed in the company of sentient others. Second, to the extent interactions with the robot partially replace children’s interactions with sentient others, and as long as the robot only partially replicates the entire repertoire of its sentient counterpart, then such interactions may impede young children’s social and moral development.

Those interested in this thread might also glance at this (which was found on Professor Mishra's syllabus).

54.

Mithra:

Here's another version of the universe-as-simulation idea, rather more pointed than Borges' or Descartes's:

http://www.simulation-argument.com/

The argument here -- originated by Nick Bostrom, an Oxford philosophy professor -- is that given what we know about the evolution of technology and the cosmos, it is in fact much more likely that we *actually* live in a simulated world than that we don't.

Not "what if." Not "let's suppose." The question here is: "Welcome to the Truman Universe, now what?"

55.

Julian Dibbell>Not "what if." Not "let's suppose." The question here is: "Welcome to the Truman Universe, now what?"

Let's hope we don't crash it (or that if we do, they have us backed up).

Unfortunately, this "we're all living in a simulation" argument can be passed upwards indefinitely. If we're living in a simulation, so might be the people who created our simulation, and so might the people who created their simulation, and so on ad infinitum.

We might be living in a simulation, but so what if we can't contact the simulators and they don't want to contact us?

Richard

56.

As for staying on topic:

I still see the moral/ethical issues here as fundamentally related to questions of avatar and other personal rights, rather than of robot rights. In other words, the question is not, "Are you a bad person for torturing a robot kitten?" but "Are you a bad person for torturing someone else's robot kitten?"

If people don't have special feelings about their robot kittens, then it's a simple property issue. But if the sophistication of AI has reached a point where, for legitimate emotional and/or cultural reasons, people do develop strong feelings about their bots, then it gets trickier. Vandalizing a house is one sort of harm; vandalizing a house of God is another. (Or to take it back to an earlier example: pissing on somebody's wine and crackers is one thing; desecrating the Host is another.)

See also "A Rape in Cyberspace." There Mr. Bungle humiliated his victims by violating their online embodiment -- their avatars. Would his "crime" have seemed as outrageous if he'd just been talking trash at them? Does the peculiar attachment people feel toward certain technological extensions of self (whether bots or bodies) not make a moral difference?

57.

Mithra, have you ever heard of Noctis? It's odd you bring up the whole god thing, because that program simulates a universe that can be explored... stars that you can fly between and planets that you can land on and explore. The graphics aren't so hot, but I think you'll find the concept quite interesting.

While I'm hesitant to jump on this particular bandwagon, I am no more qualified than any of the rest of you to say what is possible. So if you find a way to petition 'God' for an end to disease, world peace, or any of that good stuff, let me know and I'll sign. :)

On moral luck: true 'morals' rely on intentions. If I set out to commit act A, but act B happens instead, you will treat me as if act B was what I was trying to do (if that is how you perceive the act), but this is your mistake, not mine. I may benefit or suffer from your making this mistake, but my intention, good or bad, is what makes me moral or immoral. E.g., if someone was convicted of a murder they didn't commit, you wouldn't call that person immoral if you knew the truth, would you? So if moral luck exists, it can only exist for observers of an act, and not that act's participants. (Though if I'm doin' time for murder, that is probably cold comfort.)

I don't want to sound like a flake, but with the technology that we have today, I don't see how it's possible that we won't live to see something very Matrix-like. Sensory input is the key... we can already make the blind see, and we can control robot arms with our brains. How far off can we be? If we can keep a brain alive in isolation, then isn't that immortality to some degree?

Anyway... sorry if all that is totally irrelevant.

58.

Julian -- I LOVED that "rape in cyberspace" article! Who wrote that? Wasn't it part of a longer book? :-)

I think we were initially focusing on the obligations of the AIBO owner with respect to the AIBO. Using that framework, I think it is clear that I can delete my avatar, just like I can get rid of my car or mistreat my prosthetic limb. The avatar, like the bot or the tin can, is property to me.

I attempted to dodge the cyborg ethics inquiry in the initial post by cordoning off the robo-kitten abuse ethics issue from an avatar rights issue. If you merge technology with personhood, you get a whole new set of problems.

But you're right to call me on that distinction. Perhaps we can't draw a completely clear line between a claim that you've abused *my avatar/me* (the Mr. Bungle situation) vs. a claim that you've abused *my AIBO/me*. Both claims might be understood as simple property issues. But the avatar is arguably a cybernetic extension of the self. Does the AIBO abuse situation raise similar issues?

I wonder if there might, some day in the distant future, be a legal dimension to the question you raise (shudder). E.g., loss of companionship:

Colorado lawmakers are entertaining a bill that would make dogs and cats legal “companions” instead of property, effectively allowing people to sue veterinarians or animal abusers for “loss of companionship.”

http://www.cfif.org/htdocs/legislative_issues/state_issues/animal_cruelty_tort_reform.htm

59.

As for rape in cyberspace....

Things that are wrong in the real world should be wrong in cyberspace too. If my whole argument about robot rights has to do with being able to claim an injury to one's self, then destroying another's robokitten is at the very least vandalism (i.e., yes, it is wrong). It's just not the robokitten claiming the injury; it's the owner.

You may notice that self-awareness plays a role in our justice system now: breaking out the window of a car is a lesser crime than throwing rocks at a cat.

So to get back to the point: while I am certain that rape in cyberspace is wrong, I don't think it's time yet to punish behavior in cyberspace, as not enough people understand the effect that it has on us, and therefore don't understand the consequences of their actions. That will change over time, and all of these issues will be re-examined by people probably not as bright as us. :)

60.

An interesting point was made:

"If a man built a robot that is truly not aware, it's not much different than a wristwatch. Is it immoral for me to smash my wristwatch? I don't really see a difference.... even if I feel like I'm torturing my wristwatch. That's my business, isn't it?"

I presume this question was meant to be rhetorical. I can't help but answer "It *is* immoral to smash your wristwatch", however.

Immoral and illegal are very different things. While what is illegal is usually driven by what is immoral, illegality, by its nature, can only deal with what occurs. It runs into an icky morass whenever it tries to bring "intent" into the picture. Morality, on the other hand, is all about intent. The results of the actions are irrelevant.

So let's get back to the wristwatch. You seem to be arguing for a very strong idea of ownership. I disagree with such a powerful system. Ownership is not a one-way street. When I own something, I also have a responsibility to it. Just because I own a car does not make it right for me to fail to maintain it. We often frown on those who fail to properly maintain a car and drive it into the ground long before its likely expiry date, and look well upon those who keep their vehicles in good condition.

There is no legal requirement to maintain a vehicle (beyond minimal safety standards), but I would say there is a moral one.

"I" does not stop at my epidermis any more than "I" stops at the blood-brain barrier. Mistreating something I own is mistreating myself. Furthermore, mistreating others is mistreating myself.

This doesn't get us any closer to whether we should try to impose sanctions on those who mistreat their robotic kittens. We don't impose sanctions on those who intentionally damage their wristwatches (though we do impose sanctions on those who intentionally damage their own bodies), but that is more a result of keeping the law focussed on more important issues than of a lack of moral concern on the part of the populace.

- Brask Mumei

61.

Clearly I have a lot of reading to do, though it's not surprising to learn there is a body of work on this already. :) Having not read any of it (yet), I'm still ogling some interesting possibilities, namely the ascent (or descent) of awareness from one reality to the next. A creator of a VW should be able to extract a purely virtual entity and embody it as a robot in the creator's own world, or, conversely, the creator might be able to enter the child reality himself, either permanently (given the technology) or temporarily, much as we do when we play these games. What boggles the mind is not so much trying to imagine a parent reality, assuming we are in the simulation, but imagining the realities parent to it.

Anyway, if anyone wants to start a new thread I'll ramble on there; I realize I'm off topic.

62.

Mithra, why are you focused on parent/child relationships for realities? You could set up a logic structure in which "my house" is a child reality of "my street", which is a child reality of "my city"...but that would be a complete mischaracterization of how we actually experience going to work. I push for a less hierarchical conception of the future of virtual realities. For some amusing sci-fi on the subject, try Greg Egan's "Permutation City" :)

And might I say that I find Brask's morality of inanimism fairly fascinating, if a bit shaky...I'd love to see it hold up under scrutiny to provide some agitation to the fading but still strong fashion of postmodernism.

63.

In reference to Brask, again I say that much of what we call moral or immoral is arbitrary. It doesn't matter if what I do is immoral if I'm not going to be punished. Without punishment, we have social acceptance. So do you punish me for breaking my watch (or my robokitten), or not?

64.

Euphrosyne, I think I was allowing that realities might be distinguished from one another by their non-transferrable objects, entities and rulesets. For example, a completely self-aware Everquest NPC exists in a child reality in the sense that its creation, continued existence and inevitable extinction are dependent upon server hardware, programmers, technicians, etc. within this reality. There are machinations and factors in the parent reality that directly affect the continued existence of the child reality - things which may never be sampled directly by the NPC's own efforts. In fact, things like electricity and the internet may well not exist in the world of Norrath. I suppose we could agree that a sentient EQ NPC could be said to exist in our reality along with us, but that would be our god-like perspective, not the NPC's. The NPC could only sample our reality insomuch as we allowed it a window onto it. This is what I meant by parent-child reality. When you say your house is a child reality of your street, I believe you employ a different usage of the word. Self-aware beings (namely you) may, at your own discretion, move easily from your house to your street to your city, and although these locations may be nested logically, I argue they all exist within the same contiguous, common reality. You may transit from location to location and sample each of them without experiencing a change in the natural ruleset of your universe. This is different from the EQ NPC who may wish to visit your local mall. The NPC would need some mechanism by which it could ascend into the parent reality.
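To put that in toy code (a purely illustrative sketch; the class names and zone names are made up and have nothing to do with how EverQuest is actually built): the NPC's whole epistemology is whatever interface the parent reality chooses to expose, and nothing in that interface need mention servers, electricity or programmers at all.

    # A toy sketch of a "child reality": the simulated agent can only query
    # the interface its world exposes, never the host machinery running it.

    class ChildReality:
        """The only window the NPC has onto existence."""
        def __init__(self):
            self._zones = {"Qeynos", "Freeport"}   # facts of the child reality

        def visible_zones(self):
            return set(self._zones)                # sampleable from inside

        # Deliberately absent: nothing here returns "server uptime",
        # "electricity" or "programmer on call" - the parent reality's
        # dependencies are simply not part of the exposed ruleset.

    class NPC:
        def __init__(self, world):
            self.world = world

        def explore(self):
            # Everything the NPC can ever learn is mediated by the world's API.
            return self.world.visible_zones()

    npc = NPC(ChildReality())
    print(npc.explore())   # {'Qeynos', 'Freeport'} - and that is all there is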

I suggested the possibility that realities may branch like a tree, but in retrospect all realities would need to be predicated upon a single master reality, the True Nature of which may remain forever unknowable. I just wanted to throw out for discussion the possibility that, no matter how much we learn from science about our universe, it's at least possible we aren't looking at the Real Thing and may well never know, much the same way an EQ NPC in a closed simulation could never learn anything about our world. I'm excited about it personally, because I had not until now heard a good logical explanation for why 'God' might exist. Don't get me wrong, I'm not trying to sell the idea of God to anyone; I'm just privately amused that it's at least possible to arrive at a religious conclusion by way of science and logic. :)

65.

MM> It doesn't matter if what I do is immoral if I'm not going to be punished. Without punishment, we have social acceptance. So do you punish me for breaking my watch (or my robokitten), or not?

Maybe I’ve been spending too much time in Egypt (A Tale in the Desert), but I am pretty skeptical of legal punishment as the sanction for immoral action. It seems to me like a desperate last resort, rather than the first line of defense. If you get known as the guy who tortures his watch, the punishments you run into are more like not being invited to particular parties, or being passed over for promotion, etc. More subtle than a court summons, but still something of a deterrent.

The ATITD game world is notable for having a system whereby players can introduce laws to modify the game world. The system functions well as a way of introducing feature requests, which is not what the developers intended. But as a system for delineating moral and immoral behaviour in Egypt, it hasn’t fared so well. Most laws to limit player behaviour have been voted down.

The general feeling is that a law to limit a particular behaviour could cause more problems than it solves. The community is small enough to solve problems face to face. Another aspect I put a lot of weight on is that laws in Egypt are coded into the server, and therefore enforced. The familiar-world convention is that the law will turn a blind eye at appropriate times, so really silly laws can go on the statute books and be safely ignored. I think ATITD is pioneering something really profound here. As more and more objects become imbued with software, and that software enforces legal requirements, the familiar world will look more and more like Egypt. I don’t think our legal system was really built with the expectation that laws would actually be enforced.
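To make that contrast concrete, here is a rough sketch (in Python, purely illustrative; ATITD's server obviously isn't written like this, and the law text and names are invented) of the difference between a law that merely sits on the statute books and one that is enforced in code:

    from dataclasses import dataclass

    @dataclass
    class RoboKitten:
        owner: str

    # Familiar-world style: the law exists as text; enforcement is a
    # separate, discretionary step that may "turn a blind eye".
    statute_book = ["No one may smash another person's robokitten."]

    def familiar_world_smash(actor, kitten, enforcer_is_watching=False):
        # The act itself is always physically possible; punishment is optional.
        if enforcer_is_watching and kitten.owner != actor:
            return "smashed, then fined"
        return "smashed"

    # Egypt-style: the passed law *is* server code, checked before the act,
    # so the forbidden action simply never happens.
    passed_laws = {"no_smashing_other_players_robokittens"}

    def server_enforced_smash(actor, kitten):
        if "no_smashing_other_players_robokittens" in passed_laws and kitten.owner != actor:
            raise PermissionError("Action rejected: the law is enforced in code.")
        return "smashed"

    kitten = RoboKitten(owner="MM")
    print(familiar_world_smash("vandal", kitten))   # "smashed" - the law was ignored
    # server_enforced_smash("vandal", kitten)       # would raise PermissionError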

This is all rather off topic though…

66.

Points taken, Mithra, but my emphasis is more on the NPC in your example. The implication of your standpoint is that all possible aspects of Everquest, as a child reality of ours, are knowable to us. But isn't it likely that a "completely self-aware Everquest NPC" would know something about that reality that we never could, or that at the very least would be inappreciable to us?

"in retrospect all realities would need to predicate upon a single master reality..."

This is the historical assumption, but contrary to some recent work in quantum physics and cosmology. The neurological debate is as open as it ever was. As you suggest, we may actually end up proving that other realities exist--with the same equations that guarantee we can never learn anything about them, and can never get there from here (and vice-versa).

67.

Somewhat off-topic, but here's a thought for MMORPG designers with potential for cross-platform synergies between toys and games:

What if you could download the "personality" of your SONY AIBO III onto your SONY memory stick and upload it into SONY Everquest III as a pet companion for your avatar?

Same AIBO, right? Just sans the plastic...

People like AIBOs. People like pets for avatars. Just an issue of whether it would be worth the bother and resource drain...
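As a back-of-the-envelope sketch of what that hand-off might look like (the file format, field names and trait values are all hypothetical; no such SONY feature exists, as far as I know), the robot would dump its learned "personality" to a portable profile, and the game would re-hydrate it as a companion's behaviour parameters:

    import json

    # Hypothetical: dump the AIBO's learned traits to a portable profile
    # (in reality this would live on the memory stick).
    def export_aibo_personality(path="aibo_profile.json"):
        profile = {
            "name": "Rex",
            "traits": {"playfulness": 0.8, "obedience": 0.6, "curiosity": 0.9},
            "learned_tricks": ["sit", "chase_ball"],
        }
        with open(path, "w") as f:
            json.dump(profile, f)
        return path

    # ...and re-hydrate it inside the game as a pet companion for an avatar,
    # mapping the same traits onto the game's own pet-AI parameters.
    def import_as_game_pet(path):
        with open(path) as f:
            profile = json.load(f)
        return {
            "pet_name": profile["name"],
            "ai_params": profile["traits"],
            "known_commands": profile["learned_tricks"],
        }

    pet = import_as_game_pet(export_aibo_personality())
    print(pet["pet_name"])   # same AIBO, just sans the plastic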

68.

Hellinar,

Legal punishment is the ONLY sanction for immoral activity (in our lifetime, at the very least). I mean, you can choose not to be friends with people because you think they are a jerk or a mean person or whatever, but others could just as easily call you a snob for holding that opinion.

If we place a legal sanction on it, then it is clear that we have a consensus on what is definitely not acceptable. Without some sort of system of laws, you don't have any agreement among the citizens. Take littering and pornography... there are people who would certainly call both immoral, but there is only a law against one (while there are laws against certain types of pornography, it is clear that there is general consensus among us, and that's why the law exists in the first place).

I guess what I've missed is that some people would rather discuss their own personal feelings on the issue rather than what we will do as a community when these issues really do come up. I have been arguing from the position of the latter.

69.

MM: You are correct in identifying what is likely behind the disparity of statements here. What we do as a community is an expression of a consensus on morality, and what we pass into law is a consensus of what the community thinks should be enforceable morality.

You are right in identifying that the law is the only robust way we have of looking at consensus morality. It would be foolish in the extreme, however, to conclude that that is all that morality consists of:

"I mean, you can choose not to be friends with people because you think they are a jerk or a mean person or whatever, but others could just as easily call you a snob for holding that opinion."

I don't see how the existence of the "snob" counter-response contradicts friendship-revoking as a form of sanction. It merely shows that by inflicting sanctions on others, I run the risk of having sanctions inflicted on myself.

You aren't ever going to get the complex set of behaviours required to avoid moral sanctions from all people written down (indeed, the requirements are contradictory in many cases!), but that doesn't mean such sanctions don't exist, and aren't enforced.

So, while we may discuss our own personal feelings, I would not be so quick to dismiss them as "own personal feelings". It is our own moral compasses, averaged into the magnetic domains of our cliques, that roughly align to form the bar magnet that is legality.

- Brask Mumei

70.

Brask, I must say that was a good post. I'll have to think on that.

I'll have to agree that the majority's "own personal feeling" is, in fact, the majority opinion.

However, I must disagree that we have any real sanction without laws (if this is even relevant, given the majority opinion). Without a law, I may smash my robokitten*. You may witness this and decide to sanction me if you like, but it will probably not provide much disincentive to smashing robokittens. What would you do to me, anyway? Refuse to tell me the time? In the United States, and in my perception at least, if it's not against the law, then it's ok (within reason).

Certainly, there are consequences for odd behavior in public. And if I tend to create a nuisance of myself with my robokitten-smashing, then it is entirely likely that I won't have many friends. But here is where the law comes in: if I smash robokittens in private, that's ok if it's legal. It's not ok if it's not legal.

And in that case, if we are taking away people's right to do things, I think we need to have everyone's interests at heart, not just the prudes'. Ok, so you nailed me. I'm a libertarian :)

-MM

*Ha! Back on topic!
