
Feb 09, 2004

Comments

1.

There are some well-rehearsed moral arguments in the AI community about what happens if you create an actual artificial intelligence (for example, would it be murder to switch off the computer it ran on?). There are also some well-rehearsed religious arguments (for example, could an artificial intelligence exist without a soul?).

The arguments in favour of not mistreating a virtual kitten tend to be of the kind: "If people get pleasure from hurting something they have anthropomorphised, they should not be encouraged to stoke the flames of this pleasure; otherwise, before you know it they could be hurting real creatures".

Richard

2.

Righto... And I think there's something to that argument about not mistreating virtual kittens.

But in the end, that line of thinking leads us inevitably to the "ban GTA" (the game not the blog) and "Doom is responsible for Columbine" camp, doesn't it?

So what's the right answer? When Mary Flanagan was talking at the State of Play conference, she was emphasizing her own interest in creating havoc in virtual play-spaces -- and pointed to a long history of that kind of play. Paper here. I wonder what her feelings are on the abuse of robotic dogs and virtual kittens?

3.

Er, yeah. I hope I don't sound too judgmental when I say that if you've often wished and dreamed you could kick a puppy without breaking any laws, and finally, FINALLY, a commercial on late-night TV catches your attention with an announcer stepping on stage and booming "WELL NOW YOU CAN!", then you have "issues".

Not that it's wrong to hurt something kitten-like, but that there's something freaky wrong with you for wanting to.

My word.

4.

You may want to check into the Creatures community, where there have often been assertions (fomented in part by the developers themselves, is my impression) that the Creatures are in fact alive by the standard dictionary definitions of the word. It's led to interesting debates over deliberate torture and other such behaviors among players.

5.

Thanks, Raph!

For those interested -- some posts here, here, and here.

6.

My daughter, at 12, quickly figured out how to hack Catz voxel specifications (then stored in text files) to create what may be described in anthropomorphic terms as mutant kitties with stretched bodies and distorted colors and features. She also figured out how to change the sounds. If I’m not mistaken, it was this very practice that led to Oddballz. At 17 she still likes to show any new friends her creations, though the software is now antique.
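(Purely as a sketch of the kind of edit involved: the snippet below assumes a made-up key=value spec format -- "body_scale=1.0" and the like -- not the actual Catz file layout, which I don't have in front of me. Everything here, from the stretch_pet name to the .cat extension, is illustrative.)

    # A minimal sketch, assuming a hypothetical plain-text pet spec made of
    # "key=value" lines such as "body_scale=1.0" or "fur_color=128".
    # The real Catz format is not this; all names here are made up.
    from pathlib import Path

    def stretch_pet(spec_file, factor=3.0):
        """Multiply every '*_scale' entry to produce a stretched mutant."""
        out = []
        for line in Path(spec_file).read_text().splitlines():
            key, sep, value = line.partition("=")
            if sep and key.strip().endswith("_scale"):
                out.append(f"{key}={float(value) * factor}")  # stretch it
            else:
                out.append(line)  # leave colors, sounds, etc. alone
        Path(spec_file).write_text("\n".join(out) + "\n")

    # Example: stretch_pet("calico.cat") triples every scale entry in place.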

She also owns probably 100 stuffed animals and nearly that many anime figurines. None of which she’s ever deformed in any way. You’ll never meet a more gentle soul. She’d be disgusted and appalled at the suggestion that her ‘hacking’ was in any way immoral or indicative of a warped personality. Software is not alive to her; it’s a tool/toy.

Many (most?) in her generation have no problem separating the virtual from the real.

Neopets.com’s economy is so broken that 99% of the pets are starving and have to get their food from the virtual soup-kitchen. I don’t see the SPCA calling the cops on the pet owners anytime soon.

Why do we keep bringing this up? Do we (designers) have some sort of ego/god-complex that makes us want our creations to become real? I’m in no hurry to go there. Thar be dragons.

Randy

7.

Jeff Freeman>Not that it's wrong to hurt something kitten-like, but that there's something freaky wrong with you for wanting to.

Those who enjoy tormenting virtual creatures might respond that they can't help it if there's something freaky wrong with them, and at least this way they can address their issues without being tempted to harm real creatures.

Of course, this same argument could be used to support a proposition that paedophilia in games is OK.

Personally, I have a strong suspicion that the "it'll lead you to do it in the real world" assertion is not the fundamental reason that people object to tormenting virtual pets. Rather, it's that they find the very idea of tormenting virtual pets intrinsically distasteful, and they don't want other people enjoying something that they find distasteful.

Put another way, if you're freaky wrong you don't get to practice your freaky wrongness because it freaks people out.

Richard

8.

Bartle wrote:
>Personally, I have a strong suspicion that
>the "it'll lead you to do it in the real world"
>assertion is not the fundamental reason that
>people object to tormenting virtual pets.

That's not really what I'm saying though.

My point is that if you want to torture kittens (and let's say this desire manifests as torturing things that look and act like kittens, but aren't really), then you don't really need to be "led" anywhere: you are already there.

More to the point:
> Those who enjoy tormenting virtual creatures
> might respond that they can't help it if
> there's something freaky wrong with them, and
> at least this way they can address their issues
> without being tempted to harm real creatures.

If one enjoys tormenting virtual creatures, more power to 'em. But if they torment virtual creatures because what they'd really like to do is to torture real ones, then, er... well let's just say there's a red flag there, for me.

As for doing it in order to address their issues, that sounds like therapy of questionable benefit.

9.

If harming a robot kitten would be bad because it is a reasonably good representation of the real thing, then where does that leave us with harming avatars? It seems to me that a well-crafted human avatar with a real player controlling it is much more closely representative of the real thing than are robot kittens. Or maybe kittens symbolize innocence in a way that adult human avatars cannot? If you make the avatar a 5-year-old, the moral freighting changes a bit, doesn't it?

--Phin

10.

Phin> If harming a robot kitten would be bad because it is a reasonably good representation of the real thing, then where does that leave us with harming avatars?

Well, same place, I guess: It depends on the context.

Which isn't as satisfying as zero-tolerance laws and such, but what can ya do?

11.

Mind you, I'm just thinking out loud and trying to probe along the edges of my own rationale, but war-type games where you kill player-controlled avatars seem like a pretty common context. Is someone playing a Nazi soldier killing American troops in Wolfenstein: Enemy Territory less objectionable than harming a robot kitten?

As I think about it, it seems to me that *why* the player enjoys what they are doing is very important to the question of how objectionable it is. If I shove lit firecrackers into the joints of my robot kitten to try to blow its legs off, I may be savoring the thought of torturing kittens, or I may simply be having fun testing the structural integrity of a piece of plastic. Either way, I think I'd be uncomfortable with a company that specifically advertised blowing the legs off their robot kittens. In a similar manner, I am uncomfortable with video games that glorify their gory treatment of human avatars or that reward violence for its own sake. While I'm not promoting zero-tolerance laws, I do think that some game designers are being incredibly irresponsible with their designs, and that more responsible designers should probably speak up and tell them so more often.

--Phin

12.

To throw in a utilitarian argument (I'm not a utilitarian but I play one on TV): We should be kind to AI agents because we're going to become intimate with them fairly soon. 'Intimate': close, together, inseparable, mutually vulnerable to, private with, soulmating with, or at least feeling like it.

Designing a very intelligent, very intimate program so that it will always and everywhere be good to us is a very tough problem. The first challenge: we have only a very weak sense of what's really good for us on a deep personal level. We spend a lot of time and effort doing things whose value, from a deathbed perspective, seems very limited. Self-awareness is a hard thing to achieve. Without it, how do we tell AI what to do?

Second challenge: if we knew what we wanted AI to do for us, how could we make sure it does that -- especially as its scope of decision-making widens?

From both perspectives, treating AI in ethical terms makes a lot of sense. Teach AI the Golden Rule. Then be good to it, and trust that if it learns how to help us, it will act on that knowledge.

Now for a non-utilitarian take: many religions teach that animals and humans are different, that animals have no soul, have none of the moral agency of humans, and don't go to heaven. But if you wander by a Catholic Church on October 4, you might see an odd gathering of people on the lawn, with their dogs and birds and snakes and whatever other beasts they have brought. It's the Feast Day of St. Francis of Assisi, patron saint of animals.