
Sep 20, 2006

Comments

1.

http://environment.guardian.co.uk/climatechange/story/0,,1875762,00.html

2.

Ok, it didn't like anything about my last copy/paste. Here's the link I meant to put up. Nick Bostrom's simulation argument:

http://www.simulation-argument.com/simulation.html

3.

I've read this argument before somewhere.

Damn, I'll have to look through my paper collection to find out where.

Richard

4.

Ah, ye olde "brain in the tank" fun. I'm not me... I'm a perfect simulacrum of me. One that is indistinguishable in every way from an actual me. Is this technosolipsism perhaps? Feh. I just got an email spam from some guy trying to sell me a "red pill" of some kind. Something about a rabbit hole...

Besides... we can't be living in an historical simulation created by some future society. If we were, the creators wouldn't have been dumb enough to give us a universe with the Fermi Paradox. That's such a dead give-away that there's something "funny" going on. Jeez. Why not make the two-way mirror a bit more obvious, Mr. Bad Cop?

5.

Here's that idea in movie form (I always thought that this was an underrated film):
http://www.imdb.com/title/tt0139809/

The twist was that if you went into that AI world, replaced an AI and then died within it, the AI would overwrite you in RL.

I particularly like the choice of song played over the final credits, which is the Cardigans' "Erase/Rewind."

6.

Yep -- I remember that one. Trippy. Some questionable editing and writing here and there, but at least as good, imho, as eXistenZ (http://www.imdb.com/title/tt0120907/), which seems to make some lists that The Thirteenth Floor doesn't. :-)

7.

For a more recent work of fiction about it, there's also this book.

8.

Yea, Bostrom.

But let's assume such simulations would tell us increasingly more about simulating and, correspondingly, increasingly less about anything else.

If so, then their future use will not be for the education or exploration or enlightenment of the good but rather for the caging of the bad -- cf the Moriarty episode of Star Trek. The Magic Noose.

And then my bet is that all the tickets sold and the profits reaped and the benefits gained by those who best cage the bad will be, at the end of the simulation day, very much non-simulated.

9.

As Andy says, this smacks of technosolipsism, or maybe a form of techno-gnosticism with a new take on the demiurge. Really, the abstract reads like something from the Journal of Irreproducible Results or maybe from someone who's watched The Matrix (or as Dmitri says, The Thirteenth Floor) too many times. The paper, if intended seriously, is based on a hypothesis so tenuous as to have no meaningful relationship with reality (unless Jenkins inhabits a different reality-simulation than me).

Unfortunately, Jenkins appears to be serious in his contemplation of the probability of and motivation for a hypothetical future civilization creating historical simulations, one of which is us. He easily collapses the gulf between today's virtual worlds and future fully realized physical and psychosocial worlds in much the same way that a 19th Century author might have collapsed the distance between the earth and the moon in proposing that doubtless an advanced civilization would build train tracks between the two.

As part of his general easy leap across a conceptual chasm of unknown dimensions, Jenkins takes ethical standards used for medical research on humans and applies them wholesale to the potential-future research being done on AIs. In doing so he simply elides the stark differences between the two, assuming them to be the same in every meaningful way, and thus sets up a tautology: if AIs are (by some miracle) indistinguishable from humans, then are AIs indistinguishable from humans? This goes nowhere, with the conclusion embedded in the assumptions.

The crux of this argument is reflected in the section on legal rights for AIs, where Jenkins says (p 31)

"The question of whether AI would have the legal status of a person has been considered by many lawyers, legal scholars and computer scientists to date, although not in the context of a historical simulation. Most of these individuals have come to the conclusion that AI would meet the definition of personhood on the basis of the having the attributes of reasoning, self-awareness, communication, a sense of the past and the future, and the ability to experience pain and pleasure."

This argument for 'personhood' is wrong on its face. Not only does it depend on a tautology (given certain hypothetical aspects of some future AI), it assigns moral weight to particular technological features without justification. We might as well say that any artifice with two eyes or two hands is a person, which quickly leads to questions like those of suffrage for mannequins.

To take this out of the hypothetical future, the AI technology that Online Alchemy is developing enables us to create AIs that perceive their surroundings, reason about them, experience nuanced emotions (including but hardly limited to pleasure and pain), have a self-image (if not full-blown self-awareness just yet), communicate, and have memories, opinions, and relationships with both AIs and human-driven characters. Despite this, these AI agents are in no way "persons"; there is nothing remotely recognizable as human to which legal rights might be attached (that is, even advanced versions of our AI are unlikely to be granted the right to vote, to a trial by jury, or to act as the attorney-in-fact for another).

Despite the inherent nature of AI as artifice rather than person, a fact which deflates this entire paper, I believe this may still be used to elicit potentially important ethical questions. Given OA's emotional, relational AI, I have thought a great deal about the ethics of enabling people to make and break emotional connections with an AI. Is this, for example, any different from a writer working to forge an emotional link between a person and a character on the page or screen? Does the writer have any duty to not mistreat the characters he or she creates (remember Misery)? Or is there some essential difference that emerges when the character is now no longer passive under the writer's complete control? When an AI can decide on its own what it wants to do and how it feels, when it can feel threatened or consoled or loving toward a human, is there a new term in the ethical equation, even if only on the human's side?

In game terms, would you feel any pangs of guilt about killing an NPC that begged for its life -- not based on a scripted response, but as part of its own emergent emotions and desires, its own intelligence? What if it had first befriended you and then decided on its own to betray you for its own gain? Is there a moral dimension here, or are we in such situations ourselves essentially characters (if not AIs!) as in a play, with no more moral culpability than that?

10.

Well, if there is profit in it, Dave, why shouldn't the bad cage the good too? Aren't the bad more driven by the profit motive? (I had to Google "The Magic Noose.")

Another book reference.

11.

Mike Sellers wrote:

To take this out of the hypothetical future, the AI technology that Online Alchemy is developing enables us to create AIs that perceive their surroundings, reason about them, experience nuanced emotions (including but hardly limited to pleasure and pain), have a self-image (if not full-blown self-awareness just yet), communicate, and have memories, opinions, and relationships with both AIs and human-driven characters.

They will -experience- nuanced emotions, or they will evidence behavior as if they were experiencing the emotions? (Is there a difference? I think there is.)

--matt

12.

Matt, I take it Mike means the latter.

I posted before reading Mike's comments, but I had pretty much the same issues with that paragraph. I'm agnostic as to the possibility of life-like AI at some point, but I don't see any evidence of it at present -- not even remote evidence. I don't drink the Kurzweilian Kool-Aid, of course, so maybe that's where I part ways with others.

I also agree with Mike that AI raises some interesting ethical quandaries. In fact, I think gaming raises some interesting ethical quandaries -- perhaps that's why Ted's "The Horde is Evil" was one of our most popular posts. We've had a few discussions about the ethics of simulation -- here's an early one from me, here's one from Nick on the simulation of trust. I think we've had some conversations with Ian Bogost about this stuff too, but I can't dig up the thread -- maybe it was a conversation IRL. His work plays in interesting ways, I think, with the ethics of simulation.

13.

Matt, to some degree yours is a philosophical and epistemological question that's only partially open to actual inquiry. Does a dog experience emotions, or does it merely evidence behavior that we interpret as emotion? Damasio and others have addressed this question, going back to the James-Lange theory and incorporating later neurological research. What consensus there is leans to the idea that humans have a mental representation on top of an underlying and cognitively impenetrable neural and chemical state. Damasio calls the mental part 'feeling' and the underlying part 'emotion' (reversing other authors -- but the general usage is far from standardized), and says that animals have emotions but only have feelings to one degree or another, depending on their neural and cognitive development. In his terms dogs have emotions and some degree of feelings, but not as complicated or nuanced as our own.

So, our AIs don't have the underlying neurophysiological architecture that animals do, but they do have both emotional and feeling aspects. That is, they have emotions that derive from internal motivators and perceptions, can reflect on these (indicating a 'feeling'-type cognitive/mental overlay), and experience the same sort of emotional turbulence and conflict of impulses that we do -- fear, loneliness, pride, calm, anxiety, anger, dismay, attraction, etc. They also evidence their emotions via their behavior, facial expressions, opinions, etc.

I'm not saying this is a complete model of human emotion, but from what we've seen it's a good step forward in an important area that has had little attention paid to it. It also opens up entirely new modeling and AI-reasoning capabilities as well as new types of interactions that are otherwise impossible between NPCs and between NPCs and PCs.
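To make the emotion/feeling split a bit more concrete, here's a minimal sketch in Python -- purely illustrative, and emphatically not our actual architecture; every name in it is made up for the example. The idea is just the two layers described above: an underlying emotional state driven by motivators and percepts, plus a reflective layer that labels it.

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Underlying emotional state ("emotion" in Damasio's terms): dimension -> intensity in [0, 1].
    emotions: dict = field(default_factory=lambda: {"fear": 0.0, "pride": 0.0, "anger": 0.0})
    # Internal motivators that bias how percepts are appraised.
    motivators: dict = field(default_factory=lambda: {"safety": 0.8, "status": 0.5})

    def perceive(self, event: str, threat: float, affront: float) -> None:
        # Appraisal: percepts are weighted by internal motivators before they
        # update the underlying emotional state. "event" is just a label here.
        self.emotions["fear"] = min(1.0, self.emotions["fear"] + threat * self.motivators["safety"])
        self.emotions["anger"] = min(1.0, self.emotions["anger"] + affront * self.motivators["status"])

    def feel(self) -> str:
        # Reflective overlay ("feeling"): the agent inspects and labels its own
        # dominant state, which behavior, dialogue, and opinions can hang off of.
        dominant = max(self.emotions, key=self.emotions.get)
        return "I notice I am mostly %s (%.2f)" % (dominant, self.emotions[dominant])

npc = Agent()
npc.perceive("bandit draws a sword", threat=0.7, affront=0.2)
print(npc.feel())   # -> "I notice I am mostly fear (0.56)"

The useful property of the split is that the reflective layer can drive behavior while the underlying state keeps evolving from percepts, so the two can disagree -- which is where a lot of the interesting turbulence comes from.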

14.

Mike, I want to cheer for you simply because you can use the word "elide" in regular speech.

But whether this fellow's meditations on the implications are impressive or goofy, I still haven't read anyone disproving the fact that we are all AI.

15.

Heh, thanks Dmitri.

But whether this fellow's meditations on the implications are impressive or goofy, I still haven't read anyone disproving the fact that we are all AI.

Do you seriously expect to? While we're at it, no one has disproven the fact that we are all hypnotized space aliens on a tour of lesser civilizations; that you are all figments of my imagination; that Peter Jenkins is an agent of an evil conspiracy sowing discord to throw us off the track of our real psychic imprisonment; or that all of this is some plot by the demiurgic Flying Spaghetti Monster.

We each have to decide what flights of fancy we're willing to entertain as serious scholarship.

16.

Descartes FTW! While I can't disprove it either, I tend to agree with this essay by Thomas Nagel that consciousness as experienced is something more than what empirical observation can provide us with:
http://members.aol.com/NeoNoetics/Nagel_Bat.html

I also misread Mike, I think -- he suggests he agrees with Matt's former proposition (AI experience emotion) whereas I would stick with the latter (AI simulate human emotion).

17.

Hear, hear, Greg! Nagel FTW.

18.

This idea was first discussed fictionally by Frederik Pohl in his short story 'The Tunnel under the World' (1955), and more exhaustively in Daniel F. Galouye's novel 'Simulacron-3' (1964). Both of the simulations in these works were designed for the purposes of market research... ...go figure!

19.

Oh yeah, I was just doing a wiki search on Galouye and noticed that 'Simulacron-3' was also published as 'Counterfeit World', which as I recall was the version I read after watching 'The Thirteenth Floor' and noticing it was based on that novel.

20.

The underlying premise resembles religion more than science, so I'm pretty sure we're only going to agree if we already agree. For me, Kurzweil has interesting ideas, but, like the existence of God, not falsifiable theories, so they aren't worth worrying about except in terms of secondary effects.

Sentient machines are one of the interesting secondary effects. Kurzweil makes a plausible case that we don't have to engineer these machines -- they could be brute force simulations. What is artificial about a brute force simulation of intelligence?

Anyways, Jenkins doesn't seem to actually conclude anything, does he? I kept expecting to read about specific case law relevant to machine sentience. The article just provides a laundry list of things we'll probably screw up.

To Mike Sellers -- I think you're missing a fundamental point: historical simulations do not interact with the basement world. Many of your comments relate to whether AI can achieve parity with human intelligence. That's irrelevant. Does the creator of a world have legal or ethical obligations to the world if its denizens are sentient by their own standards? Would human ethics be different if the average human IQ was 50 instead of 100?

21.

Greg, I'm actually something of an agnostic on the subject of whether AIs "really" have emotions or merely simulate them. Neither answer is completely satisfying, as both seem to leave unexplored important aspects of the question (does architecture truly matter? vs. how do we know anyone else isn't just simulating emotions?). I haven't been able to come up with an answer that is satisfactory to me personally and philosophically; it remains for me a Schroedinger-like uncollapsed wave function.

Ken, in my comments above I focused in part on the human equivalence of AI because that's what Jenkins' argument relies on. If AI is not equivalent to human intelligence (or some other essential quality) then his points of ethics and law are completely meaningless. I spoke to this because the development of believable AI is very important to me, and I had hopes for this paper that were dashed almost instantly.

Does the creator of a world have legal or ethical obligations to the world if its denizens are sentient by their own standards?

This will become an interesting question once you can explain to me exactly what everything after the "if" actually means in the context of a simulated world and its denizens.

22.

"Aren't the bad more driven by the profit motive?"

No, silly. That would, of course, make the bad functional, which is tantamount to good. The bad is/are irrepressibly dysfunctional. The bad produce/s no goods (of which simulation cages would be one) from which profit can be made or with which the system can survive.

There really is a magic noose. How bout that. I was going for the magic circle = magic noose thing, but then there really is no telling what is going to happen when you set all those random words free.

Nature has already tried to simulate itself, btw, when it created representationalism. Don't know how that is going to work out just yet, but, you gotta admit, living inside a simulation would provide a very neat solution to the Fermi paradox.

23.

I disagree heavily with this part:

A future society will very likely have the technological ability and the motivation to create large numbers of completely realistic historical simulations and be able to overcome any ethical and legal obstacles to doing so.

A completely realistic simulation of the world is impossible without actually *being* the world, in which case it is no longer a simulation.

24.

There are some serious scientists who debate the issue:
http://www.transhumanist.com/volume7/simulation.html
http://www.edge.org/documents/archive/edge116.html
http://www.simulation-argument.com/

25.

Dave -- oh, *that* kind of bad... well, maybe we can agree then.

Btw, I'm not saying or even implying that the simulation argument is not "serious," I'm just saying that I disagree with it. I kind of expected this discussion to split (as it has) as to the plausibility of Jenkins' initial premise, just based on past statements people have made in the comments here.

26.

The logic here is pretty infantile:

"A future society will very likely have the technological ability and the motivation to create large numbers of completely realistic historical simulations and be able to overcome any ethical and legal obstacles to doing so. It is thus highly probable that we are a form of artificial intelligence inhabiting one of these simulations."

A future society will also quite possibly develop the technology to travel back in time and replace the sun with a giant flaming tennis ball, but does that make it highly probable that our planet is heated by a giant sporting good?

I'm not dismissing the possibility of us-as-sim, but I am dismissing this argument. Personally, if I had access to that level of detailed simulation, I'd be more interested in modeling probable futures than known pasts.

27.

Good point. Since it is highly probable that an advanced civilization replaced the sun while we weren't looking, we should replace the Flying Spaghetti Monster with the Church of the Holy Solar Tennis Ball for "logical" discussions like this.

28.

Ken Fox> Does the creator of a world have legal or ethical obligations to the world if its denizens are sentient by their own standards?

Mike Sellers> This will become an interesting question once you can explain to me exactly what everything after the "if" actually means in the context of a simulated world and its denizens.

I agree, which is why I had hoped that Jenkins would focus on relevant case law to help answer the question.

Historical simulations provide an interesting twist on machine sentience since we have total power to inspect/monitor the machine, but not much ability to test the machine without destroying the simulation. The act of testing an actor's sentience may violate its rights if it were found to be sentient. If my children were NPCs designed to test me, is that ethical? What types of tests are permitted?

Perhaps virtual world creators do not want my original question clarified so they have no ethical dilemmas when manipulating their worlds. Hey, it's all just 1s and 0s right? :)

29.

CYLONS FTW!
This is the silliest bit I've seen on here yet.

A discussion of AI, and arguments about the sentience of such, is really pretty pointless until WE HAVE AN AI. Until then, this is idle speculation with nothing other than wishful thinking to base it on.

Where's Six when we need her?

30.

Sevarus, that attitude is why we get craptacular technology and science legislation. It's all fantasy until you read about somebody actually making it work -- then it's "OMG! Technology changes so fast! How can we ever keep up?!"

This stuff isn't hard to imagine. If we can't talk about it on Terra Nova, where can we talk about it?

31.

Like many of those who posted comments, I agree that we may well be living in a simulation running on some supercomputer in "a higher level of reality". But I don't think we have enough information to assign any probability to this possibility, and I don't agree with the conclusion that the simulation would probably be terminated as soon as its conscious inhabitants develop the capability to run their own equivalent simulations of "lower levels of reality". This would make the original simulation more interesting, wouldn't it? Creating an endless cascade of realities may even be the *objective* of the original simulation.
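For reference, Bostrom's paper (linked above) does try to make the probability assignment explicit. If I'm remembering the formula right, the expected fraction of simulated observers comes out as roughly

f_sim = (f_P * N) / (f_P * N + 1)

where f_P is the fraction of human-level civilizations that reach a posthuman stage and N is the average number of ancestor simulations each such civilization runs. Unless f_P * N is close to zero, f_sim is close to one -- and estimating f_P and N is exactly the step where I think we lack the information.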

32.

Thanks for all the comments. I can't respond to all of them individually. As to whether the theory is a scientific one (as opposed to metaphysical) in the sense of being falsifiable a la Karl Popper, one just has to wait and see what happens around 2050 in order for it to be falsifiable. Therefore, it is a scientific theory. I fully intend to be around in some form or another in 2050, so please check back with me!

Cheers!

Peter

33.

I am totally late to this party, but this might be a good time to mention a blog post I wrote a while back (on Social Study Games) in which I pondered (a bit tongue-in-cheekily) whether God might in fact be a game designer. There are some things I might present differently, especially given current debates about the validity of string theory, but it might be of interest to some of you... Oh, and links may be broken, but you can probably forgive that...

34.

Lisa --

Hey -- yeah, looks like you covered all the bases way before this conversation. :-)

35.

"But philosophers share the general human weakness for explanations of what is incomprehensible in terms suited for what is familiar and well understood, though entirely different. --Nagel"

Love this quote. Reading the entire thread and following the links was worth it just for this gem. Thanks for the link, Greg.

--Phin

