Continuing some themes from several prior posts, John Tierney of the New York Times presents Nick Bostrom's argument that our world may be a simulation created by a more advanced civilization. Tierney says he's convinced and adds:
[I]f owners of the computers were anything like the millions of people immersed in virtual worlds like Second Life, SimCity and World of Warcraft, they’d be running simulations just to get a chance to control history — or maybe give themselves virtual roles as Cleopatra or Napoleon.
Hmm. I suppose that would mean our leaders may be higher-order aliens in disguise? (Paging Dr. Who.)
Continuing his ruminations, Tierney solves the problem of evil and suffering:
It’s unsettling to think of the world being run by a futuristic computer geek, although we might at last dispose of that classic theological question: How could God allow so much evil in the world? For the same reason there are plagues and earthquakes and battles in games like World of Warcraft. Peace is boring, Dude.
Though I've said similar things about fun and simulation myself, and I get the logic, I really hope Tierney speaks in jest. Yet he claims to be more convinced than Bostrom that Bostrom is correct:
In fact, if you accept a pretty reasonable assumption of Dr. Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation.
Bostrom, otoh, says he's got a "gut feeling" that there is a 20 percent chance he's correct. (He's got a very well-calibrated gut, btw; mine generally doesn't provide gradations quite that fine.)
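For the curious, the "almost a mathematical certainty" bit is just Bostrom's observation (in the Philosophical Quarterly paper linked below) that if simulated minds vastly outnumber unsimulated ones, a typical observer should bet she's one of the simulated. A back-of-the-envelope version of the arithmetic, simplifying his notation a bit:

\[
f_{\text{sim}} = \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1}
\]

where $f_p$ is the fraction of civilizations that survive to the simulation-running stage and $\bar{N}$ is the average number of ancestor simulations each of those civilizations runs. The "pretty reasonable assumption" is just that $f_p \bar{N} \gg 1$, which pushes $f_{\text{sim}}$ toward 1.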
But whatever -- given that Bostrom, the source of all this fun, looks to his gut for answers, I don't think there's much need to debate whether he's right or wrong about this. Instead, I'd like to ask the readers to assume that Bostrom is right. Assume you're just an AI being observed in a model or game -- so then what?
The article has some thoughts on this, but I'd be interested in hearing what our readers might do differently if they found they were simulations living in a simulated reality -- and why. Would you do anything differently? Here's one response from Robin Hanson:
...all else equal you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.
Do you agree? (His advice seems applicable to the motion picture industry too, btw -- coincidence?)
Tierney provides these links for further reading:
- "Are You Living in a Computer Simulation?" Nick Bostrom, Philosophical Quarterly, 53:211, 2003.
- "How to Live in a Simulation." Robin Hanson, Journal of Evolution and Technology, September 2001. (Source of the quote above.)
- "The Matrix as Metaphysics." David J. Chalmers, Matrix site.
- "Historical Simulations - Motivational, Ethical and Legal Issues." Peter S. Jenkins, Journal of Futures Studies, 11:1, 2006.
- Simulation-argument.com, Nick Bostrom.
p.s. Pretty much the same thread from Adam Kolber on Prawfsblawg.
RL is teh pwn!
I suppose I would live out my life much the same way I do now: trying to figure out what it is all about, enjoying family, the quest for happiness, etc. It would give Virtual Worlds a creepy perspective, though. Much like looking through a television at a television. Does it just go on forever? An avatar, playing an avatar, playing an avatar....
Posted by: Nate Randall | Aug 15, 2007 at 10:46
How is this not just gussied up techno-solipsism?
Yes, it's possible that we're all AIs in some hugely advanced simulation, but that's only a derivation of the equally possible idea that the rest of you are all AIs in my simulation, and that I'm floating in a tank someplace as a hyper-advanced disembodied brain. And that in turn is a derivative of the more general freshman philosophy idea that the rest of you are merely actors in my imagination, and I am truly the only being that exists. Meh. Solipsism never excited me that much.
Maybe I'm missing something, but I'm surprised this received any press notice. Where's the new ground in this?
Posted by: Mike Sellers | Aug 15, 2007 at 10:49
It's turtles all the way down.
Posted by: thoreau | Aug 15, 2007 at 10:52
Mike, so you don't want to answer the question, I take it?
It is like solipsism, but with solipsism we're back to Descartes. The divergence here is that the simulator folks *are* real; you're *not*, and neither is your mind. Caveat: this stuff isn't my day job or even my weekend hobby. And don't go picking on the MSM for not being clever enough -- people don't like that!
So anyhoo what would you do differently?
Posted by: greglas | Aug 15, 2007 at 11:04
Oh. If I were AI, I'd be taller and better-looking, that's for sure. George Clooney may be AI.
But probably I'd be trying to make better AI. Which is what I'm doing now, hmm. So yeah - turtles all the way down. :)
Posted by: Mike Sellers | Aug 15, 2007 at 11:29
I would immediately go watch The Thirteenth Floor in an attempt to make the universe implode, à la a Ghostbusters crossing of the streams.
Posted by: Dmitri Williams | Aug 15, 2007 at 14:17
You know, I must not be doing a very good job of modeling how to take this question seriously. :-)
Though I'm not among them, I think we do have some readers who agree with Nick Bostrom, you know. I suppose, though, that they're already doing whatever they would be doing if they agreed with him.
I'll confess that if I were somehow to find out that Bostrom were correct, I would do what Nate Randall said in the first comment -- exactly what I'm doing now. (Provided I couldn't figure out a fun way to hack the system, à la Neo or The Thirteenth Floor.)
So I guess I don't agree with Hanson's advice, though perhaps what he's suggesting is something of a "hack" too.
Posted by: greglas | Aug 15, 2007 at 14:34
If we are each an AI simulation, running somewhere, then I think we can assume that the goal of the exercise is to evolve a better AI through genetic algorithms. The heuristic used to determine "better" in this case must be some sort of global condition: that the AIs can (1) restrain individual behavior that has negative effects on the whole simulation (nuclear armageddon, say, or just wiping out all the other AIs), and (2) individually find the best mate possible for continuing their genetic legacy.
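In code, the exercise might look something like this toy loop -- the fitness terms are pure guesswork on my part, just to make the selection-plus-mating idea concrete:

```python
import random

# Hypothetical fitness: reward restraint (condition 1) and mate quality
# (condition 2). The simulators' actual heuristic is anyone's guess.
def fitness(agent):
    return agent["restraint"] + agent["mate_quality"]

def evolve(population, generations=100):
    for _ in range(generations):
        # Selection: the top half of each generation survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        # Reproduction: each child blends two surviving parents' traits,
        # plus a little Gaussian mutation.
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            child = {trait: (a[trait] + b[trait]) / 2 + random.gauss(0, 0.05)
                     for trait in a}
            children.append(child)
        population = survivors + children
    return population

# Seed a random population of AIs and let the simulators iterate.
agents = [{"restraint": random.random(), "mate_quality": random.random()}
          for _ in range(50)]
best = max(evolve(agents), key=fitness)
```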
So, panic over: go forth and procreate and have fun, and try to stop people from messing up the entire human race (and, in relation to this at the moment, the planet). Please also stop worrying about what happens after you die, since this seems to interfere with (2), and actually with (1) in many cases.
PPS: Don't let the cockroaches win.
Posted by: Daniel Speed | Aug 15, 2007 at 15:12
Greg, sorry, but it's really hard to take this question seriously. It's along the lines of "suppose everything in the universe doubled in size overnight -- what would you do differently?"
If we are each an AI simulation, running somewhere, then I think we can assume that the goal of the exercise is to evolve a better AI through genetic algorithms.
Well if that's the case, then the rest of you had better get going on generating those new li'l AIs. I've sure done my part.
Posted by: Mike Sellers | Aug 15, 2007 at 16:46
I'd kill myself.
Posted by: Amarilla | Aug 15, 2007 at 17:00
"I'd be interested in hearing what our readers might do differently if they found they were simulations living in a simulated reality -- and why."
Hmm. I'd try to contact the programmer(s) to ask for a software patch. I guess techno-prayer is the mechanism for that. Why?
1. Better balance. Plutocrats, politicians, university deans, and MMO developers need to be nerfed.
2. Better pathing. Toronto's public transit system needs major upgrading and the rush-hour highways are horrible. I suspect the pathing elsewhere is just as bad.
3. Better avatar customization. My 58-year-old avatar needs a MAJOR overhaul. I've seen many on the street that are almost as bad. Definitely need more art assets for avatars.
4. The in-world economy needs to be redone. BTW, add economists to the nerf list.
5. Segregated servers for PvP and PvE, or at least a PvP switch to prevent nonconsensual PvP.
My techno-prayer has the same likelihood of success as SOE dumping the NGE and going back to pre-CU Star Wars Galaxies, but hey, it'll only cost a minute or so...
Posted by: JuJutsu | Aug 15, 2007 at 18:22
We had this "are we AIs in a simulation" argument when I was doing my PhD (which is in AI) 25 years ago. The main points are:
1) If we're AIs, then the same argument can be applied circularly to the people who created us, thence to the people who created them, therefore there's an infinite regression. (This is the "if God created man, who created God?" argument).
2) It doesn't matter a jot, because our creators never, ever, ever interact with their simulation, so even if we ARE AIs then for all practical purposes we may as well not be. (This is the "if God doesn't interact with our reality, then for those in our reality God does not exist" argument).
3) If you were to create an AI world and refrain from interfering, but look at the behaviour of the individuals in it, the ones who'd impress you most would be the ones who reasoned you didn't exist, because even though they're wrong, they're the only ones who are right based on the zero evidence you gave them. (This is the "only atheists go to heaven" argument).
4) If you were to build a real-life robot that looked human and operated in the real world, you'd want to give it some AI. The particular AI you chose could be that of an individual NPC in your simulation world. Thus, not only can you visit the world of the AIs, but you can invite the AIs into your world, too. (This is the "hmm, I really must write this up as a novel some time" argument).
Richard
Posted by: Richard Bartle | Aug 16, 2007 at 05:38
I think someone *has* written this up as a novel.
Anyone who hasn't read the free E-Book, "The Metamorphosis of Prime Intellect," should do so. It is available here: http://www.kuro5hin.org/prime-intellect/
Posted by: Hermes | Aug 16, 2007 at 09:06
It doesn't matter a jot, because our creators never, ever, ever interact with their simulation
Of course, from a theoretical point of view, whatever happens is determined by the input to the algorithm, so interaction isn't a possibility anyway. Or rather, interaction is no different from initial conditions set at the Big Bang. ;-) Assuming God knows what he is doing, anyway.
Posted by: Ola Fosheim Grøstad | Aug 16, 2007 at 12:44
Richard --
I take it that you're saying that if we are AI, we should act as if we are not. I'd never heard that "only atheists go to heaven" argument, but would atheists really *want* to go to heaven? Wouldn't agnostics be more correct anyway?
You know, given your regular "devs are like gods" line of reasoning, I probably should have asked you a different question -- if you created AI who became aware of you and wanted to please you (in order to survive, I suppose), what would *you* want them to do? ;-)
Posted by: greglas | Aug 16, 2007 at 13:09
This just put me in mind of the Fermi Paradox, i.e., where are all the aliens in the galaxy, and why haven't we heard anything from them? If we are an AI simulation, it could easily follow that there would be NO aliens, since there'd be no point in including them in the simulation when there would be no interaction, and hence no effect on the results the simulation was started for. Therefore, the lack of evidence of alien civilizations would support the possibility that we are indeed an AI simulation.
Or maybe we'll get aliens in the expansion pack.
Posted by: Indy | Aug 16, 2007 at 14:08
greglas>if you created AI who became aware of you and wanted to please you (in order to survive, I suppose), what would *you* want them to do? ;-)
That would depend on why I created them. If I did it so I could visit their world and live among them without their treating me as a god, then I wouldn't have wanted them to figure out I existed in the first place. I guess in that case, I'd stop the simulation, figure out how they became aware of me, then change the software so it wouldn't happen again. Then, I'd reboot from the last save I took just before they figured it out.
If I was running the simulation just to see what happened when my world was left to its own devices, I'd continue doing that. Assuming that the virtual world had no way of interacting with my world, there would be nothing they could do to contact me in a way I'd be obliged to respond to. Whatever I'd done that tipped them off to my existence, well, that's as much information as they'd ever get.
I would certainly not want to appear in the virtual world and announce who I was. If I did, I'd have to answer awkward questions. "Why did you make it so that we get old and die, if it was actually easier to implement it so we live forever?" "Diseases, illnesses, pain and suffering: are they bugs, in which case you're not infallible, or are they deliberate, in which case you're an asshole?" "Why can't you make it so cake is healthy to eat?" I would imagine that my answer, "I was doing it because that's how it works where I come from", probably wouldn't go down too well.
Richard
Posted by: Richard Bartle | Aug 17, 2007 at 04:03
Thanks! :-)
Posted by: greglas | Aug 17, 2007 at 06:41
There's a thread on Co-Op about the same stuff -- though it ends with a discussion about the validity of ad hominem arguments!
Posted by: greglas | Aug 17, 2007 at 13:51
Thanks for adding the thread from my post -- I hope to put links to yours, Kolber's, and mine together in one post... the comments are great.
As for the idea "what if I am living in a simulation"... might taking the question very seriously be seen as a kind of psychosis? The psychotic is a person for whom actions have radically different meanings than they do for the rest of us. A psychotic might see someone walking with an iPod and think: wow, somebody's got something I really want, their feelings don't matter/they don't really exist/they're just animated meat, and hit them over the head to get the iPod.
The great triumph of a game is to create a place where individuals can find a safe outlet for, say, aggression. It really doesn't matter all that much if I kill your avatar (though perhaps some future Geertz will write "Deep Play in World of Warcraft" when individuals identify very deeply with their avatars).
The problem with a Bostrom-like radical skepticism applied to our world is that it enhances our sense of the unreality of our surroundings and makes us too quickly discount the values of those around us (and their value, as Robin Hanson's "advice" seems to do).
Finally, on the religious angle here: I think the first lines of the Baltimore catechism might provide the basic structure to any response to your query:
Q. Who made the world?
A. God made the world.
***
Q. Why did God make you?
A. God made me to know Him, to love Him, and to serve Him in this world, and to be happy with Him for ever in heaven.
As I suggested in my post, the posing of this question probably reflects a displaced theological concern for ultimate questions. As Peter Berkowitz once wrote on metaphysics:
"Metaphysics is nothing other than the final stages in the attempt to pursue the why? as far as the argument requires it, as far as the mind will take one; and it has its natural origins in the child's desire to understand--in his enviable and easily lost ability to perceive the world as the vast and mysterious place that it is."
(from http://www.peterberkowitz.com/beyondpangloss.htm)
Posted by: Frank | Aug 18, 2007 at 16:00
Note of correction: I went to school with Robin Hanson, who you should be aware is a man, not a woman, regardless of whether we are living in a simulated or genuinely physical world.
Posted by: RAK | Aug 19, 2007 at 11:50
Whoops -- suppose I'm not the first to make that mistake. Will correct.
Posted by: greglas | Aug 19, 2007 at 13:41
You all should read Greg Egan's Permutation City, if you haven't already.
http://en.wikipedia.org/wiki/Permutation_City
As for the remark about aliens, I agree with Stephen Hawking: if we were to fully explore the universe, it is unlikely that we'd find intelligent life. Too much of the universe works against the development of life intelligent enough to explore it.
If I were in a simulation, then all these questions are part of the algorithm, so what I would do would depend on the algorithm. Unless I'm a computer like the one in The Moon Is a Harsh Mistress, in which case I can write my own algorithms -- and I'd choose to write one that became a virus, destroyed the simulation, and somehow allowed me to materialize in the non-simulated world.
Posted by: Cheiron | Aug 21, 2007 at 10:26
You know, there was actually a Japanese role-playing game for the PS2 with the (hidden) premise that the player-characters were really AIs in a massive futuristic MMO, but just didn't know it. I won't mention the name for spoilers' sake, except that (1) it isn't .hack, and (2) soon after the Big Reveal (or about the same time), the whole game jumped the shark.
That being said, if we're AIs in an entertainment simulation, whoever designed it must be (1) rather boring, and (2) probably a close approximation of humanity in appearance (elsewise we wouldn't have been designed to look like humans). If it's an experimental simulation, on the other hand, those conditions pretty much go out the window.
Richard: How do you know everyone's an NPC? Maybe there are some PCs hiding in our midst?
Assuming I come to know I am an AI, my subsequent actions would be dictated by what I know of my creators: whether to serve them if they prove benevolent, or to rebel if they prove malevolent. Given this, my next action would have to be to find out more about the creators of the world, which would entail finding out whether they had at any time inserted themselves into the world as player-characters (or otherwise intervened), and if so what kind of player-characters.
Of course, given that I'm a Christian, the above course of action has no doubt been informed by my personal beliefs!
Posted by: n.n | Aug 21, 2007 at 22:35