Peter Jenkins, whose paper "Virtual World as Company Town" we discussed here way back when, has alerted us to a new article forthcoming in the Journal of Futures Studies with a thesis that is, to put it mildly, provocative. The piece is posted on SSRN.
A future society will very likely have the technological ability and the motivation to create large numbers of completely realistic historical simulations and be able to overcome any ethical and legal obstacles to doing so. It is thus highly probable that we are a form of artificial intelligence inhabiting one of these simulations. To avoid stacking (i.e. simulations within simulations), the termination of these simulations is likely to be the point in history when the technology to create them first became widely available (estimated to be 2050). Long range planning beyond this date would therefore be futile.
Now, after reading that, I'm honestly at a bit of a loss for what to say. But some people can actually tread this kind of Kurzweilian water. For instance, see George Dvorsky, who reviews Peter's article on his blog (Peter responds to Dvorsky and other comments here).
The concepts in the latter part of the paper seem more familiar to me, at least at first glance. Peter brings in discussions of MMORPGs, human subjects testing, legal rights of AI, IP issues, etc. But all of this law and regulation is marshaled in an attempt to probe the appropriate legal and ethical limits on the creation of strong AI "life" inhabiting synthetic realities. As I told Peter, I don't share his faith in the possibility of strong AI in the near future (synthetic life, certainly, but that's its own can of worms). If I did think we could create or be AI life, though, my hunch is that the limits of this power would need to be a sui generis question. Philosophers would probably be more helpful than lawyers.
I'm sure Peter would appreciate comments.