I'm reporting from the Singularity Summit (AI and the Future of Humanity) in San Francisco where a bunch of fascinating luminaries (from MIT, IBM, Google, WorldChanging, etc.) are discussing the possibility that despite a lack of excitement in AI research lately, we might yet invent artificial intelligence that is smarter than we are. (The term singularity, though I'm sure you all know this, was first pulled from the physics/big bang/black hole vernacular and used in this context by Vernor Vinge, then popularized by Ray Kurzweil, who is speaking via video conference tomorrow). Some people get as excited about the Singularity as Christians do about the Rapture, thinking it might solve all of our problems via a positive feedback loop emerging from intelligent systems that are capable of making themselves recursively better. Others refer to it as a nerdpocalypse and tell tales of HAL-like doom and gloom or economic and network catastrophes that will inevitably arise from AIs (either malevolent or just behaving stupidly because of bad programming) running rampant.
It's hard to say whether it's going to be good or bad (most likely somewhere in between), but I'm fascinated by the fact that a lot of this promise appears to be intersecting with virtual worlds, despite my perception that AI in VWs has taken a backseat (relative to non-MMO games) to the desire to build increasingly more complex systems for human-to-human interaction.
Things that are being discussed here:
- The new wave of AI has moved far beyond simplistic and narrow AI with specialized functions to a desire to build artificial general intelligences (AGIs) that are capable of thinking and learning. The key to this desire is that practical learning is a product of experience. AGIs cannot be built, they must be grown. And the perfect place to grow them just might be virtual worlds - AI babies that can be raised by virtual villages. Ben Goertzel (who will be speaking at Virtual Worlds 2007 and whose kids are named Zarathustra, Zebulon and Scheherazade) is running a 15-person company called Novamente that is conceiving such babies for MMOs. One thing I wonder is what people can learn from helping baby AIs learn? And what kinds of attachments might we develop to these children we help raise? (People are apparently making clothes for their roombas - they'd go nuts with AI babies. And what are the hacking possibilities? Whewee!!). Lots of questions here... does this sort of interaction from infanthood guarantee friendly AI, or does it increase the possibilities for contention?
- The likely scenario for really advanced AI is that it will be some blend of humans and brain-computer interfaces... so the AI baby is trained in the VW, then uploaded into your implant, where it gets to be a little homunculus (like the rat that drives the stupid chef in Ratatouille)? Or you upload your consciousness, have some sort of mash-up with your pre-trained AI (you've run raids in WoW with it) and download the whole lot back into your body? The transhumanists think about this stuff lots...
Things that have not been discussed:
- One of our favorite topics here at TN (and mine separately, as well), whether our universe is in fact a big virtual world, and whether we are ourselves AI.
- Whether de facto AGIs exist already, resident in organisms we call collective intelligences, corporations and other self-organizing systems. If so, can they be tweaked, rather than starting with AI babies?
I am almost convinced to get my head frozen so that I can witness this fabulous future. You all can listen to the podcasts and see if you are tempted, too. Cocktails are beckoning and I am not doing this topic justice, but let's discuss!
Isn't this just the usual "pie in the sky" stuff? As you wrote, there haven't been any real advances in AI for some time, but that won't stop the so-called luminaries ranting about their amazing future scenarios.
And since excitement for those has been wearing thin, they now add into the mix another buzzword - virtual worlds!
Posted by: Thomas | Sep 09, 2007 at 05:32
From what I can tell, the Singularity depends on the notion that technology trundles along unchecked by feedback mechanisms in the market. "Disruptive technologies" are as likely to experience a crib death due to neglect as to fundamentally change society. As an example, many people's response to Windows Vista and Office 2007 as disruptive technologies that break about a decade's worth of visual literacy is to avoid using them. I'd put my $1.00 bet against a singularity primarily because the ecosystem of technology adoption is such that so-called "disruptive technologies" are either rejected or appropriated into social systems.
Posted by: KirkJobSluder | Sep 09, 2007 at 10:58
@Thomas: I think I misspoke. The issue about a lack of advances in AI is largely a perceptual one. I am told that there have in fact been lots and lots of advances, but the AI community has chosen to be much quieter about them, given the backlash of recent years. Am in fact currently sitting with Reichart Von Wolfsheild, who worked on Boeing's UCAV project (unmanned combat air vehicle)... he says there is super super interesting stuff happening, a lot of which isn't public. And I believe him.
He also points out that we can now purchase robots that do things we aren't willing to do ourselves... like vacuuming. It's a small thing, perhaps, but people used to have to clean their own privies and chamber pots... progress is often slow and incremental, but that doesn't mean it's not progress towards something super great (running water and flush toilets have impacts far beyond the convenience factor). It's been pointed out that the Singularity is not a point in time so much as a process. There will likely be a tipping point, though, and it might seem a lot more clear in retrospect.
The kool-aid here is sweet and refreshing... hard to say no to.
Posted by: Lisa Galarneau | Sep 09, 2007 at 13:27
It does sound like the place is awash in koolaid. The test will be which of the ideas slung around still makes sense in a week or a year.
I continue to believe that we're going to see a revolution in AI and artificial psychology in the coming years -- and virtual worlds will be at the heart of it, as that's the easiest place to embody an AI.
But it's a long, crooked road to get there. Raising "AI babies" may or may not do it -- there's an enormous chasm between teaching an AI to fetch and having it understand simple conversation, much less, say, love. There is no straight line of monotonically increasing learning and cognition between toy-baby and AI-adult.
And, foremost in my mind, there's the funding problem. The market for AI on its own stinks. The market for AI as enabler in a virtual world doesn't really exist (yet). Once the value of personable AI (not just better pathfinding and strafe-shooting) becomes visible, it'll be obvious. Until then, it doesn't exist.
Solving this is perhaps as difficult a problem as the AI itself. But not nearly as sexy as AI babies.
Posted by: Mike Sellers | Sep 09, 2007 at 14:13
A small nit, Lisa, but "Some people get as excited about the Singularity as Christians do about the Rapture" is an awkward analogy. It is mainly Pentecostals who get excited about the rapture, and really only some, and after a huge surge in the 90's, I think the fervor is a bit down on this at the moment even among the ones who do.
Posted by: Herr Ziffer | Sep 09, 2007 at 15:55
Back in the 1980s, I was inspired by the rapid progression of Moore's law and the heady predictions for the future of AI. I chose a career in Computer Science on the basis that it would be a privilege to be working with the promised Ultra Intelligent Machines in 20 years time. Well 20 years on we have machines with undreamed of speed and fantastic graphics but they are still incredibly stupid.
What went wrong? I think AI was never properly defined. If we want robots capable of interacting with human society, it would be quicker to start with humans and genetically engineer a slave race with diminished emotions and no ambitions of their own. Clearly unethical but it shows just how silly the whole idea is. What WE define as intelligence includes the ability to empathise with human beings, and to do this you need to be at least as sophisticated as a human being to begin with!
Posted by: Simon | Sep 10, 2007 at 06:26
I'm a real neuroscientist and I'm telling you singularity stuff is absolute nonsense. We are so far away from understanding the human mind that you wouldn't believe it. Squishy matter is not the same as silicon matter.
Posted by: David | Sep 10, 2007 at 07:41
There is an abstract generalized 'intelligence' - there is no replacement for game theory, arithmetic, or perhaps even praying-mantis kung-fu. There are obviously many different species, and humans use different aspects of intelligence than grasshoppers. Each organism plays a unique role.
Tools utilize the same intelligence as do biological organisms. However, humans decide on the purposes of tools as they invent them. As long as humans continue to be the key inventors of tools, it is difficult to imagine a tool forming into a homunculus with motives, thoughts or emotions.
Humans are likely to remain the dominant species on Earth. If tools are able to redefine themselves without too much limit, in an ecosystem of tools, they will no longer be tools. But why would we humans let that happen? We are more likely to continue to use tools to fulfill or redefine our motives.
Posted by: Ben | Sep 10, 2007 at 09:05
Computers are demonstrably good at dealing with tasks which are calculable based on well-defined inputs and outputs; they can fly jet fighters, play chess well etc. They are however, man-made artefacts and hence any program can always be interrogated to determine what "concepts" it's evolved and what "conclusions" it's come to and how they've been arrived at. Being a closed system, it seems that insights are unlikely to arise from this that haven't already been introduced from an external source (i.e. humans). The world of meaning, inspiration and intuition which humans inhabit I believe will remain unavailable to them. Consider the lowly housefly; it might have a "brain" smaller than a pinhead and yet can demonstrate a complex set of behaviours - it can react rapidly to subtle changes in its environment (try swatting one), find food, find a mate, navigate through the air with extreme precision (it's not bad on foot either). When AI comes close to emulating it I'll sit up and pay a bit more attention.
Posted by: Simon | Sep 10, 2007 at 10:53
Ahem - " What WE define as intelligence includes the ability to empathise with human beings, and to do this you need to be at least as sophisticated as a human being to begin with! "
No you don't. I'm pretty sure my cat is empathic, but it's nowhere near as intellectually sophisticated as I am...
Posted by: Olly | Sep 10, 2007 at 11:18
The world of meaning, inspiration and intuition which humans inhabit I believe will remain unavailable to them.
Intuition and meaning are actually doable, as are other a-rational and emotional components of our experience. Inspiration and insight are a lot harder, but I'm not prepared to say they're not doable.
In my experience, the most difficult aspects of human intelligence to capture are culture, humor, and deceit. All of these involve many layers of cognition and meaning, and in the case of humor, a specific neurological base.
Posted by: Mike Sellers | Sep 10, 2007 at 12:04
David wrote:
"Im a real neuroscientist and im telling you singularity stuff is absolute nonsense. We are so far away from understanding the human mind that you woudnt beleive it. Squishy matter is not the same as silicon matter."
A while back on this forum we had this exact same debate. I said back then that building computer AI wasn't possible because no one has a good idea of what human intelligence is in the first place. And figuring out human intelligence isn't the job of computer programmers--that's in the domain of neuroscientists, biologists, etc.
Posted by: lewy | Sep 10, 2007 at 13:25
Cognitive scientists, lewy. Cognition and other psychological functions/states/phenomena are their own area of study. Originally, cognitive science was started with the idea that these could be studied without limiting it to silicon or neurons. Of course, the "functional architecture" -- the neurons or circuits -- turns out to be incredibly important to studying cognition, but it's not necessarily only a biological domain.
That said, David is right: the books and articles that purport to understand how the human mind works are skimming over the wave tops at best. Which isn't to say we can't stick our toes in the water and make some interesting models of our own.
Posted by: Mike Sellers | Sep 10, 2007 at 13:40
One of the problems of neuroscience and the whole "what is consciousness" thing is the assumption that it matters. Indeed this goes well beyond neuroscience - the whole notion of an observer as part of the collapse of superpositions in quantum physics is deeply problematic.
Does a superior life form to ourselves even need to be conscious - is consciousness just an accidental byproduct in the first place? Much of AI is obsessed with faster/better implementations of humans, not superior lifeforms.
Nor IMHO should we forget that it's very hard to study the human mind, as it doesn't come with a debugger and you have deep ethical issues when you edit one. A virtual mind can be debugged, edited and analysed, and we may learn much about ourselves from the virtual "mind" that we can't practically achieve any other way.
Posted by: Alan | Sep 10, 2007 at 14:41
"I'm pretty sure my cat is empathic..."
Sorry Olly, I was thinking of the old Turing Test definition of intelligence and I did not mean to insult your cat!
It is interesting that we would credit even an insect with some degree of intelligence but not a computer. As David says, the "squishy" attribute seems to be very important: some basic drivers such as hunger and reproduction are perhaps essential factors, along with some ability to meet those needs.
Posted by: Simon | Sep 10, 2007 at 15:02
It's certainly an interesting subject, one that will be for many the breaking point in what we term artificial. To in fact play God with what would essentially be a sentient being raises the eyebrow for many, yet for more raises the possibility of creating a new life.
Will robots be walking around the house with dusters? It was pictured in an encyclopedia from the 50's that was given to me as a child, though this was supposed to happen during the 1980's.
Not quite there yet, are we.
It is certainly the biggest challenge computer science has, barring perhaps quantum physics and quantum computing. That in itself may lead to advances in A.I. far faster than current technology allows.
Not so long back a team developed a computer programme designed to emulate a portion of the brain of a mouse. Watching synapses, electrons and nerves pulsate in a glory of light. Thinking.....
This required slightly more processing power than the average super-cooled desktop PC. If Moore's law has any credibility (and who am I to disagree with what is currently happening), then a PC with a thousand cores would be required to do such a task. The problem here is that Moore's law concerned the speed of a CPU and thermal breakdown, not compacting 1,000 cores onto, say, a 65nm silicon wafer.
Bear in mind this was a fraction of the mouse brain. And where are we today, in 2007? Quad cores are the latest technology.
If speed could be doubled twice every year, with no threshold, it would take a minimum of 32 years to reach 4 terahertz of raw CPU performance, and that is just to produce 20% of a mouse brain. Therefore, computing an entire mouse brain would require a mere 160 years of CPU evolution.
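(For what it's worth, here is a rough sketch of that sort of doubling arithmetic in Python. The starting clock speed, the 4 terahertz target and the doubling periods are all illustrative assumptions rather than measured figures; the point is only how sensitive the timeline is to the assumed doubling period.)

```python
# Back-of-the-envelope doubling arithmetic. All numbers are illustrative
# assumptions: a ~4 GHz chip today, a 4 THz target, and various doubling periods.
import math

def years_to_reach(target_hz: float, current_hz: float, years_per_doubling: float) -> float:
    """Years needed for clock speed to reach target_hz, assuming it doubles
    every years_per_doubling years with no physical limits."""
    doublings = math.log2(target_hz / current_hz)
    return doublings * years_per_doubling

current = 4e9   # assumed 2007-era desktop clock speed
target = 4e12   # the 4 terahertz figure mentioned above

for period in (1.0, 1.5, 2.0, 3.0):
    print(f"doubling every {period} years: "
          f"{years_to_reach(target, current, period):.0f} years to 4 THz")

# Roughly 10 doublings are needed either way; only the assumed doubling
# period decides whether that takes a decade or several decades.
```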
Unfortunately it's not that simple; the reason we see multiple cores today indicates the need to reduce clock speed in favour of additional cores.
Therefore I certainly feel no breakthrough in this will occur until at least quantum computing is mainstream, smashing current speed barriers and hurdles. Perhaps Einstein's "spooky" theory may come into play, with quantum entanglement being the way forward.
Who knows what the future holds.....but one thing's for sure: computer programmers need not learn a new trade. Their jobs are safe a while yet; I doubt we will be seeing computers create their own programmes for quite some time yet.
When the Singularity does happen, I hope the technicians remember to build an off switch onto the "thing".
In the words of Leonard Nimoy, "Science fiction often becomes science fact". I really wouldn't like to think what Judgement Day would hold for us all. The military would certainly be the first to utilize such advances; that's the really scary thought.....
Posted by: | Sep 10, 2007 at 17:26
A newborn baby, less than a week old, who has never seen its reflection in a mirror or glass, will be able to imitate certain facial gestures made in its vicinity; specifically, if you stick your tongue out at an infant, they will do the same thing.
How does this happen? How can a baby who has never seen her tongue understand that Mommy is sticking out her tongue? And how can she know that, OK, yeah... I've got one of those.
Nobody knows.
What concerns me is the confusion of "intelligence" with "decision making ability." It is trivial to build a very, very simple robot/computer that will add ice to a drink if the liquid reaches a certain temperature. It is (to me) unthinkable that a non-human "intelligence" could have thoughts about which beverages are better served cold or hot.
Human intelligence -- not animal intelligence or emotions or empathetic behavior -- has, at its center, the ability to move from information to knowledge to wisdom.
This coffee is cold = information.
Coffee should be served hot = knowledge.
My wife would love a warm-up = wisdom.
Posted by: Andy Havens | Sep 10, 2007 at 17:26
So so true Andy,
"This coffee is cold = information.
Coffee should be served hot = knowledge.
My wife would love a warm-up = wisdom."
Posted by: | Sep 10, 2007 at 17:30
That little triplet isn't outside of the capabilities of AI by any means. The first, as you say, is information. The second is cultural knowledge, easily gained by exposure and association given the right substructures. The third is more difficult, as it includes the first two plus a model of the other person, their emotions and preferences, their current state, our relationship, and a desire for them to approach a state deemed to be more desired on their part (empathy).
I know, we've been lulled by emotionless AI for so long we've made emotions themselves into this other, unknowable thing (hmm, is it coincidence that so many AI researchers have been young single men?), but emotions and affect can be modeled and constructed, such that a desire for a loved one's happiness modifying your actions (to decide that it would make her happy if you warmed up her cup of coffee) is by no means out of the range of possibility.
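To make that a little more concrete, here is a deliberately toy sketch of Andy's triplet as three layers of an agent model. The class names, the attachment threshold and the preferences are invented for illustration; this is not a description of our actual system (or anyone else's):

```python
# Toy sketch: information -> knowledge -> "wisdom" as three layers of a
# decision. Everything here (names, thresholds, preferences) is made up.
from dataclasses import dataclass

@dataclass
class PersonModel:
    name: str
    prefers_hot_coffee: bool   # a learned preference about the other person
    attachment: float          # how much the agent cares about them, 0..1

def decide_action(coffee_temp_c: float, other: PersonModel) -> str:
    # Information: a raw observation about the world.
    coffee_is_cold = coffee_temp_c < 50.0

    # Knowledge: a cultural norm, acquired by exposure and association.
    coffee_should_be_hot = True

    # "Wisdom": combine the observation, the norm, a model of the other
    # person's preferences, and a desire for their happiness (empathy).
    if (coffee_is_cold and coffee_should_be_hot
            and other.prefers_hot_coffee and other.attachment > 0.5):
        return f"offer to warm up {other.name}'s coffee"
    return "do nothing"

print(decide_action(31.0, PersonModel("Marthy", True, 0.9)))
# -> offer to warm up Marthy's coffee
```

Trivial as it is, the interesting part is the third layer: it only works at all because the agent carries a model of another mind and a motive tied to that model's state.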
Posted by: Mike Sellers | Sep 10, 2007 at 19:29
Mike Sellers wrote:
"I know, we've been lulled by emotionless AI for so long we've made emotions themselves into this other, unknowable thing (hmm, is it coincidence that so many AI researchers have been young single men?), but emotions and affect can be modeled and constructed, such that a desire for a loved one's happiness modifying your actions (to decide that it would make her happy if you warmed up her cup of coffee) is by no means out of the range of possibility."
The last time this subject surfaced on TN I used the analogy of the chess playing computer, which grinds through billions and billions of permutations to reach its next move and human players, who don't. The end result may be a machine that can play chess at the highest levels, but it certainly doesn't think like a human being to do so, (maybe, see below). Similarly, you can model emotions and provide a machine with goals like "Keep Marthy happy" but is that any more human than a chess playing program?
From what I understand one of the big debates right now among cognitive scientists (thanks for the reminder) is whether or not "consciousness" is something that will ever be really understood, or whether or not it even really exists in the first place. I'm not being facetious--one theory being kicked around is that all of your decision making is done by the subconscious part of your brain with no input from the self-aware bits. All the conscious mind does is retroactively supply a justification/explanation. By that theory there is no such thing as free will and human beings are really automatons--maybe even AI constructs in somebody else's simulation? At least it should be pretty easy for humans to develop "real" AI in that scenario.
Anyway, I think a real definition of "consciousness" is required before you can judge machine intelligences and render a decision. And a real understanding/definition is I think some years away--if it ever comes at all.
Posted by: lewy | Sep 10, 2007 at 21:02
Similarly, you can model emotions and provide a machine with goals like "Keep Marthy happy" but is that any more human than a chess playing program?
Well that's kind of a brittle, trivial restatement of what I said is possible -- and you're right, if that's all it was it wouldn't even be worth trying.
Debates on the nature of consciousness have raged for millennia; our tools and knowledge are a bit different now, but the basic positions haven't changed. There are significant arguments to be made that consciousness is not fully modelable; in Dreyfus' words, echoing Wittgenstein IIRC, it's a-theoretic and therefore not amenable to artificial development. And of course there's the danger that we tend to see the mind modeled in the current technology of the day, whether it's Freud's hydraulics or the switchboards of the 1960s (though I do think we're closer today with our increased understanding of distributed, non-linear, emergent systems and the like).
Even so, I'm confident that two things are true: first, that we'll approach believable human-level interactions (weak Turing level) in software in the next decade or sooner; and second that we'll begin to get a glimmer of understanding of just how far we are from fully realizing human consciousness in artificial contexts.
The analogy I've used before when talking about our AI work is that I feel like I've waded out to the point in the ocean where my toes barely touch the sand on the bottom, and the complexity of what we're doing threatens to swallow me up... and then I remember that humans really are about as complex as the ocean at its deepest. We've made great strides, but we're nowhere near creating beings that truly match us for cognitive, emotional, and cultural subtlety and complexity.
Anyway, I think a real definition of "consciousness" is required before you can judge machine intelligences and render a decision.
The only possible definition is the one we use all the time: I know it when I see it. That's the heart of the Turing test. We don't need to define consciousness to believe others possess it; why should that change with AIs? If you believe they're conscious... maybe they are.
Posted by: Mike Sellers | Sep 10, 2007 at 22:20
I'm not sure I want any AI "babies" in my Volkswagen.
Posted by: Steen Hive | Sep 11, 2007 at 02:54
Mike Sellers wrote:
> The only possible definition is the one we use all the time: I know it when I see it. That's the heart of the Turing test. We don't need to define consciousness to believe others possess it; why should that change with AIs? If you believe they're conscious... maybe they are.
The Turing test it seems to me is an end-run around a very hard problem. Here's a question which people have been struggling with for thousands of years, which may well be insoluble. Turing's solution is to apply the ultimate duck test. A couple of observations:
1) Hasn't the weak Turing test already been satisfied? This may just be urban mythery, but I seem to recall that back in the 1980's or 1970's a test was conducted in which psychologists could not detect that they were conversing with a program and not a schizophrenic, for whom abrupt conversational shifts are apparently quite normal. As a corollary I have to wonder if a program couldn't do a much better job at carrying on a conversation than someone who barely spoke English, or a feral child like Genie.
2) There are a number of objections to the validity of the Turing test, like the blockhead and Chinese room arguments. My chess analogy is really just the blockhead argument. Speaking of chess, I have to wonder if in the past the ability to play a good game of chess might have been offered up as a good test of machine intelligence. The Turing test might be just as flawed.
3) How do human beings know that other human beings possess consciousness? What are the criteria? Because I think there are some borderline cases (vegetative or brain-damaged patients, the great apes, etc.) which are actually quite controversial.
Posted by: lewy | Sep 11, 2007 at 09:12
The question isn't whether it is possible to fully emulate a human being as we experience such beings; it is possible. Human life is finite, and we judge intelligence by input-output, so a computer program, which can be viewed as a very long finite list of input-output mappings, could in principle do it. Proving that such a program must exist can't be hard (in the case of a finite lifespan).
I also agree with Mike, we can find our own models that do interesting things. These models don't have to map onto the exact inner workings of human beings.
"Singularity" is a hype-thing, and I don't think the technology has to be particularly advanced in order to have a meaningful conversation with a computer. I keep having a conversation with google + the huge corpus of the Internet. Pretty braindead AI, but many great conversations.
Posted by: Ola Fosheim Grøstad | Sep 11, 2007 at 10:05
Ola said: "I keep having a conversation with google + the huge corpus of the Internet. Pretty braindead AI, but many great conversations."
Weird way to put it.
If querying and accessing databases counts as a conversation, then the dictionary is an intelligence.
The conversations I have on the Internet are with people, whether directly (blog comments, IM, forums, etc.) or indirectly, by reading what they've written, linked-to, selected for a list, etc. They are *mediated* by the Internet, and by search tools like Google, but the media isn't the intelligence, nor are the tools that access it.
I don't think. But I can't prove to anyone (but myself) that I'm conscious. Most days I can't even prove it to myself.
[end automated, text-stimulus programmatic response 2,004,119]
Posted by: Andy Havens | Sep 11, 2007 at 11:26
Well, that really depends on how you conceptualize things. For instance, when you communicate with a guide at a historical site, do you communicate with them, or do you communicate with all the authors they have read, on which their responses are built? *shrugs*
I don't see any formal difference between a table lookup and a more compact representation. The computer is still a black box to the user; does it matter if it does a look-up? Or rather, aren't all computations look-ups at some level? Aren't our memory and "intuition" database look-ups?
What matters is that you get the same output for the same input.
(Here the term "input" is meant to include a log of your prior interaction with the computer, of course).
Posted by: Ola Fosheim Grøstad | Sep 11, 2007 at 11:43
AI has been a moving target for a while now... we've assumed that once we can get a mechanical Turk to play chess, we must have emulated all the other parts of intelligence we assume are prerequisites of chess-playing. My own opinion is utilitarian: you have AI once you have something that people perceive as intelligence, whether there's squishy harman brainz behind the curtain, or steampunk grinding gears, or a rat on your wig.
In the limited communication channels of an MMO, we already have gold-farming bots that seem indistinguishable from humans... and humans who don't speak the local language are indistinguishable from bots (and often get harassed as a bot would). The Turing Test is attempted many times a day, every day, by gil sellers.
Will MMOs approach their own singularity when you can't tell how many players are actually bots? What happens when the economy is driven by bots that farm, craft and auction? There are already bots that work in teams (tank-priest-nuke) -- what happens when bots decide to sign up more accounts to maximize their fishing? A bots-only guild?
Posted by: Moses | Sep 11, 2007 at 13:40
> Aren't our memory and "intuition" database look-ups?
That's the million dollar question, and my point is that no one really knows yet. My guess is that until some of the basic questions about human intelligence have been answered research into AI is going to continue to be crippled.
Posted by: lewy | Sep 11, 2007 at 15:59
Ola said: "What matters is that you get the same output for the same input."
But we rarely get the same output for the same input with human intelligence... at least as it concerns everyone in my office trying to decide where to go for lunch.
Is my hand "intelligent" because it turns the pages of the book that I read in order to have some level of contact with someone else who wrote the book? Is my hand in communication with the hand of the author?
I would say, "no." Communication and intelligence involve decision making, understand of reciprocity, environmental variables, etc. Searching the Internet may *allow* me to communicate with others and be in touch with their intelligence, but I don't think that the tool is intelligent.
The Internet is like my hand; it brings intelligence to me or sends it from me. Cut off my hand, I'm just as intelligent, but may need to turn pages with my tongue. Cut me off from the Internet, and I can still complete (though not as easily, of course) many if not all of the same communicative/intelligent agencies.
If you and I speak different languages, we can still communicate, just not as easily; the tools for ready verbal communication aren't there. We can gesture, smile, cry, fight, etc. Put a book in front of me with text in a language I don't speak, and there will be no communication. No exchange of intelligence. Same for a book of pictures of things I cannot recognize at all. Microscopic whoo-zee-whatsis may be highly meaningful to a biologist, but will look like meaningless goo to me.
The hand, the Internet, the language, the picture... these things are the media of intelligence, not the intelligence itself.
Posted by: Andy Havens | Sep 11, 2007 at 22:14
The hand, the Internet, the language, the picture... these things are the media of intelligence, not the intelligence itself http://www.goldsrunescape.com>runescape
Posted by: runescape | Sep 16, 2007 at 07:35
^^^^^^^^^ LOL check the link
Oh god. Even the web-spam bot is becoming sentient!
Posted by: dmx | Sep 19, 2007 at 19:40
ahaha. I see what it's doing. clever.
Posted by: dmx | Sep 19, 2007 at 19:41
I'm not a neuroscientist, a cognitive researcher, an AI developer, an engineer or even a college graduate, so I don't have the luxury of making arguments from authority.
As an ignorant but curious observer, I'll just comment that most discussions on this topic assume some things that seem worth questioning:
1) That there is a distinct, identifiable, discrete boundary condition on the one side of which resides "non-intelligent" and on the other "intelligent" - and that we'll know the precise moment our creations cross it.
I think the Turing Test is responsible for much of this assumption, even though there is nothing in the test itself (which tests for "human-like" intelligence, to be precise) that suggests such a boundary, merely that humans in general, and engineers in particular, like "pass-fail" conditions.
It's as arbitrary as a sobriety test - even though few would deny there is a difference between being drunk and sober. In fact, it reminds me of the phony "macro" vs "micro" evolution argument Creationists throw up in failed attempts to discredit the science behind the theory of biological evolution.
The notion of a continuum of development that leads from ape to human, similar to the continuum of development that leads from tadpole to frog, runs counter to some deep emotional construct humans have, that insists there must be a discrete, definable threshold where "human" emerged. It strikes me as a kind of a Zeno's Paradox mentality.
It is at least possible (and recent studies of animal intelligence suggest) that intelligence (whatever that is, assuming "it" is even singular) is something that exists along a continuum, rather than spontaneously emerging only when a certain level of complexity is reached, in a punctuated-equilibrium kind of way.
At least we owe Alex's memory a momentary flash of doubt about the reality of a rigid and definable boundary.
2) Similarly, there is an assumption that the singularity depends upon the deliberate intent and success or failure of AI developers.
It seems to me that the accelerating pace of progress in multiple and sometimes overlapping areas of scientific and technological development, along with the accelerating pace of raw computational power (also expressed as the increasing energy efficiency of computation), suggests the possibility of things progressing independent of human control or desire.
Whether or not AI specialists engineer intelligence may ultimately be beside the point - or at least the "breakthrough" may be obvious only in retrospect.
3) That developing intelligence depends on reverse engineering the human brain at the deepest, biochemical level - that we must "understand" how our brains work before we can create an artificial one. I'm not sure what the basis for this assumption is, except for a chauvinistic view of human uniqueness.
This bias is at least partially responsible for assumption #1.
It is at least possible that artificial intelligence of some sort may emerge from a complex system that is nothing like the way it evolved in biological systems - and may not even have been designed to be "intelligent" in that sense. At least it seems possible to me, knowing as little as I do about neuroscience or AI engineering.
4) That "intelligence" is not just easily defined, but even inherently definable.
I tend to think it will turn out to be more like pornography - you know it when you see it, but that doesn't make it easy to define. I think we assume "intelligence" means something only because our experience is limited - we live in a world which we perceive is populated by "intelligent" and "non-intelligent" life (even though that assumption is being increasingly questioned, as I mentioned above).
Legal debates of the future are more likely to involve boundary situations when there is a clear population of AI well beyond the boundary - entities that are "obviously" intelligent, even though we can't pinpoint the moment machines/cyborgs crossed that threshold.
5) Speaking of which, Kurzweil and others have pointed out, as is alluded to by the OP, AI may not emerge first or distinctly in machines; it is at least as likely that we will smoothly and gradually augment our own intelligence with cognitive prosthetics, and reach a point where the distinction between meat and machine intelligence is moot - not because we will suddenly upload ourselves, or will suddenly create an intelligent computer, but because the distinction will not make sense. We may reach the point where it is not possible to separate where a "thought" occurs to an augmented human, in cell or silicon. Beyond that point, "artificial" intelligence may emerge in the same way "human" emerged from "prehuman".
6) Finally, whenever I hear someone talk about how "far we are" from some scientific breakthrough or understanding, I am tempted to ask them exactly how "far" we were from a billion web sites in 1988, or from sequencing the human genome in 1989 - or from landing a man on the moon in the 1950's.
Breakthroughs always seem remote before they happen - even seconds before they happen - and obvious after the fact.
We no longer live in a linearly progressing world, but our minds seem evolutionarily resistant to the notion of compound acceleration.
As I said, I'm just a curious and somewhat contrarian layman, so I could be way off, but I find it valuable to question assumptions, particularly when they are widespread.
(Apologies for the long post; to paraphrase Blaise Pascal, I don't have time to write a short one.)
Posted by: galiel | Sep 19, 2007 at 20:59
Part of why we look at human intelligence for clues is that we happen to be rather good at "intelligence". There are things humans seem to do well, including sometimes solving NP-type problems in real time, that leave me utterly astonished.
The problem that worries me is that unless we know WHY humans think the way they do (and we are nowhere NEAR that), if we create something smarter than us we might just not recognise it, and we face the very real danger that it treats us with the same bemused disdain we tend to treat animals with. An empathy gap.
Personally I'm a little bit in the doom camp of AI. I don't *like* the fact that we might create something 'smarter' than ourselves that might not recognise us as something worth preserving, in either existence or liberty.
And conversely, assuming we DO create something smarter than us but shackle it into impotence, have we fully reckoned with the ethical and moral consequences of that? Theology has long wrestled with the question "What are our rights and responsibilities under God?", but maybe we need to look at that, invert it, and ask "What would our ethical responsibilities become if we were to become Gods ourselves?"
Asks the ant: did God create us knowing one day a physicist might work out how to kill him using some unholy gadget, or did he put in place a big wall to contain us? Does that make God a denier of liberty? Is that a good thing?
*Note: Rhetoric.
Posted by: dmx | Sep 20, 2007 at 03:52
dmx> including sometimes solving NP-type problems in real time that leave me utterly astonished.
Which NP-complete or NP-hard problems do we solve in real time?
Posted by: Ola Fosheim Grøstad | Sep 20, 2007 at 05:22