I don't know what made me do it. We had been at the dinner party for a while, and everybody was having a good time. It was the kind of dinner party where everybody brings a dish; ours was Stromboli (pizza bread), which I warmed up at the party on the cookie sheet we had carried it on.
The cookie sheet, loaded with some warm bread, had been in the oven for a while, but unbeknownst to the other guests I had turned the oven off a few minutes earlier. I knew the cookie sheet was only warm. I pulled the cookie sheet out of the oven as if it were SCORCHING HOT! I screamed, "IT BURNS, IT BURNS!!", and tried to ease my seemingly searing flesh by handing the warm cookie sheet to my wife, who was standing behind me.
At this point, she had a decision to make.
She could do the right thing and share in her husband's agony by taking the scorching pan out of my hands and being burned herself, or she could retreat and create distance from what was obviously about to be a very painful experience. Her reaction was the same as everyone's reaction (yes, I have since done this to other people).
Her reaction was what I call the "mime" reaction: she basically motioned to grab the pan as if there were an invisible 3" force field around it, not wanting to touch it, but not wanting to let the bread hit the floor either. I'll never forget that tortured look on her face, of wanting to help, to ease the pain, but not being willing to endure much of the pain herself. Of course everyone else was quite startled, and it took a while before they got the joke. It took longer for my lovely wife to talk to me again.
I'm not sure what this says about our marriage, but it tells us that while there are aspects of our intelligence that are innate or instinctual, we also contain the ability to override those traits, and that these override abilities increase and decrease when certain emotional thresholds are met. We can reach out and take a pan that we know is going to burn us.
A recent study by psychologists at the University of California, Santa Barbara found that Anger Fuels Better Decisions, in three experiments designed to determine how anger affects our thinking. The study demonstrated how raised levels of anger increased the ability to distinguish between strong and weak arguments. In other words, when certain anger thresholds are met, decisions are filtered differently based on perceived notions about the strength of arguments. The corollary is that when we are not angry, we are more likely to consider many arguments, including weaker ones.
This brings us to our point: Intelligence Seeding. As we begin the next epoch of technology, what I refer to as The Intelligence Age, we will need to decide how we wish to develop our next level of intelligence, the intelligence that will move us towards the Singularity. Do we want machines to develop their own intelligence, or do we want to seed them to be like us?
It's a bigger question than you may think at first, and it's why I'm so infatuated with the subject. To consider the task of seeding intelligence based on your own thought processes, you need a good understanding of your own intelligence. I recently tried to model some emotional behavior in my intelligence testing, and was quickly struck by the intricacies of our emotional makeup. Every emotional state we experience has some level of influence over every other brain function.
Anger makes your face red, your eyes bulge, and your blood pressure increase.
Sadness reduces your appetite and your energy, makes you lose focus, and can make you cry.
Fear makes you hypersensitive, dilates your pupils, and gives you goosebumps.
I propose that emotion models and human intelligence seeding will be some of the first large hurdles of The Intelligence Age. We will need to compare these intelligence models to organic models that are developed without the corruption of human emotion. These organic models will have the ability to discover new types of intelligence, and new types of emotions, emotions that we cannot understand. At some point we will need to choose, because I have a feeling these two separate races of artificial intelligence aren't going to get along very well.
Image is of a tiny sand crab on the beach in Manzanillo, Mexico. These guys blend into the sand so well, you can hardly see them. Ahh, evolution.
The linked article doesn't attempt to explain why we get better at spotting poor arguments when we are angry. Is it a chemical imbalance in the brain? Can we simulate that?
Obviously, from a purely scientific perspective, emotionless argument evaluation will always be better. But just like Mr. Spock and Bones, it's not always something we want.
If we could seed agents to be exactly like humans, would they end up with all the problems that humanity has? If we take away cultural sins, like lust and greed, do you take away important things like drive and determination? If we succeed in finding the right balance, do we reach a point where humanity accepts that artificial intelligence is better than them, and defers all decisions to AI? Could we see AI help guide government policy?
Personally, I'd rather we didn't seed agents in the way you describe. I don't think we have much to learn from these agents, no more so than we have to learn from our own children. I don't think it offers you anything, apart from maybe making an agent in your own image to make decisions for you while you go on holiday!
I think starting without a seed, and then seeing what happens, is more exciting.
Posted by: Syntheticist | Jun 12, 2007 at 20:30
This doesn't sound very scientific. I'd love to see if any extreme emotion -- profound grief, or shock, or joy -- could also make for better decisions. It might merely be something dumb like more endorphins in the bloodstream.
Re: "the intelligence that will move us towards the Singularity."
Uh...so....this is a given, this Singularity stuff?
Posted by: Prokofy Neva | Jun 12, 2007 at 21:33
Bob said: "Anger makes your face red, your eyes bulge, and your blood pressure increase."
Anger may make *your* face red, *your* eyes bulge and *your* blood pressure increase.
For me and my cold-blooded kin, anger makes me get very quiet... very still... and very mean. You will not know when I'm really angry. I may get red, bulgy and pressured when I'm a bit put off: traffic, dropped glassware, etc. But anger... oh, no. That incites an extremely calm response.
And then I explode your medulla oblongata with my psychic mind weapon.
Posted by: Andy Havens | Jun 12, 2007 at 22:43
Bob,
I have a hard time understanding how you can actually separate emotion and "intelligence." Emotion provides the motivation. I.e. "I want to do XYZ." For example, "I want to make an argument based on deductive reasoning."
That's why I think it's futile to try to think that emotion exists separate from intelligence. Some people will say that they are creatures of "pure logic" who don't let "emotion" cloud their judgment. The real question is, why do they feel it's important to use "pure logic?"
Posted by: Tim | Jun 13, 2007 at 00:03
Synth, you make some great points, and pose some great questions. It really makes you question the value of humanity, doesn't it? We have so many emotional states that seem to have negative consequences: greed, jealousy, envy, desire. Your point about seeding those traits is very valid. Someday soon, your questions will be answered.
Prok, I agree. It's not a very scientific study. I think it's something that is very hard to observe and study, actually. The reason is related to Andy's point, that we all experience these emotions differently, and what makes one person angry doesn't necessarily make someone else angry. In fact, we could probably argue the meaning of the word angry for days.
Andy, check out the movie Scanners. I think the correct term for psychic mind weapons is psionics.
Posted by: Bob McGinley | Jun 13, 2007 at 00:05
Tim, that's pretty insightful. A couple of points. Emotions are very human, highly individual, and tend to vary by large degrees. What if the seeding doesn't really work?
This was my point about allowing machines to develop organic emotions, emotions that 'they' can understand. Not ones that 'we' can understand.
Lastly, there is no such thing as a creature of 'pure logic', that we know of. ;-)
Posted by: Bob McGinley | Jun 13, 2007 at 00:21
If you want examples of intelligence without human emotions, try your local sociopath. Our prisons and boardrooms are full of them.
Also, the term, as everyone must know, is "mind bullets". :)
Posted by: Ace Albion | Jun 13, 2007 at 03:36
"It really makes you question the value of humanity doesnt it? "
Compared to what ?
Posted by: Amarilla | Jun 13, 2007 at 04:31
If anger causes people to make better decisions, why is there so much violence in the REAL world?
Posted by: Peter Clay | Jun 13, 2007 at 06:19
Bob said: I propose that emotion models and human intelligence seeding will be some of the first large hurdles of The Intelligence Age. We will need to compare these intelligence models to organic models that are developed without the corruption of human emotion.
I agree with you about the emotional models, but perhaps not for the same reasons you have (and btw, these provocative posts of yours are cutting into my work time! :) ).
Contrary to what you call "the corruption [!] of human emotion," I believe effective emotional models in intelligent agents are necessary because they are necessary for us -- we operate emotionally, and anything seemingly intelligent that doesn't is oddly hobbled.
I've seriously wondered in the past if characters like Mr. Spock and HAL 9000 have thrown actual AI research off by decades: had we not had these fictional examples in our heads, we might have discovered the primacy of emotions in intelligence long ago.
Syntheticist said: Obviously, from a purely scientific perspective, emotionless argument evaluation will always be better.
But that's not the case, though we may well have been all but convinced by our highly rational, emotion-devaluing pedagogical and cultural traditions that it is. Damasio shows in his work with neurologically impaired patients how those without the experience of emotion are unable to plan and decide effectively, and how they are at a terrible disadvantage in everyday social interactions.
Back to Bob: These organic models will have the ability to discover new types of intelligence, and new types of emotions, emotions that we cannot understand.
I'm not sure what this means -- it's fun to imagine the existence of colors we've never seen for example, or to talk about entirely new kinds of intelligence or emotions. But we barely understand the intelligence and emotions we have; why (much less how) would we attempt to create forms even less understandable, if this proposition even makes sense?
I think where you're going with this is to allow agents to grow their own emotional responses to situations they experience as a separate branch of evolution. If you give them the equivalent of stimulating and depressive neurotransmitters and perceptual networks (e.g. epinephrine, norepinephrine and an analog of the thalamus and reticular formation) you may be able to engender some kinds of unconscious emotional responses, similar to what you can see in a hydra or a sea cucumber... maybe an aplysia, but probably not a dog. Much less a human, with our more complex and cognitively mediated "feelings" accompanying the underlying emotions.
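To make that a bit more concrete, here's a deliberately crude sketch of the kind of thing I mean: two opposing "neurotransmitter" signals driving a reflexive, roughly aplysia-level response. It's purely illustrative - the class, the signal names, and every number are invented for the example, not taken from any real model or from anything Bob has built:

# Toy "neurotransmitter" model: two opposing scalar signals drive a
# reflexive (non-cognitive) response. Purely illustrative.

class ReflexiveAgent:
    def __init__(self, threshold=0.6, excit_decay=0.8, inhib_decay=0.9):
        self.excitation = 0.0   # crude analog of epinephrine/norepinephrine tone
        self.inhibition = 0.0   # crude analog of a damping, "depressive" signal
        self.threshold = threshold
        self.excit_decay = excit_decay
        self.inhib_decay = inhib_decay

    def perceive(self, intensity, aversive=True):
        """Stimuli push one signal up; both decay back toward baseline."""
        if aversive:
            self.excitation += intensity
        else:
            self.inhibition += intensity
        self.excitation *= self.excit_decay
        self.inhibition *= self.inhib_decay

    def respond(self):
        """A reflex fires only when net arousal crosses the threshold --
        no 'feeling' here, just an unconscious response."""
        net = self.excitation - self.inhibition
        return "withdraw" if net > self.threshold else "rest"

agent = ReflexiveAgent()
for poke in (0.2, 0.3, 0.9):   # repeated mild pokes, then a sharp one
    agent.perceive(poke, aversive=True)
    print(agent.respond())

Mild pokes leave the agent at "rest"; only the sharp one pushes it over the threshold into "withdraw". That's about the level of sophistication I'd expect from this kind of seeding, which is why I don't expect dog-level behavior from it.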
...while there are aspects of our intelligence that are innate or instinctual, we also contain the ability to override those traits, and that these override abilities increase and decrease when certain emotional thresholds are met.
This kind of thing can be seen in many, many circumstances, and shows how emotion is non-propositional (aka a-rational), and yet can be both a motivator of and component in considered, logical planning and thinking. IMO any emotional model worth its salt has to show how emotion filters perception, affects cognitive planning (cognitive narrowing due to panic, for example, similar to but different from the anger effect you mention), alters expectation and explanation of outcome, and can lead to actions that people otherwise would not take.
I'd be interested in talking about emotional models you've considered - OCC or otherwise - or whether you're going essentially evolutionarily emergent and model-less at this point. I'm not a fan of using OCC as a base for modeling myself, but haven't found a lot of other viable models (so we've grown our own).
Posted by: Mike Sellers | Jun 13, 2007 at 09:41
Facile.
Posted by: | Jun 13, 2007 at 09:51
We already have alien intelligences around us. If you want to know what effect emotion has on reasoning, without the bias of human sympathy, why not examine our primates or cetaceans?
Robotics advanced by leaps and bounds when they tried imitating insects instead of humans. Maybe we're better served by a dog's sense of loyalty and motivation in our machines than by a human-like dedication.
Posted by: Moses | Jun 13, 2007 at 11:44
Primate emotions are very similar to human ones, which isn't really a surprise. Cetacean emotions... I don't know if anyone knows, but I expect they're big on practical jokes.
If you want alien, the octopus is about as close as we get given their neurology, morphology, and environment. But they're not exactly voluble with their emotions.
Posted by: Mike Sellers | Jun 13, 2007 at 13:02
With regard to the role of emotions, decision making seems to reside in our limbic system. There was a case study of a patient with a damaged limbic system but an intact cortex. The person could make excellent lists of all the things he needed to do that day, but could never decide which one to start with. Apparently our emotions (via the limbic system) provide our cortical logic system with the impetus to act. So if the "pure logic" Spock-like individual ever did exist, they could never make a choice as to what to act on, but would be a great list maker.
With regard to an AI model from our nervous system, I have thought that a digital system overlaying an analog system, both interpenetrating each other, would be an interesting model. The digital would be analogous to our brain's grey matter, with its short axons and localized interactions, whereas the analog system would be analogous to the long myelinated axons and holistic interactions of the white matter neurons. I have always wondered if it was the interaction of these two systems that may have allowed consciousness to emerge, which has remained beyond our empirical tools and is today still a mystery.
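One crude way to picture that interpenetration is a handful of discrete, locally firing units whose effective thresholds are continuously modulated by a slow global field. This is only a thought experiment in code - the units, the field, and every number are invented, and nothing here is meant as real neurobiology:

import math

# Toy picture of a discrete ("grey matter") layer of local, rule-based units
# modulated by a slow, continuous ("white matter") field. Illustrative only.

class DiscreteUnit:
    def __init__(self, threshold):
        self.threshold = threshold

    def fires(self, local_input, field):
        # The analog field shifts the unit's effective threshold, so the
        # same local input can produce different outcomes over time.
        return local_input > self.threshold - field

def global_field(t):
    """A slowly varying, holistic signal shared by every unit."""
    return 0.3 * math.sin(t / 10.0)

units = [DiscreteUnit(threshold=0.5) for _ in range(5)]
for t in (0, 15, 30):
    field = global_field(t)
    pattern = [u.fires(local_input=0.4, field=field) for u in units]
    print(t, round(field, 2), pattern)

The same local input produces different firing patterns depending on where the global field happens to be, which is the flavor of interaction I have in mind.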
To add a touch of sociality to this model, since sociality is critical for species where there are two sexes... if you can't form a relationship at least once in your life, you will have no genetic input. Even the shrew overcomes his and her disdain for other shrews to come together to mate... but that's it. What may provide some of our sociality is the ability to empathize, and this may be the result of mirror neurons. Mirror neurons were first discovered by Rizzolatti in his research with rhesus monkeys. These neurons fire when a monkey does an action, like breaking open a peanut shell, and the same neurons in another monkey mirror this even though the other monkey is only watching the peanut-breaking action. They even fire when the observer monkey hears a peanut being opened.
Finally, with regard to the "alien intelligences around us" in the primates and cetaceans, those intelligences are different from ours in degree and not in kind, and perhaps not as alien as Descartes tried to lead us to believe. They have different cognitive balances than our species, but they share many similarities as well. Most of the members of our species have this extreme lateralization in our brains, which is dominated by a sequential cognitive process (grey matter). The primates and cetaceans are more balanced in their cognitive processes. For example, chimpanzees are geniuses compared to our species when it comes to reading nonverbal communication, an ability that comes from our simultaneous cognitive processing. Nonverbal communication can be instantaneous (e.g. micromomentary movements) and is typically spatial as well. Sequential processing has a hard time with this type of information.
Finally, my limbic system is letting my cortical system know that it should get back to work...
Posted by: Roger | Jun 13, 2007 at 13:11
@ Mike... would inking or changing the color of your entire integument be considered voluble? Of course I've only met a few octopi. :)
Posted by: Roger Fouts | Jun 13, 2007 at 13:32
A paesano just told me: "a color is what I can see." And a scholar: "all you can do is TRANSLATE a spectrum of vibrations into another range - a perceptible one - and in this case we have no other colors but the ones we already know of; but the problem is: is the 'information' the only basic brick? Or is the 'interaction' one too? Does anything non-interacting even exist, after all? Not toward others, but to itself? And if yes, do we care?" Methinks that applies to AI, VW and all. You cannot see what you cannot see. Humans are not gods. The only "creation" available to us is procreation, and the highest possible goal is to understand ourselves; anything else is just subsequent. Sure, the "mimic" is a tool. But you cannot mimic an alien; you have to see it first. And when you see it, that's not an alien anymore, but a part of the perceptible and interactable nature. Well, "octopi" are nice and interesting; we have a lot to learn about ourselves and about the world we're living in, yet we lose time and effort trying to make AI and VW. Or to discuss the "possibility" and "opportunity". Jeez.
Posted by: Amarilla | Jun 13, 2007 at 19:02
Roger beat me to the draw. Bob, you need to do some more research into the function of the brain. I would write more, but my limbic system wants to go watch YouTube, and the part of my brain that explains the world wants to go look at the latest thread from Robert Bloomfield.
I apologize for being a bit flip, but it does strike me that your models don't reflect a good understanding of neurobiology. I don't think you can have a really solid discussion of intelligence unless you are as knowledgeable as possible about the brain. And we still don't know much about that; even the people who do know a lot don't know very much.
Posted by: Tom Hunter | Jun 13, 2007 at 21:00
Amarilla said: The only "creation" available to us is procreation
Not so. Tolkien coined the term SubCreation, which he defined as creating a 'Secondary World' that the reader (or in our case, player) can at least temporarily inhabit. He said, "the story-maker's success depends on his ability to make a consistent Secondary World which your mind can enter. Inside it, what he relates is 'true', it accords with the laws of that world. You therefore believe it, while you are, as it were, inside."
This is the kind of immersiveness for which most of our virtual worlds aim, and for which some of us (at least) believe we need believable, if artificial, inhabitants to bring them to life. When these individuals live and think and feel and socialize with each other and with you, that's subcreation at its best.

Roger: octopodes aren't voluble, but you're right, they do communicate irritation, curiosity, and anger. They're not as quiet as, say, the inoffensive sea hare, which has a somewhat more limited emotional vocabulary, nor as silent as the truly laconic if paradoxically big-mouthed T. gigas.
Posted by: Mike Sellers | Jun 13, 2007 at 22:40
Re: seeding
If I'm reading it right, you're talking about giving machines desires, which are at the basis (in my limited, not formally trained understanding) of emotions and motivations. So the question then is, how do you give awareness to a machine? How do you make the machine aware that it might no longer exist if someone turns it off?
Posted by: Tim | Jun 13, 2007 at 23:37
Mike Sellers says:
"Amarilla said: The only " creation " available to us is the procreation
Not so. Tolkein defined the term SubCreation, which he defined as creating a 'Secondary World' that the reader (or in our case, player) can at least temporarily inhabit. He said,
the story-maker's success depends on his ability to make a consistent Secondary World which your mind can enter. Inside it, what he relates is 'true', it accords with the laws of that world. You therefore believe it, while you are, as it were, inside."
But yes, exactly "so".
That is not any sort of creation, but a projection and a mimic. Experimenting with the known objects, subjects and tools.
And about the octo-whatever: the only access you have to another species is by using empathy. If you extrapolate what you know about the human (as much as we know - and extrapolation is the only tool available in this matter), then you will face the fact: in order to be able to understand, to interpret, to acknowledge anything meaningful about the "alien", you have first to "contact" and interact with it at the first basic level: emotions. It's the very level where "communication" begins; you will not learn anything of any value from a Chinese book just by seeing its pictures and without understanding the ideograms; unless, of course, you were interested in chemistry and physics. The same applies to human interactions. You cannot deny the fact that the emotions are "the living father and ruler" of intelligence; or, sure, you can, but it won't lead you to anything valuable for you or for others. Except the fun you might have doing so; fun, which is a valuable emotion and a necessary tool and meaning.
Posted by: Amarilla | Jun 13, 2007 at 23:39
Heavy-handed yet impenetrable. No mean feat.
Posted by: Thomas Malaby | Jun 14, 2007 at 00:19
Mike, I believe effective emotional models in intelligent agents are necessary because they are necessary for us.
I agree, I just wanted to posit the other side and expose some of the potential qualities of a more loosely coupled emotion model. One of the things I discovered in the work I'm doing is that we have too many emotions and variances to easily map to in a simulation model. Without getting too deep, I started playing with blank emotion emitters that agents could define dynamically, depending on existing emotional relationship thresholds. Not that it's anything to hang a hat on, but that's where I got the idea of organic machine emotions.
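To give a rough idea of what a blank emitter might look like, here's a minimal sketch. Everything in it - the class names, the "state-17" label, the thresholds - is made up for illustration; it's not the actual model I'm playing with:

# Sketch of a "blank" emotion emitter: the agent names the emotional state
# itself and only emits it once an accumulated relationship value crosses
# a threshold. Illustrative only; names and numbers are invented.

class EmotionEmitter:
    def __init__(self, label, threshold):
        self.label = label          # the agent's own name for the state
        self.threshold = threshold  # relationship level that triggers emission
        self.level = 0.0

    def stimulate(self, amount):
        """Accumulate relationship input; report whether the threshold is crossed."""
        self.level += amount
        return self.level >= self.threshold

class Agent:
    def __init__(self):
        self.emitters = {}

    def define_emitter(self, label, threshold):
        """The agent mints a new, blank emotional channel on the fly."""
        self.emitters[label] = EmotionEmitter(label, threshold)

    def experience(self, label, amount):
        emitter = self.emitters[label]
        if emitter.stimulate(amount):
            print(f"agent emits '{label}' at level {emitter.level:.2f}")

a = Agent()
a.define_emitter("state-17", threshold=1.0)  # a label only the agent "understands"
for hit in (0.4, 0.5, 0.3):
    a.experience("state-17", hit)

The point is that "state-17" means nothing to us; the agent defines the channel and the threshold, which is what I mean by organic machine emotions.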
Sorry about taking away from your work time, I know the feeling. For what it's worth, you were doing it to me a while ago.
Posted by: Bob McGinley | Jun 14, 2007 at 01:42
Roger,
I have thought that a digital system overlaying an analog system, both interpenetrating each other, would be an interesting model.
I have thought about this as well. I think we are starting to see the first stages of these types of technologies with defibrillator and pacemaker implants. I think we are even putting electrical stimulators in the brains of Parkinson's patients. Won't be long before we are altering the neural pathways of schizophrenics.
I also think at some point in my lifetime that we will go the other way, that we will be able to download our memories and experiences to digital devices. But I'm afraid none of us will get any work done if we start on that one today.
Tom, no apologies necessary.
it does strike me that your models don't reflect a good understanding of neurobiology
That's impressive considering you haven't seen my models, but I promise to study up. My greatest strength isn't what I know, it's what I want to know. You can quote me on that.
Posted by: Bob McGinley | Jun 14, 2007 at 02:06
Bob: One of the things I discovered in the work I'm doing is that we have too many emotions and variances to easily map to in a simulation model.
Hmm. Well, we've spent quite a bit of time on this and have, I think, come up with a way to maintain the entire (and hugely multivariate) space of human emotion, while also finding ways to project this onto something easily and qualitatively comprehensible. I'm not quite ready to talk about it in a forum like this though.
Sorry about taking away from your work time, I know the feeling. For what it's worth, you were doing it to me a while ago.
Happy to help. :) Seriously, I'm glad to see topics of interest to all sorts of people interested in different facets of virtual worlds here.
Posted by: Mike Sellers | Jun 14, 2007 at 08:36
"How do you make the machine aware that it might no longer exist if someone turns it off?"
Why would YOU be the one making that? Let it trigger itself. Enough input, complexity and interactions and, voila! Oops, forgot: and a bit of divinity required.
Posted by: Amarilla | Jun 14, 2007 at 09:51