
May 31, 2005

Comments

1.

To be fair to Hawkins' book, he does spend some time talking about motivations and emotions (or the lack thereof) in his theory.

Ted: "There's a reason why the most popular fantasy environments are strictly pre-industrial."

Because electricity is evil? Perhaps a simpler explanation is that developers are, well, lazy (or at least, derivative)?

2.

Regarding emotions in AI:

Okay, I haven't read Hawkins' book. But I have to say that the idea of human-like AI without emotions is, well, ludicrous. This is an idea based less on science than on old science fiction ideas (from 2001 to Star Trek to "A.I.") -- fiction not surprisingly written by intellectual men enculturated in a society that has rewarded linear lockstep reasoning over emotionality. Even in the psychological and AI scientific communities, we're only now gaining any serious understanding of the classifications and utility of emotions. Until recently this was too qualitative an area for real study, and was seen as the poor sibling to reasoning anyway.

As one counterpoint to Hawkins, it's worth reading Damasio's accessible Looking for Spinoza, especially his discussion of how people without the ability to reason emotionally (no, that's not a paradox) are all but incapable of making even the most trivial of decisions. At a slightly more technical level, Affective Computing (NB: that's Affective, not Effective) by Roz Picard at MIT is also well worth reading on this subject.

Emotion and affect are already making their way into AI used by the US military in training simulations, and will be seen in games soon too. My bet is that once you've interacted with an AI-driven NPC with actual emotions, and who can read your emotions as well, you won't be much interested in going back to the paper-thin NPCs in other games.
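For concreteness, here is a minimal sketch of what such an affective NPC might look like, assuming a simple two-dimensional valence/arousal model of emotion; the class, thresholds, and behaviors are hypothetical illustrations, not any shipping system:

    # Minimal affective NPC sketch (hypothetical; valence/arousal model).
    class AffectiveNPC:
        def __init__(self):
            self.valence = 0.0   # negative = displeasure, positive = pleasure
            self.arousal = 0.0   # 0 = calm, 1 = highly activated

        def appraise(self, event_valence, event_intensity):
            # Appraisal: nudge internal state toward the event, with decay.
            self.valence = 0.6 * self.valence + 0.4 * event_valence
            self.arousal = max(0.0, min(1.0, 0.8 * self.arousal + event_intensity))

        def choose_behavior(self):
            if self.valence < -0.3 and self.arousal > 0.5:
                return "attack"    # reads to the player as anger
            if self.valence < -0.3:
                return "avoid"     # reads as fear or sulking
            if self.valence > 0.3:
                return "approach"  # reads as friendliness
            return "idle"

    npc = AffectiveNPC()
    npc.appraise(event_valence=-1.0, event_intensity=0.6)  # player insults the NPC
    print(npc.choose_behavior())  # -> "attack"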

3.

For those of you looking to comment on Hawkins' book without reading it . . . don't. (Sorry, Mike, not trying to ping on you, just hoping that more smart people will read his book.) It's a fast read and I promise that it won't be what you expect.

His reasoning about memory as the basis for intelligence is quite plausible even without emotion. In fact, look at the company (Numenta) he's founded to take advantage of this idea.

4.

Cory>
"Because electricity is evil? Perhaps a simpler explanation is that developers are, well, lazy (or at least, derivative)?"

...Or consider how many crafting categories players might insist be required to build a car. Perhaps the organic, natural fare is just plain easier to model at a level acceptable to players...

Which brings up an interesting question. Are machines only interesting because of their details? And nature interesting for its apparent simplicity?

5.

My view on the popularity of the fantastic as a game milieu is that the computer environment is experientially magical: it is about the production of arbitrary effects from arbitrary actions (defined programmatically). This experiential element lends itself naturally to being thematized in games as fantasy. Eddo Stern alluded to this in a paper he presented at the Computer and Digital Games Conference in Finland, in 2002, though I think that it hasn't been completely worked out yet. It's the nature of the player-relationship with game-space that makes magic seem organic and normal - the magical phenomenal rests on the technological/programmatic noumenal; implementing a simulation of a technology in the game actually adds another layer of artificiality.

6.

My first instinct, as soon as I realized you were making a case against Hawkins, was to look for a response from Bartle. I'd actually recommend soliciting one, if that's appropriate: he IS in this business from the AI perspective, after all.

That said, I was in line to buy the first books after Hawkins' talk way back in October, so while I'm a bit rusty, I'm a bit familiar with what he's said. And in short, I think you're doing exactly what he doesn't want you to do.

Ted> "Yes, we have fully integrated electricity and cars into our daily lives. While I am no Luddite, I do think it's important to point out that we live a different way now than we did 150 years ago. We have gained much, but we have also lost much. There's a reason why the most popular fantasy environments are strictly pre-industrial."

I think it's an important point that I use the exact same argument (borrowed from Bartle, I think... no, borrowed from YOU, Ted, from your MDC 2003 presentation) when I tell people, "Hey, virtual worlds are up and coming."

Fantasy involves dreaming. It involves crazy stuff that extends beyond the mere limitless sky into the castles in the clouds, flying continents, space stations, unknown regions, alien races. But that takes effort to imagine; effort most people would rather not make (graphical v. text). So what happens? Like Cory said, developers (read: people in general) are lazy. The most popular fantasy environments are made by people who don't feel like stepping drastically out of the old myths and played by people who don't want to have to imagine anything spectacularly new.

Their other choices are modern or futuristic. Neither is as easy as pre-industrial. And you know this. It's got nothing to do with the happy-feely nature of olden times. The proof of that lies in the extreme preponderance of things like... elves. Nature spirits. Organic landscaping. Which is in turn countered by an economic question: why then are those very popular worlds also rich in corporate structures with profit-maximization as a primary aim? Why do they act the same?

Ted> "Hawkins says machines will generally not have human emotions, but of course they already do."

No, they don't. Your examples were glossed-over design mechanisms; neither of those examples suggested actual emotion, because they're scripted behaviors: no intelligence. Their intent is to graft a stereotype in the player's mind (big nasty or mommy) onto the computer-controlled avatar. They do not think. They exhibit behaviors that our brains map to the results of human emotions. Re-read Chapter 2: he talks about why intelligent behavior != intelligence. AI, thus far, has consisted of "make the robot appear to be human". Turing. Hawkins is tossing that out.
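At bottom, that scripted kind of "emotion" is a lookup table. A minimal sketch (names purely illustrative):

    # Scripted behavior: a pure lookup, no internal state, no appraisal,
    # no learning -- it grafts a stereotype onto the avatar, nothing more.
    SCRIPT = {
        ("monster", "player_near"): "roar",    # reads as anger
        ("monster", "low_health"):  "flee",    # reads as fear
        ("pet",     "player_near"): "nuzzle",  # reads as affection
    }

    def scripted_behavior(role, stimulus):
        return SCRIPT.get((role, stimulus), "idle")

    print(scripted_behavior("monster", "player_near"))  # -> "roar"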

You're putting a really negative spin on what is completely up to the future to decide. Emotions aren't the basis of decision-making. Value functions are. Emotions are factored in, of course, but hell, if there are no emotions, then they plainly aren't factored in.
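To spell out the claim: a decision rule can be a bare value function, with affect as just one optional input that drops out when absent. A minimal sketch (weights and features hypothetical):

    # Decision by value function; emotion is an optional bias term.
    def value(action, features, emotion_bias=0.0):
        # With no emotions, emotion_bias is 0 and simply isn't factored in.
        return features[action]["payoff"] - features[action]["cost"] + emotion_bias

    features = {
        "fight": {"payoff": 5.0, "cost": 4.0},
        "flee":  {"payoff": 1.0, "cost": 0.5},
    }

    # A dispassionate agent just takes the argmax over actions.
    print(max(features, key=lambda a: value(a, features)))  # -> "fight"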

Hawkins is talking about utilizing the neocortical model to construct machines capable of developing programmable thought processes without the inherited biological burdens human beings are forced to deal with, like eating, sleeping, and having sex.

Try this on for size:
http://yudkowsky.net/tmol-faq/tmol-faq.html

7.

Sorry I have not read the book, so I’ll be generic.

I’m not sure about AI and emotions; there’s a large diversion into the philosophy of mind and a bunch of other subjects which I think is best avoided, but there is certainly a lot to think about with respect to computers and attitudes.

I’ve written elsewhere (Computer Ethics - Just Science Fiction? Philosophy Now #23 Spring 1999) that we need to be careful about normative values and computing. This is because attitudes get integrated into computing in at least three ways:

1) the code, specifically the scope of decisions that are made and the criteria that are used to make those decisions
2) the context in which the computing is sited
3) the social attitude towards computing

A simple example is a system that decides whether someone is eligible for a home loan.

(1) it is easy to see how the decision structure of the program could disadvantage a given group, such as a racial group, by using selection criteria that, when taken together, favour a given group and disadvantage another, even though no individual on the project was themselves being overtly racist (see the sketch after this example).

(2) such systems are often implemented as part of an efficiency-raising or cost-reduction program; as part of this, it is often the case that staff are de-skilled and disempowered.

(3) computing systems are often seen as both authoritative and morally neutral.

Taking these three factors together one can see how technology can impact society strongly through the imposition of attitudes, even though the encoding and impact of those attitudes is pretty much invisible to everyone involved in the construction and implementation of the system.
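A minimal sketch of point (1), to show how individually neutral-looking criteria can combine against a group; every field, threshold, and postcode here is hypothetical:

    # Loan eligibility: each criterion looks neutral in isolation, but if a
    # group is concentrated in certain postcodes and short-tenure housing,
    # the conjunction systematically disadvantages that group.
    HIGH_RISK_POSTCODES = {"X1", "X2"}  # "risk" tables can encode past bias

    def eligible(applicant):
        criteria = [
            applicant["years_at_address"] >= 3,
            applicant["years_in_job"] >= 2,
            applicant["postcode"] not in HIGH_RISK_POSTCODES,
            applicant["income"] >= 30000,
        ]
        return all(criteria)

    applicant = {"years_at_address": 1, "years_in_job": 5,
                 "postcode": "X1", "income": 45000}
    print(eligible(applicant))  # False, despite a solid job and income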


My thoughts have not really moved on to emotion, but given the issues with attitude that, on the whole, we fail to realise let alone tackle, I’m somewhat wary about the next steps.


I’d also note that while AI might not be all-pervasive in the west, decision-making systems on chips (ICs, integrated circuits) are. There is talk above about the internal combustion engine; well, modern cars have ICs (things like fuel injection systems ensuring maximum efficiency, if memory serves), as do many other things with so-called embedded devices.
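As a toy illustration of that kind of embedded decision-making, here is a closed-loop controller nudging fuel flow toward a target air/fuel ratio; the numbers and the sensor stand-in are purely illustrative:

    # Toy fuel-injection control loop (illustrative values only).
    TARGET_RATIO = 14.7  # stoichiometric air/fuel ratio for petrol
    fuel_rate = 1.0

    def sensor_ratio(fuel_rate):
        # Stand-in for an oxygen-sensor reading: more fuel -> richer mixture.
        return 16.0 / fuel_rate

    for _ in range(20):
        error = sensor_ratio(fuel_rate) - TARGET_RATIO
        fuel_rate += 0.05 * error  # proportional correction each cycle

    print(round(sensor_ratio(fuel_rate), 2))  # settles near 14.7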


An interesting branch of computer ethics that this brings up is when, if ever, artificial agents become moral agents. One would think that emotion was a component of this (I’m sure there are some Star Trek episodes that deal with this too) though there are some interesting alternative theories out there.

8.

William Huber> This experiential element lends itself naturally to being thematized in games as fantasy. Eddo Stern alluded to this in a paper he presented at the Computer and Digital Games Conference in Finland, in 2002, though I think that it hasn't been completely worked out yet.

The paper is here. Good stuff.

I was just reading it last week, actually. As he notes, the idea of technology as magic & vice versa is a pretty old idea -- I'm sure there are plenty of good books and articles on the interplay. At the Command Lines conference, we were talking about it a bit. The mad scientist = the evil witch, etc. Sandra Braman was talking about software "demons" and the move toward increasing pan-demonium, which kind of fit as well. One thing I thought was kind of interesting (and that Stern mentions) was how well this fit into the progression from mere player to "wizard" in MUDs, where being a wizard was essentially a grant of admin-type powers.

In keeping with Cory's directive, I'll say nothing about the book. :-)

9.

Generic AI point: whether an artificially intelligent being needs emotion or not depends on what you mean by "intelligent".

I haven't read the Hawkins book (it isn't even in the pile of 13 books on my desk waiting to be read) but AI researchers have tackled this before (e.g. here - not that I've read that either). The consensus seems to be that if you don't give your AI emotions, you have to give it something else instead.
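One "something else" that has been tried is a set of homeostatic drives that prioritize goals the way emotions otherwise would. A minimal sketch (values entirely hypothetical):

    # Drive-based motivation: the least-satisfied drive wins this tick.
    drives = {"energy": 0.2, "safety": 0.9, "curiosity": 0.6}  # 0 = unmet

    def most_urgent(drives):
        return min(drives, key=drives.get)

    print(most_urgent(drives))  # -> "energy": recharge before exploring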

Richard

10.

I assume that when we talk about giving AI emotions, what we are really talking about is making certain information available to its decision-making processes? The only thing that a computer is capable of processing is information reducible to storable data. So what is the binary representation of anger? Or sadness?
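For concreteness: one common (and avowedly reductive) way to store affect is as a small numeric vector, e.g. valence and arousal floats. This shows how "anger" can be reduced to storable bytes; whether those bytes *are* anger is exactly the open question:

    import struct

    anger = (-0.8, 0.9)    # (valence, arousal): unpleasant, highly activated
    sadness = (-0.7, 0.2)  # unpleasant but low-energy

    # Eight bytes of "anger" and eight of "sadness".
    print(struct.pack("ff", *anger).hex())
    print(struct.pack("ff", *sadness).hex())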

Regardless of how an AI has been programmed to make its judgements, it still has to be programmed to do so. An AI can only be harmful to us if programmed to do so or enabled to do so (with or without the designer's awareness).

The only thing that I fear is an AI being made or enabled to harm us that is also capable of deceiving us. Considering the openness of the ability to create software, it would be unreasonable to say that, given enough time, there would *not* be an AI with that description. The Matrix might be a kind of inevitable conclusion of mankind after all.

11.

Jim Self wrote:

I assume that when we talk about giving AI emotions, what we are really talking about is making certain information available to its decision-making processes? The only thing that a computer is capable of processing is information reducible to storable data. So what is the binary representation of anger? Or sadness?

The reductionist argument misses the point in silicon just as it does in neurology. That is, the experiences of anger and sadness have some representation at the neural level in humans, but this doesn't make them any less real.

Whether an AI with emotions actually has emotions or only does a really good job of simulating them is a philosophical question (a troubling one, though, from both sides). How do you know anyone else really has emotions? Do you consider your dog to have emotions -- or, like Damasio, do you differentiate between a dog having emotions vs. feelings? When you see an actor on a movie screen, is that individual really experiencing the emotions they're showing? How do you know, and how does this change your conception of what they and others truly feel?

Regardless of how an AI has been programmed to make its judgements, it still has to be programmed to do so.

Some AIs can learn from their experience and observation, superseding any prior programming. With learning and emergent goals and behavior strategies, it's no more true that an AI can act "only as it's been programmed to act" than that your dog or child is similarly limited.
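A minimal sketch of that kind of learning, where the policy comes from experienced reward rather than from anything the programmer spelled out (toy environment, hypothetical numbers):

    import random

    # Tabular value learning over two actions.
    actions = ["left", "right"]
    Q = {a: 0.0 for a in actions}

    def reward(action):
        # The environment, unknown to the agent: "right" usually pays off.
        return 1.0 if (action == "right" and random.random() < 0.8) else 0.0

    for _ in range(1000):
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.choice(actions) if random.random() < 0.1 else max(Q, key=Q.get)
        Q[a] += 0.1 * (reward(a) - Q[a])  # incremental value update

    print(max(Q, key=Q.get))  # almost surely "right" -- learned, not scripted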

The only thing that I fear is an AI being made or enabled to harm us that is also capable of deceiving us.

You'll be glad to know then that human aspects such as humor, deception, and aesthetics are some of the most difficult areas to crack in AI. To date I don't believe any of these has been substantially solved. Having poked at the area of AI deception myself, I can tell you it makes most of the other issues we've confronted pale in comparison.

12.

Several correspondents (from Castronova on down) seem to have missed a fundamental distinction between what we could call "Artificial Human-like Personality", e.g. game AIs, and "Natural Machine Intelligence", i.e. the sort of intelligence that would arise in devices as complex in the right ways as the human brain.

It's obvious that the controlling software for a fire-breathing munchkin in a game, no matter how well it takes advantage of machine intelligence, is likely to have a programmer-determined personality to match the game objectives. But that sort of artificial personality is exactly what Hawkins is not discussing.

His book is specifically an argument against traditional A.I. approaches, so parroting traditional A.I. maxims is no response.

I'd be willing to entertain an amusing argument about whether humans will even tolerate intelligent, dispassionate decision making; or other intelligent, informed discussions; and I vehemently disagree with Mr. Hawkins about whether machine intelligence is a threat. But please do not spend too much time chasing straw windmills in circles because you missed that distinction.

The comments to this entry are closed.