The inventor of my Treo, Jeff Hawkins, wrote a persuasive book about how brains think, with one scary chapter about how to make machines think the same way. What's scary about it is this: Hawkins doesn't seem to grasp the way AI and people interact outside the context of handhelds. It gets intimate, you know. And that makes a big difference.
We've talked about AI before here. In fact, it was the subject of the first post. Cory has cited the Hawkins book a couple of times, and properly so, for its new interpretation of thinking. Fascinating stuff.
My concerns are with policy implications.
In "The Future of Intelligence," chapter 8 of On Intelligence, Hawkins specifically addresses fears that intelligent machines will be a menace of some kind (pp. 213-217). He says:
"These fears rest on a false analogy. They are based on a conflation of intelligence - the neocortical algorithm - with the emotional drives of the old brain - things like fear, paranoia, and desire. But intelligent machines will not have these faculties."
Well, no. No they won't. I mean, unless we build them in. But we won't do that, will we?
Think about it. Of course we will!
1. A machine with any kind of decision-making authority has to know how to weigh alternatives. Later in the chapter Hawkins has machines driving cars, running air traffic control, even conducting diplomatic negotiations! In terms of economic theory (a decision science that such machines are sure to rely on), you need to know what the objective function is. When is the car best serving us, when it hits our 2 children, or when it hits the school bus containing 50 children of strangers? How could a machine handle the Cuban Missile Crisis if it did not have a brinkmanship pattern to rely on, built I guess by repeatedly playing chicken games with its own utter destruction (coded as the worst of all events) at stake? The very process of steeping the AI with the patterns of judgment it needs will endow it not with our old brain, but with all the decision metrics driven by it. It won't be an anxious, narcissistic machine, no. It will only act like one.
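To make the point concrete, here's a toy sketch (my own illustration, not anything from Hawkins) of what an objective function looks like in code. The outcome names and weights are invented; the point is that when the designer codes the machine's own destruction as the worst of all events, the machine "acts afraid" without having any fear at all.

```python
# Toy decision machine: pick the action with the highest expected utility.
# The weights below are hypothetical design choices, not emotions --
# yet they produce behavior that looks exactly like risk-averse judgment.

OUTCOME_VALUE = {
    "concession_gained": 10,
    "stalemate": 0,
    "concession_given": -10,
    "mutual_destruction": -1_000_000,  # coded as the worst of all events
}

def expected_utility(action):
    """Sum of probability-weighted outcome values for one action."""
    return sum(p * OUTCOME_VALUE[outcome]
               for outcome, p in action["risks"].items())

def choose(actions):
    """The machine's whole 'judgment': maximize the objective function."""
    return max(actions, key=expected_utility)

# A hypothetical brinkmanship scenario: escalating probably wins a
# concession, but carries a 1% chance of mutual destruction.
escalate = {"name": "escalate",
            "risks": {"concession_gained": 0.6,
                      "stalemate": 0.39,
                      "mutual_destruction": 0.01}}
back_down = {"name": "back down",
             "risks": {"concession_given": 1.0}}

best = choose([escalate, back_down])
print(best["name"])  # the huge destruction penalty makes it back down
```

The machine here has no old brain, no dread. But because mutual destruction carries an enormous negative weight, even a 1% chance of it outweighs a likely win, and the machine backs down, just as a frightened human would.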
2. The future of human-machine interaction is not my Treo, useful though it may be; we're going to dance together in games. Indeed, in MOGs we will see integrated machine/human societies. And if you look at the shoots of that AI now springing up, you see some machines that are emotionally flat, as Hawkins predicts. That door on Sneed's Trading Post: It just opens when you click it, and closes automatically 7 seconds later. No old brain in that, is there. But how many AI cycles do such bots occupy in a contemporary MMORPG? Not many. No, most of the cycles go to huge, fire-breathing monstrosities. Big Nasty is the paragon of old-brainism: Paranoid. Angry. Unreasonable. Just loves to make you unhappy. And that's all by design. Meanwhile, back in Safetown, Arch-mage Mentoria does nothing but wish you well and praise your accomplishments over and over, just like Mommy: Overprotective, indulgent, spoiling Mommy. All by design.
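The contrast between the two kinds of game AI above can be sketched in a few lines. This is my own illustration, not code from any actual game: a DoorBot with no old brain at all, and a BigNasty whose paranoia is nothing but a hard-coded design choice.

```python
# Two hypothetical game bots, illustrating the point above:
# one emotionally flat, one with "old brain" behavior by design.

class DoorBot:
    """Emotionally flat AI: opens on click, closes 7 ticks later."""
    def __init__(self):
        self.is_open = False
        self.close_at = None

    def click(self, now):
        self.is_open = True
        self.close_at = now + 7  # the 7-second auto-close

    def tick(self, now):
        if self.is_open and now >= self.close_at:
            self.is_open = False

class BigNasty:
    """Old-brainism by design: treats any approach as a threat."""
    def react(self, player_distance):
        # "Paranoid. Angry. Unreasonable." -- but it's just a threshold.
        return "attack" if player_distance < 30 else "pace menacingly"

door = DoorBot()
door.click(now=0)
door.tick(now=5)   # still open
door.tick(now=7)   # auto-closes
monster = BigNasty()
print(monster.react(player_distance=10))  # attack
```

Neither bot feels anything. But the designer gave one of them the behavioral signature of fear and rage, which is exactly the point: the emotions arrive not through the neocortical algorithm but through the objective the designer wrote down.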
Hawkins says machines will generally not have human emotions, but of course they already do.
Now here's why the whole thing is scary. Hawkins says, why worry about machines so much:
"When the industrial revolution came along, we feared electricity (remember Frankenstein?) and steam engines...But electricity and internal combustion engines are no longer strange and sinister. They are as much a part of our environment as air and water."
Yes, we have fully integrated electricity and cars into our daily lives. While I am no Luddite, I do think it's important to point out that we live a different way now than we did 150 years ago. We have gained much, but we have also lost much. There's a reason why the most popular fantasy environments are strictly pre-industrial.
With each advance of technology, we surrender something. The feel of grass on our feet. Walking. Living next to the same people for decades. Being with relatives when they pass away. Eating food that we've raised ourselves. Feeling self-reliant. Knowing we have a firm place in the social hierarchy.
No, I am not saying these are all great things, or that they did not have attendant costs, or that development is, in general, bad. I am also not saying that it is today impossible to have these things. My point is only that we just don't live that way any more, and that there were some good things that have passed away. Electricity and the internal combustion engine took away ancient modes of life and gave us this, whatever you might call it. Electricity and engines are indeed very much like air and water to us now. Can't live without them, really. And that is a very, very dramatic change in human living.
If AI is like electricity and engines, it too will dramatically change how we live. Many good things will be gained, but some good things will be surrendered, too. We should absolutely be, if not fearful, at least prudent. We could have done cars and electricity a different way. Los Angeles did not have to turn into what it is today. We really should think about how to build intelligent machines. Hawkins does not realize that we will share society with them, and for social reasons, we need to think about how to get along.