More than two thousand years ago, the Greek philosopher Socrates argued that the book would destroy people’s ability to reason. Why? Because Socrates believed in dialog, in conversation, in debate. But with a book, there is no debate: the written word cannot talk back. Today, the book is such a symbol of learning and knowledge that we laugh at his argument. But take it seriously for a moment. Socrates was absolutely correct that we learn and perform best through questioning, through discussion and debate, through the mental activities called “reflection.” Despite Socrates’ claims, books do lead to reflection, for in classrooms book content is debated and discussed, and in everyday life books are debated in conversations with friends, in articles in periodicals, in the content of our various media, and in the conflicting views presented by other books. The conversations and debates may take place over months or years, but they do take place.
With technology, however, there is no way to debate or discuss. Technology simply acts, without discussion, without explanation. We are given no choice in the matter. Even if we are able to discuss the actions at a later time, this after-the-fact reflection is of little use, for the moment of decision has come and gone. With the arguments in books, time is not critical. With the actions of our automobiles – or even our household appliances – within a few seconds or minutes the deed is done, and no amount of discussion or reflection afterwards can change anything.
Socrates may have been wrong about the book, but he was certainly right about technology. The technology is in control, performing its actions without consultation or explanation, instructing us how to behave, similarly without advice or consultation. Is the technology to be trusted? Perhaps. But without dialog, how are we to know?
Both as a business executive and as a chair of university departments, I learned that the process of making a decision was often more important than the decision itself. When a person makes decisions without explanation or consultation, people neither trust nor like the result, even if it is the identical course of action that would be taken after discussion and debate. Many business leaders like to make the decision and be done with it. “Why waste time with meetings,” they ask, “when the end result will be the same?” Well, the end result is not the same, for although the decision itself is identical, the way it will be carried out and executed, and perhaps most importantly, the kinds of responses that will be made if things do not go as planned, will be very different with a collaborating, understanding team than with one just following orders.
Tom dislikes his navigation system, even though he agreed that at times it would be useful. What if navigation systems discussed the route, making it easy to change it or to get explanations for why one particular route is recommended over another? Sure, systems allow a high-level choice of such things as “fastest,” “shortest,” “most scenic,” or “avoid toll roads,” but even when the person makes those choices, there is no discussion or interaction about why a particular route is chosen, no understanding of why the system thinks route A is better than route B. Does it take into account the long traffic signals and the large number of stop signs? How, actually, does it decide which route is faster? And what if the two routes barely differ, perhaps by just a minute out of an hour’s journey? We are told only the fastest; we aren’t told that there are any alternatives, ones we might very well prefer despite the slight cost in time. There is no way of knowing: whether dumb or smart, correct or not, the methods remain hidden, so that even were we tempted to trust the system, the silence and secrecy promote distrust, or at least, disbelief.
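To make the hidden computation concrete, here is a minimal sketch of how such a planner might work: score each candidate route as a weighted sum of several criteria, then report not just the single winner but every near-tie. Everything in it, the criteria, the weights, the five-percent tolerance, is my assumption for illustration, not the method of any actual navigation product.

```python
# A hypothetical route scorer: weighted criteria plus a near-tie rule.
# None of this reflects any real navigation system's internals.
from dataclasses import dataclass

@dataclass
class Route:
    via: str
    minutes: float       # estimated driving time
    miles: float         # distance
    toll_dollars: float  # toll cost
    stops: int           # traffic signals and stop signs along the way

# Assumed driver preferences: the relative weight of each criterion.
WEIGHTS = {"minutes": 1.0, "miles": 0.2, "toll_dollars": 3.0, "stops": 0.5}

def score(route: Route) -> float:
    """Lower is better: a weighted sum of time, distance, cost, and stops."""
    return (WEIGHTS["minutes"] * route.minutes
            + WEIGHTS["miles"] * route.miles
            + WEIGHTS["toll_dollars"] * route.toll_dollars
            + WEIGHTS["stops"] * route.stops)

def near_ties(routes: list[Route], tolerance: float = 0.05) -> list[Route]:
    """Return every route within `tolerance` of the best score, so the
    driver sees the genuine alternatives instead of a single verdict."""
    ranked = sorted(routes, key=score)
    best = score(ranked[0])
    return [r for r in ranked if score(r) <= best * (1 + tolerance)]
```

A system built this way has something to say when two routes differ by a minute out of an hour: it can show both, and let the driver’s own preferences break the tie.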
Notice that things do not need to be this way. Some navigation systems do present drivers with alternative routes, displaying them both as paths on a map and as a table showing the distance, estimated driving time, and cost, allowing the driver to choose. Here is how this might work.
Suppose I wished to drive from my home in Palo Alto, California to a city in Napa Valley. The navigation system would present me with two choices:
From Palo Alto, CA to St. Helena, CA
Route 1: 1 hour 51 minutes, via the San Francisco Bay Bridge
Route 2: 2 hours 14 minutes, via the Golden Gate Bridge
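In code, the choice screen above could be as simple as the following sketch. The RouteOption structure and the prompt are hypothetical illustrations, not the interface of any shipping navigation system; only the two driving times come from the display shown above.

```python
# A hypothetical choice screen: present the alternatives, let the
# driver decide.
from typing import NamedTuple

class RouteOption(NamedTuple):
    via: str
    minutes: int  # estimated driving time

def present_choices(origin: str, destination: str,
                    options: list[RouteOption]) -> RouteOption:
    print(f"From {origin} to {destination}")
    for i, opt in enumerate(options, start=1):
        hours, mins = divmod(opt.minutes, 60)
        print(f"  Route {i}: {hours} hr {mins} min, via {opt.via}")
    pick = int(input("Which route? "))  # the driver, not the machine, decides
    return options[pick - 1]

# The two routes from the display above (times converted to minutes):
options = [RouteOption("San Francisco Bay Bridge", 111),
           RouteOption("Golden Gate Bridge", 134)]
# chosen = present_choices("Palo Alto, CA", "St. Helena, CA", options)
```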
My wife and I recently drove this trip, with my navigation system insisting on directing us via route 1. My wife suggested we go via the Golden Gate Bridge, route 2, even though it was slightly longer and slower. We weren’t in a rush, and route 2 was more scenic and also avoided rush-hour traffic in San Francisco. My system offered no alternative: “Want to go to St. Helena? Then listen to what I say.” It didn’t matter that we preferred a different route. We weren’t given a choice.
But this time I ignored my car and listened to my wife. The problem was that we didn’t want to turn off the navigation system, because once we crossed the Golden Gate Bridge, we would need its help. So we took the path toward the Golden Gate Bridge and ignored the navigation system’s pestering during our entire passage through San Francisco. The system fought us continually, repeatedly urging us to turn left, or right, or even to make a U-turn. There was no way to explain that we wanted the alternative route. Only after we were on the Golden Gate Bridge did the system give in, or, more precisely, that is when its automatic route computation finally selected the path we were actually taking, and from then on its instructions were useful.
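A less stubborn system could treat persistent deviation as a decision rather than an error. The sketch below shows one possible policy, entirely an assumption of mine: after the driver ignores a few consecutive instructions, replan forward from the car’s current position instead of demanding a U-turn.

```python
# A hypothetical "yield to the driver" rerouting policy.
PATIENCE = 3  # consecutive ignored instructions before the system yields

def next_instruction(planned_route, position, ignored, replan):
    """planned_route: set of road segments on the current plan.
    position: the segment the car is on now.
    replan(position): computes a fresh route forward from here.
    Returns (route, ignored_count, message)."""
    if position in planned_route:
        return planned_route, 0, "continue on planned route"
    if ignored + 1 >= PATIENCE:
        # The driver clearly wants something else: accept it and replan.
        return replan(position), 0, "accepting your route; recalculating"
    return planned_route, ignored + 1, "suggest returning to planned route"
```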
Suppose we had been given the two choices first: we would all have been happier.
This interaction with the navigation system is an excellent example of the issue: intelligent systems are too smug. They think they know what is best for us. Their intelligence, however, is limited. They lack all the information required to make appropriate choices. Moreover, I believe this limitation is fundamental: there is no way a machine can have sufficient knowledge of all the factors that go into human decision making. But this doesn’t mean that we should reject the assistance of intelligent machines. Sometimes they are useful. Sometimes they save lives. I want the best of all worlds: the intelligent advice, but with better interaction and more choices available. Let machines become socialized; let them acquire some manners and, most importantly, some humility. That’s what this book is about.
If the car decides to straighten the seat or apply the brakes, I am not asked or consulted, nor am I even told why. The action just happens. The car follows an authoritarian style, making decisions and allowing no dissent. Is the car necessarily more accurate because, after all, it is a mechanical, electronic technology that does precise arithmetic without error? No, actually not. The arithmetic may be correct, but before doing the computation, it must make assumptions about the road, the other traffic, and the capabilities of the driver. Professional drivers will sometimes turn off the automatic equipment because they know the automation will not allow them to deploy their skills. That is, they will turn off whatever they are permitted to turn off: many modern cars are so authoritarian that they do not even allow this choice.
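The contrast can be put in miniature as code. In this sketch, the risk thresholds and the idea of letting a skilled driver decline, except in the most extreme cases, are illustrative assumptions of mine, not a real braking or stability-control design.

```python
# Two hypothetical control styles for the same braking decision.

def authoritarian_brake(collision_risk: float) -> str:
    # Acts silently; the driver is neither consulted nor told why.
    return "brake" if collision_risk > 0.5 else "no action"

def advisory_brake(collision_risk: float, driver_declines: bool) -> str:
    # Warns first and explains; respects a skilled driver's refusal,
    # reserving unilateral action for the most extreme cases.
    if collision_risk > 0.9:
        return "brake (imminent collision: overriding)"
    if collision_risk > 0.5:
        return ("driver declined; monitoring" if driver_declines
                else "brake (warned first, no objection)")
    return "no action"
```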
More and more, our cars, kitchens, and appliances are taking control, doing what they think best without debate or discussion. Now, it might very well be true that my car – and my wife – were correct, and my assurances to my wife a delusion on my part. The problem, however, is not who is right or wrong: the problem is the lack of dialog, the illusion of authority by our machines, and our inability to converse, understand, or negotiate. Machines, moreover, are prone to many forms of failure. As we see in Chapter 4, unthinking obedience to their demands has proven to be unwise.
When I started writing this book, I thought that the key to making machines better co-players with people was to develop better systems for dialog. We needed better tools for conversation with these smart, automated systems, and in turn, they needed better ways to communicate with us. Then, I thought, we could indeed have machines that were team players, that interacted in a supportive, useful role.
I now believe I was wrong. Yes, dialog is the key, but successful dialog requires a large amount of shared, common knowledge and experiences. It requires appreciation of the environment and context, of the history leading up to the moment, and of the many differing goals and motives of the people involved. But it can be very difficult to establish this shared, common understanding with people, so how do we expect to be able to develop it with machines? No, I now believe that this “common ground,” as psycholinguists call it, is impossible between human and machine. We simply cannot share the same history, the same sort of family upbringing, the same interactions with other people. But without a common ground, the dream of machines that are team players goes away. This does not mean we cannot have cooperative, useful interaction with our machines, but we must approach it in a far different way than we have been doing up to now. We need to approach interaction with machines somewhat as we do interaction with animals: we are intelligent, they are intelligent, but with different understandings of the situation, different capabilities. Sometimes we need to obey the animals or machines; sometimes they need to obey us. We need a very different approach, one I call natural interaction.