1 Cautious Cars and Cantankerous Kitchens: How Machines Take Control




Donald A. Norman
November 13, 2006




Two Monologs Do Not Make a Dialog

Where Are We Going? Who Is to Be in Charge?

The Rise of the Smart Machine

Thinking For Machines Is Easy, Physical Actions Are Hard; Logic Is Simple, Emotion Difficult

Communicating With Our Machines: We Are Two Different Species

References and Notes



I’m driving my car through the winding mountain roads between my home and the Pacific Ocean. Sharp curves with steep drop-offs amidst the towering redwood trees and vistas of the San Francisco Bay on one side and the Pacific Ocean on the other. A wonderful drive, the car responding effortlessly to the challenge, negotiating sharp turns with grace. At least, that’s how I am feeling. But then, I notice that my wife is tense: she’s scared. Her feet are braced against the floor, her shoulders hunched, her arms against the dashboard. “What’s the matter?” I ask. “Calm down, I know what I’m doing.”



But now imagine another scenario. I’m driving my car through the same winding mountain roads. But then, I notice that my car is tense: it’s scared. The seats straighten, the seat belts tighten, and then the dashboard starts beeping at me. I notice the brakes are being applied, automatically. “Oops,” I think, “I’d better slow down.”
Do you think this example of a frightened automobile fanciful? Let me assure you it isn’t. The behavior described in the story already exists on some high-end luxury automobiles. Even more control over driving exists in some cars, while yet more is being planned. Stray out of your lane and some cars balk: beeping at you, perhaps vibrating the wheel or the seat, or flashing lights in the side mirrors. One car company is experimenting with partial correction, steering the car partway back into its own lane. Turn signals were designed to tell other drivers that you were going to turn or switch lanes. Today, they are the means for telling your own car that you really do wish to turn or change lanes: “Hey, don’t try to stop me,” you tell your car, “I’m doing this on purpose.”
I once was a member of a panel of consultants advising a major automobile manufacturer. Each panel member started with a short talk, explaining his or her point of view. I told the stories above, about how I would respond differently to my wife and to my automobile. “How come,” asked fellow panel member Sherry Turkle, an MIT professor who is both an authority on the relationship of people to technology and a friend, “how come you listen to your car more than your wife?”
“How come?” indeed. Sure, I can defend myself and make up rational explanations, but those all miss the point. As we start giving the objects around us more initiative, more intelligence, and more emotions and personality, what does this do to the way we relate with one another? What has happened to our society when we listen to our machines more than people? This question is the driving force behind this book.
The answer is complex, but in the end, it comes down to communication. When my wife complains, I can ask her why, and then either agree with her or try to reassure her; by understanding her concerns, I can also modify my driving so that she is not so bothered. When my car complains, what can I do? There is no way to communicate with my car: all the communication is one way.
This is the way with machines. Machines have less power than humans, so they have more power. Contradictory? Yup, but oh so true.
Who has the most power in a negotiation? In business negotiations between two powerful companies, if you want to make the strongest possible deal, who should you send to the negotiating table: your CEO or an underling? The answer is counterintuitive: It is the underling who can make the best deal.
Why? Because no matter how powerful the opposing arguments, no matter how persuasive, no matter even if your representative is convinced, the weak representative has no choice. Without the power to make a deal, even in the face of powerful, convincing arguments, the weak negotiator can only say, “I’m sorry, but I can’t give you an answer until I consult with my bosses,” only to come back the next day and say, “I’m sorry, but I couldn’t convince my bosses.” A powerful negotiator, on the other hand, might be convinced and accept the offer, even if later, there was regret.
Mind you, successful negotiators understand this bargaining ploy and won’t let their opponents get away with it. When I told these stories to a friend who is a successful lawyer, she laughed at me. “Hey,” she said, “if the other side tried that ploy on me, I’d call them on it. I won’t let them play that game with me.”
But with machines, we can’t bargain and in most cases we really have no choice: we are in the midst of a task, and when the machine intervenes, we have no alternatives except to let the machine take over. This is how machines get so much power over us. Because we have no recourse, because we can’t talk back or reason with them. It is obey or ignore, not discuss and modify. And sometimes we are not even given the choice of ignoring: it is “obey.” Period.

“Do you like your new car?” I asked Tom, who was driving me to the airport following a lengthy meeting. “How do you like the navigation system?”

“I love the car,” said Tom, “but I never use the navigation system. I don’t like them: I like to decide what course I will take. It doesn’t give me any say.”

Notice Tom’s predicament. He asks the navigation system for a route. It gives him one. Sounds simple – human-machine interaction: a nice dialog. A conversation, if you will. But look again at what Tom said: “It doesn’t give me any say.” And that observation goes to the heart of the matter. We fool ourselves if we believe we communicate with machines. Those who design advanced technology are proud of the communication capabilities they have built into their systems. The machines talk with their users, and in turn their users talk with their machines. But closer analysis shows this to be a myth. There is no communication, not the real, two-way, back-and-forth discussion that characterizes true dialog, true communication. No, what we have are two monologs, two one-way communications. People instruct the machines. The machines signal their states and actions to people. Two monologs do not make a dialog.


In this particular case, the use of the navigation system is optional, so Tom does have a choice: because his navigation system doesn’t give him enough say over the route, he simply doesn’t use it. Maybe that’s how we should all react to these power grabs by our machines: just say no! Alas, my lawyer friend had the power to force the other side to play by her rules. Our machines do not always allow that option. Moreover, sometimes the machine’s actions are valuable: in the case of automobile driving, they might save our lives. In the case of the home, they might make our lives easier. Even the navigation systems, flawed though they might be, are often of great value. The question, then, is how we can change the way we interact with our machines, the better to take advantage of their strengths and virtues while letting us opt out of behavior we don’t want or don’t like. Today, we have no power to force changes. Machines (and their unseen, hidden designers) have all the power.
As our technology becomes more powerful, more in control, its failure at collaboration and communication becomes ever more critical. Collaboration requires interaction and communication. It means explaining and giving reasons. Trust is a tenuous relationship, formed through experience and understanding. With automatic, so-called intelligent devices, trust is sometimes conferred undeservedly. Or withheld, equally undeservedly. The real problem, I believe, is a lack of communication. Designers do not understand that their job is to enhance the coordination and cooperation of both parties, people and machines. Instead, they believe that their goal is for the machine to take over, to do the task completely, except when it gets into trouble, when suddenly it becomes the person’s responsibility to take command. Often, the person is unable to take control effectively, either because the need was not noticed, or because the person still maintained an irrational faith in the machine despite its failure, or because there was simply not enough time to understand the complexity of the situation and correct things before the damage was done. Weirdly, when the machine fails and humans are suddenly required to take over, and they then fail to avert tragedy, the accident is blamed on human error. Human error, when it was the machine that had failed? Yup.
Why do I pay more attention to my automobile than to my wife? I don’t – it just looks that way. When my wife points out a problem, she is often correct, but I can query her, find out what she has noticed, and then either take the appropriate action or discuss the situation with her. When my automobile does the same thing, no discussion is permitted. The car has reached a conclusion, and right or wrong, it has started its response. I have no way of discussing the issue with the car, no way of even finding out precisely what it was that caused the car’s behavior: all I can do is accept it.


