The Decisions Myth: Leaders and Organizations

I. The Decider

In an infamous quote from April 2006, President Bush defended his choice to retain Donald Rumsfeld as Secretary of Defense despite the failures in Iraq:
I hear the voices and I read the front page and I know the speculation.
But I'm the decider and I decide what's best. And what's best is for
Don Rumsfeld to remain as the secretary of defense.
Mr. Bush, however, had it wrong. In a republic, the voters are ultimately the deciders, and later that year they decided his approach wasn't working: Republicans lost control of Congress, and Rumsfeld's resignation was accepted in November 2006.
In public agencies, we make decisions in a complex environment involving laws, regulations, multiple stakeholders, political expectations, and concern about how the media will depict those decisions.
II. The Authorizing Environment

Leaders in any organization face limits on their authority. Mark Moore of Harvard University termed these limits the "authorizing environment," or AE: all the sources of authority, formal and informal, within which a leader or manager acts. An AE provides legitimacy and support to the decisionmaker, at least in the short run; poor results will eventually cost any decisionmaker legitimacy.

Formal sources of authority are the laws and regulations that establish the manager's powers, as well as formally established hierarchies that limit freedom of action. Informal sources are the wider set of influences that shape the manager's capacity to exercise power: the leader's reputation and charisma, interest groups, civil society, the media, political leaders, and so on. Subordinates are also part of the AE; their resistance to a decision, or poor implementation of it, can doom a policy.
In any organization, public or private, anyone expected to make decisions must have a clear understanding of their authorizing environment.
III. Limits to Human Decisionmaking

The Rational Model suggests that humans can collect data on a situation, craft a series of options, determine a decision rule to apply, and then use that rule to select the best of the available options.
In practice, our ability to make "rational" decisions is limited. Herbert Simon's Administrative Behavior documented the limits on our ability to collect and analyze data and to act on it. There is, for example, a generally recognized limit of about seven decision factors that most people can take into account before they go into overload. Our memories are limited, and it is often very difficult to (a) figure out what a good outcome might look like or (b) properly calculate the probability of such an outcome (consider, for example, your retirement options). Instead, Simon argued, humans "satisfice": we absorb and interpret as much data as we can, then make the best decision possible under the circumstances, one we believe will lead to a satisfactory outcome. This is a model of bounded rationality.
After Simon, James March and others continued to assault the notion of "rational" decisionmaking. Karl Weick's analysis of the Mann Gulch fire of 1949, in which 13 firefighters were killed in Montana, illustrates several of their points:
* It is difficult to assess risks. Humans are quite poor at it. We exaggerate some and hugely underestimate others.
* Stigma can be attached to some important options, especially in a crisis. President Bush’s recent Iraq decisions are primarily designed to avoid the stigma of defeat. In firefighting, it can be perceived as “weak” to drop your equipment and run, even if it may save your life.
* Decisionmaking occurs within a social context. The small firefighting groups of the Mann Gulch fire were simply not prepared to make quick life and death decisions.
* Decisionmaking depends upon clear communication. People with important data may be shut out for social reasons, communication can break down, and any pretense of “rational” decisions being made is lost.
* New situations can be very disorienting. The feeling that you are in a completely new context can throw comfortable maxims and decision criteria out the window. Suddenly the Mann Gulch fire crews, comfortable working on small fires, were caught in a very large conflagration, and were clueless. Their training had not prepared them.
IV. The Myth of the Great Decider

The data are clear: in general, decisions made by groups are better than decisions made by individuals. Groups bring in more diverse views and are more likely to include people with data and opinions that challenge the status quo. Often such people and their data are screened out within a hierarchy; new ideas can threaten power relationships. Also, the more diverse the group (in terms of race, ethnicity, etc.), the longer it will take for the group to become effective, but the better its decisions are likely to be.
Irving Janis, however, pointed out that groups can engage in "groupthink," a decision-making process in which group members go along with what they believe is the consensus. This usually occurs in highly cohesive groups that are insulated from outside experts and gadflies; the closer and more insular the group, the less likely its members are to raise questions that might break the cohesion. Directive leadership also increases the risk of groupthink, since a strong leader is likely to promote his or her own solution.
To avoid groupthink, Janis argued for a series of steps: assigning group members the role of "critical evaluator" so they can freely air doubts and objections; having the leader avoid expressing opinions that could bias the range of options and data analyzed; using a set of independent groups to analyze the problem; bringing in trusted outsiders to validate data and arguments; and formally designating a devil's advocate.
V. The Challenger Launch Decision

Diane Vaughan's classic study of the Challenger disaster meticulously analyzed the organizational cultures of NASA and Morton Thiokol, the government contractor collaborating on the space shuttle program. The engineering flaws in the shuttle's famous O-rings were well known. Yet the complex rules, regulations, and decisionmaking "trees" of both organizations were no match for the cultural factors that led to the fatal decision to allow the Challenger launch. Competition for scarce resources shaped a structure and culture that accepted risk-taking and corner-cutting as norms influencing decisionmaking. Small, seemingly harmless departures from technical and procedural standards became routinely accepted despite warnings from a few agency misfits. Each time irregularities occurred, experts believed they could account for them, so that O-ring damage and soot blow-by became the norm rather than signals of danger. On launch day, the low temperatures known to lead to O-ring failure were explained away. Vaughan termed this process the "normalization of deviance."
References

"Groupthink." Wikipedia.
Vaughan, Diane (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press.
Weick, Karl E. (1993). "The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster." Administrative Science Quarterly 38: 628–652.