
Launching Into Unethical Behavior: Lessons from the Challenger Disaster

Ann E. Tenbrunsel is a professor of business ethics at the University of Notre Dame. Max Bazerman is a professor of business administration at Harvard Business School. The guest post below is adapted from their new book Blind Spots: Why We Fail to Do What’s Right and What to Do About It.
 
Launching into Unethical Behavior
By Ann E. Tenbrunsel and Max H. Bazerman
The 25th and last flight of the shuttle Endeavour has come and gone, which means there's just one shuttle flight left: July 8's Atlantis launch will be the 135th and final mission for the program, 30 years after the first shuttle test flights.

Shuttle liftoff from Cape Kennedy, FL (Photodisc)

For anyone who was around on Tuesday, January 28, 1986, it's difficult to watch a shuttle launch without remembering the Challenger disaster, when the space shuttle disintegrated 73 seconds after launch, killing all seven crew members. While the most commonly referenced explanation for what went wrong focuses on the technological failures associated with the O-rings, an examination of the decision process that led to the launch through a modern-day "behavioral ethics" lens illuminates a much more complicated, and troubling, picture, one that can help us avoid future ethical disasters.
On the night before the Challenger was set to launch, a group of NASA engineers and managers met with the shuttle contracting firm Morton Thiokol to discuss the safety of launching the shuttle given the low temperatures forecast for the day of the launch. The engineers at Morton Thiokol had noted problems with O-rings in 7 of the past 24 shuttle launches and saw a connection between low temperatures and O-ring problems. Based on this data, they recommended to their superiors and to NASA personnel that the shuttle should not be launched.
According to Roger Boisjoly, a former Morton Thiokol engineer who participated in the meeting, the engineers' recommendation was not received favorably by NASA personnel. Morton Thiokol managers, noting NASA's negative reaction to the recommendation not to launch, asked to meet privately with the engineers. In that private caucus, Boisjoly says, his superiors were focused on pleasing their client, NASA. This focus prompted an underlying default of "launch unless you can prove it is unsafe," rather than the typical principle of "safety first." (See Boisjoly's detailed account of what happened here, from the Online Ethics Center for Engineering.)
The engineers were told that Morton Thiokol needed to make a “management decision.” The four senior managers present at the meeting, against the objections of the engineers, voted to recommend a launch. NASA quickly accepted the recommendation, leading to one of the biggest human and technical failures in recent history.
Examining this disaster through that behavioral-ethics lens reveals an ethical minefield loaded with blind spots that are eerily similar to those plaguing contemporary organizational and political decision processes:

To those of us who study behavioral ethics, the statement "We need to make a management decision" is predictably devastating. The way we construe a decision has profound consequences, with different construals leading to substantially different outcomes. Framing a resource dilemma in terms of social rather than monetary goods, despite identical payoffs, produces substantially greater resource preservation. Ann and her colleague Dave Messick, from Northwestern's Kellogg School of Management, coined the term "ethical fading" to describe the power that the framing of a decision can have over unethical behavior.
We found that when individuals saw a decision through an ethical frame, more than 94% behaved ethically; when individuals saw the same decision through a business frame, only about 44% did so. Framing the decision as a “management decision” helped ensure that the ethics of the decision—saving lives—were faded from the picture. Just imagine the difference if management had said “We need to make an ethical decision.”
Despite their best intentions, the engineers were at fault too. Their analysis centered on the relationship between O-ring failures and temperature. Both NASA and Morton Thiokol engineers examined only the seven launches that had O-ring problems. No one asked to see the launch data for the 17 previous launches in which no O-ring failure had occurred. Examining all of the data shows a clear connection between temperature and O-ring failure, with a resulting prediction that the Challenger had greater than a 99% chance of failure. These engineers were smart people, thoroughly versed in rigorous data analysis. Yet they were bounded in their thinking. By limiting their examination to a subset of the data (the failed launches), they missed a vital connection that becomes obvious when you look at the temperatures for the seven prior launches with problems alongside the 17 prior launches without problems.
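To make that selection-bias point concrete, here is a minimal sketch in Python of the contrast between the two views of the data. The temperature figures and the cold-morning forecast below are illustrative placeholders, not the actual flight records, and the logistic-regression fit is our stand-in for a proper risk analysis, not anything the engineers or NASA actually ran.

```python
# Illustrative sketch of the selection-bias problem (hypothetical temperatures,
# not the actual flight records): restricting attention to the 7 problem
# launches hides a temperature effect that is visible once all 24 launches
# are analyzed together.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical joint temperatures (deg F) at launch
temps_problem = [53, 57, 58, 63, 70, 70, 75]                 # 7 launches with O-ring incidents
temps_ok = [66, 67, 67, 67, 68, 69, 70, 70, 72,
            73, 75, 76, 76, 78, 79, 80, 81]                  # 17 launches without incidents

# Failures-only view: temperatures run from the 50s to the 70s,
# so no pattern jumps out.
print("problem-launch temperatures:", sorted(temps_problem))

# Full-data view: fit P(incident | temperature) on all 24 launches.
X = np.array(temps_problem + temps_ok).reshape(-1, 1)
y = np.array([1] * len(temps_problem) + [0] * len(temps_ok))
model = LogisticRegression(C=1e6).fit(X, y)   # near-unpenalized fit

# Extrapolate to a cold launch morning (forecast temperature is illustrative).
forecast_temp = np.array([[31.0]])
print("estimated incident probability at 31 deg F:",
      round(model.predict_proba(forecast_temp)[0, 1], 3))
```

The contrast is the whole point: the problem launches alone span a wide temperature range and suggest nothing, while a model fit on all 24 launches extrapolates to a very high incident probability at a cold launch temperature.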
Chances are that the reward system at Morton Thiokol also contributed to the disaster. Most likely, managers at Morton Thiokol were rewarded for pleasing clients, in this case NASA. When it became clear that NASA didn’t like the “don’t launch” recommendation, Morton Thiokol managers consciously (or subconsciously) realized that their reward was in jeopardy. And they reacted to protect that reward. This works out well for the behaviors that are rewarded; not so well for behaviors—such as safety or ethical decisions—that are not.
Management teams at Morton Thiokol and NASA appear to have utilized a powerful but deadly technique, which we refer to in the book as the "smoking gun." They insisted on complete certainty that low temperatures and O-ring failure were related; certainty that was impossible to provide, not only because of the engineers' bounded analysis, described above, but also because of the statistical improbability that such definitive data would ever exist.
Smoking guns are used to prevent change and reinforce the status quo. The tobacco industry used it for years, insisting that the evidence linking smoking to cancer was inconclusive. We see the same strategy used today by climate change deniers. In the case of Morton Thiokol, the smoking gun was particularly effective because it was used in combination with a very specific status quo: “Launch unless you prove it is unsafe to do so.” And that combination proved to be deadly.
There are parallels between the fatal Challenger launch decision and more "ordinary" unethical behavior in corporations, politics and society. We see them when we look at the way decisions are framed: "No harm intended, it's just business," or "That's the way politics operates." We see similarities in the limits of analysis, such as examining the legal but not the ethical implications of a decision. We see the power of rewards on Wall Street, where shareholder value is pursued to the exclusion of nearly everything else. And we see the smoking gun strategy used over and over, from corporations claiming that "the impact of a diverse workforce is decidedly mixed" to politicians and Supreme Court justices claiming there is no clear evidence that their financial and social relationships bias their judgments. If we recognize the power of these hidden forces and identify our blind spots, we can usually stop ethical disasters before they launch.

