
Normal Accident Theory

Normal accident theory describes organizations and technologies that are so complex that accidents are to be expected as a normal outcome. In Normal Accidents, Charles Perrow examined such high-risk technologies as nuclear power, petrochemical plants, and air travel (as well as the organizations managing them), and concluded that it is the combination of complex interaction and tight coupling among the subparts of these systems that makes them accident-prone, regardless of the precautions taken.

Different technologies are marked by varying types of interaction among the parts. At one end of the spectrum lie systems that are linear, where each event in the sequence of production or operation is linked only to the steps immediately preceding and following it. This implies that operators and designers of such systems can easily comprehend what would happen if something were to go wrong with a given part. The opposite type of interaction is complex. In complex systems, connections between parts are multiple, indirect, and unclear. As a system moves toward the complex end of this spectrum, it becomes increasingly difficult for operators, managers, or even designers to predict or understand possible interactions.

In addition, technological systems may be tightly or loosely coupled, a term that refers to the amount of slack between components or subsystems. The characteristics of tight coupling include time-sensitive processes, invariant sequences, and a unique production path. In terms of explaining the causes and scale of accidents, tight coupling is important because it reduces the ability of operators (or the system itself) to recover from a failure, thereby making minor breakdowns more likely to become catastrophic accidents.

In Perrow's analysis, falling at the extreme of either of these variables does not necessarily spell doom for an organization; it is only in the combination of tight coupling and complex interaction that the danger of a system accident arises. System accidents are labeled as such because they result directly from the structural makeup of the system (organizational and technological), rather than from the skills, attention, or motivation (or lack thereof) of those operating it.

In response to the normal accidents literature, a number of scholars have attempted to look into the theoretical reasons why many organizations using high-risk technology, which Perrow would predict to have had catastrophic accidents, have not witnessed such accidents. Specifically, analysts have looked at what they have termed high-reliability organizations—organizations that must be highly reliable in order to provide their most basic services or accomplish their most fundamental tasks.

Applications of normal accident theory (and its counterpart) to political processes have been limited, but include a few important examples. Scott D. Sagan's work on the prevalence of near-accidents with nuclear weapons during the Cold War demonstrates the usefulness of these approaches to important problems of political science.

Jordan Branch

Further Readings and References

La Porte, T. R., & Consolini, P. M. (1991). Working in practice but not in theory: Theoretical challenges of "high-reliability organizations." Journal of Public Administration Research and Theory, 1, 23–49.
Perrow, C. (1999). Normal accidents: Living with high-risk technologies. Princeton, NJ: Princeton University Press.
