Bookshelf

Lessons from the Scariest Design Disaster in American History

Designer and journalist Cliff Kuang shares an excerpt from “User Friendly”

Welcome to Bookshelf, a new series featuring books written by Googlers. For our first installment, Senior Staff Designer Cliff Kuang shares an adapted selection from *User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play*. Written with Robert Fabricant and described by the *New York Times* as “a tour de force,” the book is a must-read for anyone working in UX today.

On March 28, 1979, the United States came within thirty minutes of unthinkable doom, after a near-meltdown of Reactor 2 at the Three Mile Island nuclear power plant in Middletown, PA. As investigators later discovered, the 150 tons of uranium within the reactor core had hit an astounding 4,300°F. At just 700°F hotter, it would have melted through the eight-inch steel reactor containment shell, bored through the earth until hitting bedrock beneath the Susquehanna River, and blown radioactive steam geysers into the air just 90 miles from Philadelphia.

At the time, bureaucratic poobahs blamed the near-miss on a favorite scapegoat that we still reach for when something catastrophic happens, whether it’s a crash in a self-driving car or a botched missile warning in Hawaii: user error. But the fact is that TMI wasn’t a user error; the people operating the reactor never had a chance. TMI was a failure of design, and one that never should have happened.

The problems began with the reactor’s control panel, which, after a clog in the plant’s cooling system, lit up in a storm of warning lights and klaxons. There were literally hundreds of alarms and lights going off at once. The operators duly rushed to turn them off, trying to figure out the problem. A fog of confusion—what is happening?!—had already started to descend. The panel itself gave no clear sense of how the plant actually worked, clustering bits of information in meaningless ways. For example, the panel indicating reactor leaks was next to the one announcing elevator problems; investigators later found that a red light could mean fourteen different things, some bad, some good. Faced with that rat’s nest of conflicting signals, none of the operators could connect cause and effect in all the ministrations they attempted to address what had begun as a minor clog.

But maybe the most egregious design error was even more fundamental. There was a release valve at the top of the reactor cooling system. In the control room, that valve was so important that it got its own light, and its own switch. As it turned out, the light was wired merely to the switch, not the valve. Therefore it could only relay whether someone had flipped the switch—and not whether, in fact, the valve had actually been closed. Put another way, that single, idiotic light was only capable of marking the intent to close the valve—not the actual closing of that valve. So when the operators went to check, the light simply lied. And because of that single piece of misbegotten feedback, that valve stayed open for hours as water boiled away from the reactor core, sending the temperature inside on its terrifying path toward 5,000°F.
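The flaw is easy to state in code. Here is a minimal sketch in Python (purely illustrative names and structure, not the plant’s actual wiring or software) of the difference between an indicator wired to the switch and one wired to the valve itself:

```python
# A minimal sketch of the TMI indicator flaw. Everything here is
# illustrative -- this is not the plant's actual wiring or software.

class ReliefValve:
    def __init__(self):
        self.switch_commanded_closed = False  # what the operator asked for
        self.physically_closed = False        # what the hardware actually did

    def flip_switch_closed(self):
        """Flip the switch. A healthy mechanism would follow the command;
        at TMI the valve stuck open, so the physical state never changed."""
        self.switch_commanded_closed = True

def light_as_wired_at_tmi(valve: ReliefValve) -> bool:
    # Wired to the switch: can only ever report the operator's intent.
    return valve.switch_commanded_closed

def light_wired_to_a_sensor(valve: ReliefValve) -> bool:
    # Wired to a position sensor: reports what the valve actually did.
    return valve.physically_closed

valve = ReliefValve()
valve.flip_switch_closed()                 # the mechanism silently fails
print(light_as_wired_at_tmi(valve))        # True: the light says "closed"
print(light_wired_to_a_sensor(valve))      # False: the valve is still open
```

The first function is feedback about intent; the second is feedback about reality. The operators needed the second and got the first.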

What happened at TMI caused a sad cascade of events. Together with the wildly misinformed disaster movie *The China Syndrome*, the accident at TMI helped end political support for nuclear power in America—which experts still point out is one of the safest, most reliable sources of clean energy. So as we look back at the clean energy that might have been, it’s safe to say that TMI was the biggest design failure in US history. It’s also the most instructive. The failures at TMI teach us volumes about what good design actually means: If the feedback in an app or gadget isn’t just right, you’ll never understand how it works. But more than just apps or gadgets, it’s the feedback mechanisms all around us that teach us how the world works. Those mechanisms show us what’s important. They give us agency, when the world would otherwise be just meaningless lights, lying to us about what they do.

In the wake of the accident in 1979, Reactor 2 at Three Mile Island was shuttered and sealed. But Reactor 1 quietly continued to operate until 2019. With input from the famed design guru Don Norman, it was even retrofitted to be easier to use. A couple of years ago—when I was writing *User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play*—I wanted to learn exactly how it had been redesigned to prevent the waves of confusion that had swamped Reactor 2. It took me months to convince the power plant’s PR minders that I wanted to visit not to see what remained dangerous but to see what had been fixed. Finally, I went.

You approach the Three Mile Island power plant on a winding two-lane road, and then the trees open up and start offering flickers of a riverbank. Then, finally, you see two gargantuan cooling towers on a tiny island of their own. They are three hundred feet tall, huge, out of scale with everything else. When I went, steam was rising from Reactor 1, in cottony plumes, unfurling at a shambling tempo. Next to it, the tower of Reactor 2 stood quiet and streaked with rust, a dead sentry for thirty years. The scene produced an eerie, strangely beautiful effect. Impossibly expensive to tear down, Reactor 2 loomed like a monumental postmodern sculpture. It was the literal warning shadowing the reality.

The operating cooling towers at Three Mile Island in 2015, before the plant was decommissioned.

Photo courtesy of the author

Across the road was the ho-hum, squat brick building where workers train. Inside, it was appointed in institutional drab: orange-brown-gray carpet, public-school-style furniture made from chrome and chipboard.

The only reason this plant came to Pennsylvania at all was organized crime. In the 1970s, Metropolitan Edison had at first tried to build it in New Jersey. The Mob, which ran thick in the local unions, threatened to sabotage the work site unless it got the customary one percent kickback on the total building cost—which amounted to $7 million against $700 million. The power company pressed ahead anyway and began laying the foundations. Then, while a crane was lowering the 700-ton reactor core into the ground, some unknown construction worker dropped a literal wrench into the crane’s workings. The message was clear: Pay up, or else the plant would be sabotaged in ways no one would ever know until it was too late. The power company promptly abandoned the site and decamped for a little spit of land in Pennsylvania. As a result, Reactor 2 was reconfigured for Three Mile Island within a mere 90 days, to fit a site it was never meant to occupy. For those who worked there, Reactor 1 always performed beautifully. Reactor 2 remained a tetchy, temperamental beast.

The simulated control room felt like a movie set. The banks of control panels, painted industrial green, and the rows of lights, shrouded in protective cowls, made it look like Mission Control for an Apollo flight, albeit smaller. My tour guide was the man who ran the room, a friendly, slightly built engineer with wire-rim glasses and decades on the job. He wore sturdy brown walking shoes and clothes that were scrupulously beige, making him look like an extra from a period movie about the Space Race. He was in charge of simulating the beginning of the end of the world, to see how the workers would respond. The room was an exact replica, down to the last switch, of everything within the control room at Reactor 1.

The control panel in the training room at Three Mile Island, in 2015.

Photo courtesy of the author

In the wake of the accident at Reactor 2, a slew of subtle but powerful changes worked their way across the industry. Standing in the simulated control room, I could see them all. For one, from the back of the control room, everything was visible—there were no hidden indicator lights to be forgotten behind a panel. The room was easily navigable. The lights here were consistent: When everything checked out, they glowed blue.

Normal wasn’t why we were there, though, and it wasn’t why this room was designed. The engineer stole away into an observation room at the back, and then came out to announce that the reactor had shut itself down. Just as it would have when the clog happened at Reactor 2. A bank of lights went off, but the effect was muted. Contained, so that you could focus on what exactly was happening.

For the people assembled in the control room at Three Mile Island in 1979, all these problems—erroneous feedback, controls that were inconsistent and impossible to navigate—added up to a still greater one. The men on duty literally couldn’t imagine what was going wrong, because the machines wouldn’t let them. They had no mental model showing how all these disparate and strange events might be connected, which would have helped them deduce what was going on.

Mental models are nothing more and nothing less than the intuitions we have about how something works—how its pieces and functions fit together. They’re based on the things we’ve used before; you might describe the entire task of user experience as the challenge of fitting a new product to our mental models of how things should work. To take one simple example, we have expectations of how a book “works”: It has pages of information, laid out one after the other in a sequence; to get more information, you turn the page. One key to the enduring success of the touchscreen Amazon Kindle lies in how well it has remapped that mental model: Just as you turn a page in a book, you “turn” the page of an ebook with a swipe.

When we can’t guess how a gadget works, we use feedback to form a hazy mental model of its logic. But the most literal way to develop a mental model is to draw a picture. Looking around the Reactor 1 control room now, you can see it’s been remade to create a mental model of the entire reactor. Even for a neophyte like myself, the major pieces of the system were easy to imagine. The room simply mirrors the reactor’s design. Each control panel represents a discrete system—for example, the secondary circulation system or the reactor core—so that when you look around, you can see how they all link up, each flowing into the next. The reactor has been mapped to the room—just the same way the burners on your stove are mapped to their corresponding dials, or the controls on your car’s driver’s seat are mapped to buttons that resemble the seat’s parts. All of it was meant to create a durable picture in the minds of its operators.

But the most curious thing I noticed was the way the workers had been trained to interact inside this little bubble of precision. When they went to confirm some crucial reading, they went in pairs: One performed an action and the other confirmed it; then the first confirmed that the action had been done right, and the second confirmed it once more. This process was meant to eliminate one egregious error that happened in 1979, when a lone worker went to the back of a control panel and misidentified the gauge that might have revealed an open valve. Instead, the two workers followed the same steps built into any working button: The user pushes it, the button acts, then the button confirms, through feedback, that the action is done. Buttons almost always engage with a satisfying click, telling you they’ve been pressed. In the reactor control room, that feedback comes from the second worker saying an action’s done. But it’s the same idea.
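In software terms, the two-person rule is an act-then-verify loop. Here is a toy sketch in Python (the names and the simulated valve are illustrative, not real control-room software) of an action that only counts as done once an independent check confirms its actual effect:

```python
# A toy sketch of the act-then-verify loop behind the two-person rule.
# All names here are illustrative; this is not real control-room software.

def perform_checked(act, verify, description: str) -> str:
    act()                    # worker one performs the action
    if not verify():         # worker two independently checks the outcome
        raise RuntimeError(f"{description}: commanded, but not confirmed")
    return f"{description}: done and confirmed"

# Simulate a valve whose mechanism can fail silently, as at TMI.
state = {"commanded_closed": False, "physically_closed": False}

def flip_switch():
    state["commanded_closed"] = True   # intent registered; mechanism sticks

try:
    perform_checked(
        act=flip_switch,
        verify=lambda: state["physically_closed"],  # check reality, not intent
        description="close relief valve",
    )
except RuntimeError as err:
    print(err)  # the protocol surfaces what a lying light would have hidden
```

The check deliberately verifies the effect rather than the command; that distinction is exactly what the original valve light got wrong.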

It is a strange kind of world we live in, where to make sure that men make no mayhem with a machine, they’re made to behave like buttons. But then, it’s maybe not surprising on deeper reflection: Feedback is what turns information into action. Buttons, in turn, have become the connection point between our will and the user-friendly world. Embedded in them is a fundamental truth about how our minds make sense of things. As banal as buttons may seem, properly viewed they can also seem like everything. The point arrives from surprising places, all the time. For me, the strangest was when my wife told me that her psychologist said the secret to having a productive argument with your spouse is to listen to what they have to say, repeat what you just heard, then finally have your spouse confirm that’s what they meant. Push the button, provide feedback, confirm the action. Like a button. Feedback precedes any influence we might wield upon the world. Design is nothing more—and nothing less—than creating artifacts imbued with the shared understanding that feedback allows.

Adapted from *User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play* by Cliff Kuang with Robert Fabricant. Published by MCD, an imprint of Farrar, Straus and Giroux, on November 19, 2019. Copyright © 2019 by Cliff Kuang and Robert Fabricant. All rights reserved.