The Power of Mental Models: How Flight 32 Avoided Disaster

On a sunny morning in 2010, Qantas Airways Flight 32 taxied onto a runway in Singapore, requested permission to begin the eight-hour flight to Sydney, and lifted into the bright sky.

The following is excerpted from the book Smarter Faster Better: The Secrets of Productivity in Life and Business by Charles Duhigg.

A few minutes after takeoff, the pilot, Richard de Crespigny, activated the plane’s autopilot. When the plane reached 7,400 feet, however, the pilots heard a boom. Then there was another, even louder crash, followed by what sounded like thousands of marbles being thrown against the hull.

A red alarm flashed on de Crespigny’s instrument panel and a siren blared in the cockpit. Investigators would later determine that an oil fire inside one of the left jets had caused a massive turbine disc to detach from the drive shaft, shear into three pieces, and shoot outward, shattering the engine. Two of the larger fragments from that explosion punched holes in the left wing, one of them large enough for a man to fit through. Hundreds of smaller shards, exploding like a cluster bomb, cut through electrical wires, fuel hoses, a fuel tank, and hydraulic pumps. The underside of the wing looked as though it had been machine-gunned.

The plane began to shake. De Crespigny reached over to decrease the aircraft’s speed, the standard reaction for an emergency of this kind, but when he pushed a button, the auto-thrust didn’t respond. Alarms started popping up on his computer display. Engine two was on fire. Engine three was damaged. There was no data at all for engines one and four. The fuel pumps were failing. The hydraulics, pneumatics, and electrical systems were almost inoperative. Fuel was leaking from the left wing in a wide fan. The damage would later be described as one of the worst midair mechanical disasters in modern aviation.

De Crespigny radioed Singapore air traffic control. “QF32, engine two appears failed,” he said.

Less than ten seconds had passed since the first boom. De Crespigny cut power to the left wing and began anti-fire protocols. The plane stopped vibrating for a moment. Inside the cockpit, alarms were blaring, and in the cabin, panicked passengers rushed to their windows.

The men in the cockpit began responding to prompts from the plane’s computers, speaking to one another in short, efficient sentences. De Crespigny looked at his display and saw that twenty-one of the plane’s twenty-two major systems were damaged or completely disabled. The functioning engines were rapidly deteriorating and the left wing was losing the hydraulics that made steering possible. Within minutes, the plane had become capable of only the smallest changes in thrust and the tiniest navigational adjustments. No one was certain how long it would stay in the air.

One of the copilots looked up from his controls. “I think we should turn back,” he said. Turning the aeroplane around to head back to the airport was risky. But at their current heading, they were getting farther from the runway with each second.

In recent decades, as computerised automation has increasingly entered our workplaces and the information revolution has remade our lives, managing our attention has become even more critical.

“You can think about your brain’s attention span like a spotlight that can go wide and diffused, or tight and focused,” David Strayer, a cognitive psychologist at the University of Utah, told me when I was reporting my book about the science of productivity, Smarter Faster Better: The Secrets of Productivity in Life and Business. Our attention span is guided by our intentions. We choose, in most situations, whether to focus the spotlight or let it be relaxed.

“But then, bam!, some kind of emergency happens — or you get an unexpected email, or someone asks you an important question in a meeting — and suddenly the spotlight in your head has to ramp up all of a sudden and, at first, it doesn’t know where to shine,” said Strayer.

Unless, that is, you’ve trained yourself how to respond.

In the late 1980s, a group of psychologists at a consulting firm named Klein Associates began trying to figure out why some kinds of people are so good at staying calm and focused amid chaotic environments — why some people, in other words, are better at directing the spotlight inside their heads. One researcher, Beth Crandall, began visiting neonatal intensive care units, or NICUs. A NICU, like all critical care settings, is a mix of chaos and banality set against a backdrop of constantly beeping machines and chiming warnings. Many of the babies inside a NICU are on their way to full health; they might have arrived prematurely or suffered minor injuries during birth, but they are not seriously ill. Others, though, are unwell and need constant monitoring. What makes things particularly hard for NICU nurses, however, is that it is not always clear which babies are sick and which are healthy. Seemingly healthy preemies can become unwell quickly; sick infants can recover unexpectedly. So nurses are constantly making choices about where to focus their attention: the squalling baby or the quiet one? The new lab results or the worried parents who say something seems wrong? Crandall wanted to understand how nurses made decisions about which babies needed their attention, and why some of them were better at focusing on what mattered most.

Most interesting to Crandall were a handful of nurses who seemed particularly gifted at noticing when a baby was in trouble. They could predict an infant’s decline or recovery based on small warning signs that almost everyone else overlooked. Often, the clues these nurses relied upon to spot problems were so subtle that they, themselves, had trouble later recalling what had prompted them to act. “It was like they could see things no one else did,” Crandall told me. “They seemed to think differently.”

One of Crandall’s first interviews was with a talented nurse named Darlene, who described a shift from a few years earlier. Darlene had been walking past an incubator when she happened to glance at the baby inside. All of the machines hooked up to the child showed that her vitals were within normal ranges. There was another RN keeping watch over the baby, and she was observing the infant attentively, unconcerned by what she saw. But to Darlene, something seemed wrong. The baby’s skin was slightly mottled instead of uniformly pink. The child’s belly seemed a bit distended. Blood had recently been drawn from a pinprick in her heel and the Band-Aid showed a blot of crimson, rather than a small dot.

Something about all those small things occurring together caught Darlene’s attention. She opened the incubator and examined the infant. The newborn was conscious and awake. She grimaced slightly at Darlene’s touch but didn’t cry. There was nothing specific that she could point to, but this baby simply didn’t look like Darlene expected her to.

Darlene found the attending physician and said they needed to start the child on intravenous antibiotics. All they had to go on was Darlene’s intuition, but the doctor, deferring to her judgment, ordered the medication and a series of tests. When the labs came back, they showed that the baby was in the early stages of sepsis, a potentially fatal whole-body inflammation caused by a severe infection. The condition was moving so fast that, had they waited any longer, the newborn would have likely died. Instead, she recovered fully.

“It fascinated me that Darlene and this other nurse had seen the same warning signs, they had all the same information, but only Darlene detected the problem,” Crandall said. “To the other nurse, the mottled skin and the bloody Band-Aid were data points, nothing big enough to trigger an alarm. But Darlene put everything together. She saw a whole picture.” When Crandall asked Darlene to explain how she knew the baby was sick, Darlene explained that she carried around a picture in her mind of what a healthy baby ought to look like — and the infant in the crib, when she glanced at her, hadn’t matched that image. So the spotlight inside Darlene’s head went to the child’s skin, the blot of blood on her heel, and the distended belly. It focused on those unexpected details and triggered Darlene’s sense of alarm. The other nurse, in contrast, didn’t have a strong picture in her head of what she expected to see, and so her spotlight focused on the most obvious details: The baby was eating. Her heartbeat was strong. She wasn’t crying. The other nurse was distracted by the information that was easiest to grasp.

People like Darlene who are particularly good at managing their attention tend to share certain characteristics. One is a propensity to create pictures in their minds of what they expect to see. These people tell themselves stories about what’s going on as it occurs. They narrate their own experiences within their heads. They are more likely to answer questions with anecdotes rather than simple responses.

Psychologists have a phrase for this kind of habitual forecasting: “creating mental models.” Understanding how people build mental models has become one of the most important topics in cognitive psychology. All people rely on mental models to some degree. We all tell ourselves stories about how the world works, whether we realise we’re doing it or not.

But some of us build more robust models than others. We envision the conversations we’re going to have with more specificity, and imagine what we are going to do later that day in greater detail. As a result, we’re better at choosing where to focus and what to ignore.

Even before Captain Richard Champion de Crespigny stepped on board Qantas Flight 32, he was drilling his crew in the mental models he expected them to use.

“I want us to envision the first thing we’ll do if there’s a problem,” he told his copilots as they rode in a van from the Fairmont hotel to Singapore Changi Airport. “Imagine there’s an engine failure. Where’s the first place you’ll look?” The pilots took turns describing where they would turn their eyes. De Crespigny conducted this same conversation prior to every flight. His copilots knew to expect it. He quizzed them on what screens they would stare at during an emergency, where their hands would go if an alarm sounded, whether they would turn their heads to the left or stare straight ahead. “The reality of a modern aircraft is that it’s a quarter million sensors and computers that sometimes can’t tell the difference between garbage and good sense,” de Crespigny later told me. He’s a brusque Australian, a cross between Crocodile Dundee and General Patton. “That’s why we have human pilots. It’s our job to think about what might happen, instead of what is.”

After the crew’s visualisation session, de Crespigny laid down some rules. “Everyone has a responsibility to tell me if you disagree with my decisions or think I’m missing anything.”

“Mark,” he said, gesturing to a copilot, “if you see everyone looking down, I want you to look up. If we’re all looking up, you look down. We’ll all probably make at least one mistake this flight. You’re each responsible for catching them.”

So when the pilots flying Qantas 32 started seeing emergency warnings erupt on their instrument panels, they were somewhat prepared. In the twenty minutes after the turbine disc punched a hole in the wing, the men inside the cockpit dealt with an increasing number of alarms and emergencies. The plane’s computer displayed step-by-step solutions to each problem. The men relied on the mental models they had worked out ahead of time to decide how to respond. But as the plane’s problems cascaded, the instructions became so overwhelming that no one was certain how to prioritise or where to focus. De Crespigny felt himself getting overwhelmed. One computer checklist told the pilots to transfer fuel between the wings in order to balance the plane’s weight. “Stop!” de Crespigny shouted as a copilot reached to comply with the screen’s command. “Should we be transferring fuel out of the good right wing into the leaking left wing?” A decade earlier, a flight out of Toronto had nearly crashed after the crew had inadvertently dumped their fuel by transferring it into a leaky engine. The pilots agreed to ignore the order.

De Crespigny slumped in his chair. He was trying to visualise the damage, trying to keep track of his dwindling options, trying to construct a mental picture of the plane as he learned more and more about what was wrong. Throughout this crisis, de Crespigny and the other pilots had been building mental models of the Airbus inside their heads. Everywhere they looked, however, they saw a new alarm, another system failing, more blinking lights. De Crespigny took a breath, removed his hands from the controls and placed them in his lap.

“Let’s keep this simple,” he said to his copilots. “We can’t transfer fuel, we can’t jettison it. The trim tank fuel is stuck in the tail and the transfer tanks are useless.

“So forget the pumps, forget the other eight tanks, forget the total fuel quantity gauge. We need to stop focusing on what’s wrong, and start paying attention to what’s still working.”

On cue, one of the copilots began ticking off things that were still operational: Two of eight hydraulic pumps still functioned. The left wing had no electricity, but the right wing had some power. The wheels were intact and the copilots believed de Crespigny could pump the brakes at least once before they failed.

The first aeroplane de Crespigny had ever flown was a Cessna, one of the single-engine, largely noncomputerised planes that hobbyists loved. A Cessna is a toy compared to an Airbus, of course, but every plane, at its core, has the same components: a fuel system, flight controls, brakes, landing gear. What if, de Crespigny thought to himself, I imagine this plane as a Cessna? What would I do then?

“That moment is really the turning point,” Barbara Burian, a research psychologist at NASA who has studied Qantas Flight 32, told me. “Most of the time, when information overload occurs, we’re not aware it’s happening — and that’s why it’s so dangerous. So really good pilots push themselves to do a lot of ‘what if’ exercises before an event, running through scenarios in their heads. That way, when an emergency happens, they have models they can use.”

De Crespigny, in other words, was prepared to pivot the mental model he was relying upon, because he knew that the models he had worked out ahead of time were insufficient to the task at hand. De Crespigny asked one of his copilots to calculate how much runway they would need. Inside his head, de Crespigny was envisioning the landing of an oversized Cessna. “Picturing it that way helped me simplify things,” he told me. “I had a picture in my head that contained the basics, and that’s all I needed to land the plane.”

If de Crespigny hit everything just right, the copilot said, the plane would require 3,900 metres of asphalt. The longest runway at Singapore Changi was 4,000 metres. If they overshot, the craft would buckle as its wheels hit the grassy fields and sand dunes.

“Let’s do this,” de Crespigny said.

The plane began descending towards Singapore Changi Airport. At two thousand feet, de Crespigny looked up from his panel and saw the runway. At one thousand feet, an alarm inside the cockpit began screaming “SPEED! SPEED! SPEED!” The plane was at risk of stalling. De Crespigny’s eyes flicked between the runway and his speed indicators. He could see the Cessna’s wings in his mind. He delicately nudged the throttle, increasing the speed slightly, and the alarm stopped. He brought the nose up a touch because that’s what the picture in his mind told him to do.

“Confirm the fire services on standby,” a copilot radioed the control tower.

“Affirm, we have the emergency services on standby,” a voice replied.

The plane was descending at fourteen feet per second. The maximum descent rate the undercarriage was certified to absorb was only twelve feet per second. But there were no other options now.

“FIFTY,” a computerised voice said. “FORTY.” De Crespigny pulled back slightly on his stick. “THIRTY . . . TWENTY.” A metallic voice erupted: “STALL! STALL! STALL!” The Cessna in de Crespigny’s mind was still sailing toward the runway, ready to land as he had hundreds of times before. It wasn’t stalling. He ignored the alarm. The rear wheels of the Airbus touched the ground and de Crespigny pushed his stick forward, forcing the front wheels onto the tarmac. The brakes would work only once, so de Crespigny pushed the pedal as far as it would go and held it down. The first thousand metres of the runway blurred past. At the two-thousand-metre mark, de Crespigny thought they might be slowing. The end of the runway was rushing toward them through the windshield, grass and sand dunes growing bigger the closer they got. As the plane neared the end of the runway, the metal began to groan. The wheels left long skid marks on the asphalt. Then the plane slowed, shuddered, and came to a stop with one hundred metres to spare.

Investigators would later deem Qantas Flight 32 the most damaged Airbus A380 ever to land safely. Multiple pilots would try to re-create de Crespigny’s recovery in simulators and would fail every time.

When Qantas Flight 32 finally came to rest, the lead flight attendant activated the plane’s announcement system.

“Ladies and gentlemen,” he said, “welcome to Singapore. The local time is five minutes to midday on Thursday 4 November, and I think you’ll agree that was one of the nicest landings we have experienced for a while.” De Crespigny returned home a hero. Today, Qantas Flight 32 is taught in flight schools and psychology classrooms as a case study of how to maintain focus during an emergency. It is cited as one of the prime examples of how mental models can put even the most dire situations within our control.

Mental models help us by providing a scaffold for the torrent of information that constantly surrounds us. Models help us choose where to direct our attention, so we can make decisions, rather than just react. We may not recognise how situations within our own lives are similar to what happens within an aeroplane cockpit. But think, for a moment, about the pressures you face each day. If you are in a meeting and the CEO suddenly asks you for an opinion, your mind is likely to snap from passive listening to active involvement — and if you’re not careful, a cognitive tunnel might prompt you to say something you regret. If you are juggling multiple conversations and tasks at once and an important email arrives, reactive thinking can cause you to type a reply before you’ve really thought out what you want to say.

So what’s the solution? If you want to do a better job of paying attention to what really matters, of not getting overwhelmed and distracted by the constant flow of emails and conversations and interruptions that are part of every day, of knowing what to focus on and what to ignore, get into the habit of telling yourself stories. Narrate your life as it’s occurring, and then when your boss suddenly asks you a question during a meeting or an urgent note arrives and you have only minutes to reply, the spotlight inside your head will be ready to shine the right way.

To become genuinely productive, we must take control of our attention; we must build mental models that put us firmly in charge.

“You can’t delegate thinking,” de Crespigny told me. “Computers fail, checklists fail, everything can fail. But people can’t. We have to make decisions, and that includes deciding what deserves our attention. The key is forcing yourself to think. As long as you’re thinking, you’re halfway home.”

From the book SMARTER FASTER BETTER by Charles Duhigg. Copyright © 2016 by Charles Duhigg. Reprinted by arrangement with Random House, an imprint of The Random House Publishing Group, a division of Penguin Random House, Inc. All rights reserved.
