Magnificent Desolation

Spectacular mechanical feats beget spectacular mechanical failures

Some months ago I saw a link on Twitter to a YouTube video that caught my attention. It was a computer-animated re-creation of the sinking of the Titanic in real time, all two hours and 40 minutes of it.

I did not watch the whole video, but I skipped around and watched parts, interested especially in the few interior views where you can watch the water level slowly rising at an angle in the white-painted hallways of the lower decks, and later, in the ballroom and grand staircase, as wicker chairs bob around.

The strangest thing about the video is that it includes no people — no cartoon passengers. There is no violin music, no voiceover. The ship is lit up, glowing yellow in the night, but the only sound, save for a few emergency flares and engine explosions, is of water sloshing into and against the ship. The overall impression is of near silence. It’s almost soothing.

This is true until the last few minutes of the video, when the half-submerged ship begins to groan and finally cracks in half. Only then, as the lights go out and the steam funnels collapse, do you hear the sound of people screaming, which continues for another half-minute after the ship has disappeared. A caption on the screen reads: “2:20 — Titanic is gone. Rescue does not arrive for another hour and 40 minutes.” A few (apparently empty) lifeboats are seen floating on the calm black ocean under a starry sky. Then, another caption: “2:21 — Titanic is heard beneath the surface breaking apart and imploding as it falls to the seafloor.” The video ends on this disturbing note, with no framing narrative creating a pseudo-happy ending.

I was suddenly obsessed with the story of the Titanic. I rewatched the James Cameron movie (still ridiculous, still gripping); I read a Beryl Bainbridge novel (Every Man for Himself) based on the night of the sinking; I read thousands of words on Wikipedia and what you might call fan sites, if you can be a fan of a disaster, reading lists of “facts” and conspiracy theories. I watched a documentary about a weird newish theory of the root cause of the disaster: One scientist thinks that a sudden and extreme drop in temperature caused a kind of mirage illusion on the horizon that obscured the iceberg from the lookouts until they were nearly upon it. The same illusion could, in theory, explain why a nearby ship (the S.S. Californian) did not clearly see that the Titanic was in danger. It is, of course, just a theory.

Even if you’ve read some history of the Titanic, even if you’ve never seen the movies, the Hollywood version of the narrative has a lot of pull — and that narrative puts the blame on hubris. Call it the Icarus interpretation: Blinded by a foolhardy overconfidence, we flew too close to the sun, melting our wings, et cetera. It’s the easiest explanation, appealing in its simplicity, its mythic aura, and not without truth.


When I ran out of freely available Titanic material, I moved to other disasters. I had a sudden overwhelming desire for disaster stories of a particular flavor: I wanted stories about great technological feats meeting their untimely doom. I felt addicted to disbelief — to the catharsis of reality denying my expectations, or verifying my worst fears, in spectacular fashion. The obvious next stop was 9/11.

9/11 is, so far, the singular disaster of my lifetime. People who were in New York City at the time always comment on how “beautiful” and “perfect” that September morning was, with “infinite visibility” — pilots call those conditions “severe clear.” As I recall, it was a bright blue day in Houston too. I was driving from my apartment to the Rice University campus a couple of miles away when I heard reports on the radio of a plane hitting one of the Twin Towers. I continued driving to school, parked my car in the stadium lot, and went into the student center, where a few people were watching the news on TV, with that air of disbelief that can appear almost casual.

The live footage of a massive steel skyscraper with smoke pluming out of a hole in its side was shocking, but I felt it dully; shock is marked by either incomprehension or denial. I don’t remember truly feeling horror — that is, understanding — until people began to jump from the buildings. They were almost specks against the scale of the towers, filmed from a distance, but you knew what they were. They became known as the “jumpers”: people trapped in the upper floors of the building, above the plane’s impact and unable to get out, who were driven to such desperation by the extreme heat and lack of oxygen that they broke the thick windows with office furniture or anything else they could find and jumped to the pavement more than a thousand feet below. Leslie E. Robertson, the lead structural engineer of the towers, later wrote that “the temperatures above the impact zones must have been unimaginable.” Their bodies were heard landing by those nearby and those still in the buildings.

The jumpers’ experience is exemplified by one Associated Press photo dubbed “The Falling Man.” It depicts a man “falling,” as if at ease, upside-down and in parallel with the vertical grid of the tower. (It’s a trick of photography; other photos in the series show him tumbling haphazardly, out of control.) The photo was widely publicized at first, but met with vehement critique. It seems that some people found this particular image too much to take, an insult to their senses. And though the jumps were witnessed by many, the New York City medical examiner’s office classifies all deaths from the 9/11 attacks as homicides. Of course, they were forced, forced by suffering — but they were also voluntary. It seems akin to a prisoner held in solitary confinement or otherwise tortured killing themselves — murder by suicide.

When I think of the jumpers, I think of two things. I think of images of women covering their mouths — a pure expression of horror. They were caught on film, watching the towers from the streets of Manhattan. I do this sometimes — hand up, mouth open — when I see or read something horrible, even when alone. What is it for? I think, too, of the documentary about Philippe Petit, who tightrope-walked between the tops of the towers in 1974. At the time they were the second tallest buildings in the world, having just been surpassed by the Sears Tower in Chicago. It was an exceptionally windy day (it is always windy at 1,300 feet) and when a policeman threatened him from the roof of one building, Petit danced and pranced along the rope, to taunt him. This still seems to me like the most unthinkable thing a man has ever willingly done. The jumpers did what he did, but worse. Death was not a risk but a certainty; they jumped without thinking. It’s more horrible to contemplate than many of the other deaths because we know the jumpers were tortured. Death is fathomable, but not torture.

A documentary on YouTube called Inside the Twin Towers provides a minute-by-minute account of the events on September 11, re-enacted by actors and intercut with interview footage from survivors. One man who managed to escape from the North Tower — he was four floors below the impact — recounts a moment when he opened a door and saw “the deepest, the richest black” he had ever seen. He called into it. Instead of continuing down the hall to see if anyone was there, he retreated back to his office in fear. He says in the film, “If I had gone down the hallway and died, it would have been better than living with this knowledge of, Hey, you know what, when it came right down to it, I was a coward. And it was actually our two co-workers down that hallway, on the other side, that ended up dying on that day. And I often think now, Perhaps I should have continued down that hallway.”

This is a classic case of survivor’s guilt, sometimes known as concentration-camp syndrome: the sense that your survival is a moral error. Theodor Adorno, in an amendment to his famous and somewhat misunderstood line about poetry after Auschwitz, wrote:

Perennial suffering has as much right to expression as a tortured man has to scream; hence it may have been wrong to say that after Auschwitz you could no longer write poems. But it is not wrong to raise the less cultural question whether after Auschwitz you can go on living — especially whether one who escaped by accident, one who by rights should have been killed, may go on living. His mere survival calls for the coldness, the basic principle of bourgeois subjectivity, without which there could have been no Auschwitz; this is the drastic guilt of him who was spared. By way of atonement he will be plagued by dreams such as that he is no longer living at all.

This common syndrome, along with post-traumatic stress disorder, goes some way toward explaining why so many Holocaust survivors commit suicide.


There is survivor’s guilt, but there is also survivor’s elation, survivor’s thrill — a thrill felt only by those a little farther from disaster. The September 24, 2001, issue of the New Yorker included a symposium of responses to the attacks. A few were able to acknowledge the element of thrill in our observation. Jonathan Franzen wrote:

Unless you were a very good person indeed, you were probably, like me, experiencing the collision of several incompatible worlds inside your head. Besides the horror and sadness of what you were watching, you might also have felt a childish disappointment over the disruption of your day, or a selfish worry about the impact on your finances, or admiration for an attack so brilliantly conceived and so flawlessly executed, or, worst of all, an awed appreciation of the visual spectacle it produced.

I find Franzen’s moral hierarchy here questionable, that “worst of all” most puzzling. Because to me, more than worry or admiration (!), the most natural and undeniable of reactions would seem to be awe.

It’s the spectacle, I think, that makes a disaster a disaster. A disaster is not defined simply by damage or death count; deaths by smoking or car wrecks are not a disaster, because they are meted out, predictable. Nor are mass shootings generally considered disasters. A disaster must not only blindside us but be witnessed in public. The Challenger explosion killed only seven people, but as with the Titanic, which killed more than 1,500, and 9/11, which killed almost 3,000, the deaths were both highly publicized and completely unexpected.

All three incidents forced people to either watch or imagine huge man-made objects, monuments of engineering, fail catastrophically, being torn apart or exploding in the sky. These are events we rarely see except in movies. The destruction of the Challenger and that of the World Trade Center are now movies themselves, clips we can watch again and again. The proliferation of camera technology, including our cell-phone cameras, makes disaster easier to witness and to reproduce; it may even create a kind of cultural demand for disasters. Also on film are reaction shots: We get both the special effects and the human drama.

Roger Angell’s version of survivor’s thrill in the same issue is less chastising:

When the second tower came down, you cried out once again, seeing it on the tube at home, and hurried out onto the street to watch the writhing fresh cloud lift above the buildings to the south, down at the bottom of this amazing and untouchable city, but you were not surprised, even amid such shock, by what you found in yourself next and saw in the faces around you — a bump of excitement, a secret momentary glow. Something is happening and I’m still here.

Angell, here, is saying this is not an aberration; it is the norm. It is one of the horrible parts of disaster, our complicity: the way we glamorize it and make it consumable; the way the news turns disasters into ready-made cinema; the way war movies, which mean to critique war, can only really glorify war. And we eat it up.

We don’t talk about it now, but I always found the Twin Towers hideously ugly, in a way not explainable by their basic shape — they were long rectangular prisms, nothing more. Perhaps that was the problem. In the past, anything so large (the Eiffel Tower, the Titanic, the Empire State Building) had usually attempted to be beautiful and usually succeeded. Those other structures still appear beautiful. How could anyone ever have found the Twin Towers beautiful, or ever find them so in the future? They seemed designed only to represent sturdiness, like campus buildings in the brutalist tradition that were mythologized as “riot-proof.”

A friend, a New Yorker, disagrees. She tells me the buildings “did amazing things with the light.” Another, also from New York, says they were sexy at night. But all skyscrapers are sexy at night, from below if not from afar, by virtue of their sheer dizzying size, their sheer sheerness, sheer as in cliffs. They stand like massive shears, stabbed into the sky.

Despite their imposing, even ominous height, the towers were gone less than two hours after they were hit; the Titanic took only a little longer to sink, but it went down gradually. When you watch a building collapse, it seems like it suddenly decides to collapse. It’s a building, and then it’s not a building, just a crumbling mass of debris. There seems to be no transition between cohesion and debris. It is terrifying, how quickly an ordered structure dissolves. Where does it all go? Buildings, like anything, are mostly empty space.


In the vocabulary of disaster, one very important word is “debris,” from the French débriser, to break apart. A cherishable word, it sounds so light and delicate. But the World Trade Center produced nearly two million tons of it. The bits of paper falling around the city led some people to mistake the initial hit for a parade.

In space flight, or even on high-speed jets, tiny bits of FOD, or “foreign object debris,” can cause catastrophe. Space food is coated in gelatin to prevent crumbs, which in a weightless environment could work into vulnerable instruments or a pilot’s eye. A small piece of metal on the runway could get sucked into a jet engine and cause it to fail.

The Challenger explosion, like the sinking of the Titanic, is usually chalked up to hubris. But if hubris is overconfidence, the explanation is unsatisfying. Engineers at NASA’s Marshall Space Flight Center knew that the O-ring seals, which helped contain hot gases in the rocket boosters, were poorly designed and could fail under certain conditions, conditions that were present on the morning of the launch. The O-rings were designated as “Criticality 1,” meaning their failure would have catastrophic results. But NASA did not ground the shuttle fleet until the problem could be fixed. As the very first sentence in the official Report of the Presidential Commission on the Space Shuttle Challenger Accident puts it: “The Space Shuttle’s Solid Rocket Booster problem began with the faulty design of its joint and increased as both NASA and contractor management first failed to recognize it as a problem, then failed to fix it and finally treated it as an acceptable flight risk” (italics mine).

What shocks me most when I read about the space program is the magnitude of the risks. The Challenger exploding on live TV in front of 17 percent of Americans was unthinkable to most of those viewers but not unthinkable to workers at NASA.

From what I understand, NASA has always embraced a culture of risk. In his memoir Spaceman, astronaut Mike Massimino, who flew on two missions to service and repair the Hubble telescope, recounts the atmosphere at NASA after the space shuttle Columbia broke up on reentry in 2003:

When I walked in I saw Kevin Kregel in the hallway. He was standing there shaking his head. He looked up and saw me. “You know,” he said, “we’re all just playing Russian roulette, and you have to be grateful you weren’t the one who got the bullet.” I immediately thought about the two Columbia missions getting switched in the flight order, how it could have been us coming home that day. He was right. There was this tremendous grief and sadness, this devastated look on the faces of everyone who walked in. We’d lost seven members of our family. But underneath that sadness was a definite, and uncomfortable, sense of relief. That sounds perverse to say, but for some of us it’s the way it was. Space travel is dangerous. People die. It had been 17 years since Challenger. We lost Apollo 1 on the launch pad 19 years before that. It was time for something to happen and, like Kevin said, you were grateful that your number hadn’t come up.

In other words, the culture of risk at NASA is so great that in place of survivor’s guilt there is only survivor’s relief.

But knowing the risks and doing it anyway must entail some level of cognitive dissonance. This is apparent when Massimino writes that “like most accidents, Columbia was 100 percent preventable.” This is hindsight bias; only past disasters are 100 percent preventable. The Columbia shuttle broke apart due to damage inflicted on the wing when a large chunk of foam insulation flew into it during launch. This was observed on film, and the ground crew questioned whether it might have caused significant damage. However, the insulation regularly broke apart during launches and had never caused significant damage before. Further, NASA determined that even if the spacecraft were damaged, which they had no way of verifying, there was nothing the flight crew could do about it, so the astronauts weren’t even told of the possible problem.

When Columbia came apart during reentry, disintegrating and raining down parts like a meteor shower over Texas and Louisiana, an investigation was launched. At first, no one believed that the foam could have done enough damage to cause the accident. It was “lighter than air.” As Massimino writes, “We looked at the shuttle hitting these bits of foam like an 18-wheeler hitting a Styrofoam cooler on the highway.” Not until they actually reenacted the event by firing a chunk of foam at 500 miles per hour toward a salvaged wing and saw the results did they accept it as the cause of the disaster. Anything going that fast carries tremendous energy. This was not like the failure of the O-ring; the risks of the insulation were not understood. Or, more properly, they were simply not seen — it’s basic, though unintuitive, physics. The same type of accident is 100 percent preventable now only because the disaster happened, triggering a shuttle redesign. When redesigns cost billions of dollars, if it isn’t broke, they don’t and probably can’t fix it.
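To make that unintuitive physics concrete, here is a rough back-of-the-envelope sketch. The speed is the 500 miles per hour of the test above; the block’s mass, roughly 0.8 kilograms, is an assumed round figure for illustration, not a number from the investigation.

% Kinetic energy grows with the square of speed, so a light object moving
% fast can carry surprising energy. Assumed figures: m ≈ 0.8 kg, v ≈ 500 mph ≈ 224 m/s.
\[
E = \tfrac{1}{2} m v^{2} \approx \tfrac{1}{2}\,(0.8\ \mathrm{kg})\,(224\ \mathrm{m/s})^{2} \approx 2 \times 10^{4}\ \mathrm{J},
\]
% roughly the energy of a one-tonne car rolling into a wall at about 14 mph,
% delivered in an instant to a small patch of a brittle panel.

The highway intuition fails on two counts: the relative speed was several times highway speed, and the energy scales with its square.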


The problem with the concept of hubris is that it lets us off too easy. It allows us to blame past versions of ourselves, past paradigms, for faulty thinking that we’ve since overcome. But these scientists we might scoff at now were incredibly smart and incredibly well-prepared. The number of things that didn’t go wrong on numerous space missions is astounding. It’s easy to blame people for not thinking of everything, but how could they think of everything? How can we?

Not knowing the unknowable isn’t hubris. There is real danger in thinking, We were dumb then, but we’re smart now. We were smart then, and we are dumb now — both are true. We do learn from the past, but we can’t learn from disasters that do not yet have the capacity to happen. While disasters widen our sense of the scope of the possible, there are limits. We can’t imagine all possible futures. Yet we call this hubris. Perhaps it’s comforting to believe disasters are the result of some fixable “fatal flaw,” and not an inevitable part of the unfolding of history.

To say there are limits to technological progress — we can’t prepare ourselves completely for the unforeseen — is not to say progress is impossible, but that progress is tightly coupled with disaster. (As French cultural theorist Paul Virilio famously said, “The invention of the ship was also the invention of the shipwreck.”) Not until we experience new forms of disaster can we understand what it is we need to prevent. If this is true, overreliance on the explanatory power of hubris is itself a form of hubris, a meta-hubris, since it assumes a position of superiority.

And can we, in any case, have progress without hubris pushing us forward with partial blinders? Don’t we need hubris to enable and justify advances in technology? NASA seems to take hubris in stride; they see occasional disaster as the fair cost of spaceflight.

In his “Letter From a Birmingham Jail,” Martin Luther King, Jr. warned of “the strangely irrational notion that there is something in the very flow of time that will inevitably cure all ills.” You could say the same of technological progress; it is tempting to believe that progress climbs a steady upward curve, such that eventually all problems will be solved, and all accidents will be completely preventable. But there’s no reason to assume the curve of progress behaves that way, that the climb is ever upward.


I want to come back to the Titanic, and some common misconceptions. One is that there were not enough lifeboats on board for frivolous reasons — because the owners felt they would look unattractive on deck, or because they were regarded as mere symbols, serving only to comfort nervous passengers on a ship its designers believed was literally unsinkable. This isn’t the case. Rather, the thinking at the time was that the safest method of rescue, in the event of an emergency, was to ferry passengers back and forth between the sinking ship and a rescue ship. Because the Titanic would sink slowly, if at all, it would for some time actually be safer on the ship than in a lifeboat. Therefore the lifeboats didn’t need to accommodate the entire capacity of the ship in one go.

So why did the Titanic sink so fast? The surprising truth is that if the ship had hit the iceberg head on, instead of nearly clearing it and then scraping its starboard bow and side along the berg, it would not have sunk. The ship was capable of sustaining huge amounts of damage from an impact like an iceberg — it could stay afloat if four of its 16 watertight compartments were flooded. But the iceberg tore into the ship in such a way that five compartments were damaged. This event was not, realistically, foreseeable; no iceberg in history had done that kind of damage to a ship, and none has done that kind of damage since. It was, in essence, a freak accident.

There are echoes of this in the World Trade Center’s collapse. It’s well known that the buildings were designed to survive the impact of an airplane. However, the designers were envisioning something like an airliner lost in fog, flying slowly, hitting a tower by accident — in fact, a bomber flying in near-zero visibility had hit the Empire State Building in 1945 — not a modern jet being flown purposely into the tower at top speed. Still, there was a false sense of security. After the first impact, the PA system in the South Tower told people to remain at their desks when of course they should have been evacuating. Some building staff also told workers it would be safer to stay where they were.

Is this hubris, or something else? Disasters always feel like something that happens in the past. We want to believe that better technology, better engineering will save us. The more information we have, the safer we can make our technology. But though it’s hard to accept, we can never have all the information. In creating new technology to address known problems, we unavoidably create new problems, new unknowns. Progress changes the parameters of possibility if it changes anything at all. In fact, this is something we strive for — to innovate past the event horizon of what we can imagine. Hubris feeds on itself, is self-sustaining. And with so much that is inaccessible, unknowable, and changing all the time, we can’t even hold on to what we already know.


As they stepped out of the lunar module and began their moonwalk, Neil Armstrong said to Buzz Aldrin, “Isn’t that something! Magnificent sight out here.” Aldrin’s cryptic, poetic response was “Magnificent desolation.” I think of this quote when I see footage of disasters. Especially after years of buffer, years of familiarity, have lessened the sting, it’s easy to see these events as, in their way, magnificent. Magnificent creations beget magnificent failures. It is awesome that we built them; it was awesome when they fell. Horror and awe are not incompatible; they are intertwined.

Is it perversity or courage that allows some people to admit to survivor’s thrill? On the afternoon of September 11, I remember meeting my then-boyfriend on campus for lunch. He was a contrarian type, but nonetheless his reaction disturbed me — he was visibly giddy, buzzed by the news. It’s not that I don’t believe others were excited, but no one else had revealed it. In 2005, before the levees had broken in New Orleans, my roommate asked if I wasn’t just a little bit disappointed that Katrina hadn’t turned out as bad as predicted. Just hours later she regretted saying it.

Often, when something bad happens, I have a strange instinctual desire for things to get even worse — I think of a terrible outcome and then wish for it. I recognize the pattern, but I don’t understand it. It’s as though my mind is running simulations and can’t help but prefer the most dramatic option — as though, in that eventuality, I could enjoy it from the outside. Of course, my rational mind knows better; it knows I don’t want what I want. Still, I fear this part of me, the small but undeniable pull of disaster. It’s something we all must have inside us. Who can say it doesn’t have influence? This secret wish for the blowout ending?

Elisa Gabbert is the author most recently of The Word Pretty. Her next collection, The Unreality of Memory & Other Essays, will be out from FSG Originals in August 2020.