Negative Space

Regardless of new means of media fakery, we still depend on context to assess what we see

On November 22, 1963, Abraham Zapruder arrived at Dealey Plaza in Dallas, Texas, with an 8-millimeter camera. At about half past noon, he captured the most complete footage of the assassination of President John F. Kennedy, the only presidential assassination in American history ever caught on film. It is perhaps the paradigmatic example of bystander footage, but its existence has deepened rather than resolved the mystery surrounding the event. More than half a century later, opinion remains divided on what the footage shows and whether it substantiates or undermines the official narrative.

As video technology has developed and recording devices have become nearly ubiquitous, such footage has become commonplace, not only galvanizing protest movements and upending political races but serving as a daily staple of local news broadcasts and dominating social media feeds. We increasingly see the world through these cameras’ eyes. The ostensibly raw clips often have a sensationalistic value, anchored in their apparent lack of post-production refinement or professional editing: They are presumably exciting to watch because their form suggests they weren’t faked; like Zapruder’s film, they captured news as it spontaneously occurred.

But with digitization has also come the spread of easy-to-use editing software, which would seem to threaten the aura of spontaneity that attaches to “raw footage.” The recent advent of “deepfakes” and generative audio technology, like Lyrebird’s vocalization AI, creates the potential for widespread media hoaxing, undermining faith in amateur footage and jeopardizing the information or inspiration one might take from it. Not only could such technology produce everything from creepily personalized ads to blackmail material to clips meant to inspire public uprisings and economic disruption, but — more troubling for commentators like the Atlantic’s Franklin Foer — it could cast doubt on all forms of video evidence. Foer draws a direct line from this possibility to the imminent “collapse of reality.”

Will new kinds of hoaxes really lead to a generalized epistemic nihilism? The long history of fakery suggests otherwise: hoaxes in earlier media spurred new forms of corroboration and critical engagement. The concern about the end of reality, common to coverage of deepfakes and related simulation tech, presumes somewhat complacently not only that media representations were once inherently “real” and undistorted, but also that unfaked footage can’t be used to inspire spurious conclusions. Even the most far-fetched conspiracy videos on YouTube tend to draw from real rather than fabricated material. And despite the ubiquity of phones and dashboard- and body-mounted cameras, law enforcement officials rarely face meaningful penalties for their crimes. Video alone is inadequate to generate accountability.

Videos of police misconduct have nevertheless been critical in galvanizing movements against racist policing and brutality. This suggests that video is less effective as incontrovertible evidence than as an impetus for emotional engagement. But video’s ability to stimulate and arouse does not necessarily translate into an ability to persuade; videos are as likely to reinforce attitudes as to change them.

The emotional investments an audience brings to a video, which will shape how they interpret it, may be easier to manipulate than pixels and waveforms. Even the most consequential or inflammatory of videos are subject to different readings from different perspectives. Despite the abundant footage of police violence in recent years, overall confidence in law enforcement has held remarkably steady. Meanwhile, many people who viewed the misleading video purporting to prove that Planned Parenthood illegally sells tissue from aborted fetuses reported that it made them more supportive of both the organization and abortion in general.

The contemporary information ecosystem makes it very difficult to assess context: who is doing what, why something is being seen, who else is seeing it, and so on. On social media, we engage unwittingly with bots and foreign agents provocateurs. Phones and inboxes are bombarded with robocalls and spam emails of unknown origin. On streaming music apps, fake artists vie for attention alongside real ones. On dating sites, there seem to be plenty of catfish in the sea. Political campaigns, organizations, and commentators are bankrolled by dark money. AI is being trained to converse more humanly, while battalions of humans are obscured to make their work seem automated. If you don’t know who is doing something, it becomes impossible to understand why they’re doing it. It would be a mistake, though, to let these new forms of fakery create the impression that the world beyond them is otherwise epistemically secure, or to let them become an alibi for ignoring subtler modes of manipulation. Such cover absolves us of responsibility for the ways we manipulate one another through conversational and aesthetic choices, and it obfuscates the control various gatekeepers exert over readers, viewers, and users.

New vernacularized video-editing capabilities don’t pose a new problem; they reinforce this old, fundamental one of context. After all, fraudulent content can be dismissed for lack of corroborating evidence or debunked with contradictory documentation. As communities become atomized and information siloed, content is liberated from its original context, and we may rely more heavily on the content itself to demonstrate its trustworthiness. But as much as we may fantasize about media that can transcend context and impose its truth on everyone who sees it, the reality remains that all documents, real or contrived, are shaped by the frameworks in which they are received, which will remain subject to manipulation along any number of axes. Manipulation operates less at the level of the image itself than at the level of who sees it, with whom, and amid what surrounding material. Documents themselves don’t establish trust or a consensus reality.

The panic around mimetic tech implicitly perpetuates the fallacy that showing something factual necessarily leads viewers toward the truth. But curated “facts” are as potentially damaging as falsified evidence, and more insidious. It is possible to create a video of Martin Luther King Jr. expressing approval of slavery that looks as real as any actual footage, but such a video would be relatively easy to discredit with even a cursory understanding of King’s life and work. More nefarious is the selective transmission of his writings and work, which obfuscates his most radical beliefs. A capitalist system sublimates his anti-capitalist theories; a racist system puts forth the notion that racism exists only in the past, that King already solved it.

Curatorial decisions about what to cover and how to cover it can manipulate as much as any deepfakery. But curation is also inescapable, whether it’s a newspaper deciding what to print, a social media platform using algorithms to decide what to display, or a married couple deciding which details of their day to share with one another. Meaning is not discovered but created, shaped through these different contexts and layers of curation. As contexts proliferate, we need to develop stronger faculties for assessing them and how they interrelate. In the meantime, we can start with a base skepticism of all media, not because it is fake but because we can never see the whole room through a keyhole.

Adam Clair is a writer currently based in Philadelphia. He tweets infrequently at @awaytobuildit.