It was inevitable that something like the “Drunk Nancy Pelosi” deepfake would circulate, and it’s spurred a range of responses from tech companies (YouTube took it down; Facebook didn’t) and press commentators (for many of whom it has functioned as a Rorschach for their attitudes toward journalism as civics and deplatforming as censorship). Casey Newton offers a round-up of a lot of opinions here, before endorsing the idea that social media platforms should intervene quickly and pre-emptively with users who try to share deepfakes, much as platforms do with those who post suicide or self-harm content. This approach tries to salvage Facebook’s business model by seeing pathological users as the real problem and making them accountable for the platform’s affordances.
It seems more constructive to focus on the business model: In this Atlantic piece, Ian Bogost made the case that nothing on Facebook can be considered fake, because “a business like Facebook doesn’t believe in fakes. For it, a video is real so long as it’s content. And everything is content.” He further points out that video is never purely documentary — it never simply represents “what happened,” as if there were even a simple answer to that or a singular point of view from which it could be unambiguously recorded. Instead, video — or any media in general — is the production of new “facts,” new content that truly exists in the world and has real effects. “The purpose of content is not to be true or false, wrong or right, virtuous or wicked, ugly or beautiful,” Bogost writes. “No, content’s purpose is to exist, and in so doing, to inspire ‘conversation’ — that is, ever more content. This is the truth, and perhaps the only truth, of the internet in general and Facebook in particular.” Content is content, attention is attention, data is data.
That fits with Facebook’s admission that “We don’t have a policy that stipulates that the information you post on Facebook must be true.” To phrase that slightly differently, Facebook is generally not interested in protecting users from what they want. It regards itself as a communication platform rather than a publisher; in its view, holding it accountable for the lies it circulates would be like blaming the phone company for lies someone told you during a phone call. Farhad Manjoo argues in this New York Times op-ed that we shouldn’t give Mark Zuckerberg any more power by encouraging him to effectively act as censor, though he already has that power anyway. As Newton points out, “doing nothing is an editorial judgment, too.”
Manjoo’s op-ed is more interested in changing the subject: If we want to condemn broadcasters for “fake” content, he argues, we should pay more attention to Fox’s networks, which manipulate video all the time, Rock Bottom–style (“Fox Business misleadingly spliced together lots of small sections of a recent news conference to make it look as if Pelosi stammered worse than Porky Pig”) and are de facto far-right propaganda channels: “Social networks are at least experimenting with ways to mitigate their negative impact on society,” Manjoo asserts, somewhat dubiously, but what he follows with seems right. “But we don’t have much hope nor many good ideas for limiting the lies of old-media outlets like Fox News, which still commands the complete and slavish attention of tens of millions of Americans every night, polluting the public square with big and small lies that often ricochet across every platform, from cable to YouTube to Facebook to Google, drowning us all in a never-ending flood of fakery.”
Rather than contrast social media with broadcasters like Fox, it’s better to trace their similarities. When political “deepfakes” (though some are apparently calling the Pelosi video a “shallowfake,” which is maybe more appropriate since it is not necessarily trying to be convincingly realistic) circulate on social media, they are tracing a similar network of sympathetic users, fascists and proto-fascists who are looking less for factual information than emotional gratification. This means it is counterproductive to fact-check altered videos, or add a disclaimer, as Facebook eventually decided to do. The videos are circulated not to try to trick people but to entertain them with their very fakeness. Calling attention to their fakeness may emphasize rather than diminish their impact, which is to convey a sense of permission to ridicule and slander and degrade enemies with any and all tactics, in a spirit of camaraderie and shared contempt.
Political deepfakes convey the message that it is okay to be openly fascist — it’s okay to delight in defamation as long as it’s politically motivated, demeaning those who need not be treated with respect or decency. The thrill of deepfakes derives precisely from seeing that performed: watching people be treated as fair game. Pointing out they are fake reinforces their cruelty, the violence with which they are treating the false “surface” reality that fascists seek to override. In Theodor Adorno and the Century of Negative Identity, historian Eric Oberle notes that “the symbolism of fascism is one of abrogating the law to restore its emotional power as immediate rather than mediated justice.” Deepfakes, in that sense, convey a higher sense of “justice” to their intended audience, that enemies will not be protected by the law or by tech companies, which are in the service of power and profit. Circulating them is an attempt to collaborate with that power, be included in its in-group, be protected from similar violence that the deepfake itself is evidence of — any representation is now possible and everyone is newly vulnerable, so some may feel that new forms of submission are now required to be marked as safe.
We should treat political deepfakes not as disinformation — an attempt to mislead rational people with falsehoods — but as fascist propaganda, which, as Adorno argues in “Anti-Semitism and Fascist Propaganda,” is not about facts and arguments but emotionally engaging rituals, the sort that induce viewers to tune in to Fox News or open up Facebook. Fascist propaganda, Adorno writes, “builds up an imagery of the Jew, or of the Communist, and tears it to pieces, without caring much how this imagery is related to reality.” The political deepfake is an extension and a literalization of that process. It reveals directly, without argument, what fascists want to represent as what “everyone knows to be true.” It is a suitably fake image for our customarily fake feelings about the world.
Adorno insists that “the sentimentality of the common people is by no means primitive, unreflecting emotion. On the contrary, it is pretense, a fictitious, shabby imitation of real feeling, often self-conscious and slightly contemptuous of itself.” I don’t know what he means by “real feeling” there, but the rest I take to mean that none of our emotions are “pure” and unconditioned by prejudice, reaction, fantasy, or cliché. We feel the feelings we are primed to feel by various genre narratives and the normative assumptions circulating in our social milieu. Fascist agitators target those feelings and attempt to ratify them and inflate them and give them more urgency and stakes so that prejudice is experienced not as shameful and irrational but as a kind of liberation, a relieving freedom of expression that reveals what is supposedly instinctively known but suppressed.
Adorno argues that fascist propagandists who “know no inhibitions in expressing themselves … function vicariously for their inarticulate listeners by doing and saying what the latter would like to, but either cannot or dare not.” Doctored videos make plain that lack of inhibition and give viewers an opportunity to participate and share them and experience the emotional release while still having the cover that they were taken in or were merely participating in a joke or were just keeping up with a “conversation” or “debate” or “controversy” already taking place. In Adorno’s reading, this is a “reversion toward a ritualistic attitude in which the expression of emotions is sanctioned by an agency of social control.” In the case of the Pelosi video, Facebook is the agency of social control (it administers social connection, directs and measures flows of affect toward different forms of content, and famously purports to manipulate users’ emotions at scale); its toleration of the Pelosi video is the sanction for the emotions stoked by the fascist agitators responsible for it. The ritual is what we do when we interact with social media platforms.
Fascist oratory, Adorno notes, “does not employ discursive logic,” but conveys associations that one can grasp spontaneously. The listener “has no exacting thinking to do, but can give himself up passively to a stream of words.” Deepfakes aspire to work in the same way, creating a participatory and defamatory spectacle of a stereotype that consumers can simply ingest. In other words, deepfakes perform rhetorically what algorithmic targeting achieves at the meta-content level: They presume a viewer who enjoys experiencing themselves as an object rather than a subject, as someone who is catered to and confirmed in their prejudices rather than expected to exercise agency. “Totalitarianism regards the masses not as self-determining human beings who rationally decide their own fate and are therefore to be addressed as rational subjects,” Adorno writes, “but that it treats them as mere objects of administrative measures who are taught, above all, to be self-effacing and to obey orders.” That is how algorithmic systems (including most social media platforms) aspire to treat us as well. But within the context of consumerism, self-effacing obedience translates as pleasurable convenience, a sense of superiority that you are enjoying frictionlessness while “they” are dealing with the hassles. After all, obedience is extremely convenient; nothing is easier than doing what you are told. It may be the case that being interpellated as a behavioristically controlled non-subject by algorithmic systems positions us as proto-fascists, eager to embrace obedience as a form of privilege.
In a larger sense, as Natasha Lennard points out in this interview, “none of us are totally free from the micro-fascisms permeating life under capitalism.” It would be a mistake to see the circulation of deepfakes as simply a phenomenon that the “masses” are prone to fall for, while we rational individuals rise above. Rather they are symptomatic of a larger vulnerability — the mounting irrationality of rationalizing systems like social media — that fascist agitators have learned to exploit. So platforms ostensibly meant to provide a sense of access and control over sociality and promote universal connection become means for sowing discord and division, depleting agency and transforming sociality into rote mechanistic responses and clicks.
Adorno suggests that a certain feeling of elitism is consistent with fascist propaganda, a sign of susceptibility: “propaganda functions as a kind of wish-fulfillment … People are ‘let in,’ they are supposedly getting the inside dope, taken into confidence, treated as of the elite who deserve to know the lurid mysteries hidden from outsiders.” This again links to deepfakes’ overwriting of reality to expose (construct) a “hidden” reality that can be shared among insiders. Though the Pelosi video was shared and seen millions of times, it was by and large experienced through a highly personal and personalized interface, which contributes to the intimacy of viewing it, the sense of being among those with privileged access or with the appropriate connections and determination to see what is really going on — not what is merely documented and verified by mainstream outlets, but what is reserved for the select believers.
Deepfakes come across as more true than authenticated documents, because the level at which “truth” is decided is the viewer’s experience, not a relation between a document and what really occurred. The Pelosi video sharpens antagonisms and therefore feels “true” to the reality of conflicting ideologies and understandings of the world. The clips are verified spontaneously by an emotional reaction. They are more “true” the more they outrage the outsiders, the enemies, the cowards.
In the mid-20th century, a strong sense of individuality was often promoted as the antidote to fascist agitation, which promised, in Adorno’s account in “Freudian Theory and the Pattern of Fascist Propaganda,” a “pleasurable experience for those who are concerned to surrender themselves so unreservedly to their passions and thus become merged in the group and to lose the sense of the limits of their individuality.” Individuality here is figured as a set of limits, a disposition of self-control rather than gratification, which exceeds and dissolves the self. But it seems that individuality under consumerism changed from a sense of being under control to a demand for more and more expression — to be the object of extreme personalization rather than the subject of rational self-mastery. Tech has promoted delusions of unlimited agency and magic at the fingertips of every individual, but these are beginning to be exposed as assaults on the integrity of one’s identity, and one’s sense of control over it. Instead of agency, social media platforms ensnare us in an opaque system of fate, whose flows and boundaries and linkages seem dictated by certain yet unknowable forces that exceed any individual effort to rationalize them. It may be easier to give in and trust to spontaneous and superficial correspondences between what we feel and what we hope is true, and believe that these are flashes of insight rather than intimations that individual identity is the deepest fake of all.