
Plausible Disavowal

Why pretend that machines can be creative?

Recently, a work of AI-generated art, a portrait called Edmond de Belamy, was auctioned for $432,500. It was based on code written by Robbie Barrat, a researcher who didn’t see any of that money. As this Verge article by James Vincent details, Barrat’s code was adapted by a group of French students who managed to get the art world to pay them for what it produced. In a sense, the artwork was this performance: not the AI-generated image but the way it found its way into a Christie’s auction and a lot of media coverage. The image itself — a fairly unremarkable portrait; not one of those disturbing Google Deep Dream images with eyeballs and dog faces emerging from plates of spaghetti — is more like residual documentation.

Work like Barrat’s and that of Janelle Shane — a researcher who writes the AI Weirdness blog — puts a friendly face on machine learning, highlighting its generative fun side: how you can use neural nets to make improbable Halloween costumes or designer clothes, or to play flarf-style language games. In an interview with Arabelle Sicardi, Barrat says, “working with AI and generative art is nice because people can’t really misuse your software or your results.” (Which seems like a generous thing to say when your work has been hijacked and auctioned off.)

I find this kind of work irresistible. I like how I can feel surprised by it, how I read it as unmotivated. It never comes across as trying too hard; instead I can adopt a kind of patronizing attitude toward the machines. Aren’t they cute? The way the systems “learn,” often staged in write-ups of these projects as a series of clumsy steps toward coherence, comes across as a kind of serendipity, an accidental teleology. It’s not aesthetic purposefulness per se but some kind of deeper destiny being put on display.

Of course, human intention is still driving these projects, but it is abstracted a step away from the output. Barrat suggests that AI can “augment artists’ creativity” by producing “surreal” combinations that the artist can then sift through or refine. “A big part of my role in this collaboration with the machine is really that of a curator, because I’m curating the input data that it gets, and then I’m curating the output data that the network gives me and choosing which results to keep, and which to discard.” He can adjust the data sets and parameters until the output is suitably familiar or surprising or some surreal blend of both. Sicardi suggests that the machine can overcome pockets of resistance in the artist’s mind: “When you actually put an algorithm in your hands, it forces you to create versions and derivatives. It draws conclusions you wouldn’t have considered, because it lacks the context that may inhibit you.” AI programmers are then in the paradoxical position of producing intentional accidents — works that reflect their sensibility or their sense of rightness without their having to directly create them. Moreover, they feel right because they surprise the artist/researcher with their fittingness even as they continue to seem like they just happened. The works thereby embody a sense of plausible disavowal: It was what I was going for but not really, the machines took it somewhere no one could expect.

Algorithms generally are deployed for disavowal: as if they could eliminate bias or at least distract from it. They obfuscate the human input into a particular decision-making process to make it appear more objective. This typically means that the source of bias is displaced into the data — what was chosen to be collected and fed to the algorithms, and what assumptions have governed the programmer’s coding. Algorithmic processing and machine learning can make it appear as though the systems decide for their own reasons, reproducing the biases of the past as if no one is responsible for them, as if they are inherent. This, in the view of AI researcher Ali Rahimi, makes machine learning into a kind of alchemy.

Vincent attributes the French students’ art world success in part to “their willingness to embrace a particular narrative about AI art, one in which they credit the algorithm for creating their work.” This makes it a variant on what Astra Taylor has called “fauxtomation,” and what Jathan Sadowski described as Potemkin AI — where production processes are represented as artificial intelligence to devalue the human labor that is actually doing the work. But here, the undue attribution to AI scrambles the way we understand not how the artwork was produced but what its commodification confers. If AI made the image, auctioning it off is not an unfortunate commercialization of human aesthetic creativity but the act that breathes “art” into something otherwise merely machinic. The work’s aesthetic value becomes equivalent to its monetary value — the image is significant only because of what someone paid for it, and that price is effectively its “content.” With algorithms sidelining artists, Christie’s can take center stage, as Vincent points out: The auctioneers, he reports, have “presented the auction as a provocative gesture that refreshes the company’s brand and stakes its claim in any lucrative new art market.”

When AI art is treated as if it were made by machines rather than researchers, the displacement of agency becomes an aesthetic, which Christie’s then puts a price on. The pretense of machinic creativity extends the algorithmic alibi, gives it a tangibility: You see? AI really can think on its own, in its own way, and here’s the visual proof. The uncanny works — recognizable as representing something but also appearing vague, alien, inorganic — help reinforce the idea that AI “thinking” is original and not merely derived from the data sets the adversarial networks are trained on, which effectively establish the limits on what can be imagined.

What the generative adversarial networks zero in on is not accidental or creative; it reflects instead the cumulative results of networked surveillance that has fed them their training material. Referencing the Deep Dream project, Hito Steyerl describes the refinement process as a kind of “automated apophenia” that can “reveal the networked operations of computational image creation, certain presets of machinic vision, its hardwired ideologies and preferences.” What becomes legible to human viewers are the visual traces of the ideology the algorithms work to reproduce — the reality they try to impose on and through the data sets they work on.

In an essay for New York magazine, Malcolm Harris described this as an expression of glitch capitalism: Machine learning’s amoral, uninhibited-by-context way of processing provides concrete illustrations of how the market, another amoral calculating machine, functions more broadly.

Because these programs are looking for the best (read: most efficient) answers to their problems, they’re especially good at finding and using cheat codes. In one disturbing example, a flight simulator discovered that a very hard landing would overload its memory and register as super smooth, so it bashed planes into an aircraft carrier. In 2018, it’s very easy to feel like the whole country is one of those planes getting crashed into a boat. American society has come to follow the same logic as the glitch-hunting AIs.

Capitalism as a whole rewards ruthlessness; algorithmic decision making is a means of optimizing for it while seeming to insulate humans from responsibility.


AI-generated art could be regarded as the corollary to what Joshua Rothman, in a recent New Yorker article, called “synthetic realism,” where machine learning is used to invent images or videos that can pass as documentation of events that never occurred. This “deepfake” technology has driven fears that AI reality generators will not merely make quirky art but rather invalidate visual evidence altogether, allowing fakes to be made on demand that may seem to prove anything, while casting doubt on “genuine” representations (if that is not already oxymoronic).

The possibility of pseudo-evidence on demand that is capable of showing anything to anyone definitively puts an end to the fantasy, still lingering in some journalistic corners, that some piece of damning evidence could be uncovered that will really change people’s minds. Of course, Trump and his administration have already faced down countless genuine smoking guns and emerged more or less unscathed. Most people aren’t suspending judgment about anything political while they wait for the full revelation of the “facts” — there is no full revelation, no complete picture. And minds are mostly made up. The last thing most consumers want is some challenge to how they believe the world works, forcing politics on them.

The feared deepfakes won’t change minds, but they may instead help blur news and entertainment even further, inviting consumers to enjoy reality as an artistic effect rather than a responsibility to be reckoned with. The fakes will anchor framing narratives that one either already accepts or rejects, and one will calibrate one’s skepticism accordingly. If they allow for some satisfying emotional moment, an experience of closure or the pleasurable suspension of disbelief, then their accuracy probably doesn’t matter much. The feeling of “reality” is larger than a set of accurate facts; it is an emotional experience of ideological confirmation. What feels most real is often that which pre-empts the need for rational assessment or logical evaluation. “Reality” is the comfortable feeling that you already know what any fact means and what its purpose is.

All of what is taken to be real is in some ways “synthetic,” produced. Representations of any sort are always reductive edits of the full spectrum of reality; they never simply document incontrovertibly what happened somewhere at some time. They produce an experience rather than passively record one. That experience might be the thrill of apparent verification — the giddy sense that something we should probably doubt might be real. Or it might be a feeling that a secret has been revealed, which can then be taken as a higher truth. But there is no experience of the “real” that is somehow direct and unmediated. In fact, that is simply another of the fantasies stoked by documentary representations: They remind us of a time that never existed, when images and the people who made them could be automatically trusted. Fakes help us structure the pretense that we once enjoyed that kind of comfort. They generate synthetic nostalgia.


If “deepfakes” make us nostalgic for the supposedly automatic authenticity of documents, AI artworks posit a corollary nostalgia for the authenticity of artists, a time when human and machinic creativity were entirely and overtly distinct, and the personal responsibility for any work could be cleanly assigned. AI creativity appears as creativity with no human strategy behind it, which can be reassuring: It seems like art without ego. By staging a clumsy show of computer creativity, AI art makes it seem as though human individuals once really were freely creative and might be again. The aspirational ideal of the “artist” as a bastion of individualized creativity and agency can live on in a world that replaces care workers with chatbots.

AI-generated art depends on massive troves of collectively produced data and evokes an idea of creativity without the individualist spark of insight. Rather than make anything genuinely new, generative adversarial networks converge on a stereotype, as a “discriminator” network using a set of images or phrases already determined to belong to some genre refines the attempts a “generator” network makes to approximate that genre. In AI art projects, the researchers often settle for generated material that is close enough to evoke the genre being toyed with but still odd enough to be mainly reassuring. That sweet spot is the same one that ad-targeting algorithms and recommendation engines seem to be zeroing in on: They are off enough to get you to let down your defenses, to feel as though you are still in control of who you are, even as they insinuate their messages.
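
As a rough illustration of that generator/discriminator dynamic — a minimal sketch in PyTorch, not the networks behind Edmond de Belamy or any artist’s actual pipeline — here the “genre” is just a one-dimensional Gaussian standing in for a curated corpus of portraits, and the layer sizes, learning rates, and training data are all assumptions made for the example.

```python
# Minimal GAN sketch: the discriminator learns to tell curated "genre" samples
# from generated ones; the generator learns to fool it. The "genre" here is a
# toy 1-D Gaussian, a stand-in for a corpus of images or phrases.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def sample_genre(n):
    # Stand-in for the curated training set: samples "already determined
    # to belong to some genre."
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    real = sample_genre(64)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: push real samples toward 1, generated ones toward 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples cluster around the "stereotype" of the genre.
print(generator(torch.randn(5, latent_dim)).detach().squeeze())
```

What the sketch makes concrete is the convergence on a stereotype: the generator never invents a new genre, it only learns to produce samples the discriminator can’t distinguish from the curated set.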

But some instantiations of this logic are more ambitious, trying for a deepfake of your entire identity. “When,” as Rothman notes, “your Facebook news feed highlights what ‘people like you’ want to see,” it posits a version of you that your interaction with the site will serve to refine and which can then be used to synthesize a reality for you. The platform approximates your self and turns it into a kind of adversarial network that can produce reality as necessary, filling in gaps in a way that is convincing or satisfying. You can then use Facebook to fill in the blank spaces of your life with self-confirming information, much as you can use content-aware fill in Photoshop to extend patches of grass in an image or cover over deletions in the foreground. That generated version of yourself will continue to approach some ultimate ideal of self-consistent identity, one that allows only for the repetition of already established desires.

If you define agency and self-fashioning as try-hard posing and unreflective spontaneity as revealing the real self — the implicit ideology of most invocations of “authenticity” — this is where you end up. Algorithms can solve for a true self and present any number of plausible answers, as many as we want, as many times as we click the button, and we can curate our best life from among these preposterous futures made entirely from the detritus of our past.

Gmail’s autocomplete function in emails is another expression of this same logic — another offer to extend your presence automatically (and thereby exclude it). In this New York Times Magazine column, John Herrman describes how autocomplete inverts machine learning, so that automated replies begin to train their users, who function less as creative beings and more as machinic “discriminators” sifting through AI-generated proposals.

If a canned reply is never used, this is a signal that it should be purged; if it is frequently used, it will show up more often. This could, in theory, create feedback loops: common phrases becoming more common as they’re offered back to users, winning a sort of election for the best way to say “OK” with polite verbosity, and even training users, AI-like, to use them elsewhere.
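
To make that feedback loop concrete, here is a toy sketch in Python of the dynamic Herrman describes — replies that get chosen are surfaced more often, and the most-surfaced replies are the ones users keep choosing. Nothing here reflects Gmail’s actual implementation; the candidate phrases and the “user model” are invented for illustration.

```python
# Toy sketch of a canned-reply feedback loop: used replies are promoted,
# unused ones fall out of rotation.
import random
from collections import Counter

suggestions = ["OK", "Sounds good!", "Thanks, will do.", "Perfect, thank you!"]
usage = Counter({s: 1 for s in suggestions})  # start every phrase with one "vote"

def offer(k=3):
    # Surface the k replies users have picked most often.
    return [s for s, _ in usage.most_common(k)]

def simulate_user(offered):
    # Hypothetical user model: a mild preference for politely verbose phrasing.
    weights = [len(s) for s in offered]
    return random.choices(offered, weights=weights)[0]

for _ in range(1000):
    choice = simulate_user(offer())
    usage[choice] += 1  # the signal that shapes what gets offered next

print(offer())  # common phrases become more common as they're offered back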

The past history of communication becomes the horizon for what can be said, and only something that can pass for having been algorithmically generated is considered viable, “realistic.” Deepfakes become the only reality.

Rothman points out how generative networks for images also create feedback loops that reinforce certain representational tropes. As we operate more continuously within the reality-testing and -constituting networks of social media, we function as adversarial generators and discriminators for each other, refining our collective expression toward familiar visual clichés: “In addition to unearthing similarities, social media creates them,” Rothman writes. “Having seen photos that look a certain way, we start taking them that way ourselves, and the regularity of these photos makes it easier for networks to synthesize pictures that look ‘right’ to us.” Clichés feel more real than idiosyncratic or nuanced forms of expression — clichés are more likely to appease the discriminator, whether that is an AI neural network or the network of social media users ranking what we express. As with automated replies, plausible-seeming images will begin to resemble what AI can readily generate. We will replicate the synthetic reality because it appears more real than the unpredictable chaos that is “actual” reality.

This goes against the commonsense idea that what seems real is that which appears as more singular or idiosyncratic, as in documentary works that purport to show things you never could have seen before. There has long been a tendency to conflate “authenticity” with proximity, with access, with “really being there.” But the instantaneous operation of global communications networks makes it so that we are all, in a sense, really there for everything, so being there in itself proves nothing. This changes the emotional experience of documentary photography, whose revelations may no longer seem “real” or “intimate” but obscure. It can only convey a failure to be real enough, to be intimate enough. Representations, if they are meant to be taken as primarily about their own “realness,” require novel ways of signaling and foregrounding that “reality” — ways to connote “documentary fidelity” or “unlimited access” or “authenticity.” But these techniques and markers can become overfamiliar and exhaust themselves, become gimmicks. They try to take you all the way inside but can never get past their inescapable conventions.

AI images don’t fail, because they don’t seem to try. They are already “fake,” so they can never be inauthentic.


In a 1990s-era essay about photographer Nan Goldin, Liz Kotz argued that the reality effect — the intimacy and access that the images in Goldin’s The Ballad of Sexual Dependency may seem to grant — can be exhausted by overexposure. “When the same images are reproduced too many times, in too many places, and are liked in the same way,” she argues, “this intimacy is inevitably compromised.”

Naturally when I read this, I could only think of images on Instagram popping up on thousands of followers’ screens. When images are posted to social media, they all want to be liked in the same way, by definition. The more idiosyncratic the images are, the more flattened out they become by the distribution apparatus. “If we all feel the same sentimental rush before the same image, it ceases to be poignant, and instead becomes trite, coded, formulaic …” Kotz continues. “Few things are more repellant than a programmed sense of ‘intimacy’ or a regulated experience of ‘accident.’”

Maybe so, but ubiquitous communication and a surfeit of images seem to be shifting that assumption. There’s not much on social media other than programmed intimacy and regulated accidents, but users seem compelled rather than repelled by this. Liking images in the same way doesn’t take away their intimacy — instead, in the midst of ambient connectivity, the connotation of “intimacy” can be detached from the conceit of special access, the sensationalized insider’s look into a milieu that Goldin’s work promised. If highly specific images extract the same “programmed” reaction from a large group of viewers, then perhaps the banality of “trite” or clichéd images allows individuals more latitude to respond. Rather than a singular image saying the same thing to everybody, a familiar image can say something unique to you. This, and not the uncovering of something hidden, becomes the marker of the intimate, the real.

Rather than try for uniqueness (and sadly fall into triteness), images can begin as trite and thereby testify to the significance of the particular unique combination of sender and receiver who are instantiating the cliché. Such images accomplish their purpose not by being more original or spontaneous or authentic — those attitudes are more clichéd than any image trope could be — but by being “relatable,” subordinating the particular content to the implied relationship between the image maker and the image viewer.

In critiquing one of Goldin’s subjects’ eagerness to be photographed and displayed, Kotz writes, “it’s as if, in our current lives of fragile identity and purely privatized experience of social power (I can’t change the world, but I can change my hair color), our very existence as subjects must be constantly confirmed by the gaze of others.” But was there ever a time of such identity stability that we didn’t need social recognition? And why wouldn’t we want as much social confirmation as our social sphere can provide? Social media have expanded the ability to pursue such confirmation. Rather than rely only on “the gaze of a lover or intimate” for recognition and for authentication, as Kotz describes, we have broader networks. And the authentication depends not on our revealing specific verifiable personal truths but on a different kind of sharing.

Kotz references a mundane skyline photo by artist Mark Morrisroe, arguing that its banality offers viewers a different way in than Goldin’s subcultural documentary style. “It’s that very ordinariness that makes it work: that the image could have been anyone’s, that you might have taken that image if you’d been there then, feeling like that … Morrisroe’s work allows viewers to project themselves and their own pasts into the image while also insisting on its specificity as a document of his life, not ours.” Jennifer Doyle cites this passage in Sex Objects, arguing that Kotz sets up a contrast between Goldin’s “almost anthropological attempt to get at the ‘truth'” and the “figuration of ‘truth’ as a pose.” Another way of putting that contrast is between the idea that truth is uncovered or revealed, and the idea that truth is built through a collective buy-in to a posture that’s been made accessible and obvious. Generic images allow for collective buy-in, but at the same time each viewer projects their own individualism. But none of that experience of individuality pivots on uniqueness or originality.

This seems to me a good way to understand the “authenticity” of formulaic influencer content and formulaic advertising more generally — a sort of authenticity that is being more or less adopted by everyone in social media. Something or someone is “authentic” because it invites vicarious participation and excludes any details idiosyncratic enough to inhibit identification and aspiration. A moment feels authentic if it feels autocompleted, if it seems like emergent behavior.

Kotz notes the “self-conscious self-fashioning” of the photographers she discusses, but this doesn’t consist of efforts to express oneself as unique. Instead, it is a matter of aspiring to achieve a generic iconicity. “Morrisroe’s works,” she argues, “continually reveal how subjectivity itself is propped up on an amalgam of desired images: ideal images which we may strive towards, yet to which we feel perpetually inadequate.” She quotes the artist Jack Pierson, to whom she attributes a similar aesthetic:

My work has the ability to be a specific reference and also an available one. It can become part of someone else’s story because it’s oblique and kind of empty stylistically … By presenting certain language clues in my work, people will write the rest of the story, because there’s a collective knowledge of clichés and stereotypes that operates.

Autocomplete and smart replies work by a similar logic, codifying the “collective knowledge of clichés and stereotypes” and making their operation more efficient. They automate our participation in the collective at the level of shared phrases, common ways of expressing ourselves and depicting our reality. The automated function, or the generative adversarial network behind it, recognizes us; it sees that we belong to the community and shows us how to best express that belonging. It obviates the need for our having to recognize each other directly.

Community can become another content-aware fill, automatically populated with familiar expressions of approval. These are not only social media metrics, amplified by algorithmic sorting that places the content most likely to be liked in front of likable people, but also our autocompleted thoughts, and a secure sense of having been told the right things to say and when to say them. Then we can be confident we have spoken the truth.

Rob Horning is an editor at Real Life.