March 15, 2019

Feelings Unremembered

A few weeks ago, the Guardian ran a story by Oscar Schwartz about Affectiva, a company that plans to market “emotion-detection technology.” That phrase, taken out of context, is an effective piece of hype even if it’s meant as a scary warning, as it concisely reinforces the company’s pitch: It suggests that emotions are discrete, universally identifiable things that actually can be detected, in individuals, in isolation, as if emotions did not occur interpersonally and did not depend on context. Instead it treats emotions as if they were coins dropped on the beach; you can just sweep your detector over the sand until you find something.

Affectiva’s CEO, as Schwartz notes, “predicts a time in the not too distant future when this technology will be ubiquitous and integrated in all of our devices, able to ‘tap into our visceral, subconscious, moment by moment responses.’” No doubt that is the hope. But it is incoherent to analyze “emotions” in the abstract, moment by moment, if the goal is to accurately describe the subjective experience of the people having them. If the goal is to override and overwrite subjective experience with something more tractable and manipulable, then “emotion detection” will likely suit the purpose.

The software described in the article analyzes data about facial movements from images and footage as though the human face were a kind of standardized digital display from which information could be read and decoded, with certain patterns of data labeled as corresponding to specific emotions. Then, when that pattern appears again in an image or footage, that emotion is ascribed to that person, regardless of what they would tell you about their condition. This process is then marketed as revealing the “truth” about those subjects, who have now been rendered as objects.
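To make that reduction concrete, here is a minimal sketch of the logic such a system implies, not Affectiva’s actual pipeline: the feature names, reference vectors, and label set below are all invented for illustration. The shape of the operation is the point: a pattern goes in, one of a few preset labels comes out, and the person’s own account of their state never enters the calculation.

```python
# Toy illustration only. The features, reference vectors, and labels are
# hypothetical placeholders, not anything from a real emotion-detection product.

from math import dist

# A closed vocabulary of "cardinal emotions" the system is allowed to see.
# Feature order: (mouth_curve, brow_lower, eye_crinkle) -- invented features.
REFERENCE_PATTERNS = {
    "happiness": (0.9, 0.1, 0.8),
    "anger":     (0.1, 0.9, 0.2),
    "surprise":  (0.5, 0.0, 0.9),
    "neutral":   (0.3, 0.3, 0.3),
}

def ascribe_emotion(face_vector):
    """Return whichever canned label sits closest to the observed pattern.

    Whatever the person would say about their own state never enters the
    calculation -- the output is always one of the preset labels.
    """
    return min(REFERENCE_PATTERNS,
               key=lambda label: dist(face_vector, REFERENCE_PATTERNS[label]))

print(ascribe_emotion((0.7, 0.2, 0.6)))  # -> "happiness", whatever the person would report
```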

The point of this is not emotion detection but emotional dictation. The system is meant to deny subjects the authority to indicate their own emotional state and instead produce feeling as a narrow set of conditional possibilities — whatever the machines might be trained to identify in a given disciplinary context. As psychologist Lisa Feldman Barrett, a critic of “emotion detection,” tells Schwartz, emotion is a “product of human agreement” and assessing it is “a dynamic practice that involves automatic cognitive processes, person-to-person interactions, embodied experiences, and cultural competency.” But as far as Affectiva is concerned, the infinite shadings of feeling a person might experience alone or together with others should be compressed into an arbitrary set of cardinal emotions derived from flawed research conducted over a century ago. These fundamental emotions can then be used to power surveillance systems designed to impose this limited set of interpretations on those they observe in particular situations. In the name of the normative behavior the systems are meant to elicit, they can compel people to emulate outward caricatures of feeling in order to register the responses the systems expect.

Instead of pressing the happy button on a machine that asks us how we felt about a washroom’s cleanliness, we’ll perform “happiness” in front of a set of sensors that will keep becoming more invasive in an effort to prevent us from gaming them. When we master one way of feigning joy, a new set of sensors will search for a new set of patterns: a cat-and-mouse game of emotion-detection-engine optimization. Indeed, Affectiva is “experimenting with capturing more contextual data, such as voice, gait and tiny changes in the face that take place beyond human perception.”

“Emotion detection” will not only intervene in customer service, making inferences about our “feelings” about product displays, or which workers we found persuasive, or which movie trailers or ads appeared to hold our attention. It can also be deployed anywhere people need to be conditioned to play their roles. Meredith Whittaker of AI Now explains that the analysis it produces “could be used in ways that stop people from getting jobs or shape how they are treated and assessed at school.” It’s easy to imagine service employees straining to smile or to keep “anger” out of their tone of voice. At school, students would have to learn to perform “attentiveness.” In stores, one would need to avoid any gestures the camera reads as furtive, so as not to be accused of shoplifting. At home, Alexa-like devices will infer our feelings from the sounds we make and offer mood-related recommendations accordingly. We will live with the constant sense of our bodies betraying us, revealing some “emotion” that places us under suspicion, regardless of our conscious intentions. Maybe we’ll learn that intentionality is useless and begin to shuffle through the world in a listless state of machine-approved apathy.

To try to make an emotion-detection machine is to aim at impoverishing people’s existence and making it more amenable to outside control. Where such systems reign, personal agency becomes oriented towards reaction and dissembling. In a sense, emotion detection seeks to moderate feelings, to make them appropriate to the systems designed to capture them and usable by outside parties. But a face-reading emotion-detection machine seems almost superfluous when social media platforms are already far more nuanced and sophisticated attempts to accomplish the same behaviorist aim. They constitute a monitoring environment that doubles as “social infrastructure,” but what is facilitated is not the spontaneous agency and collectivity of people interacting in ways they choose and control — the “connection” that is sometimes touted — but rather the channeled flow of alienated affect toward corporate goals of profit. This infrastructure renders sociality irresistibly efficient while hollowing it out, training us to perceive it as a set of attention games with a standardized set of measurable stakes. All the different emotions are distilled into the only one that matters: engagement.

Content moderators are a crude, not yet automated, component of this emotion-management machine. Casey Newton recently reported on the harrowing work conditions for some of Facebook’s moderators, who are tasked with evening out the flow of acceptable affect on the platform — modulating the gore, the hate, the lies, the sex, the violence, and so on within limits that sustain the greatest aggregate engagement. In Newton’s account, the moderators function not as emotion detectors but as human shock absorbers, taking in extreme content and incurring its effects in their own bodies. The impact manifests as panic attacks, crying jags, bouts of “trauma bonding” sex with co-workers, routine drug use, and increased susceptibility to conspiracy theories and other antisocial beliefs.

Moderators are necessary because of the scale at which platforms seek to gather data and enclose social interaction. Paul Virilio famously argued that “when you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution… Every technology carries its own negativity, which is invented at the same time as technical progress.” Content moderation, then, reveals the negativity that social media bring into the world — the antisociality they organize and amplify and distribute.

Platforms hope that they can eventually “solve” the content moderation problem in the same sort of way Affectiva is trying to solve emotion — automating its detection by distilling all forms of unacceptable content to a few behavioral data shapes that can be identified algorithmically and eradicated. But in practice, the nature of unacceptable content is always shifting and expanding, in part to evade the increasingly Byzantine protocols devised for identifying it. Tarleton Gillespie has argued that “content moderation is an unresolvable and thankless task” because it “requires weighing competing, irreconcilable values: freedom of speech vs. protection from harm, avoiding offense vs. raising awareness, hiding the obscene vs. displaying the newsworthy.” But it is not merely that the principles behind content moderation are hopelessly conflicted. The nature and form of potential harm, like emotions themselves, are always contextual, and they change according to the opportunities for harm afforded by given conditions. Targeting the idea of “harm” in the abstract would be a matter not of content data but of user intentionality.

Machines can identify patterns and correlations, but they can’t detect the intention behind them. In this Verge piece, James Vincent, paraphrasing researcher Robyn Caplan, asks: “As long as humans argue about how to classify this sort of material, what chance do machines have?” Their best chance would be to ascribe a limited set of possibilities and compel human users to conform to them, as the emotion-detection companies are more or less attempting to do.

For an AI system of moderation to work, it would have to be capable of supplanting or controlling or formatting user behavior as it is happening. It is easier to “detect” (that is, ascribe) emotion, because emotion is understood as being outside a person’s will. But content production (“self-expression”) seems inseparable from an assertion of will. The problem becomes how to preserve users’ desire to produce content while also prescribing and moderating the shape their agency can take.

When some of Facebook’s guidelines for content moderators leaked a few years ago, Gillespie wrote that he hoped the leak would force social media companies to take a more public approach to addressing the issue. But that would run counter to the ambition of automating the process, which can only work if the algorithms can’t be gamed. Increased efforts to use AI to regulate content will also intensify efforts to outsmart the systems. In this Pacific Standard piece Morgan Meeker describes some of the ways that white supremacists have evaded text-flagging systems with ruses as simple as strategic typos, as well as with more elaborate approaches that take advantage of context, code words, abbreviations, and repurposed slang.
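As a minimal sketch of why such simple ruses work, consider a naive exact-match filter. The “banned” term below is a placeholder rather than anything drawn from a real platform’s list, but it shows how a one-character substitution slips past literal keyword matching:

```python
# Toy illustration: a naive keyword filter and the trivial evasions that defeat it.
# The banned term is an invented placeholder, not from any real moderation list.

BANNED_TERMS = {"badword"}

def flags(post: str) -> bool:
    """Flag a post only if a banned term appears verbatim as a whole word."""
    words = post.lower().split()
    return any(term in words for term in BANNED_TERMS)

print(flags("this is a badword example"))   # True  -- caught by exact match
print(flags("this is a b4dword example"))   # False -- a one-character swap evades the filter
print(flags("this is a bad word example"))  # False -- so does a simple space
```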

Presumably, the hope for AI moderation is that a real-time machine-learning system could run continually, detecting patterns of usage and updating filters as new forms of hate emerge. But such a system could only succeed if it worked proactively on users, shaping the range of their speech in advance and dictating what they can think to say. As one of the experts Meeker interviews says, “Hate is way more interesting than that.”

That’s not only because people can always come up with novel ways to express feeling. Posting objectionable content is not an unfortunate by-product, some atavistic tendency that can be steadily eliminated like bugs in code. It is instead the means by which some users measure their “freedom” or their agency on platforms. Posting objectionable content certifies their ability to impose themselves on social networks without a reciprocal sense of responsibility — the kind of disregard that the services have implicitly promised as frictionless convenience, or as “authentic” self-expression.

Content moderation, from this perspective, is just part of the game, an evolving challenge; rather than prevent objectionable content, it prompts some users to seek new modes of transgression and hate. Moderation is always also provocation. It intensifies the experience of agency as defiance and fuels engagement, which is why social media companies often seem lax in their efforts to eradicate it, and why people are constantly pleading with Twitter to get rid of the Nazis. This Bloomberg Businessweek article describes how Facebook would rather apologize after the fact than take stronger measures to pre-emptively moderate content. The hate-fueled murders in Christchurch, committed with the interlocking dynamics of social media and mass media in mind, as Kevin Roose details here, Taylor Lorenz here, and Ian Bogost here, will again raise concerns about platforms’ role in fomenting hate and extremism.

Platforms that aim for scale seem inevitably constrained to produce “virality” as an ethos. That dynamic is part of what has fueled a broader nostalgia for the lost blogosphere of the early 2000s. This Twitter thread by T. Greer evokes those good old days, arguing that mass-aggregated platforms have since eliminated the chance “to set up distinct little communities that can do their own thing separate from the broader currents of the culture war.” In the “world without blue checks,” communities were self-moderated by virtue of the effort it took to find them and get involved. Trolls — the scale was such that the word seemed more appropriate then — had less ability and incentive to infiltrate them. The discovery mechanisms were far more primitive, almost word of mouth, which meant, in Greer’s view, that mainly high-quality material got any traction. “Expectations were higher, space was longer, and the format was less restricting.” Earning a readership enabled you to host commenters and moderate their behavior however you saw fit. If people didn’t like it, they could start their own blog and see if they could get anyone to pay attention.

On mass-aggregated platforms, bad actors have an incentive to pollute conversations and distort attention metrics, and the capacity to moderate conversation is almost entirely out of the hands of the good-faith participants. Instead there is a broadly drawn set of “community standards” that is ultimately geared toward getting people to produce more data. On these platforms, contributors post not only with a specific conversation in mind but also with an awareness of the broader audience of unsympathetic readers, malevolent actors, and other antagonists. Hashtag-driven search orients conversations toward the potential for trending and opens them to interlopers and hijackers. Algorithmic sorting promotes a kind of drive-by participation in whatever sorts of conversations happen to surface.

The nostalgia for the old blogosphere is a yearning for social media without scale, without the looming threat/promise and daily occurrence of virality. That world, especially in hazy recollection, seemed driven by “ideas,” or some sort of rationalist ideal of discourse, whereas the “social infrastructure” we have now is palpably driven by affect, by emotions coarsened and made detectable and quantifiable. When properly “moderated” in this system, ideas, like intentions, feel more and more like afterthoughts, when they are thought at all. Then ideas and intentions themselves may come to seem accessible only through immoderation.