
More Than a Feeling

Emotion detection doesn’t work, but it will try to change your behavior anyway


So many authorities want to use computational power to uncover how you feel. School superintendents have deputized “aggression detectors” to record and analyze voices of children. Human resources departments are using AI to search workers’ and job applicants’ expressions and gestures for “nervousness, mood, and behavior patterns.” Corporations are investing in profiling to “decode” customers, separating the wheat from the chaff, the wooed from the waste. Richard Yonck’s 2017 book Heart of the Machine predicts that the “ability of a car to read and learn preferences via emotional monitoring of the driver will be a game changer.”

Affective computing — the computer-science field’s term for such attempts to read, simulate, predict, and stimulate human emotion with software — was pioneered at the MIT Media Lab by Rosalind Picard in the 1990s and has since become wildly popular as a computational and psychological research program. Volumes like The Oxford Handbook of Affective Computing describe teams that are programming robots, chatbots, and animations to appear to express sadness, empathy, curiosity, and much more. “Automated face analysis” is translating countless images of human expressions into standardized code that elicits certain responses from machines. As affective computing is slowly adopted in health care, education, and policing, it will increasingly judge us and try to manipulate us.

Should we really aim to “fix” affective computing?

Troubling aspects of human-decoding software are already emerging. Over 1,000 experts recently signed a letter condemning “crime-predictive” facial analysis. Their concern is well-founded. Psychology researchers have demonstrated that faces and expressions do not necessarily map neatly onto particular traits and emotions, let alone onto the broader mental states evoked in “aggression detection.” Since “instances of the same emotion category are neither reliably expressed through nor perceived from a common set of facial movements,” the researchers write, the communicative capacities of the face are limited. The dangers of misinterpretation are clear and present in all these scenarios.

Bias is endemic in U.S. law enforcement. Affective computing may exacerbate it. For example, as researcher Lauren Rhue has found, “Black men’s facial expressions are scored with emotions associated with threatening behaviors more often than white men, even when they are smiling.” Sampling problems are also likely to be rife. If a database of aggression is developed from observation of a particular subset of the population, the resulting AI may be far better at finding “suspect behavior” in that subset than in others. Those who were most exposed to surveillance systems in the past may then be far more likely to suffer computational judgments of their behavior as “threatening” or worse. The Robocops of the future are “machine learning” from data distorted by a discrimination-ridden past.
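A toy simulation can make the sampling problem concrete. Everything in the sketch below is invented (the data, the features, the off-the-shelf classifier); it is not any vendor's actual pipeline. Two groups behave identically, but the recorded "aggression" examples come overwhelmingly from the more heavily surveilled group, so the model learns to treat a group-correlated style feature as a warning sign and flags that group more often at deployment.

```python
# Toy illustration of sampling bias, not any vendor's real system: both groups
# behave identically, but the recorded "aggression" examples come almost
# entirely from the more heavily surveilled group A.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, style):
    intensity = rng.normal(0.0, 1.0, n)                     # the behavior that actually matters
    style_feat = np.full(n, style) + rng.normal(0, 0.1, n)  # group marker, irrelevant to aggression
    aggressive = intensity > 1.0                            # same true rate (~16%) in both groups
    return np.column_stack([intensity, style_feat]), aggressive

X_a, y_a = make_group(10_000, style=1.0)  # group A: heavily recorded
X_b, y_b = make_group(10_000, style=0.0)  # group B: rarely recorded

# Training database: most recorded "aggressive" incidents come from group A,
# while non-aggressive footage is sampled evenly from both groups.
pos = np.vstack([X_a[y_a][:900], X_b[y_b][:100]])
neg = np.vstack([X_a[~y_a][:2000], X_b[~y_b][:2000]])
X_train = np.vstack([pos, neg])
y_train = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])

model = LogisticRegression().fit(X_train, y_train)

# Deployed on fresh, identically behaved populations, the model flags group A more often.
X_a_new, _ = make_group(10_000, style=1.0)
X_b_new, _ = make_group(10_000, style=0.0)
print(f"flag rate, group A: {model.predict(X_a_new).mean():.1%}")
print(f"flag rate, group B: {model.predict(X_b_new).mean():.1%}")
```

The point is not that any deployed system works exactly this way, only that a skewed archive is enough to produce skewed flags even when the underlying behavior is identical.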


To many of the problems detailed above, affective computing’s enthusiasts have a simple response: Help us fix it. Some of these appeals are classic Tom Sawyering, where researchers ask critics to work for free to de-bias their systems. Others appear more sincere, properly compensating experts in the ethical, legal, and social implications of AI to help better design sociotechnical systems (rather than just clean up after technologists). As minoritized groups are invited to participate in developing fairer and more transparent emotion analyzers, some of the worst abuses of crime-predicting and hiring software may be preempted.

But should we really aim to “fix” affective computing? What does such a mechanical metaphor entail? One of Picard’s former MIT colleagues, the late Marvin Minsky, complained in his book The Emotion Machine that we “know very little about how our brains manage” common experiences: 

How does imagination work? What are the causes of consciousness? What are emotions, feelings, and thoughts? How do we manage to think at all? Contrast this with the progress we’ve seen toward answering questions about physical things. What are solids, liquids, and gases? What are colors, sounds, and temperatures? What are forces, stresses, and strains? What is the nature of energy? Today, almost all such mysteries have been explained in terms of very small numbers of simple laws. . . .

Emotions, by his logic, should be subject to scientific reduction as well. He proposes to decompose “feelings” or “emotions” into constituent parts, a step toward quantifying them like temperatures or speeds. A patent application from Affectiva, the firm co-founded by Picard, describes in some detail just how such quantification might work, by analyzing faces to capture emotional responses and generate “an engagement score.” If institutions buy into these sorts of assumptions, engineers will keep building machines that try to actualize them, cajoling customers and patients, workers and students, with stimuli until they react with the desired response — what the machine has already decided certain emotions must look like.
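The patent filing does not publish a formula, but the kind of reduction it gestures at can be sketched in a few lines. Everything below is invented for illustration (the features, the weights, the 0 to 100 scale); the point is that whoever chooses the weights is deciding what "engagement" means.

```python
# Hypothetical sketch of reducing expression estimates to one "engagement score."
# The features, weights, and formula are invented; they illustrate the genre of
# reduction, not Affectiva's actual method.
from dataclasses import dataclass

@dataclass
class FrameEstimate:
    smile: float        # 0.0-1.0, estimated smile intensity
    brow_raise: float   # 0.0-1.0, estimated brow-raise intensity
    attention: float    # 0.0-1.0, how squarely the face points at the screen

# Invented weights: what counts as "engagement" is a design decision, not a discovery.
WEIGHTS = {"smile": 0.4, "brow_raise": 0.2, "attention": 0.4}

def engagement_score(frames: list[FrameEstimate]) -> float:
    """Average a weighted blend of per-frame expression estimates into one number (0-100)."""
    if not frames:
        return 0.0
    per_frame = [
        WEIGHTS["smile"] * f.smile
        + WEIGHTS["brow_raise"] * f.brow_raise
        + WEIGHTS["attention"] * f.attention
        for f in frames
    ]
    return 100.0 * sum(per_frame) / len(per_frame)

# A viewer frowning in concentration scores "less engaged" than one smiling blankly.
focused = [FrameEstimate(smile=0.05, brow_raise=0.1, attention=0.95)] * 30
grinning = [FrameEstimate(smile=0.9, brow_raise=0.3, attention=0.6)] * 30
print(engagement_score(focused), engagement_score(grinning))
```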

From within an engineering frame, the scientific research behind affective computing is incontestable, nonpolitical — something that must of necessity be left to AI experts. The role of critics is not to distrust the science but to help engineers reflect consensus social values in how they apply their findings about facial analysis, which, as another Affectiva patent filing notes, could “include product and service market analysis, biometric and other identification, law enforcement applications, social networking connectivity, and health-care processes, among many others.”

Treating persons as individuals, with complex and evolving emotional lives, is time-consuming. Attributing some “engagement score” to them is scalable

There is another and better framing available than the engineering one, though — a more political one, focused on longstanding controversies regarding the nature of emotions, the power of machines to characterize and classify us, and the purpose and nature of feelings and moods themselves. From this perspective, affective computing is not merely pragmatic people processing but a form of governance, a means by which subjects are classified for corporate chieftains and their minions alike. Treating persons as individuals, with complex and evolving emotional lives, is time-consuming and labor-intensive. Attributing some “engagement score” or classifier to them is scalable, thereby saving the human effort that would have once been devoted to more conversational explorations of emotional states.

This scale in turn fuels the profitability of affective-computing applications — the same software can be adapted to many situations and then applied to vast populations to draw actionable conclusions. We have seen the money to be made in conditioning persons to interact in standardized ways, via “hearts” and reaction buttons, on platforms like Twitter and Facebook. Now imagine the business opportunities in standardizing emotional responses offline. All manner of miscommunications could seemingly be avoided. Once communication itself was constrained within tight bands of machine-readable emotional indicators, more messages and preferences would be instantly transmitted.

This is a dangerous project, though, because the meanings of, say, a sneer in a controlled experimental setting, a movie theater, a dinner date, and an armed robbery are probably quite distinct. James C. Scott explored the dangers of “legibility” as a politico-economic project in his classic Seeing Like a State, which described bureaucratic overreach based on flawed presumptions about social reality. Scott theorizes disasters ranging from China’s Great Leap Forward to collectivization in Russia and compulsory villagization in Ethiopia and Tanzania as rooted in failed efforts by the state to “know” its subjects:

How did the state gradually get a handle on its subjects and their environment? Suddenly, processes as disparate as the creation of permanent last names, the standardization of weights and measures, the establishment of cadastral surveys and population registers, the invention of freehold tenure, the standardization of language and legal discourse, the design of cities, and the organization of transportation seemed comprehensible as attempts at legibility and simplification. In each case, officials took exceptionally complex, illegible, and local social practices, such as land tenure customs or naming customs, and created a standard grid whereby it could be centrally recorded and monitored.

Crises of misrepresentation — or forced representation — are also likely to arise in ambitious affective-computing projects. Not all classifications of a person as, say, “angry,” are based on accurate readings of emotional states. They could be projections, strategic readings or misreadings, or mere mistakes. But regardless of accuracy, they become social facts with weight and influence in various databases, which in turn inform decisionmakers.

Thus emotion metrics are not simply trying to provide a representation of what is but are also a method for producing subjects that are susceptible to the means of control that the metrics feed and administer. In other words, much of affective computing is less about capturing existing emotional states than positing them. It defines particular emotional displays as normative under particular circumstances and then develops systems (as the Affectiva patent filings mentioned above adumbrate) for rewarding, imposing, or even policing compliance with these norms. While affective computing’s long-term vision is now framed as a peaceable kingdom of pleasing computers and happy users, its inattention to power dynamics betokens a field easily repurposed to less emancipatory ends.

For example, if, having seen a series of widely publicized summary executions by police, most people begin to approach police officers with extreme deference, this behavior could be captured and normalized, resulting in software that calculates “obedience scores” for suspects. But this practice would not merely report on reality. Rather, it would help create new realities and could easily increase the risk of more violence against those who fail to properly perform obedience in the future. Like Noelle-Neumann’s classic “spiral of silence,” a “spiral of servility” is a distinctive danger of a world affectively computed by increasingly touchy, defensive, and intolerant authorities.
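A crude simulation can show how fast such a ratchet turns. This is a toy dynamic, not a description of any deployed system: a scorer flags the least deferential quartile, the flagged overcorrect, everyone else edges up to stay safe, and the norm is then re-learned from the behavior the flagging itself produced.

```python
# Toy "spiral of servility" (illustrative only, not a deployed system): the norm
# for acceptable deference is re-estimated each round from behavior that the
# previous round's flagging already pushed upward.
import numpy as np

rng = np.random.default_rng(1)
deference = rng.normal(0.0, 1.0, 5000)  # displayed deference, arbitrary units

for round_num in range(1, 6):
    norm = np.percentile(deference, 25)      # bottom quartile gets flagged as "non-compliant"
    flagged = deference < norm
    # Flagged people overcorrect; everyone else drifts up slightly to stay clear of the line.
    deference = deference + np.where(flagged, 1.0, 0.2) + rng.normal(0, 0.05, deference.size)
    print(f"round {round_num}: norm = {norm:+.2f}, mean displayed deference = {deference.mean():+.2f}")
```

Each round the bar rises, because the system treats the deference its own flagging induced as the new baseline.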


Affective computers may themselves become caught in such spirals, innovating displays of concern or respect for those subject to their interventions. One Medicare program now features talking avatars of cats and dogs, designed to soothe the elderly. Operated remotely in a manner reminiscent of the film Sleep Dealer, the avatars are meant to put a kawaii face on the ministrations of distant workers, while perhaps also sparing those workers some of the burdens of emotional labor as they monitor and respond to their clients. One can imagine retailers adding smiling, animated characters to self-checkout kiosks based on a customer’s internet browsing patterns. We may even welcome automated systems that simulate concern — mechanical havens in a heartless world.

Emotions are largely treated as autonomous and univocal rather than as prompts to articulacy or dialogic evaluation of one’s situation

But these comforts are no less manipulative for being personalized. As Daniel Harris argued in Cute, Quaint, Hungry, Romantic, cuteness has a curious duality: meant to evoke warmth and care, cute creatures are also abject, pathetic, helpless, innocuous. When a faceless corporation or state deploys such visual rhetoric, the foregrounded meaning is care and concern, but lurking in the background is another resonance of cuteness: infantilization, exacerbated by a sense that controllers of the system not only deem you too insignificant to deal with personally but can’t even be bothered with conjuring a human avatar to enable their distance.

These subtle and recursive dynamics of feeling — and the thin line between caring and patronizing gestures — do not seem to trouble most work in affective computing. The field’s model of mental activity is more behaviorist (seeking the best stimuli to provoke desired responses) than phenomenological (richly interpreting the meaning of situations). From this perspective, emotions are essentially pragmatic tools. Feelings are as functional as a like button or a traffic light: Joy and love affirm one’s present state; fear and sadness provoke a sense of unease, a need to flee or fight, criticize or complain. Emotions are largely treated as autonomous and univocal rather than as prompts to articulacy or dialogic evaluation of and reflection on one’s situation.

Researchers have described multiple affective-computing projects as ways of detecting — and even predicting — emotional responses, conceived in this limited fashion. For example, AI might find patterns of microexpressions (rapid and fleeting facial expressions) that often precede more obvious rage. Police departments could try to predict crime based on little more than a person’s demeanor. Customer-service systems want to use voice-parsing software to determine just how long they can delay a customer’s call before the neglect becomes grating. Some employers think that workers’ brain waves hold critical clues about their engagement and stress levels.
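To see how thin the underlying logic can be, consider a hedged sketch of that call-center example. The threshold, the check interval, and the toy frustration estimate below are all invented; the point is the control loop itself, which defers service until an estimated emotion crosses a line.

```python
# Hypothetical sketch of the hold-time logic described above. The numbers and the
# stand-in "frustration" model are invented; a real system would score live audio.
import random

FRUSTRATION_THRESHOLD = 0.7   # invented cutoff for "the neglect has become grating"
CHECK_INTERVAL_SECONDS = 15   # invented polling interval

def estimate_frustration(seconds_on_hold: float) -> float:
    """Toy stand-in for a voice-parsing model: frustration drifts up with hold time."""
    return min(1.0, seconds_on_hold / 300 + random.uniform(-0.05, 0.05))

def tolerated_hold_time() -> float:
    """How long the system keeps a caller waiting before routing them to an agent."""
    waited = 0.0
    while estimate_frustration(waited) < FRUSTRATION_THRESHOLD:
        waited += CHECK_INTERVAL_SECONDS   # defer the agent a little longer
    return waited

print(f"caller connected after roughly {tolerated_hold_time():.0f} seconds on hold")
```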

The shameful history of so-called lie detectors should inform future work to mechanically “decode” intent, stress levels, and sincerity

Such programs may provide valuable data to a corporation or government looking to maximize profits or to subdue a population. But are they really “computing affect,” rendering machine-readable something as ineffable and interior as emotion? Critics argue that the computation of affect is far harder than researchers make it out to be. They point to the many ways facial expressions (and other ostensible indicators of emotion, like heart rate or galvanic skin response) fail to accurately convey discrete states of mind. Are shifting eyes a sign of distraction or deep thought about the problem at hand? When is a smile a grimace? Or a wink, a twitch, in philosopher Gilbert Ryle’s classic formulation? And why should we assume that turning involuntary or semi-voluntary expressions into forms of computerized interaction (or opportunities for computational classification) would serve our interests rather than those of the clients of affective computing’s leading firms? The shameful history of so-called lie detectors should inform future work to mechanically “decode” intent, stress levels, and sincerity.

Affective computing is just as much an engine as a camera, a way of arranging and rearranging social reality (rather than merely recording it). The more common it becomes, the more these sociotechnical systems will incentivize us to adjust our outward “emotional” states to get computers to behave the way we’d like. Of course, we do this in conversation with people all the time — strategic and instrumentalized communication will always be with us. But these systems will be more gameable because they will be far more limited in how they draw conclusions. And thanks to the magic of scale, they will also tend to be far more consequential than one-off conversations.

In all too many of its present implementations, affective computing requires us to accept certain functionalist ideas about emotions as true, which leads to depoliticized behaviorism and demotes our conscious processes of emotional experience or reflection. Just as precision manipulation of emotions through drugs would not guarantee “happiness” but only introduce a radically new psychic economy of appetites and aversions, desires and discontents, affective computing’s corporate deployments are less about service to than shaping of persons. Preserving the privacy and autonomy of our emotional lives should take priority over a misguided and manipulative quest for emotion machines.

Frank Pasquale is Professor of Law at Brooklyn Law School, and author of New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020).