Data Sweat

Even through a screen, machines can read our body language

From angry Twitter rants to #instagood affirmations, our online lives are teeming with feelings. Histories of posts and posting habits trail behind us not just as impersonal datapoints but as affect-laden narratives, leaving extensive emotional archives in the wake of each seemingly ephemeral update. However, this actively published digital footprint isn’t the only emotional record generated by our online activity. A huge amount of information is tracked, documented, and stored in the form of digital “exhaust” — metadata that accrues constantly as a byproduct of everything we do online. Although digital exhaust may not seem so affectively revealing, it nevertheless amasses its own stores of feeling.

On the surface, data about how long you hover over a particular image on Instagram may seem fairly impersonal compared to a blog post filled with political opinions or a YouTube video meticulously detailing a self-care routine. Yet at their core, these types of digital exhaust are products of tangible offline interactions between human bodies and technology. Our corporeal feelings migrate into digital information through the ways we literally touch our devices and look at our screens, creating exhaustive records of our lives that run alongside the ones we intentionally curate on social media. Acknowledging that digital exhaust creates uncannily enduring affective archives can reframe how we think about this data — and show how profoundly intimate it really is.


Digital footprints — basically, the sum of everything you’ve ever done or posted online — are most frequently invoked in discussions about user responsibility and protection in the face of flimsy digital privacy controls. We are regularly exhorted to be wary of oversharing on the internet and to diligently monitor our digital identities in order to protect compromising information about ourselves from future employers, marketers, scammers, and other prying eyes. Coming of age in a world where there’s no right to be forgotten means acknowledging that anything you share online can (and likely will) come back to haunt you; hygienic practices around digital footprints are now compulsory education, a virtual analog of handwashing.

This “passive” type of metadata is profoundly embodied in even deeper ways than many of the things we intentionally publish

Digital exhaust receives less attention in conversations about online privacy than our trails of intentionally published content. This diffuse, somewhat enigmatic subset of the digital footprint is composed primarily of metadata about seemingly minor and passive online interactions. As Viktor Mayer-Schönberger and Kenneth Cukier write, it includes “where [users] click, how long they look at a page, where the mouse-cursor hovers, what they type, and more.” While at first glance this type of data may appear divorced from our interior, personal lives, it is actually profoundly embodied in even deeper ways than many of the things we intentionally publish — it is inadvertently “shed as a byproduct of peoples’ actions and movements in the world,” as opposed to being intentionally broadcast.

In other words, digital exhaust is shaped by unconscious, embodied affects — the lethargy of depression seeping into slow cursor movements, frustration in rapid swipes past repeated advertisements, or a brief moment of pleasure spent lingering over a striking image.

This information does not pass through a cognitive filter as it is created and stored, but instead emanates from physical rhythms and actions that are usually not consciously recognized in the moment of their appearance. Within this paradigm, digital exhaust is a kind of economically valuable “affective surplus” that is extracted from human bodies and that fuels the continued evolution of the digital ecosystem.

The deeply physiological, preconscious level of emotion I’m pointing to is often referred to as “affect” by contemporary philosophers. Affects are feelings that are lodged in the body but have not found their name as a concrete, recognizable emotion. As Melissa Gregg and Gregory Seigworth explain in the first paragraph of their Affect Theory Reader, these feelings are “visceral forces beneath, alongside, or generally other than conscious knowing, vital forces insisting beyond emotion.” Affects reside in the uncomfortable sinking sensation in your stomach that hasn’t solidified (and might never solidify) as excitement or anxiety; they motor the barely palpable tremor in your fingers, testifying to a low-lying nervousness that will pass without a second thought if it doesn’t amplify in intensity.

Gregg herself gestures at resonances between digital exhaust and affect by introducing the concept of “data sweat” in her 2015 essay “Inside the Data Spectacle.” Rechristening digital exhaust as data sweat emphasizes the ways in which it leaks messily out of our pores as opposed to emerging directly from our machines as a kind of industrial waste product. “[Sweat] speaks, albeit voicelessly, on our behalf,” she writes. “Sweat literalizes porosity: It seeps out at times and in contexts that we may wish it did not.… Sweat leaves a trace of how we pass through the world and how we are touched by it in return. It is the classic means by which the body signals its capacity to ‘affect and be affected.’” Data sweat is a residue of feelings that we might not be able to name but that still circulate within us; it can reflect our intimate inner lives as much as a carefully written confessional.


The accumulation of emotional exhaust impacts the future just as a Google search conditions subsequent queries. Even if affective interactions are harder to name than search terms, they feed the same predictive algorithmic machineries that organize online experiences and pattern user behaviors. In The Age of Surveillance Capitalism, Shoshana Zuboff writes that accurate predictions of future actions generate revenue, and that “the surest way to predict behavior is to intervene at its source and shape it … machine processes are configured to intervene in the state of play in the real world. These interventions are designed to enhance certainty by doing things: they nudge, tune, herd, manipulate, and modify behavior in specific directions by executing actions as subtle as inserting a specific phrase into your Facebook newsfeed, timing the appearance of a BUY button on your phone.” The fact that our own data sweat frequently leaks out unnoticed makes it a particularly powerful resource for generating these subtle “nudges” that strive to operate below conscious awareness.

Digital exhaust is a kind of economically valuable “affective surplus” that is extracted from human bodies

Most of the exact mechanisms that generate digital models of the self and shape our online environments are proprietary or black-boxed. However, even a cursory glance at how researchers are currently striving to use digital exhaust to assess mental well-being testifies to this material’s unnerving ability to provide insights into our emotions. A host of publications in fields like psychiatry, neuroscience, and computer science describe their “unobtrusive monitoring” or “passive sensing and detecting” of particular mental states based on smartphone accelerometers, GPS location data, the amount of time the screen is on, patterns of app usage, and even information about the relative presence or absence of human speech in the phone’s vicinity.

The self-proclaimed “unobtrusive” quality of these studies is oxymoronic: It hinges on the fact that their data collection methods rely on the digital exhaust we are exuding all the time, meaning there is no need to append new monitoring devices to study subjects. And although some of these experiments required participants to download special apps to facilitate data collection and visualization, the majority of this information could already be stored and used by third-party organizations.

According to Katarzyna Szymielewicz, co-founder of an NGO “defending human rights in surveillance society,” the metadata collected about online behavior feeds profile-mapping algorithms that are designed to guess things about you that you haven’t revealed publicly. Because this data is more useful in aggregate than in isolation, “what the machines think of you” hinges on the lingering cloud of digital pollution that you’ve accumulated over time more than the exhaust you produce in real time at any given moment. Passing affects are embalmed in an exhaust(ive) warehouse as part of the ubiquitous technological processes that speculatively decide who you are and what you are feeling.

Data sweat, in addition to self-published inputs, fuels the algorithmic creation of personalized profiles that include large-scale assumptions about individuals’ identities and inner lives. These categorizations generate emotional appraisals, not just topical targeting. For example, keystroke patterns or finger movements across a phone screen might differentiate a conservative shopper from a compulsive one. Advertising strategies — ranging from the content individuals receive to these ads’ aesthetic appearance and when they appear on screen — will adapt according to these affectively influenced labels. Netflix’s eerily personalized image thumbnails that pitch the same movie as a thriller to one user and a romance to another are just the tip of the iceberg.

The fact that data sweat leaks out unnoticed makes it a powerful resource for generating subtle “nudges” that operate below conscious awareness

One example of how even the most minute trickles of data sweat can create uncannily intimate portraits of our emotional states is an app created by the startup Mindstrong Health. The product tracks users’ cognitive and emotional activity entirely through smartphone data exhaust. Once installed, the app compiles data on how a user “types, taps, and scrolls” in other apps. The data shed during these physical human-screen interactions is stored and analyzed until a digital phenotype can be identified that marks the user’s “normal” state. By tracing deviations from that norm, the app might be able to diagnose depression before the individual realizes they’re depressed, and even predict how a person will feel a week in the future, at least according to Mindstrong’s founders.
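The baseline-and-deviation logic described above can be sketched in a few lines. This is a hypothetical illustration, not Mindstrong’s actual model: the metric (seconds between keystrokes), the z-score comparison, and all the numbers are invented for demonstration.

```python
from statistics import mean, stdev

def baseline(samples):
    """Summarize a user's "normal" interaction rhythm as the mean and
    standard deviation of a simple metric (here, inter-keystroke seconds)."""
    return mean(samples), stdev(samples)

def deviation_score(baseline_stats, new_samples):
    """z-score of recent behavior against the stored baseline;
    a large value flags a departure from the user's norm."""
    mu, sigma = baseline_stats
    return abs(mean(new_samples) - mu) / sigma

# Invented data: a user's typical typing intervals, then a markedly slower week.
normal_week = [0.21, 0.19, 0.23, 0.20, 0.22, 0.18, 0.21]
slower_week = [0.35, 0.33, 0.38, 0.36, 0.34, 0.37, 0.35]

stats = baseline(normal_week)
print(deviation_score(stats, slower_week))  # large score: behavior has shifted
```

Real systems would presumably combine many such signals and far more sophisticated models, but the underlying move is the same: define a personal “normal,” then watch the exhaust for drift away from it.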

It is unlikely that Google is particularly concerned with offering personalized DSM-5 diagnoses. Yet the same metadata that can purportedly be used to gauge psychic states is constantly being collected by other entities and used to construct personalized digital environments. Digital infrastructures of data collection and storage thus end up impacting human experience not just through concrete privacy violations, but also by affectively “tuning” online worlds — digital exhaust becomes an enduring part of our online environments, preserving seemingly transient affects and dragging them into the future.


Social media sites encourage users to “package their lives as a succession of dramatic emotional moments” in regular posts, profile updates, and interactions with other users. Apps like Timehop or trends like the #10yearchallenge are potent reminders of just how far back our digital footprints stretch, and of the frequently cringeworthy amount of personal content most of us have willingly disclosed.

The metadata that we produce on an everyday basis forms even more extensive archives of our lives than these public posts. Although digital exhaust is not so legibly emotional or neatly packaged for conscious human reflection as a Facebook photo album, it is just as laden with feeling. Indeed, the accumulation of digital exhaust makes it possible for digital environments to be uncannily attuned to our feelings even without our conscious recognition of what kind of embodied trails we are leaving. While it’s hard to trace exactly how your affective data exhaust is shaping your online experience, there is no question that it is — no matter how carefully we curate our feeds, or attempt to keep track of our online behaviors, our digital footprints are much more intimate than we would like to think.

Amanda K. Greene is an Andrew W. Mellon Postdoctoral Research Scholar at Lehigh University. Using interdisciplinary approaches from the humanities and social sciences, her research examines the constantly evolving feedback loops between human bodies and new media technologies.