
Luxury Surveillance

People pay a premium for tracking technologies that get imposed unwillingly on others


One of the most troubling features of the digital revolution is that some people pay to subject themselves to surveillance that others are forced to endure and would, if anything, pay to be free of.

Consider a GPS tracker you can wear around one of your arms or legs. Make it sleek and cool — think the Apple Watch or Fitbit — and some will pay hundreds or even thousands of dollars for the privilege of wearing it. Make it bulky and obtrusive, and others, as a condition of release from jail or prison, being on probation, or awaiting an immigration hearing, will be forced to wear one — and forced to pay for it too.

In each case, the device collects intimate and detailed biometric information about its wearer and uploads that data to servers, communities, and repositories. To the providers of the devices, this data and the subsequent processing of it are the main reasons the devices exist. They are means of extraction: That data enables further study, prediction, and control of human beings and populations. While some providers certainly profit from the sale of devices, this secondary market for behavioral control and prediction is where the real money is — the heart of what Shoshana Zuboff rightly calls surveillance capitalism.

Consumers of luxury surveillance see themselves as powerful and sovereign, even immune from unwelcome monitoring and control

The formerly incarcerated person knows that their ankle monitor exists for that purpose: to predict and control their behavior. But the Apple Watch wearer likely thinks about it little, if at all — despite the fact that the watch has the potential to collect and analyze much more data about its user (e.g., health metrics like blood pressure, blood glucose levels, and ECG data) than parole or probation officers are even allowed to gather about their “clients” without a specific warrant. Fitness-tracker wearers are effectively putting themselves on parole and paying for the privilege.

Both the Apple Watch and the Fitbit can be understood as examples of luxury surveillance: surveillance that people pay for and whose tracking, monitoring, and quantification features are understood by the user as benefits they are likely to celebrate. Google, which has recently acquired Fitbit, is seemingly leaning into the category, launching a more expensive version of the device named the “Luxe.” Only certain people can afford luxury surveillance, but that is not necessarily a matter of money: In general terms, consumers of luxury surveillance see themselves as powerful and sovereign, and perhaps even immune from unwelcome monitoring and control. They see self-quantification and tracking not as disciplinary or coercive, but as a kind of care or empowerment. They understand it as something extra, something “smart.”

In the U.S., “Trusted Traveler” programs like TSA PreCheck and Global Entry make the tradeoffs explicit: People who believe they have “nothing to hide” willingly submit to surveillance (which they do not even understand as surveillance), pay more for it, and put themselves into a special, highly privileged category of person. Contrast the expectations and attitude of a first-class frequent flyer with Global Entry, who uses their economic power and apparent political immunity to smooth their international travel, with those of the political refugee or migrant: The refugee faces constant demands to submit papers and documentation, and even to wear electronic monitoring devices much like those imposed on people on parole or probation. Everything about their experience is meant to make the behavioral control and modification they face explicit, along with their socially subordinate position.

The refugee or parolee’s experience exemplifies imposed surveillance: surveillance the subject would prefer not to have but is required to endure for one reason or another. The ankle monitor has been its avatar, but imposed surveillance has moved beyond that. Tech innovators are busily promoting “e-carceration” devices like the ShadoWatch, an “attractive tamperproof wristwatch” that “incorporates wi-fi, GPS, and network location technology to increase location accuracy” and that “includes motion sensors, vibration alerts, messaging, heart rate, and blood pressure detection.” Sounding much like a smartwatch, the ShadoWatch is sold by “offender management solutions” company Shadowtrack, and even though people on parole or probation must pay fees for the privilege of wearing the device, the company’s real customers are corrections officers and institutions.

People who believe they have “nothing to hide” willingly submit to surveillance, pay more for it, and put themselves into a highly privileged category of person

Smartphones, too, are being accessed by law enforcement and immigration officials as tools of surveillance, even as they serve many important functions for refugees. From the perspective of the institutional customers for these tools, the technology is used to surveil other people on behalf of “us,” but from the perspective of the relatively less powerful migrant or the person on probation, the surveillance is imposed and generally unwelcome.

It’s no accident that surveillance is often imposed on marginalized populations — Black, poor, unhoused, formerly incarcerated, trans, and Indigenous people — while those who feel they are part of a powerful majority purchase surveillance tech as a luxury.

Imposed surveillance can seem like the model of social control evident in George Orwell’s novel 1984: Big Brother sees everything, understands our own impulses better than we do, tracks and reports our actions with disturbing accuracy to bureaucratic powers that sanction or reward us. By contrast, luxury surveillance is more like the social control in Aldous Huxley’s Brave New World, where people willingly submit, even demand to submit, to a manipulative authority that keeps them docile and controllable. But these are not necessarily alternatives; they may co-exist as different perspectives on the same set of conditions.

How does it happen that strikingly similar and even identical surveillance devices and technologies — especially, but not only, biometric surveillance — come in two such apparently different modes and are perceived so differently? What does it say about our society’s adoption of technology that such different cognitive frames can be used to understand and obfuscate two faces of what is essentially one technology — that some eagerly adopt what others must suffer? And what purpose is served by this two-pronged approach to promoting surveillance?


Part of the answer clearly has to do with power, privilege, and one’s perceptions of them. People who feel socially disempowered are often sensitive to (or at least aware of) the presence of imposed surveillance, whereas those who align with power either ignore it or welcome it as a luxury. When people believe (often correctly, as it happens) that social power is on their side, and when they see themselves as the ones doing the watching, they believe that such technology works in their favor and will gladly pay to wear or install it. They might even demand it, despite the fact that the benefits it offers may be illusory or accrue largely to others, and that it will likely worsen the conditions that made the technology seem attractive to them in the first place.

For example, people purchase home surveillance technology and its add-ons (Ring surveillance cameras and the Neighbors app that networks their users) so that they will feel and actually be safer. But the data shows that people who use these devices are not only no safer than they were without them; they also feel less safe. By producing anxiety in some of its customers, Amazon makes their need for its products seem more intense, even though the products themselves produce that anxiety.

In and of itself, this cure-is-also-the-disease dynamic is familiar throughout consumer marketing, and it’s especially prominent in digital technology, where dating and social media apps are notorious for providing intermittent reinforcement that drives users to return again and again, despite the fact that the product itself may be a big part of why they feel (and are) unsatisfied. But with surveillance products, this cycle of disappointment and recommitment has even more significant externalities. It intensifies a climate of anxiety for everyone; it expands the net of surveillance over everyone. It normalizes suspicion and diminishes our inclinations to trust each other.

When people believe that social power is on their side and such technology works in their favor, they worsen the conditions that made the technology seem attractive to them in the first place

Part of why some are willing to pay for seemingly self-defeating technology is that it simultaneously serves another function: It solidifies users’ position in the “luxury surveillance” class and tells them that they are part of the “us” and not the “them” — as if these classes were unalterable. The pleasure of belonging to the “luxury” class is always maintained at the expense of those who do not belong, much as the misery of imposed surveillance comes partly from its constant reminder that we have to submit to external control. Perhaps unsurprisingly, the effects of legally imposed surveillance like ankle monitors also seem to have more to do with creating a feeling of fear and submission than with rehabilitation, following the law, or reducing incarceration.

Employee wellness programs illustrate how slippery the distinction between luxury and imposition can be. As Ifeoma Ajunwa, Kate Crawford, and Jason Schultz argue in this paper, some employees willingly provide more data via “employee wellness” programs, perhaps enticed by bonuses or other benefits. These are depicted as “voluntary” — a sort of luxury to opt in to, or a kind of self-care — only because they otherwise would fall under U.S. regulations that prohibit the collection of certain data. Even now, the EEOC has no definition of “voluntary,” leaving employers largely free to package sticks as carrots, disguising what are effectively fines for withholding “voluntary” information as “bonuses” for providing it. (Earlier this year, the EEOC finally offered a proposed definition of voluntary, although it may be watered down if it is ultimately implemented.) Wellness programs are framed as luxuries but are effectively imposed; the degree to which they are adopted helps normalize them and justify them as “what employees want.”

The “wellness” involved in these surveillance programs largely functions as an alibi. As Gordon Hull and Frank Pasquale put it in a 2017 article, these programs “promote a partial and biased conception of wellness,” in which any improvements to individual health are “secondary to the larger power grab they offer to their implementers.” Hull and Pasquale quote a human resources journal article from as far back as 2003 recommending employers develop such programs specifically because they offer a “way of establishing and maintaining an effective corporate culture” and a “means of social control.”

The much-discussed “social credit scoring systems” like those that China is said to be developing operate according to a similar dynamic. In this respect they are not some totalitarian innovation but little more than extensions (if even that) of what already exists in the U.S. In the often xenophobic Western press, China’s social-credit system is typically described as based on punishment: Citizens with poor scores are banned from some airplane flights, have their internet speeds throttled, and have their school choice curtailed. Yet as with corporate wellness programs, social credit can look from other angles as if it offers “benefits”: If one simply behaves appropriately, one can “level up” into achievements like first-class air travel and the best schools.

Indeed, in the limited places where China has implemented a social credit system, citizens often speak about it in very positive ways: as one entrepreneur told Foreign Policy in 2018, “I feel like in the past six months, people’s behavior has gotten better and better. For example, when we drive, now we always stop in front of crosswalks. If you don’t stop, you will lose your points. At first, we just worried about losing points, but now we got used to it.” Of course, the logic of the system itself inherently calls into question their sincerity, if not their liberty to have any opinion of their own on the subject. And that is part of the point: Sincerity becomes inseparable from compliance, much as imposition becomes inseparable from what feels like a luxury.


The twin modalities of luxury and imposed surveillance may look different on the surface, but they represent two faces — carrot and stick, if you will — of a challenge to the bases of democracy and social freedom. The carrot of luxury surveillance for some authorizes the stick of imposed surveillance for others. Though electronic monitors may look very different from the Apple Watch, we should not be in the least surprised when the approaches converge: A Wired article details the increased interest in using phones as a means of monitoring people on probation or parole.

The lines between what we want and what we need, and what is good for us as individuals and what is good for society as a whole, have always been blurry. Surveillance technology ups the ante considerably. It divides the world into haves and have-nots, using the perceived privilege of the “haves” to weaken protections for everyone against illegal search and seizure, against broader manipulations, and against “opt-in” technologies becoming mandatory.

We need to develop a much deeper way of talking about surveillance technology and a much richer set of measures with which to regulate its use. Just as much, we need to recognize that voluntarily adopting surveillance isn’t an isolated choice we make only for ourselves but one that impacts others in a variety of ways we may not recognize. We need always to be asking what exactly it is that we are enthusiastically paying for, who “we” are and who is the “them” on the outside, and what all of us are being made subject to when we allow (and even demand) surveillance technology to proliferate as wildly as it does today.

Chris Gilliard has a PhD from Purdue University’s Rhetoric and Composition Program and currently teaches at Macomb Community College. His work concentrates on privacy, institutional tech policy, digital redlining, and the re-inventions of discriminatory practices through data mining and algorithmic decision-making, especially as these apply to college students.

David Golumbia is an Associate Professor in the English Department at Virginia Commonwealth University. He is the author of The Cultural Logic of Computation (Harvard University Press, 2009), The Politics of Bitcoin: Software as Right-Wing Extremism (University of Minnesota Press, 2016), and many articles on digital culture, language, and literary studies.