July 5, 2019

All Gates Open

Sometimes, when writing about “smart” devices (ones that connect to the “internet of things” regardless of their owners’ wishes) or health trackers (an internet-of-things device where the “thing” is your body), I’ll compare them to carceral ankle bracelets, vaguely gesturing toward the idea that data capture seeks to inscribe us in a behavioristic prison. This investigation by Ava Kofman into how actual ankle-bracelet use is expanding puts that glib comparison in a different light.

As Kofman reports, monitoring bracelets are seen as a cheaper and more humane alternative to putting people in jails and prisons; as part of the cost-saving motive, those mandated to wear them are also forced to pay for them. If they can’t afford the fees, nonpayment can sometimes be treated as a violation of the terms of their release, or it can become a debt burden that accrues interest charges. “Across the country, defendants who have not been convicted of a crime are put on ‘offender funded’ payment plans for monitors that sometimes cost more than their bail,” Kofman writes. “And unlike bail, they don’t get the payment back, even if they’re found innocent.”

Because ankle-bracelet monitoring is easier to implement and perceived to be less overtly cruel, it may serve not to reduce the carceral system’s impact on society but to expand it. Rather than suspend or dismiss sentences, why not throw an ankle bracelet on someone? They’re paying for it. And because the monitors are capable of listening in on ostensibly private spaces, they can serve as spying devices, collecting information for the discovery and prosecution of further crimes. “Far from serving as an alternative to incarceration,” Kofman notes, electronic monitoring “ends up sweeping more people into the system.” In other words, electronic monitoring becomes another aspect of the prison-industrial complex’s propensity to manufacture “crime” rather than deter it, producing the criminals required for the industry’s growth from among a society’s marginalized populations. (Joshua Scannell details that process in this essay.)

At the same time, electronic monitoring can be used to keep populations marginal through a combination of biometric stigmatization and asymmetric tracking. Kofman cites sociologist Simone Browne, who “has connected contemporary surveillance technologies like GPS monitors to America’s long history of controlling where black people live, move and work.” Monitors can impose geographical restrictions, a kind of real-time operationalization of redlining, barring populations from certain neighborhoods. In the past, such exclusion was implemented through restrictive covenants on loans and informal screening, as well as the ongoing instantiation of confrontational bigotry: It depended on being rude or hostile to those who had the temerity to show themselves where “they did not belong.” But electronic monitoring promises to save people from the work of exclusion and the unpleasantness of having to be racist to people’s faces. In the style of the Deleuzean control society, it will bar access on a flexible, pre-emptive basis, facilitating “friction-free racism,” as Chris Gilliard details here. This in turn supports that fantasy of post-racism, the “racism without racists” that Eduardo Bonilla-Silva has described. In the future, only computers will be racist.

The logic of electronic monitoring is at work at a vastly expanded scale in China’s repression of Uighurs in Xinjiang province. There, the principles of electronic monitoring are not limited to ankle bracelets but can make use of other electronic devices already embedded in the environment of the targeted population, an intrinsic part of the experience of “normal life.” So not only are there cameras, scanners, and checkpoints distributed throughout public spaces to monitor the population, but Uighurs are made to download special apps onto their phones that transmit data about their movements and behavior to the police.

The U.S. may not be far behind. Kofman reports that “some GPS monitoring vendors have already started to offer smartphone applications that verify someone’s location through voice and face recognition.” And of course, social media apps already serve as a form of blanket surveillance: “For the nearly 4.5 million Americans on probation or parole, it is not difficult to imagine a virtual prison system as ubiquitous — and invasive — as Instagram or Facebook.”

The Chinese government probably regards its environmental tracking of the Uighur population as a “humane” alternative to detainment, forestalling the need to send all Uighurs to the “re-education” camps it has established in the region. But that is another way of saying the whole province is being technologically reshaped into a soft re-education camp, and only the problem students get sent to literal detention. Meanwhile, as Karen Hao notes in this MIT Technology Review newsletter, China hopes to implement surveillance throughout its education system, touting the “use of face analysis cameras and other sensors in classrooms to monitor student engagement and teacher performance.” What is developed to control and oppress a minority population is then imposed on the majority as a form of benevolent paternalism.

It can also be developed for export. Political and economic motives for oppressing a population can merge as a form of “surveillance capitalism,” as Darren Byler points out in this Guardian article about the Xinjiang police state. “The power — and potential profitability — of the predictive technologies that purport to keep Xinjiang safe derive from their unfettered access to Uighurs’ digital lives and physical movements,” he explains. “From the perspective of China’s security-industrial establishment, the principal purpose of Uighur life is to generate data, which can then be used to further refine these systems of surveillance and control.” These systems are then marketed around the world. The data collected from Uighurs is also used to train machine-learning algorithms that seek to identify incriminating patterns — “an archive that can supposedly help identify suspicious behavior in order to predict who will become an ‘unsafe’ actor,” Byler writes. But as Scannell argues, this kind of digital phrenology as predictive policing doesn’t so much detect such actors as produce them. Guilt is presupposed, and data collected about the guilty is used to reproduce the rationale for that presupposition, to vindicate prejudice after the fact. The surveillance in Xinjiang produces a set of marked behaviors that rationalizes the perpetuation of surveillance as an efficient (albeit tautological) system of control.

Xinjiang is typically described by journalists as a kind of nightmare — an “open-air prison.” But electronic monitoring may not come in such obviously dystopic forms. The same kinds of techniques have long been at work in casinos. Gaming companies have been at the forefront of surveillance, data collection, and analysis, seeking the best ways to milk customers of as much money as possible without fully incapacitating them. Not only are casino environments designed to this purpose — combining ambient disorientation (no clocks, no windows, no obvious exits, no clear sight lines) with the full arsenal of tracking technologies — but profiles of individual customers are built up and analyzed to identify when certain inducements are necessary to sustain their “productivity” — their ability to lose steadily and predictably over time.

This Axios report gives an overview of how casinos are using AI to gain an edge over their customers and better modulate their behavior: “To entice people to spend more, casinos use their troves of data to tweak every aspect of the gambling experience — from marketing and casino layout to the incentives and freebies that get people through the door and then keep them inside.” This may be less adversarial than it sounds; submitting to this kind of control and having the cares of life funneled down to a spin of the wheel may be the basic appeal of gambling. But it implements the same behavioristic premise as predictive policing: It purports to anticipate and manage the gambling behavior of clients, but in practice it produces gamblers who fit the casino industry’s profit schedule — a different kind of “offender-funded” monitoring. If surveillance is sufficiently thorough, the interior lives of the population under watch can be reshaped to suit the purposes of the system that authorized the surveillance in the first place.

Electronic monitoring can be used to make people feel like they are in prison, as Kofman details, or it can be oriented toward making containment feel like pleasure, as casino management demonstrates. In either case, data collection is presumed to provide leverage over not just how people behave but how they experience it. Surveillance is made synonymous with “emotion detection” and then emotion correction. The ambition of tech companies and governments alike is to generalize this monitoring and inflect it to indicate an individual’s social standing. No one would be free in the sense of being safe from observation; everyone would simply be either in the jail or in the casino. How we are being watched, not whether we are being watched, would dictate how we feel.