
Borders Everywhere

How technologies of population control are becoming techniques of pandemic response


In the aftermath of Hurricane Sandy in 2013, tenants of Knickerbocker Village, an affordable housing complex in New York City, found that their building now had facial-recognition scanners installed at various entrances. The landlords apparently hadn’t consulted anyone in the building, and there was later a dispute about whether the installation was even legal. Residents told Gothamist in 2019 that they felt like “guinea pigs.” At the Atlantic Plaza Towers apartment complex in Brooklyn in 2018, facial-recognition cameras sprang up overnight in the building, which is rent-stabilized and in a rapidly gentrifying area. A spokesperson for Nelson Management Group, which manages the building, said that they had been installed merely “to create a safer environment for tenants.”

In the European Union, the Dublin Regulation — which states that asylum seekers must apply for asylum in the first country they reach — has often been enforced using biometric data to identify and deport individuals who sought to make it to another country where they may have family or face more favorable conditions. In Bangladesh, the Rohingya population — a Muslim minority group fleeing ethnic cleansing and violence in Myanmar — have been subject to intensive data collection and monitoring by both the Bangladeshi government and the UN High Commissioner for Refugees (UNHCR). The UNHCR created an app that would collect refugees’ information, including photographs, and the Bangladeshi government worked with a firm called Tiger IT to register people, taking their fingerprints along with data about their religion, birthplace, and parents’ names. The Bangladeshi Industry Minister has confirmed that this was done to “keep a record” of the population.

Despite the apparent humanitarian aims of tools of “intervention,” they become assimilated to extracting profit from marginalized groups and then repackaging for use in schools and offices

In each of these cases, a humanitarian crisis has been seized upon to introduce technological methods of population control, positioning vulnerable populations as ready-to-hand test subjects, disenfranchised enough to be exploited without much recourse. In a parallel to Naomi Klein’s idea of “disaster capitalism,” unforeseen events are used as a pretext to roll out interventions that might otherwise have been widely opposed, deployed not only by private interests but also through partnerships between private companies (like Palantir) and public-facing agencies such as the World Food Program, under the auspices of institutions like the United Nations. The alibi of urgency can be used to suspend ordinary moral or ethical considerations, as Sean Martin McDonald, Kristin Bergtora Sandvik, and Katja Lindskov Jacobsen suggest in an essay for the Stanford Social Innovation Review: “Introducing untested technologies into unstable environments raises an essential question: When is humanitarian innovation actually human subjects experimentation?”

While these interventions may be well-intentioned — initially — this is not a sufficient bulwark against authoritarianism or exploitation. They feed into the idea that crises, disasters, and unforeseeable events can be brought under the purview of technologies like biometric data collection or location tracking and thereby solved. Often they set the stage for a wider deployment of risk-management technology against anyone deemed to be a threat (e.g. the minority Uyghur population in China, treated as de facto terrorists by the Chinese government). Despite their apparent humanitarian aims, they become assimilated to what Darren Byler and Carolina Sanchez Boe call “tech-enabled terror capitalism,” in which technological tools are used to extract profit from marginalized groups and then repackaged for use in schools and offices.

The humanitarian alibi has often been used to inscribe the logic of the security state onto vulnerable populations, as when biometric and surveillance technologies are imposed as a means of delivering aid to populations in crisis situations. In a 2006 report, for example, the United Nations High Commissioner for Refugees said that biometric technology like iris scanning and fingerprinting heralded a “new direction in refugee registration.” This was framed as a positive development, something that should theoretically benefit refugees on their path to a new life in another country. But as legal scholar Petra Molnar explains, these new technologies for monitoring, administering, and “controlling” migration are themselves largely unregulated, and their long-run implications are unknown; refugees and asylum seekers often bear the brunt of these technologies, serving as involuntary test subjects. They become one of the first populations to be subject to the kinds of control that these new technologies make possible.

Technology companies and startups with government contracts have been trialing their hardware and software on vulnerable populations for years. Avatar, a DHS-funded program designed specifically for use at ports of entry, uses artificial intelligence to measure changes in people’s gestures and behavior for indications that they may be lying; it was tested at the U.S.–Mexico border on asylum seekers. To train massive facial-recognition systems, the National Institute of Standards and Technology used images of individuals booked on suspicion of criminal activity, applicants for U.S. visas (particularly from Mexico), and children who had been exploited in child pornography. Iris scanning as a means of biometric identification was deployed at refugee camps in Jordan to administer daily rations, despite the fact that many of the refugees felt uncomfortable about it. It was also used on Afghan refugees seeking repatriation from camps in Pakistan. The method’s high rate of false positives there was held not against the technology but against the refugees, who were accused of impersonation by Iridian, the company that provided the system.

The United Nations High Commissioner for Refugees said that iris scanning and fingerprinting heralded a “new direction in refugee registration.” This was framed as a positive development

Eventually the use of such technologies is expanded and deployed on broader populations. Iris scans are now commonplace in airports and in police departments. Frontex, one of the European agencies responsible for monitoring the Mediterranean for refugee traffic, has allegedly been testing the use of unpiloted drones to track the movement of asylum seekers and refugees off the coasts of Malta, Italy, and Greece. Drones similar to those deployed in “peacekeeping missions” around the Democratic Republic of Congo and Malawi have since been increasingly used on protesters around the U.S.

In The Origins of Totalitarianism, Hannah Arendt described this as imperialism’s “boomerang effect”: interventions and policies tested abroad are eventually brought back to the “heartland.” The Covid-19 pandemic has caused this boomerang to be tossed yet again. The pandemic presents an opportunity to create groups of “compliant” and “noncompliant” people out of thin air as new rules are imposed and new systems are deployed to enforce them. It allows for a sort of internal border control, mediated by redeployments of surveillance technologies that grant or withhold access and privileges.

Migration itself is reframed as a humanitarian crisis best addressed in terms of security. “The main technique of securitization,” Didier Bigo argues in “Security and Immigration: Towards a Critique of the Governmentality of Unease,” is “to transform structural difficulties and transformations into elements permitting specific groups to be blamed, even before they have done anything, simply by categorizing them, anticipating profiles of risk from previous trends, and projecting them by generalization upon the potential behavior of each individual pertaining to the risk category.” This warrants a “humanitarian” intervention that mainly consists of data collection and predictive analytics.

The turn toward scientific or technological methods of “managing risk” — a companion euphemism to “humanitarian innovation” — has long been established at the U.S. border. Since 2004, as Louise Amoore details in “Biometric Borders: Governing Mobilities in the War on Terror,” the U.S. has been collecting detailed data on everyone who enters the country, as a means of “managing risk by embracing risk,” as then Homeland Security secretary Tom Ridge put it. From this perspective, the best way to handle uncertainty and ambiguity is simply to collect as much data as possible — patterns of movement, purchasing history, the call logs on a phone, and much more — to allow a frictionless existence for those who are deemed worthy of it and to make existence rife with difficulties for everyone else. In the 1990s, Oscar Gandy called a version of this the “panoptic sort”: “a kind of high-tech, cybernetic triage that categorizes people according to their presumed economic or political value.”

Already, the pandemic has generated a host of technological solutions bidding to become routinized biometric enforcement mechanisms

Now such mechanisms are being proposed for the pandemic response. Historically, pandemics have often been structured by racist policies and ideologies. Viral contagion and disease have often been associated with the “other,” as Edna Bonhomme explains in a piece for the Baffler. Contagions are given xenophobic names, like “Asiatic cholera” or “Spanish flu.” But the current “humanitarian innovation” on this legacy is to encode such racist reactions in surveillance protocols offered in the name of the public good. The underlying assumption is that risk can be mitigated through data collection, which can then be used to control the behavior of biometrically tracked individuals.

Already, the pandemic has generated a host of technological solutions bidding to become routinized biometric enforcement mechanisms, including drones to enforce social distancing, quarantine ankle monitors (as are often used on parolees), contact-tracing apps for phones, and fever-scanning drones. Startups like Feevr, which markets fever-detection scanners and “smart” thermometers, have been quick to enter the pandemic surveillance space. Other companies eager to exploit the pandemic opportunity, like Clearview AI and Palantir, are familiar names. Palantir has already implemented “data trawling” models and proprietary algorithms to help law enforcement conduct immigration raids at workplaces and separate families at borders. Mijente, a Latinx activist organization, has documented how the infrastructure and cloud computing services provided by companies like Palantir (and Amazon, Salesforce, Dell, Microsoft, and others) make the deportation and detainment of immigrants possible.

This approach is being applied to pandemic track-and-trace programs. Companies like IBM, Facebook, Apple, and Google rapidly developed plans for managing community transmission, and public health websites have tracked the movement and online activity of those who tested positive for Covid-19, often to an alarming degree. Documents obtained by OpenDemocracy and Foxglove show that the U.K. National Health Service has given Palantir access to the health data of millions of people for a track-and-trace scheme, and the contracts show that Palantir will also be permitted to train other models with this data. Initially, the deal was made for one pound and ran for three months — which might indicate that the real value of the contract was in the data Palantir would gain. After that three-month period expired, the contract was extended and its price set at £1 million.

Those who are already marginalized carry the logic of the border with them, always a noncompliant subject or unruly body

In Governing Through Biometrics, Btihaj Ajana notes that when the management and control of our identities and movement is governed through our bodies, the body becomes like a password, albeit one we don’t set and can’t change, that sets inescapable limits on our access to people, places, and opportunities. Not only is data about a particular individual — their location, their age, gender, temperature, income, mannerisms, time of entry, purchasing habits — understood as a means for predicting their future behavior, but so is data about their social relations: the people they associate with, the locations they visit, and even the order in which they visit them. This can all be used to assign a level of risk to that individual that is then inscribed at the level of the body, associated with biometric markers (one’s iris, one’s face, one’s gait, and so on).

But this collection and operationalization of biometric data — at the border, at the entrance to an office, on the floor of a warehouse — is not some neutral means for assessing the risk a given individual poses, with respect to Covid-19 or any other hazard, any more than biometric data in general simply represents the reality or perceived reality of someone’s identity. Such biometric forms of control replicate already existing biases about who must bear the brunt of surveillance technologies, marking out particular populations for more intensive scrutiny. This was true when concerns about terrorism after 9/11 led to the sorting of foreign workers, immigrants, asylum seekers, and noncitizens into “desirable” and “undesirable” categories, as David Lyon explains in Surveillance, Power and Everyday Life. It was also true with predictive-policing algorithms, which, as Ruha Benjamin and Simone Browne have intensively catalogued, were first deployed against Black populations.

Once such technological systems of control are in place, it’s easy for those implementing them to say that rolling them back could lead to anarchy and chaos. People who refuse or protest such measures can be painted as noncompliant, tarred as unpatriotic or negative, guilty of having something to hide — particularly when they are marked as “other” in the first place. (White protesters of mask mandates are given far more leeway than nonwhite protesters of discriminatory police practices.) Even if they function only as what Bruce Schneier described as “security theater” — measures (like liquid bans at airports) meant to engender a sense of safety by conveying that authorities are doing something, regardless of whether it works — they still become entrenched. The measures imposed post-9/11 (government data collection via third parties, the controversial NSA call records program) are still in place. Pandemic-containment measures will likely play out similarly. In China, the contact-tracing app that was previously used to manage community transmission has now become a generalized health-monitoring app, with people having to present a red, green, or yellow code before they enter local establishments and public transportation. Amazon has reportedly purchased temperature-scanning cameras that had previously been used in China to monitor the movements of Uyghurs, this time to monitor warehouse workers, who tend to be working class and people of color.

And once in place, such systems are easy to expand. For instance, AC Global Risk, a startup based in California, is experimenting with risk assessment based on someone’s voice; this maps onto patents filed by Amazon to identify someone’s accent and nationality based on what they say to an Alexa. These border-control-style systems, aided by government contracts with technology companies and startups, proliferate in everyday life. What begins as “humanitarian innovation” — that is, as involuntary human experimentation — ends up being another part of an expansive control society. Biometric data is not merely a way to represent the perceived reality of someone’s identity; it is a means for those with access to that data — workers at Palantir, the people running Avatar at the border — to create a particular future. For those who are already marginalized, this means they carry the logic of the border with them, always a noncompliant subject or unruly body.

At the Knickerbocker Village housing complex, the facial-recognition system is still in place. It doesn’t work on sunny or rainy days — both frequent occurrences in New York — and children who live in the apartment building have to submit to frequent scans as they age because their facial features change. In 2019, a group of 130 tenants living in the Atlantic Plaza Towers apartment complex undertook legal proceedings to try to prevent their landlord’s plans to install a facial-recognition system. Five months later, the tenants won that case. Meaningful opposition is possible, and humanitarian experimentation, regardless of its scope or location, isn’t necessarily a foregone conclusion.

Sanjana Varghese is a journalist and researcher based in London. She writes about technology and power, and has previously written for the New Statesman, Wired UK, Vice and others.