It’s happening more frequently that I will visit someone’s home and they’ll have one of those stand-alone voice-assistant devices — usually Amazon’s Echo; I’ve yet to see a Facebook Portal in the wild. And I’ll think to myself that tech criticism has basically failed. Despite the steady flow of complaints in more and more high-profile media outlets about “surveillance capitalism” in its many guises — the way it erodes privacy, abolishes trust, extends bias, foments anxiety and invidious competition, precludes solidarity and resistance, exploits cognitive weaknesses, turns leisure into labor, renders people into data sets or objects to be statistically manipulated, institutes control through algorithms and feedback loops, and consolidates wealth into the hands of a small group of investors and venture capitalists with increasingly reactionary and nihilistic politics — many people are still going to go with the flow and get the latest gadgets, because why not?
Voice assistants seem well-suited to extending accessibility to those who have difficulty interacting with conventional devices. Old people report finding them especially useful. But this also creates an exploitable vulnerability, in that Alexa — which plans to provide crowdsourced answers to general questions — can be manipulated to provide misleading or malicious information to populations who may be less equipped to dismiss it. As Ryan Henyard argued on Twitter, “Two populations heavily marketed for by Alexa-type products are especially vulnerable to bad outcomes: elderly and disabled folks using these as assistive technologies, and children encouraged to use voice search to explore their curiosities about the world.” The risks of this surely must be pretty obvious to Amazon, but the company would rather increase engagement and lure users in through participatory strategies and convenience than take a more responsible approach. This resembles the strategy of other social media platforms that enable (if not encourage) systemic abuse in order to scale up their mesh of interconnection and “engagement,” and then backpedal and apologize later when the inevitable damage is done.
But in general, it seems people buy smart devices and invite their surveillance capacity into their homes in exchange for some largely frivolous conveniences — the tenuous illusion of being waited on by Amazon, or of being able to boss objects around. When I ask people why they have these devices, their sheepish answers usually range from “I don’t know” to “It was on sale” to “It was a gift” to “It’s cool to make music come on by talking at it.” It is as though we’re only starting to learn what the pleasures of this new level of monitoring are supposed to be. But our conditioning in consumerism is such that many are willing to take it on faith that the pleasures will emerge — that if corporations throw enough marketing and customer service behind a product, we’ll all soon be sufficiently trained to truly enjoy it. Network effects, mimetic desire, and aspirationalism all play their part too: The ubiquity of a product begins to convey a pleasure in its own right — just as the sheer demonstrable popularity of a song can make it seem “good,” can make participation in it feel inevitable and joyful, a consubstantiation of the zeitgeist.
Occasionally I might follow up by asking Alexa-device owners if they worry about having a corporate device that is always listening to them and tracking their behavior embedded in their personal space, but I’ve come to realize that this question doesn’t really register. It probably just makes me come across as a conspiracy-theorist kook. The question also applies to phones, so I may as well be asking it of myself. I wonder if owning an Amazon Echo or similar device currently has a chiefly symbolic value: It signifies that you are not a conspiracist, that you quite obviously have nothing to hide, that you understand that you belong to that privileged group of people who can expect to experience surveillance as care rather than discipline or persecution. Ring doorbells are in the same category: While they are ostensibly about security, they are also a way to broadcast one’s allegiance to the police and the belief that one belongs to the group that they protect. Installing a Ring communicates one’s willingness to increase the cost of exclusion from the protected group; it’s a symbolic border wall for the neighborhood.
But if tech companies have their way, voice assistants will be on their way to feeling inevitable, such that not using one will be a more conspicuous marker than having one, and resistance to them will seem quirky and contrarian if not downright antisocial. I remember feeling this acutely in 2007, my last year without a cell phone, when it was becoming more and more of a burden to make plans with anyone because I had to insist they show up at a specific place at a specific time, a demand that seemed onerous and bizarre to people who had grown accustomed to adjusting plans in transit. I remember throwing rocks at a friend’s fifth-floor window, because his building didn’t have a buzzer system, but I didn’t have a phone to call up. Eventually it became untenable not to carry a phone; I felt I had no choice but to be perpetually trackable, on the grid.
In general, wireless technology and the internet, as directed by capitalism’s remorselessness, have produced ever more intrusive and inquisitive networks that are capable of gathering more data and imposing more control over the environments in which they operate. (The private surveillance service built out through a distributed network of license plate readers, which Vice reported on here, is a good example. Profit potential knits rogue opportunism and the affordances of networks into highly exploitive and expansive use cases.)
The internet of things, of which voice assistants are a part, aspires to network the entire physical world and make it responsive to centralized control, for the benefit of a few ultimate owners of the system. It is more or less modeled after social media — the internet of human things — which network effects have made compulsory as a digital identification and validation system, extending the mobile phone’s function of discreetly locating everyone at every moment. The social media platforms aspire to become a means for charting the “social graph” of all human interaction and extracting profit from it all, the ultimate capitalist implication of technology’s possibilities. As Jacqueline Rose writes in this review of Jia Tolentino’s Trick Mirror, “While the Internet was meant to allow you to reach out to any- and everyone without a hint of the cruel discriminations that blight our world, it turned into the opposite, a forum where individuals are less speaking to other people than preening and listening to themselves — turning themselves into desirable objects to be coveted by all. It became, that is, the perfect embodiment of consumer capitalism, where everything can be touted in the marketplace.”
Together, the totalizing internet-of-all-the-things systems aim to account for and ultimately predict all possible movements of people and objects, pointing toward a vast social immobility in the name of a comprehensive data set that can accommodate the demands of algorithmic control. Proponents of autonomous vehicles openly dream of eliminating all the unpredictable elements from the streets to make the system work — pedestrians, human drivers, anything that reflects the possibility of spontaneous human agency. But why stop there? Why not eliminate human agency and social mobility from all of social life? How else will we ever get a complete data set? How can you expect convenience without consenting to a fully administered life, without accepting your proper encoded place and allowing the sensors to ensure you are staying put?
As Karen Levy and Solon Barocas outline in this recent paper on what they call “privacy dependencies,” it has always been the case that “our privacy depends on the decisions and disclosures of other people.” And of course the same is true of our horizons for agency: the decisions of others can recalibrate what it is possible for us to do, and we can’t do anything about it. In the absence of scale-seeking technological systems meant to track and correlate our behavior with others, the extent of these “dependencies” was typically somewhat limited. One could find regions of refuge from social entanglements (i.e. “privacy loss”). It seemed reasonable for a judge to argue (in a decision Levy and Barocas cite) that being “betrayed by an informer … [is] inherent in the conditions of human society,” since the scope of these potential (and inescapable) betrayals seemed personally manageable, not a Hobbesian condition of endless interpersonal war of all against all. But Levy and Barocas note that “many legal scholars have opined that the doctrine makes less sense given the ubiquity of platform-mediated communication, in which we have virtually no choice but to pass information through Observers” — other people and the platforms themselves. Mobile technology and networks bring the war home. Alexa makes that literal.
The proliferation of smart devices extends the network of compulsion and betrayal. The absence of trust indicated by ever-more-fine-grained surveillance, implemented voluntarily by those already situated in a privileged position, is amplified by network effects to make betrayal not an unfortunate risk but the basis of social inclusion.
Yet paradoxically, the more minutely we are collectively surveilled, the more likely our life possibilities will be dictated not by our own circumstances but by the inferences derived from proxies, from “lookalike audiences” and the like that will have some trace of statistical validity to justify after the fact their otherwise arbitrary imposition. Levy and Barocas describe this as the capacity to stereotype individuals according to “non-socially-salient groups” — invisible stereotypes only machines can read but that people experience.
The scope and integration of various data feeds provide endless opportunities for this kind of stereotyping on the fly, and once categorizations are made they can be used for discrimination without legal recourse. But since humans can’t perceive the basis or means for implementing the discrimination, they are far less likely to protest it or develop solidarity with those similarly targeted. Levy and Barocas note that “people subject to adverse decisions have no socially salient criteria from which to make sense of and contest their treatment.” This is the ultimate point of commercial surveillance: to impose levels of profit-driven discrimination that consumers can’t detect and may even embrace as convenient, or as emblems of their unencumbered lifestyle. Every new household with a smart device in it increases the discrimination while it naturalizes the perception of convenience. These devices will listen carefully for our every whim and turn each one into a way to betray someone else.