In a famous image from the Tiananmen Square protests in 1989, one man stood in front of a column of tanks to protest an authoritarian government that had declared martial law in part to assert its control over public space. In 2018, in Arizona, another man stepped forward:
Charles Pinkham, 37, was standing in the street in front of a Waymo vehicle in Chandler one evening in August when he was approached by the police.
“Pinkham was heavily intoxicated, and his demeanor varied from calm to belligerent and agitated during my contact with him,” Officer Richard Rimbach wrote in his report. “He stated he was sick and tired of the Waymo vehicles driving in his neighborhood, and apparently thought the best idea to resolve this was to stand in front of these vehicles.”
His protest was not entirely in vain. The Waymo employee inside the van, Candice Dunson, opted against filing charges and told the police that the company preferred to stop routing vehicles to the area.
A company spokesperson “disputed claims that Waymo was trying to avoid bad publicity by opting against pursuing criminal charges,” but the company certainly seems to be conceding that protest is an understandable reaction to its activities. The New York Times article linked above recounts people forcing Waymo vehicles off the road, driving erratically to disrupt them, screaming at them, pelting them with rocks, and threatening the employees inside them with various weapons. The company and its state sponsors would prefer that these appear as isolated incidents of petulant spite perpetrated by “vandals” and “individual criminals” rather than as acts of political resistance met with state persecution. It seems that Waymo doesn’t try to hold protesters accountable because it isn’t interested in being held accountable itself for how it has imposed itself on a population, how its system works, or how that system’s benefits are distributed.
The situation is similar with companies like Bird, which unilaterally enter a town and litter the streets and other public spaces with personal scooters for paying customers while immiserating everyone else. With little recourse against this, people express their resistance by throwing the scooters in the ocean, hiding them, smashing them, setting them on fire, or otherwise disabling them.
In a perverse way, though, this is entirely in keeping with the scooter business model, which suggests that no individual scooter is worth that much — you can literally dump them out on the street like trash. What is valuable to Bird is the system for coordinating them. The scooter, like the driverless car, the delivery robot, or the proliferation of Airbnb key lockboxes chained to railings (also a target for attack, though here the state is on the side of local residents), is an unmistakable sign that an invasive species has found a foothold in your environment. The encroaching privatization it betokens will continue unabated despite these brazenly ad hoc colonization strategies, which leave their depredating traces in plain view on the streetscape.
When a company floods a space with its scooters, it is blanketing its network and its algorithms over the existing rules that once governed how that space was used and maybe even shared. Each scooter is a fence post in an act of enclosure, as is every Airbnb listing and Uber car (see Alex Rosenblat’s Uberland: How Algorithms Are Rewriting the Rules of Work, reviewed here). But it is not so much that a common is being privatized as that a whole map is being redrawn. A different understanding of space is being charted, of territory as a product of networked connections, conditional links, and spontaneous arenas for competition rather than a matter of geographic contiguity. Space is not a fixed array but is reconceived in terms of availability, with rights to it redistributed according to who can sell what when.
In other words, maps are displaced by markets. When a privately owned logistical system is able to impose itself on a territory, it changes the use and value of all the existing infrastructure across the board, and it reclassifies all the parties to it. Airbnb lays claim to the entire structure of rental properties; scooters lay claim to all the sidewalks at once. And their alien appraisals of what those spaces are worth and how they should be used become something that everyone must contend with, even if they had no intention of riding a scooter or running their home like a boardinghouse. These systems are not about individual convenience or entrepreneurship, and they are only indirectly, if at all, about “safety” — they are primarily about top-down control, containment, and exploitation. But they are able to rationalize themselves not only by getting some people to buy in at the expense of others but by representing themselves as inescapable.
“Connecting” online once referred to ways of communicating; now it is understood as a means of digital totalization, typically euphemized as objects becoming “smart.” Each data-collecting object requires a further smartening of more objects, so that the data collected can be made more useful and lucrative, can be properly contextualized within the operation of other objects. You can’t opt in or out of this kind of connectedness. Soon you won’t be able to buy a car that isn’t uploading all the information about where you go to who knows who, or a refrigerator that isn’t monitoring its contents and transmitting them to Amazon. But you can try to fit your life more harmoniously into this system: It becomes natural and not alarming to have a corporate listening device in your house to expedite your spending and monitor your behavior so that marketers can better classify you. People even nominally pay for the privilege.
The ubiquity of connected devices and rapacious business models invites the half-paranoid, half-utopic assumption that these various acts of algorithmic enclosure and unregulated surveillance have already coagulated into “the Algorithm” — the vernacular idea that our lives are increasingly governed by an inscrutable, overarching force that we can only partly interpret and indirectly affect. Framed that way, the Algorithm sounds a lot like astrology. Lauren Oyler, citing Adorno’s The Stars Down to Earth, writes in this Baffler essay that “astrology represents a desire for a benevolent ‘abstract authority’ that would create the illusion of freedom, which, if you listen to your horoscope, ‘consists of the individual’s taking upon himself voluntarily what is inevitable anyway.’”
This is what “the Algorithm,” as an amalgam of all the technological systems imposed on us, represents as well: a similar conflation of what is voluntary with what seems inescapable. We can consult “the Algorithm” through various stichomantic means (what is at the top of my feed right now?) as one would consult a star chart or a horoscope column, to experience a moment of distraction in which the various expert systems we are enmeshed in seem to resolve themselves into something concrete and personal, a prediction about ourselves that we can either invest with hope or laugh away. We can overinflate our estimation of the Algorithm’s capabilities so that we can then enjoy how often it seems wrong. It knows me, but not really.
Many are content to collaborate with “the Algorithm” on these terms, as though it really were astrology rather than a product of political economy — what Shoshana Zuboff calls The Age of Surveillance Capitalism — and as though its individualized promises of convenience need only be trusted to be true. Yes, the Algorithm is an opaque and all-powerful system that controls our lives, but in the end it just wants to help us understand what is possible and make the best of it. This conception, making it both friendlier and more omnipotent, may be easier to live with than seeing the systems as a “black-box society,” as Frank Pasquale theorized them. Instead of a matter of companies imposing surveillance with impunity to profit off our vulnerabilities, algorithmic control becomes a largely benevolent mystery, as obscure a destiny as our fate written in the stars.
Maybe this is why most people don’t bother to shake their fists at the sky, throw rocks at Waymo cars, or dump Bird scooters into the canal. Attacking these material manifestations of the Algorithm may seem like a direct way to try to hold it accountable, but such attacks do nothing to hurt it. It operates at suprahuman scale; it processes protest as merely more data. This makes it a god you don’t have to pray to or praise; it gathers up its tokens from you automatically. Still, you can elect to propitiate the Algorithm with what amount to passing acts of digitized superstition, liking posts as a way of knocking on wood or throwing salt over your shoulder just in case. It can be a form of Pascal’s wager: Aren’t you risking more by not believing in the Algorithm? Better make sure you share that Fitbit data with everyone, to help medicine advance; better share that browsing and location data “to improve your experience.”
Usually, our tolerance for the Algorithm is represented as a simple trade-off, as though it were a calculated choice. “In exchange for surveillance,” Jennifer Szalai writes in a review of Zuboff’s book, “we get convenience, efficiency and social connection.” This allows us to accept a business model that Zuboff, in Szalai’s words, regards as “too radical to be taken for granted.”
In his review of the book for LARB, Nicholas Carr pushes a little further, framing submission to surveillance capitalism less as a universal trade-off than as a matter of opposing states of mind. Not everyone takes it for granted, but “many people, it seems clear, experience surveillance capitalism less as a prison, where their agency is restricted in a noxious way, than as an all-inclusive resort, where their agency is restricted in a pleasing way.”
But it seems likely that there aren’t two kinds of people — those who see surveillance capitalism as a prison and those who see it as a resort — but that many of us experience it as both, not alternately but simultaneously. The Algorithm serves as a kind of metaphor for society itself, which conditions and limits our desires even as it makes them meaningful, desirable in the first place. The prison-resort doesn’t restrict agency so much as become the particular condition that makes agency possible in our culture at this moment. That doesn’t mean it can’t be resisted or must be taken for granted, but the degree to which it appears natural and inevitable allows it to function as ideology, as an identity-granting structure that shapes subjectivity. How we see ourselves and what we can do in the world does not precede the circumstances of how our behavior is registered socially and how we receive feedback about it; it follows from them.
At the very least, the idea that we’re under constant surveillance, within a sensorium that can be modulated in anticipation of our next moves, blurs distinctions between interior desires and external predictions and manipulations. Also blurred is the distinction between the individuated self and the self as part of a larger mass, co-created and contingent on who one associates with or who fits one’s statistical demographic. Writing in Art in America about a Xu Bing film made entirely from edited surveillance camera footage, Ava Kofman argues that it constitutes “a new form of narrative that captures the contradictions of surveillance technology, which at once pinpoints personal identities and submerges them in a mass of other averaged data points.” The new kind of narrative reflects a different kind of subjectivity, a life that can be experienced and produced from many different points of view simultaneously, through many different avatars, or through other people vicariously. It points to how new forms of surveillance “not only make us vulnerable to punishment but also produce new pleasures, identities, and desires.” This reminds me of Oyler’s description of “astrology lovers,” who “today feel less need to mask their self-obsession; they relish it, brag about it on social media, even claim it as political.”
It figures that the stories of our lives would be told through surveillance footage, or by “showing the receipts” in various ways. In a conversation at Edge.org about “how technology changes our concept of the self,” science historian Peter Galison points to Norbert Wiener’s development of cybernetic antiaircraft targeting systems as key: “It taught an incredibly new lesson to people, which is that even in the absence of any concrete understanding of the interior life of, say, bomber pilots, just by their exterior actions, you could anticipate what they would do in the future and, in this case, send a projectile up to shoot the plane down.” This rationalized building a behavioristic view of the self into other forms of technology: What matters is only the data that can be collected about a subject; what they intend or say they want is irrelevant noise. And this technology reshapes our “concept of the self” as something that follows from data rather than something that intentionally produces data.
The rise of cybernetics points to a post-intentional self, or what I talked about here as a “postauthentic” self. It doesn’t matter if your actions match your intentions, since your intentions don’t matter and are disregarded in advance, and in many ways those intentions merely follow from what you and others demographically or statistically similar to you have been systematically permitted to do anyway. (I think you can track how far along the postauthentic self is by how urgently authenticity is preached and claimed. Paeans to authenticity are tombstones.)
Resistance to this kind of self through the technological means that enable it — whether that is social media practices or hyperpersonalized commerce or “smart”-tech convenience — is another expression of the contradictions of surveillant subjectivity. The tension in this form of subjectivity may be pushing us into what Adorno calls pseudo-rationality, the “twilight zone between reason and unconscious urges.” Adorno saw pseudo-rationality as both cause and effect of authoritarian forms of control. One could interpret its rise today in similar terms, a manifestation of the rise of tech authoritarianism that Zeynep Tufekci describes here and here, and Fred Turner describes here. Tufekci focuses on state surveillance, the dissemination of misinformation, polarization and microtargeting, and “persuasion architecture.” Turner points to the way interactivity was deployed as social control. Communications technology, he argues, promised to reverse the mass conformity that supposedly produced fascism: “If the mass-media era had brought us Hitler and Stalin, they believed, the internet would bring us back our individuality. Finally, we could do away with hierarchy, bureaucracy, and totalitarianism. Finally, we could just be ourselves, together.” But instead it revealed how individualism and self-expression — transformed into predictive personalization, or packaged as alt-right redpilling — could be just as useful to fascists.
It’s easy to see pseudo-rationality as something that other people are guilty of, to use the concept to assist in private projects of disavowal. The more I spot other people’s pseudo-rationality, the more rational I seem to myself. Of course, that is precisely pseudo-rationality in operation. But I think the concept speaks to the doubleness we live with, and how current media technologies seem to divide us against ourselves in our desire to be free but connected, actively engaged yet passively entertained. They reveal facets of ourselves that must coexist but can’t. So it is that pseudo-rationality comprises not only belief in astrology or the Algorithm but also the impulse to run a Waymo car off the road or attack people in comment threads or go on “digital fasts.” Pseudo-rationality is both actively searching for what you want to consume and letting algorithmic feeds take over. Most of all, it’s not knowing when one has become the other.