It’s not hard to imagine Adorno detesting something like the “like” button, but it was still interesting to discover that he literally argued against one during his years doing empirical sociology in the U.S. in the late 1930s and ’40s. In “Homophily: The Urban History of an Algorithm” by Laura Kurgan, Dare Brawley, Brian House, Jia Zhang, and Wendy Hui Kyong Chun, the authors take note of Paul Lazarsfeld’s Radio Research Project, which investigated how mass media affected listeners. “Among the methods of data collection pioneered by the Radio Research Project,” they write, “were ‘like’ and ‘dislike’ buttons that enabled the tabulation of momentary emotional responses from the members of a focus group.” Adorno was an assistant on this project, and according to the authors he rejected the idea that “culture could be reduced to such a binary and worried about the implicit suggestion of the research itself: that media could be crafted to have specific effects on groups of people.”
Of course, that possibility is now largely taken for granted; many social-media business models are fully premised on the idea that you can target media at specific people to achieve desired effects. What Adorno appears to have been concerned about, and what the homophily essay overall claims, is that research design can foster the very conditions that make true what it supposedly sets out only to investigate. Homophily — the idea that people are only comfortable with people who are like them — is not, the authors argue, a naturally occurring phenomenon but was induced by poor research design in sociological studies and then replicated through urban planning that presumed it. It seems that homophily was easier for mid-20th-century researchers to accept as an “explanation” for racism than colonialist expediency and economic exploitation.
Social media platforms that presume homophily — that we only want what we have already liked; that we only want to be what we already are; that we too will like what people who are demographically similar to us already like — have similar effects. Using those platforms inculcates homophily as a principle of “real identity.” They train us to orient ourselves to the world with the understanding that our actions will be taken as endorsements and will be amplified by the systems in which we are embedded, and that “who we are” and who we can know should depend on how we navigate these algorithms.
This essay by NM Mashurov frames the problem as a question of “algorithmic literacy”: learning how to guide our interactions with algorithms so that we understand the identity they are reifying for us. Taina Bucher, in If … Then: Algorithmic Power and Politics, characterizes it as “programmed sociality”: the way platforms try to engineer social relationships to maximize their value for the companies administering them. Much of her focus is on Facebook’s News Feed, whose “algorithms decide which stories should show up on users’ news feeds, but also, crucially, which friends.” As Bucher points out, feeds are great at normalizing engagement; what users see on them first and foremost is how people use Facebook to do things. “By creating the impression that everybody participates,” she writes, “Facebook simultaneously suggests that participation is the norm.”
The purpose of feeds sorted by opaque algorithms is to grant them disciplinary power. They allow the platform to teach users what they should experience as “most important” or “most interesting” or “most valuable to you.” The training operates on both sides of that content-consumer relation, with the sorting algorithms working to establish a particular idea of what makes for value and what makes a self. If social media are moving away from feeds, it’s likely because they are instituting that same sorting and training at some other level of the user’s experience. Being shown what you are “supposed” to see is central to what social media offer (the promise of self-expression is mainly an alibi for that larger surrender to algorithmic recommendation); they allow us to consume that passivity toward our own wants as a pleasure in itself.
That extends, as Bucher emphasizes, to friendship. Facebook doesn’t just target you with content, products, and ads but with people: which friends you are supposed to care about. “The programmed sociality apparent in Facebook is about more than making people remember friends from their past with the help of friend-finding algorithms,” she argues. It is also producing connection that is dependent on the platform and shaped according to its needs. “Some relations are more ‘promising’ and ‘worthwhile’ than others …,” she notes. “Friendships are continuously monitored for affinity and activity, which are important measures for rendering relations visible.”
The relations that Facebook makes visible will be chosen to benefit Facebook first and foremost. Users are shown the content that makes them more valuable to Facebook, and not necessarily the content they prefer. In fact, the content is explicitly meant to change users’ sense of their own preferences, and moreover, the underlying logic of why they would prefer one thing to another. “From a computational perspective,” Bucher argues, “friendships are nothing more than an equation geared toward maximizing engagement with the platform.” That is, for Facebook, friendship is a tool to get users to generate more “edges” — more relational data that the company can use to market its targeting ability. And the value of “targeting” itself increases the more Facebook can train users to accept homophily as the “truth” about what they really want and who they really are.
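To make Bucher’s point about relations being “continuously monitored for affinity and activity” a little more concrete, here is a toy sketch of how a feed might score a friendship. Everything in it (the Relation structure, the field names, the weights) is invented for illustration; it is not Facebook’s actual ranking code, only the general shape of reducing a relationship to an engagement-maximizing equation.

```python
from dataclasses import dataclass

@dataclass
class Relation:
    """A hypothetical record of one user-to-friend relation."""
    friend_id: str
    affinity: float              # inferred closeness: profile visits, messages, tags (0 to 1)
    recent_interactions: int     # likes and comments exchanged in some recent window
    predicted_engagement: float  # model's guess that surfacing this friend keeps the user on the platform (0 to 1)

def visibility_score(r: Relation) -> float:
    # Invented weights: the relation is "worth" the engagement it is expected
    # to generate for the platform, not what it means to either person.
    return (0.5 * r.predicted_engagement
            + 0.3 * r.affinity
            + 0.2 * min(r.recent_interactions / 10, 1.0))

def friends_to_surface(relations: list[Relation], k: int = 5) -> list[str]:
    # Only the top-scoring relations get "rendered visible" in the feed;
    # the rest simply do not appear.
    ranked = sorted(relations, key=visibility_score, reverse=True)
    return [r.friend_id for r in ranked[:k]]
```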
The goal of the platform’s interface design is to render this as convenient, pleasurable, a kind of gift. Facebook tells you who you should pay attention to and be friends with, absolving you of the responsibility of choosing where to assign your attention and of the conflicts that inevitably arise from that choice. This mirrors the rationalization that homophily has provided for “white flight” and internalized racism. Facebook, like suburbia, makes it seem natural that one would associate only with certain types of people: the ones who are just like you, the ones who make your marketing demographic most explicit and concrete, who confirm that you can be targeted through proxies. The “social graph” is not a map of your connections but an engine for reshaping them into legible and easily predictable configurations.
Bucher stresses that this kind of “panoptic diagram” works not only by making everyone’s behavior visible to some centralized authorities but by making us intermittently invisible to our peers.
While … each individual is subjected to the same level of possible inspection, the news feed does not treat individuals equally. There is no perceivable centralized inspector who monitors and casts everybody under the same permanent gaze. In Facebook, there is not so much a ‘threat of visibility’ as there is a ‘threat of invisibility’ that seems to govern the actions of its subjects. The problem is not the possibility of constantly being observed but the possibility of constantly disappearing, of not being considered important enough. In order to appear, to become visible, one needs to follow a certain platform logic embedded in the architecture of Facebook. [The idea is also laid out in Bucher’s 2012 paper on the “threat of invisibility.”]
So algorithms provide a kind of pleasurable passivity in dictating what you see, enforced by the threat of social exclusion if you refuse to enjoy it, or fail to adapt yourself to what the algorithms need to work — if you don’t make as much of your life as possible available to platforms to transform into data.
This data is seemingly used to personalize what platforms offer you, but the arrow also points the other way: individuals are rendered commensurate as processable data structures. “Just as with the specific machines (i.e., military, prisons, hospitals) described by Foucault, it is not the actual individual that counts in Facebook,” Bucher notes. “This is why spaces are designed in such a way as to make individuals interchangeable. The generic template structure of Facebook’s user profiles provide not so much a space for specific individuals but a space that makes the structured organization of individuals’ data easier and more manageable. The system, then, does not particularly care for the individual user as much as it thrives on the decomposition and recomposition of the data that users provide.”
This is perhaps most explicit when platforms begin to use predictive analytics to score individuals. The point of such scoring is to make people numerically rankable, to make their relative value within various economic functions explicit: Which people are worth showing an ad to, or offering a deal to, or a job? As part of that scoring, platforms will ascribe identity to users, regardless of how users may choose to identify themselves. Bucher frames the issue with a question: “What happens when the world algorithms create is not in sync … with how people experience themselves in the present?”
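A minimal sketch of what such scoring can look like, under the assumption of a handful of invented attributes and an arbitrary weighting; real systems draw on thousands of signals, but the logic of making people rankable against an economic function is the same.

```python
# Hypothetical: rank users for an ad, a discount, or a job listing.
# Each "individual" is just an interchangeable row of recomposable data points.
users = [
    {"id": "a", "est_income": 0.7, "purchase_propensity": 0.4, "churn_risk": 0.2},
    {"id": "b", "est_income": 0.3, "purchase_propensity": 0.9, "churn_risk": 0.6},
    {"id": "c", "est_income": 0.5, "purchase_propensity": 0.5, "churn_risk": 0.1},
]

def worth_targeting(user: dict) -> float:
    # An arbitrary, invented formula: the score expresses value to the advertiser,
    # not anything the user has said about themselves.
    return (0.6 * user["purchase_propensity"]
            + 0.3 * user["est_income"]
            - 0.1 * user["churn_risk"])

# Only the top of the ranking gets the ad, the deal, the job.
for user in sorted(users, key=worth_targeting, reverse=True):
    print(user["id"], round(worth_targeting(user), 2))
```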
As part of her effort to investigate people’s algorithmic literacy, Bucher interviews some ordinary users, a few of whom report some of the discontinuities inherent in how algorithms render our identity. “What seems to bother Robin is not so much the fact that the apparatus is unable to categorize her but the apparent heteronormative assumptions that seem to be reflected in the ways in which these systems work. As a person in transition, her queer subject position is reflected in the ways in which the profiling machines do not demarcate a clear, obvious space for her. A similar issue came up in my interview with Hayden, who is also transgender and uses social media primarily to read transition blogs on Tumblr and Instagram.” Tumblr, in particular, had been, as this paper details, regarded as a “trans technology”: It “supported trans experiences by enabling users to change over time within a network of similar others, separate from their network of existing connections, and to embody (in a digital space) identities that would eventually become material.” This is in addition to being a “queer technology” that permitted “multiplicity, fluidity, and ambiguity” with respect to identity expression.
Such conceptions of particular platforms echo the utopian idea about the early internet as a place where we could free ourselves from assigned identities and fashion ourselves anew. But that turned out to be bad for business. Since then, the internet has been transformed into a totalizing digital system that can track everything we do and assign our identity to us, regardless of who we think we are, automatically erasing any boundaries we might have set up between different identities in place or time. This shift has been rationalized by the logic of homophily, by the idea that we demand to be consistent with ourselves and we’ll tolerate any kind of external intrusion that works to guarantee that a specific “comfort zone” will be maintained around us, protecting us in advance from our own potential multiplicity.
I used to try to be optimistic on this front. In a series of tweets from 2013, for instance, I wondered if algorithmic identity would lead to “postauthenticity.” I was likely thinking about Baudrillard’s argument about the “silence of the masses” and the kind of hyper-conformity that nullifies itself. I thought algorithms would be felt as an alternative to the work of self-branding, that they pointed away from it, when really they were automating it and making it more hegemonic and decisive. Now I think that algorithmic identity makes the demand for “authenticity” more intense and subject to external verification, even as it becomes more incoherent and implausible. Algorithmic identity appears less as a novelty and more as a surveillance enforcement mechanism, a kind of “social credit” scheme. Data science poses the problem of authenticity in what seems like an algorithmically solvable way; users in algorithmic systems get to feel like their “authentic behavior” has more value even as they are further estranged from it or further forced to confront its structural impossibility.
Some of Bucher’s interview subjects seem to be grappling with this tension. Hayden, Bucher reports, “is not particularly impressed by the way in which algorithms seem to only put ‘two and two together’ when ‘the way things work in society can be quite the opposite.’ If this is how algorithms work by simplifying human thoughts and identity, Hayden thinks social media platforms are moving in the wrong direction. As he tellingly suggests, ‘people aren’t a math problem.’”
Yet this is precisely how algorithmic systems understand people, and from within the platforms those systems administer, they can’t be “wrong.” Any mismatch between what you think of yourself and how the algorithm interprets you serves as proof that the platform is working: it is modulating your subject position. It is capable of training you. It is making you who you need to be in order for algorithmic systems to “work” for their owners.
John Cheney-Lippold’s account of algorithmic identity in this paper makes clear how we are little more than “math problems” to social-media platforms, detailing how algorithmic systems render us according to the probabilities they calculate about whether we fit into certain established categorizations. These systems don’t simply make an unequivocal decision about what, say, gender or race we are presumed to be, and they don’t necessarily take our word for it. Instead they calculate the likelihood based on patterns in and inferences about data sets deemed large enough to serve as a workable simulacrum of the world. So such a system doesn’t understand me as “white” and “male” but as some percentage likelihood that I am white and some other percentage likelihood that I am male. What makes for “maleness,” too, may change from moment to moment, depending on how the underlying data changes and how the algorithms sorting it adjust themselves.
This approach can be extrapolated indefinitely to a limitless number of categories: There is some percentage likelihood that I am of a certain religion, in a particular income bracket, in a relationship, have children or pets or certain diseases, and so on. And new categories can be generated ad hoc for specific purposes, as with the procedures for refining audiences for ad targeting, e.g. “white male Star Wars fans with no college and low income who live in rural districts,” etc. These are math problems in set theory, with no preordained answer; it’s more new math than arithmetic.
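A minimal sketch of identity-as-probability in this spirit, with every category, number, and threshold invented for illustration (real profiling systems hold thousands of such inferred attributes and recompute them continuously):

```python
# Hypothetical profile: the system never "knows" a gender or an income bracket;
# it holds a probability for each category, recalculated as the underlying data shifts.
profile = {
    "gender:male": 0.82,
    "gender:female": 0.11,
    "gender:unknown": 0.07,
    "income:low": 0.35,
    "income:mid": 0.48,
    "income:high": 0.17,
    "interest:star_wars": 0.91,
}

def resolve(profile: dict, attribute: str) -> str:
    # On demand, the fluid distribution collapses into a single fixed, reportable
    # label, whatever the person might say about themselves.
    candidates = {k: v for k, v in profile.items() if k.startswith(attribute + ":")}
    return max(candidates, key=candidates.get)

# An ad hoc audience segment, composed from thresholds rather than self-description.
in_segment = (
    resolve(profile, "gender") == "gender:male"
    and profile["interest:star_wars"] > 0.8
    and profile["income:low"] > 0.3
)
print(resolve(profile, "gender"), in_segment)
```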
In principle, regarding identity as probability, as a throw of the dice to be cast on whatever occasion it becomes relevant, would seem to make it more fluid and malleable. But the fluidity is conditional on overriding whatever claims we might make about ourselves. One can be operationalized as fluid, but one can’t claim fluidity as an identity — whenever the system needs to calculate who you are, it can produce a clear and fixed result. It also means that everyone is gendered and racialized (and so on) according to inaccessible definitions and categories. As Cheney-Lippold argues, “How a variable like X comes to be defined, then, is not the result of objective fact but is rather a technologically-mediated and culturally-situated consequence of statistics and computer science.” What is “male” depends on how the algorithm is trained to identify “maleness.” Regardless of how flexible or malleable those definitions prove to be, they still reify the significance of those particular categories and the historical legacies they bear, the exclusions and biases associated with them. They are made into Platonic essences, as Dan McQuillan argues in this paper:
Data science strongly echoes the neoplatonism that informed the early science of Copernicus and Galileo. It appears to reveal a hidden mathematical order in the world that is superior to our direct experience. The new symmetry of these orderings is more compelling than the actual results. Data science does not only make possible a new way of knowing but acts directly on it; by converting predictions to pre-emptions, it becomes a machinic metaphysics. The people enrolled in this apparatus risk an abstraction of accountability and the production of ‘thoughtlessness’.
Data science tries to fix categorical identifications on people as conditions that determine what sort of behavior can be expected from them, situating them and their life possibilities within statistical horizons that have nothing to do with them as individuals. Those horizons then shape users’ interactions with any environment such algorithmic systems touch. Algorithms can implement social control by holding the secret truth of a categorization. We will have to continually interact with that system, give it more information, submit to its tests. Our data is reprocessed from moment to moment, positing a different self for us to inhabit and imposing a different set of culturally inflected prejudices on us.
In Flann O’Brien’s novel The Third Policeman, a police sergeant tells the narrator of his “Atomic Theory,” by which people exchange atoms with the things they use, thereby changing their nature. This means that the more a person, for instance, rides a bicycle, the more their atoms are blended with the bicycle’s atoms, such that they become, to a certain degree, a bicycle. “People who spent most of their natural lives riding iron bicycles over the rocky roadsteads of this parish,” he explains, “get their personalities mixed up with the personalities of their bicycle as a result of the interchanging of the atoms of each of them and you would be surprised at the number of people in these parts who nearly are half people and half bicycles.” He goes so far as to cite specific percentages for certain individuals, who have become, for example, 48 percent bicycle. He makes it his mission to steal bicycles and hide them, in order “to regulate the people of this parish.”
The narrator responds to all this by eventually saying, “I do not think I will ever ride a bicycle.” The sergeant stops short of agreeing. “It is not easy to know what is the best way to move yourself from one place to another.”