Home

Force Fed

How recommendation algorithms tried to make me a Nazi

Until recently, I was heavily addicted to the multiplayer computer game Dota 2, a thriving e-sport with a massive community that produces an enormous amount of online content to interact with. On YouTube, there is a plethora of guides, matches, and memes, and I spent a good amount of my waking hours there. But for the past few months, along with the latest Dota memes, YouTube has been putting a variety of hyper-misogynist neo-Nazi movement content in my “recommended for you” column: Million Dollar Extreme, pickup artists’ videos, Milo Yiannopoulos, and more, despite my having only ever used YouTube for Dota videos and (nonfascist) music. A long, deep engagement with a particular branch of gaming has led YouTube to assume I’m an alt-right fuckboy, or at least might want to be.

What is happening here? There has been a lot of centrist handwringing about “filter bubbles,” “echo chambers,” and “epistemic closure” in the past decade, all arguing essentially that people — coddled millennials, especially — are unwilling to leave their informational comfort zones and safe spaces and engage with capital-T Truth. But when YouTube began trying to recruit me to the Nazi movement, it was not because I had entered an “echo chamber” of individuals who thought just like me. As a queer Jewish anarchist, I had been lumped into a filter bubble antithetical to my very existence, invited to participate in an ad hoc, algorithmically generated program of online radicalization.

To go by the U.S. mainstream media, online radicalization is the special province of Islamic State. The supposed social media prowess of ISIS and its ability to win recruits around the globe were among the most widely reported facts about the group. Business Insider hailed “Inside ISIS’s sophisticated strategy to brainwash”; Wired warned that “ISIS is winning the Social Media War.” Then-president Barack Obama outlined the basic argument: “The high-quality videos, the online magazines, the use of social media, terrorist Twitter accounts — it’s all designed to target today’s young people online, in cyberspace.” So nefarious, targeting young people in cyberspace with high-quality videos. Won’t somebody protect the young people!

The panic about online radicalization seemed to imply that radicalizing experiences were somehow rarer before the internet. But this is false: Radicalization occurs during moments of social upheaval. Media technologies may shape the way radicalization occurs — since the 2008 financial crisis especially, social media interconnection has given views from the fringes a quick route to the disoriented center — but it is the social transformation that produces radicals. Before the internet, people were sometimes radicalized by books, newspapers, and movies, but more often than not, just like today, they were changed by their own lives — by friends, work, family, school, encounters with the police or the state, or experiences passed down or witnessed in their communities.

When YouTube tried to Nazify me, it subjected me to a different kind of process. The internet does not produce radicals by its very nature, but its efficiency at circulating information and tracking users allows tech companies to try to consolidate audiences based on (potentially spurious) correlations in consumption data, positing political and social norms for disparate users across a spectrum of interest groups, as if their algorithms have some ultimate insight into our true desires. Users are molded into an image and a mode of online interaction that the companies find easiest to control and sell back to us. The new groupings these methods posit are not benign reflections of some objective reality of affiliation, but efforts to condition the future behavior of those lumped into them.

This process of algorithmic discipline seeks to turn interest in one thing — one particular product, Twitter account, or news story — into interest in (or purchase of) more quasi-related things. It is exemplified by Amazon’s “people who bought this also purchased,” Facebook’s algorithmically sorted news feed, and “recommended” Google advertising. More and more apps and services work to “predict” our desires in music, movies, products, meals, friends, and news stories. These predictions produce and reinforce the ideology that the algorithms actually recommend what we want — hence the proliferating news stories about how Facebook can predict breakups or anticipate where someone is likely to go on a given day. To achieve the increased engagement and consumption it aims for, algorithmic discipline takes the long-standing marketing tactic of demographic targeting to almost absurd extremes.
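The mechanics behind “people who bought this also purchased” can be caricatured in a few lines. The sketch below is purely illustrative — the item names, the toy purchase histories, and the scoring are all invented here, and real recommender systems are enormously more elaborate — but the basic move is the same: mine co-occurrences in consumption data and serve them back as “your” desires.

```python
from collections import Counter

# Toy purchase histories: each user reduced to a basket of items.
# All names and data here are invented for illustration.
baskets = [
    {"dota_guide", "gaming_chair", "energy_drink"},
    {"dota_guide", "energy_drink", "webcam"},
    {"dota_guide", "gaming_chair", "webcam"},
    {"cookbook", "apron"},
]

def also_bought(item, baskets, top_n=3):
    """Rank other items by how often they share a basket with `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(basket - {item})
    return co_counts.most_common(top_n)

# Anyone who bought the Dota guide is offered the rest of the gaming
# cluster; the cookbook, which never co-occurs, is never recommended.
print(also_bought("dota_guide", baskets))
```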

When marketing’s old-school one-size-fits-all approach was more overt — when it relied on showing more or less everyone the same ads, and so projected the same (white, skinny, straight) desirable bodies, the same (middle class, suburban, family-centered) desirable lives — it was part of a project of “mass culture” formation, of the violent unifying and homogenizing of subjects as they were initiated into consumer capitalism. It was also easier to dismiss as crude. But as demographic targeting capacities improved, so did the possibility of constituting new demographics altogether. Concepts like “Generation X,” “soccer moms,” and “males between the ages of 18 and 35” have gone from marketing conferences to broad acceptance as legible sociopolitical units.

With social media, demographic constitution has transformed into the deployment of individuated mainstreams, which are spontaneously tailored according to the available data and the context but are generic enough not to be experienced as isolating. This brings to its logical conclusion what communications theorist Oscar Gandy in 1993 called the “panoptic sort,” where computer technology empowers capital to increasingly classify individuals by their potential economic or political value.

And given the way the internet is organized, the tendency to render individuals as increasingly specific demographics bleeds through to all corners of life. Data is harvested from personal correspondence, public social identity, private browsing, and purchasing history indiscriminately and concatenated to provide a fluid experience of personalization on otherwise homogeneous platforms. On these platforms, ideologies are at the same level as products, no more or less integral to identity and equally available to be combined with any other product or opinion in order to query databases in search of correlations.

Algorithmic discipline does not reflect the desire to mainly interact with people who share some of your interests, experiences, or attitudes — the desire to have friends, as this terrifyingly narcissistic millennial affectation used to be known. “Echo chamber” social grouping is driven instead by advertising-oriented algorithms of tech companies and data brokers, and their need to relate their targeting to results. Algorithms produce algorithmically measurable identities — just as marketing produces consumers — and not the other way around.

Society is correspondingly reshaped in the image of the needs of the market. That’s why my interest in Dota led to YouTube trying to make me a Nazi: If I conformed to a subcultural norm, it would not only be easier to sell me shit, but it would be easier to sell and support a whole system that is premised on the validity of the correlations that suggested I might like to be a Nazi.

Once correlations are established and picked up algorithmically, the feedback loops inherent in the process begin to emphasize these connections’ “naturalness.” This makes algorithmic discipline particularly helpful for movements that work through the infiltration of “nonpolitical” spaces and that, like the far right, find it necessary to use Trojan horses for their ideology, hiding it behind “jokes,” trolling, irony, and insincerity. Today’s Nazis used spaces where young, angry, isolated middle-class white boys were already gathering — gaming communities, Reddit, 4chan, 8chan — to recruit, radicalizing aggrieved beta masculinity into openly white supremacist politics. The algorithms clocked the connection infiltrators fomented between gaming and far-right politics, and reinforced and reproduced it. This strategy exploits liberal tolerance while radicalizing the uninitiated.
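The feedback loop in that last step is worth spelling out. In the hypothetical sketch below — every number is invented, and no real platform’s parameters are represented — a recommender reads a co-occurrence score to decide how often to surface far-right content to gamers, then writes the resulting clicks back into that same score, so the system’s own output becomes fresh “evidence” that the correlation was natural all along:

```python
# A caricature of a recommendation feedback loop. All values are
# invented for illustration only.
score = 1.0        # initial, possibly spurious, gaming -> far-right link
baseline = 10.0    # combined weight of everything else on offer
impressions = 100  # hypothetical viewers served per round
click_rate = 0.3   # hypothetical fraction of impressions clicked

for step in range(10):
    # Probability of surfacing the far-right video grows with its score.
    p_recommend = score / (score + baseline)
    # Clicks on the system's own recommendations are logged as new
    # co-occurrence data, inflating the score for the next round.
    score += p_recommend * impressions * click_rate
    print(f"step {step}: p(recommend) = {p_recommend:.3f}")
```

Run it and p(recommend) climbs from under 0.1 toward 1.0 within a handful of rounds: the connection hardens not because it was true but because the system kept acting on it.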

YouTube becomes a fascist recruiter because the market benefits from increasingly “predictable” sub-identities. As marketing seems to become more and more targeted and “individualized,” it actually insists more and more strongly on conformity — not to a media-orchestrated liberal mainstream but to something with more and more machine-driven specificity. The end goal of this is, paradoxically, a world of perfect individuals, all differentiated from each other with measurable precision but within the exact same processes of individualization: an infinity of identically different subjects.

But some people will be more sorted than others. As of 2012, the top 40 percent of earners were responsible for more than 85 percent of total U.S. consumption. These are the Americans who are particularly relevant to advertisers. These predominantly wealthy, white, and male subjects face algorithmic sorting and surveillance primarily meant to stoke their desires.

The rest face a more vulgar, violent kind of sorting: that of race, gender, sexuality, or ability. As Simone Browne demonstrates in Dark Matters, the technique of producing a particular kind of Black subject — for example, the post-emancipation “mammy” domestic worker — was based on a combination of surveillance of her every moment of work and her deindividuation within a prescribed racial stereotype. This double-sided process echoed the way enslavers could offer highly detailed, personalized descriptions of the enslaved when necessary, as notices of escape amply demonstrate, while simultaneously treating them as depersonalized chattel. Plantations used some of the earliest known forms of employee surveillance in American history to render the enslaved as a homogeneous mass of the hyper-individualized. Surveillance as a mode of social sorting developed to racialize Black subjects and maintain their subordination and productivity within racial labor regimes. For the underclasses, algorithmic discipline adds another layer of domination that facilitates the operations of the police, immigration agents, and prison guards.


Traditionally, modes of social grouping have been grounded in geographic, temporal, and historical coincidence: people in the same town, workplace, family, or nation formed a unit of potential mutual interest. Through the internet, capitalism has managed to dramatically downgrade the primacy of such “accidental” identifications as geography. A seemingly voluntary identity, produced entirely by and around user interest, action, and pleasure, becomes possible, particularly for the privileged.

This was the dream of the early internet utopians: that “the web” was a form of real anarchy, a totally voluntary system of association, interest, and desire. But of course, the internet was created and has evolved to serve the needs of capital, not the people who use it. Technology companies have recognized the possibilities for social domination opened up by increasingly geographically dispersed workplaces and communities. And the algorithmic discipline they have developed has a corresponding geopolitical imaginary.

Unlike the liberals, who have proved utterly incapable of a coherent political vision moving forward, tech libertarians have recognized the imminent collapse of the nation-state and its nonsovereignty in the face of global capital. Seeing that London, Tokyo, and New York City have more in common with each other than with Birmingham, Osaka, or Albany, they envision the political return of much smaller sociopolitical units capable of serving as effective nodes in integrated global flows without all the hang-ups of nations, borders, or social services. The “neoreactionary” right wing of this group advocates the return of monarchy, while the Burning Man–types dream of seasteading city-states, or California splitting into six parts. But in all these visions, corporate sovereigns replace national ones. The internet economy is set up to deliver and manage such a world.

The tech-futurists are post-nationalists; theirs is a fundamentally different vision from that of the neo-fascists currently rising to power. The resurgent nationalism and ethno-fascism represented by the likes of Donald Trump are a counter-tendency that wants to reinvigorate the nation-state through virulent racism and hard borders. Despite their far-reaching hopes for ethnic cleansing, these neo-fascists lack a transformative economic vision. They may be able to plunder the wealth of the wrong types of people — queers, Black people, Muslims, immigrants, Jews — whom their program of intensified policing, both at borders and internally, would make vulnerable to further robbery, low-wage exploitation, or prison enslavement. Combined with total deregulation and the selling off of what’s left of the social democratic state in one last cash grab, this strategy could keep the system profitable and stable in the medium term. But fascist nationalism has no more ability than neoliberalism to actually solve the economic crises of capitalism or save the nation-state.

The most sinister of the algorithmic disciplinarians, like Peter Thiel, are making deals with the nationalist right, imagining that they can weld fascist power to their post-national vision. They may believe they can implement their fantasies about city-states, Mars colonization, and corporate sovereignty more easily after those likely to resist have been put in camps. For Thiel, fascism does the dirty work of breaking up the state and enacting repression on movements and classes, rendering them docile so they will not interfere with a new tech order in which entrepreneurial citizens live within the real or metaphoric walls of the new city-states while the forgotten surplus humanity rots outside them.

These city-states would seem to have their perfect sub-unit in the algorithmically identified and disciplined social grouplet. Both point toward a world in which people are identified entirely by “voluntary” economic and individualized association, a world not organized by identities of race, class, or nation. In the tech-libertarian view, class struggle need not be resolved but can simply be disrupted away, as ever more intricate forms of algorithmic discipline and communal algorithmic sorting produce a fundamentally new form of consent: not consent as it is understood in liberal political theory, as moderate democratic participation, but consent as epitomized by terms of service contracts imposed by tech providers. The ethnonationalists, however, think that the only way out of the current impasse is in the forceful reassertion of race, gender, and nation through an ocean of blood.

Although these two perspectives are seemingly incompatible and contradictory, they are only different sides of the same coin. They are the “right” and “left” wings of the new fascism. When Mark Zuckerberg positions himself for a political future by writing manifestos about “building a global community,” he demonstrates the contours of this new intra-fascist “left”/“right” distinction. Like many apparent contradictions, the two strategies can be, and indeed are, advanced simultaneously.


To hear it told, we have now officially entered evil times. But the majority of us have never known anything else. The collapse of liberal consensus has seen the rise of a misogynist white supremacist movement not ashamed of the drones, extrajudicial execution, and mass incarceration that liberals feebly apologized for or imagined away. But the collapse of liberalism’s ideological hold has also opened the field to better possibilities.

The complaints about echo chambers and internet radicalization have a kind of commonsense allure because they recognize (but misdiagnose) a real form of social violence. However, communities often accused of echo-chamber mentality — millennials, nonwhite people, queer, trans, and disabled people — are precisely those most visibly resisting algorithmic discipline’s insistence on group isolation and market-enforced self-aggrandizement. They are instead struggling with those hostile technologies to organize, reach out, politicize, and transform communities on their terms: They are building social movements.

Nonetheless, the specter of isolated online worlds has emerged as a worry even for some radicals. Have we become a mere subculture? Is “Antifa” or “revolutionary” becoming an identity rather than a practice? Are we divided from the communities we want to fight beside? Is algorithmic discipline prompting us to wallow in comfort zones and “safe spaces” rather than struggling to reach a broader audience?

Algorithmic discipline is creating newly sorted subjects identified purely by their “interests”: subjects perfect for an era of disappearing jobs and individualized economic and social activity taking place in high-tech urban cores, divided from ever more desperate surplus populations. We have to fight the “right” ethnonationalist fascism without succumbing to the strategies of the “left” fascism of algorithmic disciplining.

To resist this discipline, some have tried to become opaque, incoherent, or illegible to algorithms by unplugging from their networks and identifying instead with an oppositional communal practice, as we saw in Occupy and the global movement of the squares of 2011, or more recently in the re-emergence of the black bloc. Another means of resistance, exemplified by Black Lives Matter or the unity of indigenous nations at Standing Rock, can be pursued by organizing around non-algorithmic racial, classed, gendered identities to articulate material, ethical, and historical demands that neither algorithmic control nor ethnonationalist fascism is capable of suppressing. These two strategies are perhaps merged in what Simone Browne calls “dark sousveillance,” an oppositional mode of looking, thinking, and fighting that rejects state surveillance without giving up the terrains of visibility or legibility.

We need to take our strategies against algorithmic discipline and push them forward, continuing the work of forming active communities of self-definition and self-defense. The rise of neo-fascism means that the wolf has shed his sheep costume. Let’s sharpen our aim and finally put one between his eyes.

Vicky Osterweil is a writer, editor, and agitator based in Philadelphia. She is the co-host of the podcast Cerise and Vicky Rank the Movies, where they are ranking every movie ever made, and the author of In Defense of Looting.