
Lawful Neutral

Liberalism and AI share the political project of eliminating human difference


Alan Turing is often regarded as one of the pioneers of artificial intelligence research, but two of his best-known papers — “On Computable Numbers” (1936) and “Computing Machinery and Intelligence” (1950) — implicitly argue against the possibility of “strong” AI: the ability of a machine to approximate general human intelligence. Both of Turing’s papers were fundamentally concerned with the question of procedure: the operationalization of rules. For Turing, a machine’s operation is always completely described in a set of rules, whereas no such rule set could be devised to govern human beings. “It is not possible,” he writes, “to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances.” Machines may have a limited ability to adapt their rule sets (as with contemporary machine learning algorithms), but human intelligence is not a matter of formal procedures at all; rather it is a matter of intuitive, content-rich creativity. Accordingly, Turing essentially rejected the possibility that human life could be reduced to predictable calculation.

If AI can’t replicate human intelligence, it nevertheless models the sort of intelligence needed to make liberalism coherent

However, the determinism that Turing rejected as a model of human intelligence has long occupied an important position within liberal political thought. Liberalism presumes that a clear, neutral set of rules can produce a predictable social order and can be applied universally, independent of any given context, social structure, or power dynamic. If AI can’t necessarily replicate human intelligence, it nevertheless precisely models the sort of intelligence needed to make liberalism coherent.

Despite Turing’s position, artificial intelligence research has become deeply embedded in the neoliberal scientific, economic, and political project that has sought to remove all that is irrational, unpredictable, and risky from human behavior. From “good old-fashioned AI” (which sought to reproduce human intelligence purely through the manipulation of symbols) to today’s deep-learning neural-network systems, the algorithmic control of human behavior has been deepened and expanded.

With the advance of new technologies, faster and more efficient algorithms are being used to order social, economic, and political processes to reduce “risk” (including the risk of social unrest) and secure the outcomes necessary for increased profits and capitalist expansion. The recent scandal regarding grading algorithms in the U.K. is paradigmatic: Their supposedly neutral determinations turned out to reinforce the pre-existing class structure, limiting educational opportunities and life chances.

In seeking to reduce human life to a predictable order, liberalism conforms to the AI model, leading to the idea that we are more like atomistic machines than beings with a situated, creative intelligence. So what may have begun as artificial intelligence’s attempt to emulate human intelligence has become a political project that attempts to reduce human intelligence to the programmability of the machine. Liberal procedure is closed — fixed in institutions, laws, regulations, and rigid bureaucratic processes — and finds its perfection in the static form of the algorithm.


In most programming paradigms, rules (or code) and content (or data) are treated as two distinct things, at least conceptually. For example, in The Structure and Interpretation of Computer Programs, for many years a foundational computer science text at MIT, the authors write that “in programming, we deal with two kinds of elements: procedures and data. (Later we will discover that they are really not so distinct.)” While some programming languages, like John McCarthy’s LISP, exhibit “homoiconicity” — the representation of code and data using the same symbols and syntax — contemporary machine learning systems maintain a strict distinction between code and data, even as they modify their own processes in response to particular patterns in the data they operate on. For example, the code of a neural network engaged in facial recognition or data mining is not at all the same as the data used to train it. As the neural network runs over the data, the data triggers changes in the synaptic connections within the code, but the code and the data remain strictly distinct things.
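
To make the distinction concrete, consider a minimal sketch in Python (the perceptron-style update, the variable names, and the numbers are illustrative assumptions, not drawn from this essay): the training loop is code, the weights and examples are data, and however much training changes the weights, it never rewrites the loop.

```python
# Illustrative sketch only (not from the essay): a toy perceptron-style update.
# The training procedure below is fixed code; the weights and examples are data.
# Learning adjusts the numbers, but it never rewrites its own rules.

def train(weights, examples, lr=0.1):
    """Run one pass over the examples; the procedure itself never changes."""
    for features, label in examples:
        prediction = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
        error = label - prediction
        # Only the data (the weights) changes; the code stays exactly as written.
        weights = [w + lr * error * x for w, x in zip(weights, features)]
    return weights

weights = [0.0, 0.0]                       # data: model parameters
examples = [([1, 0], 1), ([0, 1], 0)]      # data: hypothetical training examples
print(train(weights, examples))            # [0.1, 0.0]
```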

Turing’s insight was that in order to even begin to imitate human intelligence, machines would have to overcome this distinction, precisely because the distinction does not exist within human intelligence itself. For a procedure to truly learn, it would need to be able to change its own rules while it is running, not merely reweight probabilities within the same established process. It would have to treat its own code as data, collapsing the distinction between code as a formal, content-independent system and data as the bearer of semantic content. Turing argues, however, that this ability can only ever be partly achieved by a machine, precisely because a machine must have its behavior completely described by a set of rules.
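
By way of contrast, here is a crude, hypothetical sketch of what collapsing that distinction might look like in practice; the rule names and helper functions below are invented purely for illustration and appear nowhere in the essay. The system stores its rules as ordinary data and replaces them while it runs:

```python
# Hypothetical sketch of "code as data": rules are stored as ordinary data
# and can be replaced while the program runs, rather than merely reweighted.

rules = {"greet": lambda name: f"Hello, {name}"}   # a rule held as data

def run(rule_name, *args):
    """Apply a named rule to its arguments."""
    return rules[rule_name](*args)

def revise(rule_name, source):
    """Install a new rule delivered as data (a string of code)."""
    rules[rule_name] = eval(source)

print(run("greet", "Ada"))                            # Hello, Ada
revise("greet", "lambda name: f'Good day, {name}'")   # the rule itself is rewritten
print(run("greet", "Ada"))                            # Good day, Ada
```

Even this sketch only shifts the boundary, of course: the functions run and revise are themselves fixed code, which is precisely Turing’s point about the limits of the machine.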

AI research may seem geared toward developing machines flexible enough to overcome this. But algorithmic forms of control don’t require machines to change; they require humans to. Turing argues that to make machines behave like people, the code/data distinction must be overcome. Liberal politics seeks the opposite. To treat human beings as parts of a machine, it raises the code/data distinction to a fundamental value: Liberalism must make code (or rules, laws, or procedures) everything, while making the singular content of human lives nothing. It must reduce human life to mere data points to be used for the application of procedures.

The distinction between procedure and content — the rule of law and lived experience — is shared by both contemporary technology and liberal political thought, which emerged together from changes in the material relations of production after World War II: The development of computerization, the expansion and refinement of worker time-and-motion studies, the roboticization of factories, and the expansion of individualized consumer demand have led to the algorithmic financialization, globalization, and automation of neoliberal capitalism.

The political focus on individualism and the breakdown of social bonds (for example, in Thatcher’s “there’s no such thing as society”) was an integral part of the increased mechanization and automation of the neoliberal turn. But this return to pure individualism did not develop in a vacuum. The war and its immediate aftermath demanded personal sacrifice, but as consumption and standards of living improved over the 1950s and ’60s, people began to demand more from postwar society than simply reconstruction and growth. The rise of the civil rights movement and feminism were part of this demand, as were the worker-student revolts of 1968.

As factories have become increasingly automated with robotic assembly lines, so too has daily life become more subject to algorithmic structuring

However, as David Harvey argues in his Brief History of Neoliberalism (2005), neoliberalism took these late-’60s demands for equal rights and individual freedom and made them the cornerstone of a new mutation of capitalism, coopting progressive demands for liberation to increase profits and reduce human life to computerized procedure. It did this partly through the expansion of individualized consumerism, leading to what Tom Wolfe called “the me decade” of the 1970s, and partly through what Karl Marx called “subsumption,” the restructuring of daily life according to the logic of the factory. As factories have become increasingly automated with robotic assembly lines, so too has daily life become more and more subject to algorithmic structuring. By as early as the 1980s, capitalism had entered the world of AI expert systems such as the Caduceus medical diagnostic system (introduced in 1984) or the “Authorizer’s Assistant” used by American Express (1988).

Many left-wing critics — especially the autonomist feminists and Marxists like Silvia Federici, Leopoldina Fortunati, Mario Tronti, and Antonio Negri — saw this expansion of technological capitalism into daily life as serving a social as well as an economic need. The integration of its logics of automation, command, and control into daily life created the subjects necessary for the neoliberal period: subjects who operated according to the rules of the machinery they engaged with on a daily basis and whose behavior thus became stable and predictable. As the roboticization of the assembly lines had already shown, predictable behavior was easy to automate. With the extension of automation, algorithms would now do the same for life beyond the factory. The cost of this predictability is the flattening out of “unstable” difference and the reduction of human behavior to its lowest common denominator: a procedural or algorithmic “equality” that could produce profit while seeming to defuse crisis.

These efforts are doomed to fail, as the attempts to automate content moderation show. Research by Sarah T. Roberts details how software platforms rely on the intervention of armies of (precarious, emotionally exploited) content moderators, generally in the capitalist periphery, whose human judgment and flexibility are still needed to counter the often horrific and all-too-human cruelties shared on social media. But Silicon Valley and liberal politics continue their search for a way to automate away everything — for good or ill — that makes us human. They use their failed attempts to algorithmically administrate the social world as a reason to demand more power and more data, and to impose more rules that stigmatize difference and exacerbate the cruelty.

This regime of machinic subjection has received philosophical justification from liberal political theory. In his 1978 paper “Liberalism,” Ronald Dworkin argues that a government can treat its citizens as equals in one of two ways: (1) it can remain neutral with respect to any particular conception of the good life, uniformly applying objective laws and policies or (2) it can acknowledge that equality is meaningful only in the context of “a theory of what human beings ought to be.” These two opposing perspectives — a procedural view of equality without normative content (i.e. without ethical commitments or values, without a conception of what a good life should be like) vs. a view that recognizes the irreducibility of context, including values and social bonds — lie at the heart of many contemporary social and political controversies about social justice.

For Dworkin, the first theory — proceduralism — leads to what are commonly understood by political philosophers as liberal values and political institutions: a state that is neutral with respect to any particular conception of the good and that prioritizes the content-neutral application of procedures to “guarantee” fair and equal distribution of material goods and rights. John Rawls’s Theory of Justice (1971) similarly argues that a society could agree on what is just and fair if everyone’s personal values, experiences, emotions, and commitments were bracketed off and decisions relied solely on a detached, unencumbered, transcendental view of social life — what he called “the veil of ignorance.” An abstract algorithm could be devised that would universally apply to any social situations that might be plugged into it and would always output justice. However, as critics like Dean Spade have pointed out, a purely procedural legal and administrative system that does not take the content of lives seriously risks further injuring marginalized people. It would impose normativity disguised as neutral proceduralism rather than dealing with the range and fullness of human existence. At its starkest, the application of administrative universality forces marginalized people into the orbit of police violence. A clear example is the universalizing application of binary gender assigned at birth, which excludes both nonbinary conceptions of gender and the ability of gender to change over time, forcing transgender people to conform to the administrative requirements of, say, the medical profession or the prison system, rather than the other way around.

A procedural understanding of justice is inextricably tied to an atomized, abstracted individual

The universal application of procedure — even the procedure of “human rights” conceived procedurally — erases important differences that make human life rich and meaningful. Rawls’s “veil of ignorance” depends on what political philosopher Michael Sandel called the “unencumbered self” — a purely free and independent subject capable of free choice, unencumbered even by its own aims and interests. As he asserts in “The Procedural Republic and the Unencumbered Self,” it is “not the ends we choose but our capacity to choose them” that is “most essential to our personhood” under a Rawlsian/liberal conception of governance. In his critique, Sandel argues that “as unencumbered selves, we are of course free to join in voluntary association with others, and so are capable of community in the cooperative sense. What is denied to the unencumbered self is the possibility of membership in any community bound by moral ties antecedent to choice.” The individual, Sandel claims, “cannot belong to any community where the self itself could be at stake.” In other words, being born into a particular society, culture, or community — truly belonging to it and being bound by its values — is excluded from liberalism from the outset. A procedural understanding of justice is inextricably tied to an atomized, abstracted individual.

The “unencumbered self” is not natural; it must be produced and given ideological and structural support. In describing the high-tech “decade of greed” in the early 1980s, Sandel notes that “it is as though the unencumbered self presupposed by the liberal ethic had begun to come true — less liberated than disempowered, entangled in a network of obligations and involvements unassociated with any act of will.” The “network” Sandel was thinking of included the financial systems of credit and debt which became unavoidable by the 1980s, as well as the computerized record-keeping systems that have led inexorably to today’s platforms and surveillance culture. Now we might add neural networks and, of course, the internet. Online networks appear to connect people but also end up contributing to the alienation necessarily produced by capitalist social relations. The internet’s reorganization of social life has led to mass surveillance and datafication, which in turn has brought about the development of predictive analytics, facial recognition, employee tracking, and other forms of monitoring. Algorithmic systems of control are now realizing the kinds of subjects liberalism (and especially neoliberalism) has long presupposed (alienated, atomized, stripped of community and collectivity or bonds that exceed market forces) but could not fully impose.

Liberalism — like the necessarily undemocratic capitalism for which it serves as an alibi — seeks to reduce human life to a predictable, exploitable, profitable minimum. It adopts the simplest social ontology (individualism) to make its formalism work, to make its algorithms or procedures appear universally applicable. Liberalism demands a strict division between form (the system of rules) and content (the messy details of social entanglement) to make reality tractable to its logic. Artificial intelligence likewise requires that its form (code) be separate from its content (data). Machines require the simplest possible data on which to work (binary numbers, for example) to make their procedures uniform and generalizable and the world computable.

Liberalism’s presumption of an underlying universality (binary biological sex, for example, or “post-racial color-blindness”) makes the unruly, messy data of human life appear tractable and computable for algorithmic procedures. And increasingly comprehensive algorithmic systems in turn render life in the flattening image of proceduralist liberalism. In a sense, liberalism and artificial intelligence are converging: Algorithmic intelligence, which depends on liberalism’s assumptions about proceduralism, is now being imposed to reinforce that logic — to reshape the world so that those governing assumptions are literally encoded into systems that administrate society. Reactions by software companies like Google to the work on algorithmic inequality done by Safiya Noble and others — that their algorithms are neutral, and that any racist or sexist “content” is empirically derived from “raw” data — underline how such encoding is justified and maintained.


The distinction Dworkin draws between two antagonistic theories of equality makes sense only within the context of the administrative power constituted by computer systems and current political institutions. For autonomists like Negri, however, the choice is a false one, as is the antagonism between individual and collective life that is at the heart of liberal thought itself. The flattening out of difference to ensure an algorithmic “equality” is one of Negri’s main political objections to liberal political thought.

From Negri’s point of view, difference is a valued production of the community rather than a deviation to be stamped out or vaguely tolerated as long as it poses no threat to liberal procedure and the sort of individual rights that follow from it. The strength of both individuals and the collective “is expressed and nourished by discord and struggle,” Negri argues. The algorithm — the procedure of the liberal republic — represents closure, the shutting down of individual agency and collective strength. The alternative — Negri’s reading of “constituent power” — is, on the contrary, always open: “It is at the same time resistance to oppression and construction of community; it is political discussion and tolerance; it is popular armament and the affirmation of principles through democratic invention.”

True intelligence can be seen as the constant production of new rules, new procedures, rather than a mere departure from the fixed order of a given algorithm

Negri thus rejects the opposition between Dworkin’s two theories of equality, between Sandel’s procedural republic and the constitutive attachments of human life. These choices are false ones, set up by liberalism’s position as mouthpiece of the capitalist social, economic, political, and technological order. Every attempt to strike the “right” balance between competing theories of justice or equality, or between the individual and the collective, or between universalism and particularity, or between tech optimism and tech pessimism, is doomed to fail precisely because it is conceived under the aspect of the capitalist state. If we reject this aspect, many of those political problems fall away.

Turing saw intelligent behavior as consisting precisely in “a departure from the completely disciplined behavior involved in computation.” He believed that the machine was capable of only minor deviations from its programming, not the wholesale creation of new procedures. By the same logic, true learning and intelligent behavior can be seen as the constant production of new rules, new procedures, rather than a mere departure from the fixed order of a given algorithm, a modest adjustment of existing rules.

This is the central element of what Negri describes as an “absolute procedure,” one not bound by existing rules (even in the sense of breaking them) but instead constantly producing new ones. The radical democracy of Negri’s constituent power, intimately connected with the flourishing of the community, avoids the false distinction between juridical form and the content of life.

However, to reach the point where a direct, open-ended, participatory decision-making can form the basis of politics requires a fundamental transformation of our social relationships. The fact that algorithmic and liberal proceduralism both reflect capitalist social relations in the same way indicates that to try to change either — to come up with, say, a new kind of technology that might avoid or even solve the problems of capitalist society, what Evgeny Morozov calls “technological solutionism” — is to get things backward. Only a radical transformation of the controlling logics of capitalist society — e.g. the search for profits, global expansion, class structure, and systems of oppression — can lead to a political theory and a technology fit for the constitutive attachments and rich diversity of human life, the appreciation of the value of difference, and the strength of collective living itself.

Sam Popowich is a librarian at the University of Alberta and a PhD student in political science at the University of Birmingham. He is the author of Confronting the Democratic Discourse of Librarianship: A Marxist Approach. He blogs regularly about librarianship, technology, and politics at redlibrarian.ca.