Just Randomness?

Algorithms do not become more fair by becoming more inexplicable

In 1979, by an overwhelming majority decision grounded in the Fourth Amendment, the U.S. Supreme Court ruled that police had no right to stop drivers at random. These stops, it argued, “violate the constitutional guarantee against unreasonable seizures.” Today, almost 40 years after the Supreme Court ruling, the TSA “randomly” selects some passengers for additional security screening, such as a pat-down by an officer. But what has changed in the intervening period? Why are random traffic stops by police largely impermissible, whereas random detailed searches at airports are authorized and even explicitly written into the rulebook?

The answer is obvious: passengers identified for additional airport screening are picked by a computer program, which creates the illusion that the human factor operative in police activities has been brought under control. According to this line of thought, the problem is that even apparently haphazard human choices are not random enough, and therefore not disinterested or impartial enough. Algorithms are implicitly and rather automatically interpreted as corrective measures here. Ostensibly impersonal and indifferent to whomever they are applied to, they are deemed to live up to the exigencies of justice by seeming “more random.”

Excessive reliance on algorithms not only masks the persistence of bias, but also threatens to make human experience itself appear totally random

In the ancient world, sortition and the casting of dice or lots (procedures grouped under the heading of cleromancy) were in use at some of the most important points in personal and political life. Election by lots was an integral part of the democratic process in ancient Greece — above all, in Athens. In the Hellenic and Hebraic paradigms alike, the randomness of the outcome was seen as an expression of divine will, which could take care of the future much better, more successfully and wisely than humans with their finite knowledge. Chance stood for a higher necessity, inaccessible to our faulty reasoning and dim awareness of causes and their effects. The Roman goddess Justitia, who later became Lady Justice, was depicted blindfolded, suggesting not freedom from prejudice but that only divine indifference could neutralize the biases as well as the familial, affective, and other attachments that inevitably persist in human decision-making. 

One can imagine a modern instantiation of sortition in public life: electoral tie-breaks decided by casting lots, for instance, or the randomization of waiting lists for organ donations. More often, however, our hopes of deliverance from bias are transferred onto algorithmic decision-making systems, which have been broadly implemented across contemporary societies, ostensibly in hopes of making employment, financial, legal, and other decisions fairer. Many human resource managers, for instance, now resort to data-driven algorithms in order to sift through the pools of job candidates and make appropriate hiring decisions. The gods of old have been carried over into the present and the future in the shape of computational thinking, artificial intelligence, and technological innovation. Though many critics have pointed out how algorithmic systems often conserve rather than eradicate bias, stubborn faith in their superhuman ability to correct an essential flaw in our human condition persists. They allow people to “recuse” themselves from decision-making processes and avoid making sense of causal relationships and phenomena when these are too complex to parse. As a result, human actors believe they have mitigated their biases, as though prejudiced thinking could not be transmitted to and engrained in an automated process.
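Part of sortition’s modern appeal is how trivially it can be mechanized. A minimal sketch in Python (the pool of names is hypothetical), drawing one citizen by lot from the operating system’s source of randomness:

```python
import secrets

# Hypothetical pool of eligible citizens; the names are illustrative only.
pool = ["Alcibiades", "Berenice", "Cleon", "Demetria", "Euphron"]

# Sortition in miniature: every member of the pool has an equal chance.
# secrets.choice draws on the OS's cryptographic randomness source,
# so the outcome is neither reproducible nor predictable in advance.
selected = secrets.choice(pool)
print(f"Selected by lot: {selected}")
```

That the draw cannot be predicted or second-guessed is, of course, precisely the property the ancients attributed to divine will.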

Excessive reliance on algorithms not only masks the persistence of bias, but also threatens to make human experience itself appear totally random. It is as though the milestones of your existence, such as getting a job or receiving a rejection letter, befell you out of the blue, with no rhyme or reason, with no one to blame, to praise, or to hold responsible. Would you like to live in a world where everything happened without a why and a because? How would life feel, were you to perceive it, including every major and minor occurrence it was woven of, as part of a strange lottery? How would you string together the story of such a life? What, if anything, would there be to narrate? Where would the descriptors “good” and “bad,” “just” and “unjust,” belong in this mess? Does justice have any meaning outside of human deliberation?

Despite “randomization” techniques in security screenings at airports, passengers deemed “high-risk” are often subject to discrimination on the basis of their ethnicities, nationalities, religious affiliations, or even names. This is, of course, because the parameters for a computer program are set by human programmers. Algorithmic bias preserves sexist, racist, and other problematic attitudes, and may actually amplify already existing prejudices. AI is also prone to learning to be sexist and racist through repeated interactions with humans espousing such attitudes. Although such structural flaws are widely publicized, they persist, perhaps because they displace the responsibility for biased conduct onto an ostensibly impersonal system.

If scientific experiments use randomization, why shouldn’t experiments in living together do the same?

Introducing true randomness into our social, political, and legal realities would dovetail with the quintessential dream of modernity to reorganize human institutions on a scientific foundation. If chaos theory applies to the universe at large, then why should it not have a bearing on the legal system? If scientific experiments use randomization, why shouldn’t experiments in living together do the same? Religious faith, which in the course of the European Enlightenment has mutated into a faith in reason, is now imperceptibly passing into a faith in technology. Technocratic solutions to the nearly universal problems of human existence and coexistence are the legatees of these multifarious mutations at the ideological level.

By attempting to prevent selection bias, randomization techniques in clinical trials (allocation concealment, blinding, etc.) unclutter cause-effect relations in the assessment of experimental treatments. They ensure that (1) the tested subjects in various groups are not “systematically different,” and (2) no prior knowledge of group assignment, capable of influencing the outcome, exists. But transferring a similar logic to legal and political systems (say, by blinding prosecutors to the defendants’ race) would not reduce selection bias so much as transmute an actual suspect into a neutral, genderless, raceless, classless, and ultimately abstract, context-free individual, befitting a technocratic outlook. Unlike in the life and medical sciences, “blinding” — assuming such a thing were possible — neither respects the diverse fabric of social life nor yields the transparency of knowing the cause, if only a probable one, behind a determinate effect. Lady Justice’s blindfold stays on past its due.
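The allocation step these techniques build on can be sketched in a few lines of Python. This is a toy illustration of balanced random assignment, not a trial-grade protocol; the function name and participant IDs are invented for the example, and genuine allocation concealment would additionally require hiding the resulting table from anyone recruiting participants:

```python
import random

def randomize_allocation(participant_ids, seed=None):
    """Assign each participant at random to a 'treatment' or 'control' arm.

    Balanced allocation: the shuffled list is split in half, so neither
    arm is systematically composed. (A toy sketch; real trials follow
    audited randomization protocols.)
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {pid: ("treatment" if i < half else "control")
            for i, pid in enumerate(ids)}

allocation = randomize_allocation(range(8), seed=42)
```

The point of the essay’s contrast is visible even here: the procedure works precisely because it treats each `pid` as an interchangeable unit stripped of all context — an abstraction that is a virtue in a drug trial and a liability in a courtroom.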

The idea that a “random” system could be just fuels the development of sociopolitical algorithms and the emphasis on quantification that they require. To achieve equal treatment under the law through randomization, each citizen must be first converted into a number, giving equality a strictly numeric, mathematical, or statistical expression and undermining its substantive dimension. Human relations are missing from its purview, not only at the level of causes and effects but also at the level of community, which is reduced to an amorphous pool of individuals from which aleatory samples are drawn for various social proceedings, as in jury pools.

The price we pay for efficiency is exorbitant; the bill arrives in the form of utter blindness to singularity

Instead, randomness (whether genuine or misconstrued) enables efficiency to override other concerns. At its source in the Old French randon, the word random itself means “swiftness,” a “great rush,” from the verb randir, “to run fast.” In terms of time efficiency, surveying everything quantitatively is quicker than going into the painstaking details of diverse cases and circumstances. The price we pay for efficiency is exorbitant; the bill arrives in the form of utter blindness to singularity, a blindness that flips justice around into injustice.

And yet, the kind of technologically mediated “randomness” such systems offer also harks back to the difficulties we face in attributing causality. We live in an age when direct attributions of causes and effects are exceptionally difficult, because the former grow more and more distant from the latter in space and in time. Randomness may then be perceived as the overarching principle of events experienced as though coming from nowhere, happening as effects of effects untraceable to a cause. Given this state of affairs, reliance on randomization procedures may be more palatable than arguments from necessity that secure beyond a shadow of a doubt the link between a cause and its effects.

The ancient worldview again proves instructive here. In the Book of Job, the divine punishments that befall Job are not the effects of a cause found in his moral conduct; what he undergoes is a purely random experience. But the cause is not absent — God has his reasons — it is simply beyond human understanding. An analogous sentiment today makes the prima facie “randomness” of algorithmic systems more acceptable. Given the opacity of iterative machine learning and the massive scale of data combed for correlations, an algorithm’s output appears as “random” as God’s will does in the Book of Job. The current technological state demands faith in god-like algorithms that seem to operate beyond understanding. The carryover of past theological beliefs, shorn of their original context, then makes us all too willing to accept the framework of justice without justifications.

The ends of justice — the aims and goals it pursues — are sacrificed to the smooth functioning of the means when we introduce randomness into the equation. The loss of a why and a because makes randomness a dubious foundation for justice, for legal and political procedure, or, broadly, for human co-existence. On the ideally level playing field of equality before the law posited by a purely quantitative approach, a choice appears just when, randomly generated, it eludes narrative justification.

The ideal of randomness and an algorithmic shaping of social reality undercut the very possibility of ordering and reordering the world, along with the ongoing search for meaning. Whether or not our latest weltanschauung represents the universe as a chaotic mess, shattered at the extreme into an infinite number of pluriverses, a meaningful life unfolds with a modicum of (a flexible, adjustable or suspendable, mutable) order.

Rather than a correctly or incorrectly implemented set of procedures, justice is entwined with the view of what reality actually is and of the way things should be in the best of possible scenarios. In pluralistic societies, where such views frequently clash, ongoing discussions, deliberations, and sensible explanations for decision-making are indispensable. Bypassing these elements under the cover of randomness and algorithmic solutions serves only to mask — not to resolve — the underlying conflicts of interpretation corresponding to frictions between different ways of life or basic worldviews.

The issue, as I see it, is not the use of inevitably biased algorithms or the ideal of randomness that provides an alibi for them, but the worship of algorithms without considering substantive, qualitative, causal, and other concerns. The danger is that, with overreliance on automated decision-making systems, there will be no need for thinking (and perpetually rethinking) justice either in its general outlines or in its application to particular cases. Computerized calculation will take the place of deliberation and will be responsible for further concealing the unjust intentions programmed into the “algorithms of justice.” And few things are more harmful to our vital ability to seek the meaning of the world and of our place in it than that.

Michael Marder is an Ikerbasque Research Professor of Philosophy at the University of the Basque Country, UPV/EHU, Vitoria-Gasteiz. His work spans the fields of phenomenology, environmental philosophy, and political thought.