
Rule by Nobody

Algorithms update bureaucracy’s long-standing strategy for evasion

The compensation for a death sentence is knowledge of the exact hour when one is to die.
—Cincinnatus C., Invitation to a Beheading (Vladimir Nabokov, 1935)

 

Decision-making algorithms are everywhere, sorting us, judging us, and making critical decisions about us without our having much direct influence in the process. Political campaigns use them to decide where (and where not) to campaign. Social media platforms and search engines use them to figure out which posts and links to show us and in what order, and to target ads. Retailers use them to price items dynamically and recommend items they think you’ll be more likely to consume. News sites use them to sort content. The finance industry — from your credit score to the bots that high-frequency traders use to capitalize on news stories and tweets — is dominated by algorithms. Even dating is increasingly algorithmic, enacting a kind of de facto eugenics program for the cohort that relies on such services.

For all their ubiquity, these algorithms are paradoxical at their heart. They are designed to improve on human decision-making by supposedly removing its biases and limitations, but the inevitably reductive analytical protocols they implement are often just as vulnerable to misuse. Decision-making algorithms replace humans with simplified models of human thought processes, models that can reify rather than mitigate the biases of the programmers who conceptualize what the algorithms are meant to do.

Cathy O’Neil, in her recent book Weapons of Math Destruction, defines algorithms as “opinions formalized in code.” This deceptively simple appraisal radically undercuts the common view of algorithms as neutral and objective. And even if programmers were capable of correcting against their own biases, the machine-learning components of many algorithms make their workings mysterious, sometimes even to the programmers themselves, as Frank Pasquale describes in another recent book, The Black Box Society.

Algorithms can never have “enough”

In the complexity of their code and the size of the data troves they can process, these kinds of algorithms can seem unprecedented, constituting an entirely new kind of social threat. But the aims they are designed to meet are not new. The logic of how these algorithms have been applied follows from the longstanding ideals of bureaucracies generally: that is, they are presumed to concentrate power in well-ordered and consistent structures. In theory, anyway. In practice, bureaucracies tend toward inscrutable unaccountability, much as algorithms do. By framing algorithms as an extension of familiar bureaucratic principles, we can draw from the history of the critique of bureaucracy to help further unpack algorithms’ dangers. Like formalized bureaucracy, algorithms may make overtures toward transparency, but tend toward an opacity that reinforces extant social injustices.

In the early 20th century, sociologist Max Weber outlined the essence of pure bureaucracies. Like algorithms, bureaucratic processes are built on the assumption that individual human judgment is too limited, subjective, and unreliable, deficiencies that lead to nepotism, prejudice, and inefficiency. To combat that, an ideal bureaucracy, according to Weber, has a clear purpose, explicit written rules of conduct, and a merit-based hierarchy of career employees. This structure places power in the apparatus and allows bureaucracies to function consistently regardless of who occupies different roles, but this same impersonality makes them controllable by anyone who can seize their higher offices. And because the apparatus itself generates the power, bureaucrats have an incentive to serve and preserve that apparatus even when it veers from its original intended function. This creates a strong tendency within bureaucracies to entrench themselves regardless of who directs them.

The way algorithms are implemented can mimic these bureaucratic tendencies. Google’s search algorithm, for example, appears to have a clear, limited purpose — to return the most relevant search results and most lucrative ads — and operates within a growing but defined space. As the company’s engineers come and go, ascend through the company hierarchy or leave it entirely, the algorithm itself persists and evolves. The intent of the algorithm was once to organize the world’s information, but as it has become a commonplace way of finding information, information has been reshaped in the algorithm’s image, as is most obvious with search-engine optimization. This effectively entrenches the algorithm at the expense of the world’s diversity of information.

Both bureaucracies and algorithms are ostensibly committed to transparency but become progressively more obscure in the name of guarding their functionality. That is, the systematicity of both makes them susceptible to being “gamed”; Google and Facebook justify the secrecy of their sorting algorithms as necessary to thwart subversive actors. Weber notes that bureaucracies too tend to become increasingly complex over time while simultaneously becoming increasingly opaque. Each trend makes the other more intractable. “Once fully established, bureaucracy is among those social structures which are hardest to destroy,” Weber warns. In bureaucracies, over time, only those “in the know” can effectively navigate the encrusted processes to their own benefit. “The superiority of the professional insider every bureaucracy seeks further to increase through the means of keeping secret its knowledge and intentions,” he writes. “Bureaucratic administration always tends to exclude the public, to hide its knowledge and action from criticism as well as it can.” This makes bureaucracies appear impervious to outside criticism and amendment.

But as O’Neil argues about algorithms, “You don’t need to understand all the details of a system to know that it has failed.” The problem with both algorithms and bureaucracies is that they try to set themselves up to be failure-proof. Bad algorithms and bureaucracies have a built-in defense mechanism in their incomprehensible structure. Engineers are often the only people who can understand or even see the code; career bureaucrats are the only people who understand the inner workings of the system. Since no one else can identify the specific reasons for problems, any failure can be interpreted as a sign that the system needs to be given more power to produce better outcomes. What constitutes a better outcome remains in the control of those implementing the algorithms, and is defined in terms of what the algorithms can process.

As Weber wrote, “The consequences of bureaucracy depend upon the direction which the powers using the apparatus give it. Very frequently a crypto-plutocratic distribution of power has been the result.” Likewise with algorithms: If a company’s algorithm increases its bottom line, for example, its social ramifications may become irrelevant externalities. If a recidivism model’s goal is to lower crime, the fairness or appropriateness of the prison sentences it produces don’t matter as long as the crime rate declines. If a social media platform’s goal is to maximize “engagement,” then it can be considered successful regardless of the veracity of the news stories or intensity of the harassment that takes place there, so long as users continue clicking and commenting.

Though automated systems purport to avert discrimination, Pasquale writes, “software engineers construct the datasets mined by scoring systems; they define the parameters of data-mining analyses; they create the clusters, links, and decision trees applied; they generate the predictive models applied. Human biases and values are embedded into each and every step of development. Computerization may simply drive discrimination upstream.” O’Neil offers a similar argument: “Models are constructed not just from data but from choices we make about which data to pay attention to — and which to leave out. Those choices are not just about logistics, profits, and efficiency. They are fundamentally moral. If we back away from them and treat mathematical models as a neutral and inevitable force, like the weather or the tides, we abdicate our responsibility.”

For bad algorithms and bureaucracies, any failure can be interpreted as a sign that the system needs more power to produce better outcomes

Far from an unintended consequence, however, that abdication becomes the whole point, even if algorithms and bureaucracies are frequently born with benevolent aims in mind. For the proprietors of these algorithms, this abdication is translated into a fervor for objective purity, as if neutrality in and of itself were always an indisputable aim. The intent of algorithms is presented as always self-evident (be neutral and thus fair) rather than a matter of negotiation and implementation. The means and ends become disconnected; objectivity becomes a front, a way of certifying outcomes regardless of whether or not they constitute social improvements. Thus the focus on combatting human bias leads directly to means for cloaking and dissipating human responsibility, merely making human bias harder to detect. Efforts to be more fair end up being a temptation or justification for opacity, greasing the tracks for an uneven allocation of rewards and penalties and exacerbating existing inequalities at every turn.

In On Violence, Hannah Arendt characterizes bureaucracy as “the rule of an intricate system of bureaus in which no men, neither one nor the best, neither the few nor the many, can be held responsible, and which could be properly called rule by Nobody.” Left unchecked, bureaucracy enables an unwitting conspiracy to carry out deeds that no individual would endorse but in which all are ultimately complicit. Corporations can pursue profit without consideration for effects on the environment or human lives. Violence becomes easier at the state level. And anti-state violence, without specific targets to aim for, shifts from strategic, logical action to incomprehensible, more terroristic expressions of rage. “The greater the bureaucratization of public life, the greater will be the attraction of violence,” Arendt argues. “In a fully developed bureaucracy there is nobody left with whom one could argue, to whom one could present grievances, on whom the pressures of power could be exerted.” It would, of course, be difficult to “attack” an algorithm, to make it feel shame or guilt, to persuade it that it is wrong.


In a capitalist society, the desire to remove human biases from decision-making processes is part of the overarching pursuit of efficiency and optimization, the rationalization Weber described as an “iron cage.” Algorithms may be sold as reducing bias, but their chief aim is to afford profit, power, and control. Fairness is the alibi for the way algorithmic systems reduce human subjects to only the attributes expressible as data, which makes us easier to monitor, manipulate, sell to, and exploit. They transfer risk from their operators to those caught up within their gears. So even when algorithms are working well, they are not working at all for us.

It’s obvious that algorithms with inaccurate data can be harmful to someone trying to get a job, a loan, or an apartment, and Pasquale and O’Neil trace out the many ramifications of this. Even if you can figure out when data brokers have inaccurate data about you, it is very difficult to get them to change it, and by the time they do, the bad data may have been passed along to countless different brokers, cascading exponentially through an interlocking system of algorithmic governance. Many algorithmic systems also use questionable proxies in place of traits that are impossible to quantify or illegal to track or sort by. Some, for instance, use ZIP codes as a proxy for race.

As with bureaucracies, algorithms purport to gain fairness by measuring only what can be measured fairly, leaving out anything prone to judgment calls. In practice, though, this leaves plenty of leeway for those with the inside information or connections to navigate the byzantine processes and massage their data.

More precise and accurate data can’t fix a bad system. Even when the data is accurate, the systems may lack the context that would situate its systemic implications. Pasquale summarizes how this occurs in lending: “Subtle but persistent racism, arising out of implicit bias or other factors, may have influenced past terms of credit, and it’s much harder to keep up on a loan at 15 percent interest than one at five percent. Late payments will be more likely, and then will be fed into present credit scoring models as neutral, objective, non-racial indicia of reliability and creditworthiness.”
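To make the arithmetic behind that gap concrete, here is a minimal sketch using the standard fixed-payment amortization formula; the $10,000 principal and five-year term are assumed purely for illustration, not taken from Pasquale:

```python
# A rough illustration of why a loan at 15 percent interest is harder to
# keep up with than one at 5 percent. The $10,000 principal and five-year
# term are hypothetical, chosen only to make the comparison concrete.

def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortized loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

principal, years = 10_000, 5
for rate in (0.05, 0.15):
    pay = monthly_payment(principal, rate, years)
    total_interest = pay * years * 12 - principal
    print(f"{rate:.0%}: ${pay:,.2f}/month, ${total_interest:,.2f} in interest")

# Roughly $189/month versus $238/month: the same debt costs about three
# times as much in interest at the higher rate.
```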

Often these systems create feedback loops that worsen what they purport to measure objectively. Consider a credit rating that factors in your ZIP code. If your neighbors are bad about paying their bills, your score will go down. Your interest rates go up, making it harder to pay back loans and increasing the likelihood that you miss a payment or default. That lowers your score further, along with those of your neighbors. And so on. The algorithm is prescriptive, though the banks issuing loans view it as merely predictive.
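As a thought experiment, a minimal sketch of that loop follows; every number and formula in it is invented for illustration and is not drawn from any real scoring model:

```python
# A toy model of the feedback loop described above. All values and formulas
# here are invented; no actual credit-scoring system is being reproduced.
import random

random.seed(0)
score = 650  # hypothetical starting score for a borrower in the ZIP code

for year in range(10):
    # A worse score translates into a higher interest rate...
    rate = 0.05 + (850 - score) / 850 * 0.15
    # ...and a higher rate makes a missed payment more likely...
    p_miss = min(0.9, rate * 3)
    if random.random() < p_miss:
        score -= 30   # ...which drags the score down further,
    else:
        score += 5    # while on-time payments recover it only slowly.
    print(f"year {year}: score {score}, rate {rate:.1%}")

# The "prediction" helps produce the very outcome it claims to forecast.
```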

No matter how much good data you have, there is always additional context, in the form of additional data, that could improve it. There is no threshold that, once reached, confers objectivity, no point at which results are no longer subject to interpretation. Algorithms can never have “enough.”

The need to optimize yourself for a network of opaque algorithms induces a sort of existential torture. In The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy, anthropologist David Graeber suggests a fundamental law of power dynamics: “Those on the bottom of the heap have to spend a great deal of imaginative energy trying to understand the social dynamics that surround them — including having to imagine the perspectives of those on top — while the latter can wander about largely oblivious to much of what is going on around them. That is, the powerless not only end up doing most of the actual, physical labor required to keep society running, they also do most of the interpretive labor as well.” This dynamic, Graeber argues, is built into all bureaucratic structures. He describes bureaucracies as “ways of organizing stupidity” — that is, of managing and reproducing these “extremely unequal structures of imagination” in which the powerful can disregard the perspectives of those beneath them in various social and economic hierarchies. Employees need to anticipate the needs of bosses; bosses need not reciprocate. People of color are forced to learn to accommodate and anticipate the ignorance and hostility of white people. Women need to be acutely aware of men’s intentions and feelings. And so on. Even benevolent-seeming bureaucracies, in Graeber’s view, have the effect of reinforcing “the highly schematized, minimal, blinkered perspectives typical of the powerful” and their privileges of ignorance and indifference toward those positioned as below them.

Fairness is the alibi for reducing human subjects to attributes only expressible as data, which makes us easier to exploit. Algorithms transfer risk from their operators to those caught up within their gears

This helps explain why bureaucrats and software engineers have little incentive to understand the people governed by their systems, while the governed must expend precious intellectual capital trying to reverse-engineer these systems to survive within them. It’s a losing battle, of course: Navigating the world effectively may require more and more awareness and interpretation of algorithmic systems, but in many cases the more we know, the more likely our knowledge is to become obsolete. The institutions that run these systems tend to treat our reverse-engineering as an illegitimate attempt to game them, and they can change them unilaterally. As Goodhart’s law states, when a measure becomes a target, it ceases to be a useful measure. The moment that more than a few people understand how an algorithm works, its engineers will modify it, lest it lose its power.

So we must simultaneously understand how these systems work in a general sense and behave the way they want us to, but also stop short of any behavior that could be seen as gaming them. We know our actions are recorded, but not necessarily by whom. We know we are judged, but not how. Our lives and opportunities are altered accordingly but invisibly. We are forced to figure out not only how to adapt to the best of our abilities but what it is that even happened to us.

Unfortunately, there’s not much an individual can do. It’s undeniable that individuals have been harmed by algorithms, yet it is nearly impossible for any of those victims to prove the harm on an individual basis and demonstrate legal standing. O’Neil and Pasquale both note that the problems with algorithms are too extensive for any silver-bullet solution, offering instead a laundry list of approaches drawing from precedents in U.S. policy (e.g. the Fair Credit Reporting Act and the Health Insurance Portability and Accountability Act) and European legal codes. But regulatory means of reining in algorithms — even assuming the significant hurdles of regulatory capture (the government’s understanding of these instruments is informed mostly by their beneficiaries) could be surmounted — would still require labyrinthine bureaucracies to implement them. If the problem with algorithms lies in how they mimic the ways bureaucracies function, trying to fix them with different bureaucracies merely reproduces the problem.

Algorithms are probably not going anywhere. Technology and bureaucracy both tend toward expansion as they mature. But while getting rid of algorithms seems unlikely, they can be modified toward greater social utility. This would require evaluating them not in terms of how objective they seem, but on ethical, unapologetically subjective grounds. O’Neil argues that algorithms should be judged by the ethical orientation their programmers and users give to them. “Mathematical models can sift through data to locate people who are likely to face great challenges, whether from crime, poverty, or education,” she writes. “It’s up to society whether to use that intelligence to reject and punish them — or to reach out and help them with resources they need.” O’Neil writes of even more promising applications, like an algorithm that scans troves of data for signs of forced labor in international supply chains and another that identifies children at greatest risk for abuse. Crucially, they rely on humans at both ends of the process to make key decisions.

In this paradigm, the problem with “customized” rankings is not their lack of universality but that they are not customized enough to serve specific users’ own goals. If a platform wishes to be truly neutral, its algorithms must be amenable to the unique objectives of each user. Pasquale suggests that when Google or Yelp or Siri makes a restaurant recommendation, a user could decide whether and how heavily to take into account not just the type of food and the distance to get there, but whether the company provides its workers with health benefits or maternity leave.
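A minimal sketch of that kind of user-steered ranking appears below; the criteria, values, and weights are all invented for illustration, and no actual platform’s data or ranking is implied:

```python
# A minimal sketch of a user-adjustable recommendation score: the user,
# not the platform, decides how much each criterion counts. All fields,
# values, and weights here are hypothetical.

restaurants = [
    {"name": "Trattoria A", "food_match": 0.9, "proximity": 0.4, "worker_benefits": 0.2},
    {"name": "Diner B",     "food_match": 0.7, "proximity": 0.8, "worker_benefits": 0.9},
]

# Weights chosen by the user rather than fixed by the platform.
user_weights = {"food_match": 0.4, "proximity": 0.2, "worker_benefits": 0.4}

def user_score(restaurant):
    """Score a restaurant by the user's own priorities."""
    return sum(user_weights[k] * restaurant[k] for k in user_weights)

for r in sorted(restaurants, key=user_score, reverse=True):
    print(f"{r['name']}: {user_score(r):.2f}")

# Shifting weight onto worker_benefits reorders the list without any
# change to the underlying data.
```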

Opaque algorithms that rely on Big Data create issues that are commonly brushed aside as collateral damage, when they are recognized at all. But those issues are avoidable. If we acknowledge and accept the human bias endemic to these systems, those same systems could be repurposed for good. We need not be trapped in their iron cages.

Adam Clair is a writer currently based in Philadelphia. He tweets infrequently at @awaytobuildit.