
Odd Numbers

Algorithms alone can’t meaningfully hold other algorithms accountable

Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But by the end of the event, deep differences of opinion about how best to proceed had become apparent. One attendee was quite blunt in his reservations about whether the academic disciplines represented at the workshop were up to the task of solving these problems.


Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfecting the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems so that they better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. In trying to make these games fairer, such research elides the possibility of rejecting them altogether.

One of the algorithmic accountability movement’s greatest initial successes — getting the attention of corporate leaders — is limiting its larger political imagination. In an era of Trumpism, Tory chaos, and ethnonationalist resurgence, it is easy for academics to give up on trying to influence government policy and to seek changes directly from corporate leaders. However, the price of that direct approach is translating one’s work into a way of advancing overall corporate goals — a distortion similar to the mistranslation of reality into code that provoked algorithmic-accountability scholars in the first place. Serving those goals may burnish scholars’ reputations at first, but eventually the work must boost the bottom line. Even monopolistic firms like Google, Amazon, and Facebook, which should have a much freer hand to engage ethically than the run-of-the-mill corporate giant, are ultimately beholden to investors.

What investors want is assurance that a company’s products are going to be more widely disseminated. This tacit expectation has short-circuited important debates. Consider the recent controversy over cutting-edge surveillance systems. We now know that many commercial facial recognition systems have more difficulty identifying nonwhite — and particularly black — faces. MIT has advanced a project that would ensure that facial recognition systems can better identify black persons, and particularly black women, who have been the most “unidentified” in system tests. But is the answer for accountability here really to perfect the ability of corporations and the police to match every face to a name — and to an algorithmically generated record? It may be that police use of facial recognition deserves the same reflexive “no” that greeted Google Glass, and that scholars Evan Selinger and Woodrow Hartzog have sought to extend to ubiquitous facial recognition, arguing that biometric faceprints should be treated like fingerprints or Social Security numbers — something that should not simply be fair game to collect and disseminate.

In the case of AI recognizing black faces, Social Science Research Council president Alondra Nelson observed earlier this year, “Algorithmic accountability is tremendously important. Full stop. But I confess that I struggle to understand why we want to make black communities more legible to facial recognition systems that are disproportionately used for surveillance.” Stopping false positives is an important goal. But to critique facial recognition in terms of its accuracy seems already to accept that its enormous power will inevitably be deployed in more settings — an assumption that privacy and Fourth Amendment activists are quick to dispute.

In short, power dynamics are key. In “Beijing’s Big Brother Tech Needs African Faces,” an essay for Foreign Policy, Amy Hawkins noted that being “better able to train racial biases out of its facial recognition systems … could give China a vital edge,” but also that “improving” this technology abets an authoritarian approach to controlling populations. If South Africa had had the technological infrastructure that Beijing now deploys in the largely Muslim province of Xinjiang, would the anti-apartheid movement have developed? It is hard to spin a narrative in which that apparatus helps a would-be Mandela.

The debate over the terms and goals of accountability must not stop at questions like “Is the data processing fairer if its error rate is the same for all races and genders?” We must consider broader questions, such as whether these tools should be developed and deployed at all.

I have to admit that I have been part of the problem here. In the past I have been too focused on narrowly legalistic questions while downplaying larger issues in political economy. In a talk on algorithmic accountability in 2016, I focused on how to reform Uber’s practices of rating, “activating,” and “deactivating” drivers, which raise questions of fair process. Algorithmic accountability researchers have aspired to ensure that aspects of Uber’s practices are “fairer” — that is, that ratings can be contested; that drivers’ credit scores have not been unduly reduced by the precarity of their employment; that surge pricing actually persists long enough to justify drivers’ travel to surge zones.

But we should not confuse the necessary work of incrementally reforming a system with the moral imperative to also question its basic presuppositions. In my talk, I downplayed at least two larger issues. First, is there any practical limit to the “limitless worker surveillance” now practiced at so many firms? Or to the proliferation of devices designed to track and measure us in any role? Whenever a reformer proposes to solve an algorithmic-sorting problem with “more data,” that is an invitation to more surveillance, which can go to absurd lengths. For example, a scheme called “Greater Change” in the U.K. recently proposed bar-coding homeless people to enable digital donations. The algorithmic imagination can propose even more data processing here: perhaps pedestrians could look up each rough sleeper’s story with a QR reader to decide whether they deserved help. Bureaucrats could give commendations to homeless persons who were particularly polite or unobtrusive, or who helped with “volunteer” clean-up efforts. Even if this approach were to lead to more donations, would it be worth the cost in human dignity? Or the entrenchment of voyeurism and judgment in interpersonal relations?

The second issue I should have emphasized is distributional. Even if platform capitalism (and all the surveillance it entails) prevails in urban transport, how are its spoils divided? How is Uber’s cut of revenue determined, and is there any practical limit to it? A recent industrial action by Uber drivers in Australia focused on exactly this issue, demanding that the firm reduce the percentage of fares it kept for itself.

Neither privacy nor distributional concerns come naturally to technocratic reformers. At a recent roundtable discussion on “ethics in artificial intelligence” and “algorithmic accountability,” one convener, seeking to reassure corporate participants, remarked, “We’re not Luddites here — we’re not trying to slow down the collection of data.” I then brought up a recent example of Chinese school surveillance, which monitored students’ facial expressions and bodily comportment in the classroom, producing daily summary reports for teachers on who paid attention the most, who was distracted, who slumped on the desk, and so on. Whatever benefits to learning or even mental health outcomes might result from nonstop videotaping of student facial expressions, I didn’t want that data collected.


The comment had a deflationary impact, as if the smooth momentum toward rational solutions to technology deployment had halted and we were suddenly shipwrecked on an island of fundamental value conflicts. But the conversation soon shifted back to a calmly reformist tone: Algorithms and data could be misused, but they were also responsible for enormous benefits — how could we turn back the clock on them now? Credit scores and Uber ratings have disciplined consumers and workers, pressuring each to be more reliable, polite, and predictable. Why not apply that rationality to schoolchildren at a critical time in their development? The path to reform must run through technology, not through attempts to limit its use. More accurate and comprehensive data will set us free.


The dispute over how to reform or restrict algorithms is rooted in a conflict over to whom algorithmic processes should be accountable. If it is accountability to a community of engineers and technocrats, it will usually mean more comprehensive data collection to produce less biased algorithms. If it is accountability to the public at large, there are broader issues to consider, such as what limits should be placed on these tools’ use and commercialization, and whether they should be developed at all. Technology-intensive firms (and the researchers they fund or support) tend to think of algorithmic accountability as a limited and technical project, while social critics challenge the underlying logic of applying algorithms to social situations and conditions. The narrowest conceptions of accountability can themselves be treated algorithmically, while the broader conceptions demand political engagement and social change.

The legal scholar Edward Rubin has defined accountability as “the ability of one actor to demand an explanation or justification of another actor for its actions, and to reward or punish the second actor on the basis of its performance or explanation.” That means that accountability is transitive: Firms and governments have to be accountable to some person or community. But tech companies and their critics have different ideas of who the developers of algorithmic tools should be accountable to. Firms assume that the demand for accountability must be translated in some way into computer science, statistics, or managerialist frameworks, where concerns can be assuaged by a tweak of a formula or the collection of more data. But computer scientists and statisticians do not represent everyone’s concerns and are only one part of the community to whom tech companies should be accountable. Other academic fields have much to offer, particularly when they are rooted in the lived experience of people marginalized by the digital turn.

Social theory, critical race theory, and feminist theory can all help construct a more inclusive and critical conception of algorithmic accountability. In Algorithms of Oppression, Safiya Umoja Noble tracked search engines’ representation of black women and found disturbing evidence of sexist and racist overtones in search results. Algorithmic representations of people were all too vulnerable to manipulation by racists or to bias arising from a disproportionate number of salacious searches. Noble’s work is not bogged down in efforts to score search results as either “legitimate” or “illegitimate,” valid or outliers, as a corporatized view of algorithmic accountability might demand.

Instead, she reframes search results as a question of social justice and sociological investigation, rather than a mere business or technical problem of optimized relevance and profit maximization. Rather than acceding to demands to make her case for accountability in a computationally tractable way — for example, by delineating procedures for sorting and ranking the data that influences results — Noble returns to the roots of the accountability movement, insisting that the owners of algorithmic systems ensure that they perform in ways that relevant communities can identify as fair. This demand is part of a larger recognition that platforms such as large search engines are just as much media companies (where questions of meaning and representation have always been prominent) as they are communications intermediaries (where the problem of connecting one user to others has traditionally been treated as a question of optimized technical protocols).

What might constitute that “fairness” is yet another front in the battle for algorithmic accountability, one less about improving existing processes than about developing entirely new ones. Standards and issues of accountability are always in play and always in need of debate and reassessment. No algorithmic system can circumvent the necessary and endless conversations that these ultimately political and moral questions demand. That is one reason my recent work has focused on the balance of power among the different professions and interest groups that identify and address algorithmic accountability concerns. We cannot code solutions here, but we can ensure more diverse and inclusive conversations about how algorithmic assessments should work, and about their proper extent.

Complementing Noble’s focus on the private sector, Virginia Eubanks’s Automating Inequality identifies profound problems in governmental use of algorithmic sorting systems. Eubanks tells the stories of individuals who lose benefits, opportunities, and even custody of their children thanks to algorithmic assessments that are inaccurate or biased in profound ways — but she does not recommend policy changes amenable to simple software redesign or user-interface improvement. Were we to approach the problem from a purely technical perspective, we might promote more and better data gathering about the struggling individuals she describes, to ensure that they are not misclassified. But Eubanks argues that complex benefits determinations are not something well-meaning tech experts can “fix.” Instead, the system itself is deeply problematic, constantly shifting the goalposts (in all too many states) to throw up barriers to accessing care.

Scholars like Noble and Eubanks need to be at the center of future conversations about algorithmic accountability. They have exposed deep problems at the core of the political economy of information, rooted in data-driven social control. They diversify the forms of expertise and authority that should be recognized in the development of better socio-technical systems. And they are not afraid to question the goals — and not simply the methods — of powerful firms and governments, foregrounding the question of to whom algorithmic systems are accountable.

Our practices of accountability can sometimes be made fairer by becoming more algorithmic. But leading practitioners of algorithmic approaches to social order have made their fortunes via complicity with unjustifiable hierarchies of wealth, power, and attention. An algorithmic accountability movement worthy of the name must challenge the foundations of those hierarchies, rather than content itself with repairing the wreckage left in their wake.

Frank Pasquale is Professor of Law at Brooklyn Law School, and author of New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press, 2020).