
Calculating Instruments

Computing has always been sourced from the knowledge of underpaid or unpaid people


Like so much tech jargon, “crowdsourcing” was born in the pages of Wired magazine. It appears in a June 2006 article in which Jeff Howe paints an optimistic, managerialist picture of the labor market that would be created by new communication technologies:

Technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. Hobbyists, part-timers, and dabblers suddenly have a market for their efforts, as smart companies in industries as disparate as pharmaceuticals and television discover ways to tap the latent talent of the crowd. The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing.

Among the examples Howe offers are image repositories that allow amateur photographers to sell stock photos at rates that drastically undercut professionals, a network of hobbyists who are on call to solve scientific and engineering research problems for large companies at a fraction of the cost of an in-house research laboratory, and Amazon’s infamous Mechanical Turk platform, which was then at an early stage and without a clear application.

Howe’s article presented two trajectories for the development of crowdsourced labor: a skilled path, exemplified by amateur scientists solving knotty problems, motivated and fulfilled by intellectual interest and fairly recompensed; and an unskilled path, exemplified by platform workers competing for whatever atomized and mind-numbing tasks requesters post (surveys, image labeling, correcting sentences, psychological experiments).

Technological systems generate a need for crowdsourcing, because the idea of complete automation is an ideological fiction

It is safe to say that we have taken the latter path: Rather than making work more independent and voluntary, crowdsourcing has contributed to work’s devaluation and destabilization. The majority of crowdsourced labor is not decentralized and emergent but administered in a top-down fashion through platforms. It is insecure, poorly paid, repetitive, and unfulfilling. As researchers have documented, the workforce on online platforms like MTurk and Upwork numbers in the millions, despite low pay and precarious work conditions. Tech companies have leaned on this workforce to make their services possible. Mary L. Gray and Siddharth Suri argue in Ghost Work that human piecework invisibly powers many purportedly automated “AI” applications, from image recognition to search engines. Sarah T. Roberts’s Behind the Screen details a similar sleight of hand in commercial content moderation: For years, social media companies have maintained that most of the moderation on their platforms is done by algorithms, despite the reality of human “data janitors” carrying a significant part of the load. Everyday internet use makes everyone an unwitting crowdworker: Instagram harnesses hashtags to produce datasets of labeled images, ReCAPTCHAs are used (somewhat ironically) to train algorithmic systems to perform Turing tasks, and language-learning apps like Duolingo mine users’ responses to improve their datasets.

Crowdsourcing lies at the unhappy intersection of capitalist labor practices and the technology sector’s valorization of artificial intelligence over human skills. Technological systems generate a need for crowdsourcing, because the idea of complete automation is an ideological fiction. Gray and Suri identify the paradox of automation’s last mile: Automation inevitably creates new tasks that are difficult or impossible to automate. Datasets for AI must be made machine-readable by humans, computers must be maintained, and difficult cases must be mopped up. The history of computing reminds us that computing systems often create an increased need for human labor, at least in the short term.

The novelty of the term crowdsourcing can make it seem like it names a new form of exploitation, facilitated by historically unprecedented technological systems. However, the genealogy of computing technologies reveals a long history of exploited and deskilled human labor. These forms of labor emerge because computing technologies are applications of the principles of the division of labor to tasks; as some tasks are automated, more demand for repetitive and atomized human labor is created.

Over the past 10 years, a great deal of crowdsourced labor in tech has been located in the global South (especially South Asia, the Philippines, and East Africa). However, a combination of rising unemployment, weak labor regulations, pandemic-induced precarity, and a lack of access to the traditional labor market (especially for people with disabilities and caring responsibilities) means that crowdsourced work is becoming increasingly prevalent in the U.S. and Europe as well. The fallout from Prop. 22 in California demonstrates just how quickly stable employment can be transformed into precarious gig work.

Writing about technology and work often focuses on the prospect of automation replacing human workers with machines. The history of computing labor provides a useful corrective, reminding us that we should be just as concerned about the dehumanizing character of machine-adjacent work.


The story of the relation between computing technology and human labor often starts with the female clerical workforce at Bletchley Park, with the human computers who worked at NASA, or with the programmers who worked on ENIAC. But for the purposes of thinking about crowdsourcing, we can begin our history of human computing in post-revolutionary France at the close of the 18th century.

Before the revolution, there was no standardized national system of measurement across France, which made it difficult to track land ownership and levy taxes. After the revolution, the National Assembly appointed a commission to address this problem, which yielded not only the metric system still in use today but also a decimal system of grades for measuring angles, which rendered existing trigonometric tables otiose (degrees split the right angle into 90 units; grades split it into 100). It fell to Gaspard Clair François Marie Riche de Prony — the director of the Bureau du Cadastre, the state’s map-making body — to create a new set of metric tables for the trigonometric and logarithmic functions.
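Nothing about the underlying functions changed, but every tabulated argument did. A trivial sketch of the conversion (the function name is my own, for illustration) shows why each entry had to be recomputed rather than copied over:

    def degrees_to_grades(degrees):
        # A right angle is 90 degrees or 100 grades, so scale by 100/90.
        return degrees * 100 / 90

    # 45 degrees lands neatly on 50 grades, but a table computed at
    # whole-degree intervals has entries at 1.111..., 2.222..., ... grades,
    # useless to anyone working in whole grades.
    print(degrees_to_grades(45))  # 50.0
    print(degrees_to_grades(1))   # 1.1111...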

This was a vast task: In Prony’s estimation, finishing the project would take the rest of his days, even with the assistance of four collaborators. As the story goes, Prony was browsing through a bookshop in despair when he chanced on a French edition of Adam Smith’s Wealth of Nations. Opening the book at random, he read Smith’s famous description of the division of labor increasing efficiency in the manufacture of pins. Prony realized the division of labor could solve his problem. In a publication notice from 1820, he describes a plan “to manufacture logarithms as one manufactures pins.”

Mechanizing calculation drastically shifted its social meaning, transforming it from one of the highest expressions of the human intellect to a quasi-mechanical process

In Smith’s pin-making example, making a pin — then an artisanal rather than an industrial process — was divided into 18 different semi-skilled tasks. Prony took the division of labor much further. With the assistance of six mathematicians, he divided the task of calculating a logarithmic function into a series of elementary operations of addition and subtraction. A second team of seven or eight calculators prepared worksheets for carrying out these operations. The drudge work of actually carrying them out was assigned to a third group of calculators, numbering perhaps as many as 100. These calculators had extremely limited mathematical education, and it seems likely that they did their work at home as piecework. Legend has it that many of these workers were recently unemployed hairdressers to the aristocracy, although some seem to have been political refugees sheltering “under the aegis of science,” as Lorraine Daston notes.
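The technique that made this division possible, as standard histories of the project describe it, was the method of differences: Once the mathematicians had fixed a starting value and its finite differences, every subsequent entry in the table follows by addition alone. Here is a minimal sketch in Python (the function name and example are mine, not a reconstruction of Prony’s worksheets); it is also the operation that Babbage’s difference engine, discussed below, would later mechanize:

    def tabulate(initial_diffs, steps):
        # Method of differences: initial_diffs holds a starting value
        # followed by its finite differences, e.g. [f(x0), Df(x0), DDf(x0)].
        # If the highest difference is (near-)constant over the range,
        # which holds exactly for polynomials and approximately for
        # logarithms over short intervals, every later table entry
        # follows by repeated addition.
        diffs = list(initial_diffs)
        table = [diffs[0]]
        for _ in range(steps):
            for i in range(len(diffs) - 1):  # each level absorbs the
                diffs[i] += diffs[i + 1]     # difference one level up
            table.append(diffs[0])
        return table

    # Example: f(x) = x**2 at x = 0, 1, 2, ... has values 0, 1, 4, 9, ...,
    # first differences 1, 3, 5, ..., and a constant second difference of 2.
    print(tabulate([0, 1, 2], 5))  # [0, 1, 4, 9, 16, 25]

For logarithms, the highest-order difference is only approximately constant, so the mathematicians periodically supplied fresh pivot values; between pivots, the third group needed nothing beyond addition and subtraction.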

Mathematically, the project was a surprising success, producing a monumental 19 volumes of tables with an unprecedented degree of accuracy. Financially, it was a failure: The tables were never published. Nonetheless, Prony’s application of the division of labor to calculation quickly became influential. The English polymath Charles Babbage shared Prony’s dissatisfaction with what he called the “ancient methods” of computing tables. In 1821, Babbage agreed to produce a set of tables for the British Nautical Almanac. As was standard practice, he outsourced the work to a pair of computers, who appear to have done a particularly bad job. In exasperation, Babbage expressed “the wish that we could calculate by steam.” He set out to mechanize calculation, drawing inspiration from Prony. In his description of Prony’s project, Babbage likens the bottom class of calculators to unskilled machine operators who must be overseen alongside their machines.

Babbage’s desire to mechanize calculation led to his plan for a difference engine, which would calculate logarithmic functions using a hand-driven crank to move a complex system of gears. Later he made detailed plans for a general-purpose computer, which he called the analytical engine. Before Prony, it would have been difficult to imagine replacing professional calculators with machinery: Calculation was difficult work, and calculators worked short days in recognition of the mental fatigue it induced. But once calculation had been reduced to a set of merely mechanical operations of addition and subtraction that could be performed by a political refugee or a hairdresser, it was much easier to see how the calculators might be replaced by a machine. The division of labor not only deskilled work and workers; it opened the door to tasks being partially automated, furthering the cycle.

Daston argues that mechanizing calculation drastically shifted its social meaning: Calculation was transformed from one of the highest expressions of the human intellect into a quasi-mechanical process. This shift is reflected in the way that Smith, Prony, and Babbage thought of the division of labor. For Smith, the division of labor was a mode of semi-skilled production that would enhance workers’ aptitudes at their allocated tasks. For Prony, the division of labor was a trick to minimize the need for skilled labor (an idea that has come to be known as the Babbage Principle). Prony relates that the fewest calculation errors were made by those “who had the least intelligence, an automatic existence, so to speak.” Babbage takes references to “merely mechanical operations” even more seriously, hoping to transform calculation into an automated process on the model of the newly built factories in Manchester. Far from being a distinctive expression of human capacities, calculation was, in Babbage’s view, “one of the lowest operations of the human intellect.”


With this history in place, we can see that crowdsourcing is not new. Our technologies for calculation and computation — and the way we think about these technologies — have emerged from a history of devalued and deskilled work. This brings home a number of important lessons:

The first is that the need for large-scale calculation and human computing is often driven by the administrative needs of the state. James C. Scott argues in Seeing Like a State that the French National Assembly’s desire for uniform measurement practices was a consequence of its need to make land and citizens legible for bureaucratic processes. Other computing projects were similarly driven by the epistemic needs of the state: In Programmed Inequality, Mar Hicks argues that the development of the computing sector in the UK was driven by the need to administer the welfare state, and in When Computers Were Human, David Alan Grier documents the rich history of government-backed human computing, which reached its zenith with the Mathematical Tables Project, carried out under the auspices of the U.S. Works Progress Administration.

In Scott’s view, states need to make their citizens and land legible in order to govern them. A state that is ignorant about its own citizens — what they own, who they are related to, what taxes they owe — will not be able to enforce property rights, regulate inheritance, or raise taxes on its subjects. When technology platforms harness large amounts of (knowing and unknowing) human labor to process social and behavioral information, they are responding to a similar need to make their own users’ actions legible for governance purposes. This is not to say that technology companies literally are states: There are other commercial reasons to collect behavioral information. But in some respects — particularly the regulation of speech through content moderation and the processing of payments for content creators — technology companies function like states, and like states, their administrative activities generate a need to process large amounts of information about their users.

The second lesson is that workers who carry out the tasks that keep algorithmic systems running are constantly at risk of becoming invisible. In every description of Prony’s project, the contributions of the workers are downplayed in favor of the cleverness of the algorithm they were implementing and Prony’s ingenuity in applying the division of labor to mental operations. The identities of the calculators have been lost, and even the idea that they were unemployed hairdressers is something like a rumor.

This downplaying of “unskilled” human labor in the knowledge economy has a long history. Steven Shapin has uncovered the way in which the 17th-century scientist Robert Boyle effaced the enormous contributions of the invisible technicians who ran his experiments, working the machinery, writing up results, and carrying out observations. Boyle was not unusual in this regard: In period illustrations, technicians are displayed as faceless and interchangeable, or are replaced by putti — cherubs — who illustrate the operation of instruments.

Workers who carry out the tasks that keep algorithmic systems running are constantly at risk of becoming invisible. The identities of the 18th-century calculators have been lost

As with other epistemologies of ignorance, this effacement is produced both by structural factors and complicit observers. In a discussion of automata in the 18th century, Edward Jones-Imhotep writes, “seeing machines as autonomous, then, has historically meant not seeing certain kinds of labor and the people performing it.” He argues that the century’s audiences for automated theaters, mechanical musicians and digesting ducks, and plans for automated factories were complicit in creating ways of seeing that center the “autonomous” operations of machines while simultaneously obscuring the enormous amounts of human labor that enable them.

Drawing on Sarah T. Roberts’s idea of a logic of opacity, Karen Frost-Arnold argues in the forthcoming book Who Should We Be Online? A Social Epistemology for the Internet that commercial content moderators and other digital workers are subject to a similar epistemology of ignorance. The system is complex. It relies on public ignorance about the extent and types of digital labor, on the prevalence of fictions of automation, and on the ways digital labor is outsourced and hidden behind anonymizing APIs. It relies on the patterns of piecework and on systems that have historically rendered domestic labor invisible. And it relies on the complicity of tech workers who understand and use digital work platforms on a day-to-day basis but cultivate ways of seeing that ignore crowdworkers.

A third lesson is that — despite the apparently novel forms of technological exploitation (what Shoshana Zuboff has labeled “surveillance capitalism” comes to mind) — much of the ethical terrain for thinking about crowdsourcing had already been mapped out by earlier theorists observing the emergence of capitalism. As Smith saw in The Wealth of Nations, organizing the workplace around the division of labor threatens a moral disaster: “The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He … generally becomes as stupid and ignorant as it is possible for a human creature to become.” Marx similarly claimed in the 1844 Manuscripts that capitalism “replaces labor by machines — but some of the workers it throws back into a barbarous type of labor, and other workers it turns into machines. It produces intelligence — but for the worker cretinism.”

These warnings are apt, but we should take talk of “stupidity,” “ignorance,” and “cretinism” with a handful of salt. Smith and Marx fall into the trap of confusing symbolically deskilled work with the absence of skill. We can accept that the division of labor both symbolically deskills workers and denies them opportunities to develop skills, without concluding that it makes workers unskilled and incapable of intellectual development. The commercial content moderator understands the on-the-ground challenges of moderation better than their boss, as Roberts argues; the Deliveroo rider understands the vagaries of the workflow algorithm and the challenges of route finding better than the tech workers who maintain that algorithm. And, as Daston points out, even the rote calculation involved in producing mathematical tables requires enormous concentration to avoid errors. Acknowledging crowdworkers’ distinctive skills and knowledge is compatible with recognizing that their work is underappreciated, undervalued, and underpaid.

Part of what stands in the way is the description of digital workers as if they were machines or nonhuman animals. MTurk advertises itself as selling “artificial artificial intelligence,” a seamless assemblage in which “human intelligence tasks” are on par with automated tasks. In a blog post from 2008, Howe describes MTurk as enabling “clients to farm out the kinds of menial clockwork that we all wish computers could do, but can’t,” illustrating the post with a sketch of a primate hunched over a desktop computer.

Treating workers as if they were machines or animals allows us to see them as fungible, replaceable both by other workers and by machines. If we treat content moderators as if they were part of the machinic algorithmic system for regulating social media, we will overlook the fact that people cannot perform this fraught work without emotional and spiritual damage.


Langdon Winner famously asked whether artifacts have a politics. The history of human labor in computing suggests that computing technologies do, and that it is the politics of the extreme division of labor. Crowdsourcing is not a new phenomenon: It is the continuation of deskilled machine-adjacent labor. We should see the workers on Amazon’s Mechanical Turk as pieceworkers in the tradition of Prony’s cadastral calculators; we should see the eyeball factories in East Africa that label images for driverless cars as descendants of the computing offices in Greenwich that produced almanacs for the Royal Navy; we should see projects that use refugees to train algorithms as descended from Prony’s use of political refugees; and we should see citizen science projects like Galaxy Zoo as a development of the Harvard College Observatory.

Work under capitalism has always relied on the atomization of tasks, on dubious hierarchies of “skilled” and “unskilled” tasks, and on systems that obfuscate the nature, extent, and value of labor. The division of labor is not merely a way to make work more efficient; it is a way to symbolically devalue workers, to reduce their autonomy and control over work processes, to simultaneously diminish and obscure their collective practical knowledge, and to dehumanize them by literally and symbolically embedding them within machinery.

If we want to contest the politics of crowdsourcing, we need to contest the technological systems that enable it, the political and economic systems that enable exploitative machine-adjacent labor, and the epistemology of ignorance that masks how serious the problem has been for centuries.

Joshua Habgood-Coote is currently an honorary research fellow at the University of Bristol. He works on epistemology, the philosophy of language, and the philosophy of technology, and has written for Aeon and the Guardian.