Fair Warning

For as long as there has been AI research, there have been credible critiques of the risks of AI boosterism

The tech industry currently holds unprecedented power and influence. Its companies command vast market capitalizations, employ hundreds of thousands of workers, and reshape a range of other industries to accommodate their prerogatives, when they don't absorb those industries outright. The industry's international political reach is expanding, as companies intervene in elections and influence policy and regulation through millions of dollars spent on lobbying. On the academic front, work in tech-related fields such as artificial intelligence and machine learning secures funding with relative ease, its merit and necessity seemingly taken for granted. Directly or indirectly, tech companies continue to invest handsomely in creating an attractive image of the industry, of the hackers behind the code, and of the technologization of society in general.

What emerges from this is a portrait of technology as inevitable progress that must, despite its inevitability, be fully embraced without hesitation. For the tech evangelist, artificial intelligence research is self-evidently necessary, the next triumph in the ever-rising pyramid of human achievement and progress. In this discourse, AI allows humans to surpass their own limitations, biases, and prejudices. This view prefers to imagine worst-case scenarios in science-fiction terms: Will AI take over humanity? Will we ever create sentient machines, and if so, should we give them rights on a par with human beings? These First World armchair contemplations, far removed from current concrete harms, preoccupy those who supposedly examine the moral dimensions of AI.

The truth, however, is that the tech industry hardly concerns itself with human welfare and justice. Its practices have been starkly opposed to protecting the welfare of society's most vulnerable, whether it's prohibiting employees from protesting against exploitation of the LGBT community, or protecting white men (but not black children) from hate speech, or treating its low-paid workers poorly, or spending millions of dollars lobbying against regulations that protect the disenfranchised and vulnerable, or developing such despicable technologies as a "wife tracking app."

This is not a matter of the industry becoming more conservative as it has assumed more power and has more to gain from preserving the status quo. For all its celebration of disruption and innovation, the tech industry has always tended to serve existing power relations. In a 1985 interview with The Tech, MIT's student newspaper, Joseph Weizenbaum — the computer scientist who developed ELIZA, the first chatbot, in 1964 — pointed out that "the computer has from the beginning been a fundamentally conservative force … a force which kept power or even solidified power where it already existed." In place of fundamental social change, the computer allows technical solutions to be proposed that leave existing power hierarchies intact.

Weizenbaum had recognized this pattern decades earlier. During the 1950s, he helped design the first computer banking systems in the U.S. for Bank of America, computerizing the banking process as banks faced rapid growth. He saw firsthand how the introduction of computers allowed such institutions and their priorities to remain largely as they were, rather than prompting social change, decentralization, or any other structural response to that growth.

But Weizenbaum's turn toward critique started with the reception of ELIZA, which he built to imitate Rogerian therapy (an approach that often relies on mirroring patients' statements back to them). Although he was explicit that ELIZA had nothing to do with psychotherapy, others, such as Stanford psychiatrist Kenneth Colby, hailed it as a first step toward finding a potential substitute for psychiatrists. Weizenbaum's colleagues, who supposedly had a sophisticated understanding of computers, enormously exaggerated ELIZA's capabilities, with some arguing that it understood language. And people interacting with ELIZA, he discovered, would open their hearts to it. He would later write in his book Computer Power and Human Reason: From Judgment to Calculation (1976) that he was "startled to see how quickly and how very deeply people conversing with ELIZA became emotionally involved with the computer and how unequivocally they anthropomorphized it." He would ultimately criticize the artificial intelligence project as "a fraud that played on the trusting instincts of people."
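To appreciate how little machinery lay behind that emotional involvement, consider a minimal sketch of ELIZA-style mirroring (a hypothetical illustration, not Weizenbaum's original implementation), in which a handful of keyword rules and pronoun swaps are enough to reflect a person's own words back as questions:

```python
import re

# Hypothetical, minimal sketch of ELIZA-style Rogerian mirroring.
# Not Weizenbaum's original program: just enough keyword matching and
# pronoun swapping to show how thin the "understanding" really is.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    # Apply the first rule whose keyword pattern matches; otherwise fall back
    # to a canned, content-free prompt.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```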

Computer scientists then (and now) shared the fantasy that human thought could be treated as entirely computable, but in Computer Power and Human Reason, Weizenbaum insisted on crucial differences between humans and machines, arguing that there are certain domains that involve interpersonal connection, respect, affection, and understanding into which computers ought not to intrude, regardless of whether it appears they can. “No other organism, and certainly no computer, can be made to confront genuine human problems in human terms,” he wrote.

Upon its advent at the 1956 Dartmouth Workshop, "artificial intelligence" was conceived as a project tasked with developing a model of the human mind. Key figures such as John McCarthy, Marvin Minsky, and Claude Shannon, now considered the pioneers of AI, attended the workshop and played a central role in developing AI as an academic field. To researchers inspired by the idea of the Turing machine and equipped with computer programming, a machine that simulates human intelligence seemed a natural next step. But as the AI project has progressed, it has gradually become less about simulating the human mind and more about creating financial empires.

From the beginning, this enterprise has been epitomized by the attitude that the hacker behind the code can produce a solution for any given problem, that in fact he (yes, always a he) alone is capable of doing so. This eventually paved the way for the hacker culture that ultimately spawned the likes of Bill Gates, Jeff Bezos, and Mark Zuckerberg — "the Know-It-Alls," as journalist Noam Cohen labeled them in 2017 in his book about Silicon Valley's rise to political prominence. Although the boundaries between AI as a model of the mind and AI as a set of surveillance tools are blurry in the current state of the field, there is no question that AI is now a tool for profit maximization.

Weizenbaum, initially part of the project to simulate human thought, came to see that approach as resting on a gross misunderstanding of humans as mere “information processing systems,” and began to warn against the “artificial intelligentsia” promoting that agenda. In Computer Power and Human Reason, Weizenbaum insists that “humans and computers are not species of the same genus,” since humans “face problems no machine could possibly be made to face. Although we process information, we do not do it the way that computers do.” Even to ask the question, he argues, of “whether a computer has captured the essence of human reason is a diversion, if not a trap, because the real question — do humans understand the essence of humans? — cannot be answered or resolved by technology.”


Beyond being skeptical about the prospects for an "intelligent machine," Weizenbaum also recognized how computers were beginning to be invoked as an easy way out of complex, contingent, and multifaceted challenges. This attitude — now widespread — was particularly evident in the education field. In the 1985 interview with The Tech, Weizenbaum was asked about the benefits of having computers in the classroom. He promptly dismissed the question as wrongheaded and "upside-down," loaded with unwarranted assumptions. If bettering education is at stake, Weizenbaum replied, then the question should begin with "what education should accomplish and what the priorities should be" and not "how computers can be used in the classroom."

Once the emphasis is shifted to educational goals, a different set of more far-reaching questions is necessarily raised, about how and why schools fail to address these priorities. Among the reasons such questioning might uncover are students coming to school hungry, or coming from a milieu in which reading is regarded as irrelevant to the concrete problems of survival. We might then ask: Why is there so much poverty in our world, especially in large cities, and even in supposedly prosperous countries like the U.S.? Why are classes so large? Why are fully half the science and math teachers in the U.S. underqualified and operating on emergency certificates? These questions, Weizenbaum argues, would reveal that "education has a very much lower priority in the United States than do a great many other things, most particularly the military." The issue is not a shortfall of technology in education but a host of contingent factors, including ever-widening systemic inequality.

But rather than confront these “ugly social realities,” Weizenbaum says, “it is much nicer, it is much more comfortable, to have some device, say the computer, with which to flood the schools with, and then to sit back and say, ‘You see, we are doing something about it.’” Bringing the computer into the classroom deludes us into thinking that we have solved a problem when in fact we are hiding it and misplacing its root causes.

Today, this flawed approach of turning to computational tools such as software, algorithms, and apps has become default thinking across Western society and, increasingly, the Global South. In the education field alone, computers and other surveillance tools are put forward as a solution to the student dropout crisis and to the supposed lack of student attentiveness, reflecting a pervasive attitude that aggressively pushes the computer as an inevitable part of learning. Implementing these technologies bypasses confronting ugly social realities — the financial challenges and extracurricular workloads that deplete students' attention or lead to their dropping out. These factors could be better understood not through more surveillance and pervasive tech but by talking to students directly and by approaching the challenges they face as multifaceted and structural.

But when one is steeped in tech-solutionist discourse, the first step of consulting those at the receiving end of some technology is not so obvious. It might even seem irrelevant. The AI field continues to be marked by utopian visions and immense optimism, and discussions of moral responsibility and structural power dynamics are taxing — tiring even for the dedicated humanitarian. In the face of optimism, "potential," and excitement, continually pointing out negative impacts is rarely rewarded. In fact, at times, such work can be perceived as a threat to corporations, which results in punishment, retaliation, or the suppression of dissenting voices. It is far easier to preach progress, jump on the AI bandwagon, or be a "game-changer." The challenging work of examining inaccuracies, harms, and false claims and promises, on the other hand, casts one as a Luddite. Yet back in 1985 Weizenbaum was already arguing that "it is not reasonable for a scientist or technologist to insist that he or she does not know — or can not know — how [the technology they are creating] is going to be used."


Many of the "problems" in the social sphere are moving targets, challenges that require continual negotiation, revision, and iteration – not static and neat problems that we can "solve" once and for all. But the problem-solving attitude is so ingrained within the computational enterprise, a field built on "solving problems," that every messy social situation is packaged into a neat "problem → solution" formula. In the process, challenges that cannot be formulated as neat "problems" are either left behind or stripped of their rich complexities.

In 1972, in an article for Science, Weizenbaum called attention to how the AI field masked its fundamental conservatism with a blend of optimistic cheerleading and pragmatic fatalism. This could be found in “the structure of the typical essay on ‘The impact of computers on society,’” of which he offered this description:

First there is an “on the one hand” statement. It tells all the good things computers have already done for society and often even attempts to argue that the social order would already have collapsed were it not for the “computer revolution.” This is usually followed by an “on the other hand” caution which tells of certain problems the introduction of computers brings in its wake. The threat posed to individual privacy by large data banks and the danger of large-scale unemployment induced by industrial automation are usually mentioned. Finally, the glorious present and prospective achievements of the computer are applauded, while the dangers alluded to in the second part are shown to be capable of being alleviated by sophisticated technological fixes. The closing paragraph consists of a plea for generous societal support for more, and more large-scale, computer research and development. This is usually coupled to the more or less subtle assertion that only computer science, hence only the computer scientist, can guard the world against the admittedly hazardous fallout of applied computer technology.

This same pattern persists in many articles about emerging technologies: The potential achievements are applauded, while the dangers are regarded as further proof that the technology is desperately needed, along with more generous societal support.

Today, the rhetorical pattern that Weizenbaum decried looks like this among AI's current boosters: A machine-learning model is put forward as doing something better than humans can, or as offering a computational shortcut to a complex challenge. It receives unprecedented praise and coverage from technologists and journalists alike. Then critics will begin to call attention to flaws, gross simplifications, inaccuracies, methodological problems, or limitations of the data sets. For almost all machine-learning models deployed within the social sphere, the accuracy of the proposed solution will be shown to be grossly inflated, and in some cases the model will prove harmful and discriminatory. Individuals who experience that discrimination will take to social media. In fields such as medicine, neuroscience, or psychology, where the model supposedly provides a "cutting-edge" solution, historians will point out how the particular technological approach revives long-discredited, pseudoscientific practices like phrenology or eugenics. Domain experts (be it in medicine, social care, or cognitive science) will expose the lack of nuanced understanding of the problem.

But the outrage and the calls for caution and critical assessment will be drowned out by promotion of the next great state-of-the-art tech "invention," by cries to emphasize the potential such tech holds, and by the championing of further technological solutions to the problems that previous tech solutions brought about in the first place.

Among the standard justifications for developing and deploying harmful technology is the claim of its inevitability: It's going to be developed by someone, so it might as well be me. See, for example, the reasons offered by the researchers who tried to develop algorithms to identify sexual orientation. In his 1985 interview, Weizenbaum rejected such reasoning as absurd, claiming it is like saying, "it is a fact that women will be raped every day and if I don't do it, someone else will so it might as well be me."

Another justification is to dismiss the limitations, problems, and harms as minor issues compared with the advantages and potential. When confronted, for example, with the knowledge that data brokers and tech companies collect huge amounts of data on us, apologists may try to dismiss it as just a matter of targeted ads. How bad can it be? You can always ignore them. This may be true if one is in a privileged, nonmarginalized position. But “targeted ads” have deeper and more insidious consequences for those without such privileges — for some, “targeted ads” mean unjust exclusion from housing, education, or job opportunities.

Adopting an AI or machine-learning "solution" rather than a more comprehensive approach to social issues remains widespread. It can be seen in "technology for social good" initiatives, which reduce intricate geo-sociopolitical and cultural challenges to formal code and prioritize technological solutions. It is also evident in automated decision-making within welfare systems (as Virginia Eubanks has detailed), which are interwoven with countless contingent socioeconomic factors that are either seen as inconsequential or ignored altogether. It is seen in algorithmic approaches to mental health issues, which require the utmost sensitivity and delicacy rather than unilateral interventions and gross simplification of nuance and context. And it is becoming an integral part of criminal justice systems and policing. It is so prevalent that some view it as a legitimate intervention into geopolitical conflicts (such as the dispute over the border in Northern Ireland), a substitute for political will and decades of negotiation.

Technology has become the almighty hammer with which to bash every conceivable nail. And even when its overinflated capabilities, limitations, and harms are brought to the fore — the injustice it perpetuates, the protections and privacies it erodes — the response is not to confront ugly social realities and ask meaningful questions such as "Is this piece of tech needed in the first place?" Rather, what often happens is a call for more data, further tech, and a pledge to highlight the potential for good, just as Weizenbaum noted in 1972. Any notice of the limits of technological solutions is followed by a "plea for generous societal support for more, and more large-scale, computer research and development."

The arguments about technology in this essay are not new, but history has shown they still need to be reemphasized and reiterated. Fortunately, in light of the repeated exposure of Silicon Valley's insidious motives and unprecedented power, individual people, especially black women (both from within and outside the industry), continue to challenge powerful tech empires. You'll have read this piece nodding your head in agreement — if you got this far. However, the pattern that Weizenbaum described persists. The points raised here and the call for caution and critical engagement may very well disappear into the background, replaced by hype for the exciting new state-of-the-art tech that will appear tomorrow.

Abeba Birhane is a PhD candidate in Cognitive Science at University College Dublin. Her interdisciplinary research, which sits at the intersection of embodied cognition, digital technology studies, and critical data science, explores the dynamic and reciprocal relationships between individuals, society, and digital technologies. She is a contributor to Aeon magazine and blogs regularly about cognition, AI, ethics, and data science.