Predictive text reduces writing to a science

“The art of writing is now also a science,” claims Textio, a startup that automates the writing of job advertisements. The platform uses artificial intelligence to scan millions of job ads, collecting and crunching “language performance data” so that users “can stop guessing at what makes good writing.” Textio is just one of a number of services and applications — including Grammarly, Acrolinx, and Lightkey — that claim to help you write better and faster with the help of algorithms, promising that the “right” words are always within reach, like radio frequencies waiting to be found.

Most of us are intimately familiar with some version of word prediction AI. Many of these technologies pop up daily in our conversations, whether we’ve asked them to or not. Google Smart Reply, which debuted in 2017, “reads” email and suggests three short responses you can deploy with a single click. In 2018, Smart Compose followed, first in Gmail and then in Google Docs. It prompts us as we write, readily offering predictions for what we’ll type next. Microsoft has rolled out a similar feature in Word and Outlook.

Apps like these aim to bring our text in line with an algorithmic norm. They define “good” writing as efficient writing, and according to their marketing campaigns, writing can be efficient because its hallmarks are pattern and repetition. This marketing suggests not only that writing is easily outsourced to AI, but that we’d be foolish not to outsource it. Google is masterful at inventing metrics that tell us just how inefficient we are in the absence of its products. In 2018, for example, Smart Compose was said to have “saved people from typing over 1B[illion] characters each week—that’s enough to fill the pages of 1,000 copies of Lord of the Rings.” Several months later, Gmail reported that this figure had doubled, making a powerful bid for the time-saving superpower of its technology.
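
A quick back-of-envelope calculation shows what that equivalence actually implies. Only the two stated figures come from Google; the characters-per-page estimate below is my own rough assumption, used purely for scale.

```python
# A rough reading of Google's comparison. Only the two stated figures are
# Google's; the characters-per-page estimate is an assumption of mine.
chars_saved_per_week = 1_000_000_000   # "over 1B characters each week"
copies_of_lotr_claimed = 1_000         # "1,000 copies of Lord of the Rings"

chars_per_copy = chars_saved_per_week // copies_of_lotr_claimed
print(f"{chars_per_copy:,} characters per claimed copy")    # 1,000,000

assumed_chars_per_page = 2_000         # roughly a densely printed page
pages_per_copy = chars_per_copy // assumed_chars_per_page
print(f"about {pages_per_copy:,} pages per claimed copy")   # about 500
```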

The assumption here is that writing is labor. And much of the time, it is. Most of us grapple with a constant deluge of email and instant messages, demanding immediate responses and presuming our availability 24/7. If composing an email takes time, and time is money, saving language is necessarily to our advantage. But while, in theory, a quick, clickable response might cut down our email workload, in practice, it might simply mean more emails requiring a response. These apps misidentify — and exacerbate — the problem at the heart of our workloads. It’s not that we don’t have enough time; it’s that too much time is being demanded of us, and the workday steadily encroaches on the rest of our lives. By normalizing the logic of corporate communication — through platforms we use to communicate with friends as well as colleagues, parents as well as bosses — they exert an insidious influence on the way we communicate, and the way we think about communication itself.


If word prediction technology, to borrow a phrase from Ben Green, solves artificially simple problems instead of addressing complex ones, it also masks other problems inherent in the tech. While Google has released a fair bit of information about how Smart Compose works, its research papers and blog posts have emphasized, more than anything else, the integrity of the algorithm’s design: its parameters, its speed, and its innovation. These informational materials raise more questions than they answer: we learn, for instance, that Smart Compose’s training data comprised 500,000 emails leaked from the Enron scandal and a year’s worth of email sent from Gmail.com. Google, however, has not disclosed any further specifics, stating that “our models never expose user’s [sic] private information.” While this stance appears to be a noble safeguarding of privacy, it elides the question of whose words counted. Whose words does the algorithm value?
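
To make the stakes of that question concrete, here is a minimal sketch, my own illustration rather than Google's architecture, of how any next-word predictor is bound to its training corpus: it can only ever recommend continuations that someone in that corpus has already written.

```python
# A toy next-word predictor (not Google's actual model). Every "suggestion"
# is a reflection of whose text happened to go into training.
from collections import Counter, defaultdict

def train(corpus_sentences):
    """Count which word follows which across the training corpus."""
    successors = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            successors[prev][nxt] += 1
    return successors

def suggest(successors, prev_word, k=3):
    """Offer the k most frequent continuations seen in training."""
    return [w for w, _ in successors[prev_word.lower()].most_common(k)]

# Hypothetical training data: the model cannot recommend anything outside it.
corpus = [
    "please find attached the report",
    "please advise on next steps",
    "please find attached the invoice",
]
model = train(corpus)
print(suggest(model, "please"))    # ['find', 'advise']
print(suggest(model, "attached"))  # ['the']
```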

This question matters, not least of all, because Smart Compose was purportedly developed to support more than 1.4 billion users. This means that when the algorithm reaches into the deep recesses of its database in order to make a prediction, it is searching for words on behalf of a significant fraction of the global population. According to Google, developing an algorithm to function with this objective at this scale meant “do[ing] extensive testing to make sure that only common phrases used by multiple users are memorized by [the Smart Compose] model.” But Google itself defines “common,” and along all sorts of dimensions — not only the number of users employing a word or phrase, but location, context, the phrase’s appropriateness in the corporate world, and so on. In other words, the definition of “common” is actually quite unique.
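
Google has not published what its commonness test looks like, but even a crude version, sketched below with invented data and a threshold of my own choosing, shows how such a filter acts as a gatekeeper: phrasings used by enough distinct writers survive, while everything else, however meaningful to the person who wrote it, is discarded before the model can ever recommend it.

```python
# A sketch of one way a "common phrases" filter might work. The threshold
# and the sample mail are my assumptions, not Google's published method.
from collections import defaultdict

def extract_phrases(text, n=3):
    """Yield every n-word phrase in a message."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def common_phrases(messages_by_user, min_users=3, n=3):
    """Keep only phrases seen across at least `min_users` distinct users."""
    users_per_phrase = defaultdict(set)
    for user, messages in messages_by_user.items():
        for message in messages:
            for phrase in extract_phrases(message, n):
                users_per_phrase[phrase].add(user)
    return {p for p, users in users_per_phrase.items() if len(users) >= min_users}

# Hypothetical data: boilerplate used by many writers clears the bar;
# a phrasing distinctive to one writer never does.
mail = {
    "a": ["hope you are well and thanks for the update"],
    "b": ["hope you are well following up on the invoice"],
    "c": ["hope you are well just checking in"],
    "d": ["top of the morning to you and thanks again"],
}
print(common_phrases(mail, min_users=3))
# {'hope you are', 'you are well'}  (set order may vary)
```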

This tech takes for granted that “common” is both uncontestable and desirable. This is what I call the commonness imperative, and it should be easier than ever to recognize it as a colonial effort. This is the Global North — more specifically, Silicon Valley — making the rules for what constitutes prediction-worthy (read: recommended) language. It looks like several historical efforts to the same end: the delegitimization of orality, the construction of a Queen’s English, and the derision of AAVE, to name a few. All of these efforts have been premised on certain assumptions about whom language belongs to and what it should do.

Algorithmic systems that manage language, and that are created and maintained by profit-driven companies, have serious and often intimate consequences. Safiya Noble, author of the influential book Algorithms of Oppression, writes that language-driven algorithms impact not only how we conceptualize information, but also how we conceptualize ourselves and the world around us. The example Noble begins with is a Google search for “Black girls,” which in 2012 returned her a list of pornography sites. She consequently embarked on a years-long journey to demystify the politics of search and to urge users to understand that what we find online is not uncontestable. The top search results are often at the top because people pay for them to be there, not because they’re what everyone looks for, or says. But search algorithms’ pretense of objectivity, Noble argues, reproduces racism, sexism, and classism in ways that are particularly insidious because many people believe that Google search yields facts, not arguments.


Search and Smart Compose are programs that run on market logic. But what if we imagined something different? What if we abandoned the commonness imperative, embracing the infinite range of contexts and relationships an online interaction might indicate, or establish? What if we quit designing as though self-optimization and speed were the only goals worth pursuing? Perhaps, instead of offering “Regards” as your sign-off, such an algorithm would inform you that you’ve exchanged 26,000 words with your interlocutor over several years, and you could call attention to the care you’ve both put into that. Perhaps it could remind you that you’re writing on the anniversary of his mother’s death, and prompt you to acknowledge it. Maybe this algorithm could flag all the things you hate saying but nonetheless do, and you would take a moment to rethink and rephrase. What word prediction algorithms, in their current form, prompt us to say is not synonymous with what we should.
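
None of this exists as a product, but as a thought experiment it is not hard to sketch. The functions and data below are entirely hypothetical, meant only to show that the raw materials for a more relational kind of prompt (word counts shared with a correspondent, remembered dates, phrases we have asked ourselves to stop using) are no more exotic than the ones that power Smart Compose.

```python
# A hypothetical sketch of the assistant imagined above; not an existing tool.
from datetime import date

def words_exchanged(thread):
    """Total words two correspondents have exchanged in a thread."""
    return sum(len(message.split()) for message in thread)

def flag_phrases(draft, phrases_to_rethink):
    """Return the stock phrases in a draft that the writer wanted to avoid."""
    return [p for p in phrases_to_rethink if p in draft.lower()]

def anniversaries_today(dates_to_remember, today=None):
    """Surface any remembered dates that fall on today's month and day."""
    today = today or date.today()
    return [label for label, d in dates_to_remember.items()
            if (d.month, d.day) == (today.month, today.day)]

# Hypothetical inputs such an assistant might draw on.
thread = ["Thanks for sending this through, I will read it tonight.",
          "No rush at all, take the time you need."]
print(words_exchanged(thread))                      # 19
print(flag_phrases("Regards, just circling back on this.",
                   ["regards", "circling back"]))   # ['regards', 'circling back']
print(anniversaries_today({"his mother's death": date(2015, 3, 14)},
                          today=date(2024, 3, 14))) # ["his mother's death"]
```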

Algorithms targeting speed and efficiency are not only rife with problematic assumptions; they also uphold a view of human communication as mathematical and mundane. To see language this way is to become hostage to it. But maybe we could conceptualize an algorithm that sought not to predict the most common words and phrases but to help us practice empathy, alliance, curiosity, community, and relationships. Designing algorithms to different, value-based, and relationship-building ends would require us to think much harder about what we want to do with language, rather than rehearse the things it does to us.

Crystal Chokshi is Associate Director of the Environmental Media Lab and a PhD candidate in Communication and Media Studies at the University of Calgary. Her work explores the stakes of letting AI decide what we say.