Look Who’s Talking

How tech companies understand “clarity” reveals what they assume about users

First we learned to speak how computers seemed to want us to. Best pizza Chicago, we might type into the search bar, dropping yet another preposition. Hit submit, someone might say, punctuating a dreary task with a rhyme that sounds like two verbs but isn’t. Now computers sound increasingly like people. This isn’t just a matter of Alexa and its competitors, or the chatbots that respond when you message brands. While the technologies that enable human-machine “conversation” — speech synthesis, natural language processing — have advanced a lot over the past decade, engineering alone can’t yet breathe life into most interfaces. What is human-like in them is often due to the way they are written.

User-experience (UX) design is the process of ensuring that websites, apps, and other products meet the needs of their intended users, at least to the extent that those needs can help generate revenue. Discussions of UX tend to focus on how things look (the color of a button, the placement of a menu) and how they work (what happens when you click or tap). But the way they sound or read is increasingly just as meticulously designed. UX writers — who may also have titles like product writer, content designer, or content strategist — do a lot of different things depending on where they work. They might write help content, emails, or error messages (“We’ve seen this problem clear up with a restart of Slack, a solution which we suggest to you now only with great regret and self-loathing”). They might script digital assistants and chatbots (“What can I help you with?”). They might compose the text that tells you what will happen if you click on a link, button, or menu. (Perhaps you’ve noticed that “learn more” is losing ground to more specific pleas like “read the full story.”) When you use a consumer app, website, or device created by a large technology company, a UX writer has probably had a hand in the words you’re interacting with.

One of UX writing’s main directives is to make interfaces sound “human,” “natural,” or “conversational.” Amazon’s instructions for people writing Alexa scripts include tips like, “Use variety to inject a natural and less robotic sound into a conversation and make repeat interactions sound less rote or memorized.” Across organizations, guidelines urge the use of contractions, first- and second-person pronouns, and words that people on the other side of the device might say. Recently, when a cash register expressed hope that I’d had a good “Target run,” I thought, Hey, that’s what my mom calls it too! The writer for the self-checkout kiosks had hit their mark.

Sara Wachter-Boettcher, in her 2017 book Technically Wrong, dates the current “talk-like-a-human movement in tech products” to 2011. This was the year Apple launched Siri and IBM’s Watson won Jeopardy! But Wachter-Boettcher doesn’t mention these events; she thinks the email marketing company Mailchimp kindled the trend with its then-new voice and tone guidelines. The guidelines, which soon started “popping up in countless conference talks as the way forward for online communication,” framed Mailchimp’s relationship with customers as explicitly conversational. They used speech bubbles to demonstrate the tone the company’s writers should take for a variety of content types. The idea was that Mailchimp should modulate its voice, as a person would, depending on the situation of whomever it was talking to. All the while, the voice itself should remain recognizable. Mailchimp defined this voice as having eight consistent parameters, the first two of which were “fun but not childish” and “clever but not silly.”

Other apps and websites followed suit — many with voices more obtrusive than Mailchimp’s. Critics zeroed in on the silliness. In a 2016 essay for Real Life, Jesse Barron pointed out that apps like Yelp and Seamless spoke in the voice of a “cool babysitter,” practicing “cuteness applied in the service of power-concealment.” Wachter-Boettcher makes similar critiques of pop-ups with “cleverly” passive-aggressive opt-outs (“No thanks, this deal is just too good for me”) and notifications that are inappropriately “fun” (like Tumblr’s infamous “Beep beep! #[trending tag] is here!” — which notified users about tags like #neo-nazis and #mental-illness). Clever copy, she writes, “wraps tech’s real motives — gaining access to more information, making us more dependent on their services — in a cloak of cuteness that gently conditions us to go along with whatever’s been plunked down in front of us.” Like we’re being run over by a pink bulldozer, as my friend Monica put it.

But the bulldozer is no longer quite so pink. While cuteness is not dead (Tumblr still uses “beep beep”), it is waning. In 2013 and 2014, the email confirmations I received from Grubhub — Seamless’s parent company — began “Hip Hop Hooray! A whole heap of delicious is on the way.” By 2017, the messages went straight to the point: “We’ve confirmed your delivery order from [restaurant]. Your food should be ready by [time].” Groupon made a similar switch. Once known for voicey coupons, it now describes deals with a single plain-faced sentence. Like most trends, the interface speech patterns of the early and mid-2010s have come to seem dated. (“Hip hop hooray”?) Perhaps more significant, plainer writing in apps and on websites better approximates the sort of face-to-face conversations it replaces. If we’re helping someone buy something, or rendering them a service, we’re probably not — with some exceptions — trying to show off our inner selves. Wachter-Boettcher quotes Mailchimp executive Kate Kiefer Lee, who says she thought the company had been “trying too hard to be entertaining” and was now focusing “on clarity over cleverness and personality.” (Mailchimp’s voice has since gotten an overhaul.)

If a company’s voice expresses how it thinks of itself, its definition of “clarity” expresses how it thinks of its users. UX writers tend to follow “human-centered” design practices, which — depending on the situation and your perspective — aim to solve problems people actually have or convince them they have problems in need of solving. The way for writers to accomplish either goal is usually to make things “clearer.” Clear language can and often does make apps less frustrating, forms more inclusive, and websites more accessible to more people. But “clarity” can also structure how we think about technology without our understanding or noticing. To warn UX writers against using this power irresponsibly, the handbook Writing is Designing (2020) by Michael J. Metts and Andy Welfle quotes information architect Jorge Arango: “If you are the person who controls the form of the environment by defining its boundaries through language,” you can ensure that “persuasion will happen without me even knowing it’s happening.”

However well-intentioned an individual writer might be, the main point of this persuasion is almost always going to be profit. In a foreword for the 2012 edition of the popular handbook Content Strategy for the Web, Facebook’s first content strategist, Sarah Cancilla, mentions that she revised a few links at the bottom of the homepage to make them “clearer and more compelling.” The increased clarity appears to have come largely from placing “Mobile” and “Find Friends” at the beginning of the list, and it’s telling what Cancilla cites as proof of this: “six million more people found friends, invited friends, and tried Facebook Mobile every week, purely as a result of those tiny improvements.” For UX writing teams, “clarity” often means making what the company wants you to do — try mobile, find friends — the clear choice for you.

“Clarity” promotes not only corporate-friendly actions but corporate-friendly (i.e., superficial) understandings of the world. For example, Metts and Welfle praise Pinterest’s terms of service for including “a summary of each section in simple terms, to help users understand what they’re agreeing to.” To illustrate, they excerpt a paragraph of legalese that ends with a “more simply put” translation, which reads: “Pinterest has links to content off of Pinterest. Most of that stuff is awesome, but we’re not responsible when it’s not.”

Like a lot of “clear” writing found in apps and on websites, these sentences seem more likely to make people feel like they understand than to ensure they actually do. “Most of that stuff is awesome” does not have the same implication as what the legalese actually says: “we don’t endorse” any of it. “Risk,” the legal term, is not the same as mere lack of awesomeness. And the replacement of “no liability” with “we’re not responsible” (instead of something like “you won’t be able to sue us”) makes the stakes for the user less clear, not more, preemptively replacing whatever you might have thought with what the company wants you to think.

Of course, “clarity,” as Metts and Welfle acknowledge, “is in the eye of the beholder.” As is usefulness. As is what sounds “human.” These concepts — as they appear in style guides, how-to articles, and guidebooks for UX writers — are often presented as universal ideals. Like mainstream journalistic “objectivity,” however, they are not the neutral values they pretend to be. What “human” and “clear” share as interface-writing values is not only that they find expression in common words and simple sentences but that they can easily become vessels for the kind of false universality that merely imposes one powerful group’s interests on everyone else and asks them to feel good about it.

This kind of problem has been embedded in personal computing from the start. Cynthia L. Selfe and Richard J. Selfe Jr.’s 1994 article “The Politics of the Interface: Power and Its Exercise in Electronic Contact Zones” details how interfaces can reinforce sexist, racist, classist, and colonialist hierarchies. They observe, for example, that by using the desktop as its organizing metaphor, the personal computer fails to “represent the world in terms of a kitchen countertop, a mechanic’s workbench, or a fast-food restaurant — each of which would constitute the virtual world in different terms according to the values and orientations of, respectively, women in the home, skilled laborers, or the rapidly increasing numbers of employees in the fast-food industry.”

Since then, biased interfaces have abounded. Wachter-Boettcher’s book, subtitled “Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech,” is a useful compendium of recent examples: The Etsy push notification that assumed women would want to shop “for him” on Valentine’s Day. The Facebook “real name” requirements that deemed many Native American names fake and endangered drag queens and kings. The many forms that still needlessly require you to choose one of two genders. These are the front-end counterparts of the racist and sexist algorithms that have been all over the news for the past decade or so: The predictive software that erroneously links race to the probability of committing a crime. The search engines that for years ranked porn sites at or near the top of results for queries like “black girls” and “Latinas” — and in some cases still do, if the test searches I just did on Bing are any indication. (Google appears to have corrected for this.)

In her 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Umoja Noble argues that racist and sexist search results are so insidious in part because search engines are “allegedly neutral technologies.” Having socialized us into believing that they provide “accurate information that is depoliticized,” they lend credibility to their own harmful results. Google has nurtured its neutral image by de-emphasizing human involvement in search and playing up its reliance on machines. According to a passage Wachter-Boettcher recounts from Stephen Levy’s In the Plex (2011), Marissa Mayer once rejected a design at Google because, in Mayer’s words, “It looks like a human was involved in choosing what went where. It looks too editorialized.”

Today, following a stream of high-profile criticism and media investigations into the disastrous effects of many algorithms, machines might be starting to seem less trustworthy. Since the 2016 U.S. election season, when commentators widely blamed major platforms and their algorithms for the spread of “fake news,” Facebook, Twitter, and Google have all taken steps to minimize disinformation. A 2019 Google white paper describing the company’s efforts emphasizes that humans still never choose where a link appears in search results; the company has merely changed its algorithms to better prioritize authoritative sources. But many of the changes the paper describes — such as the addition of “Knowledge Panels” — make Google’s products look more editorialized.

More recently, the company’s special Covid-19 search results pages have gone further, separating results into clearly labeled categories like “Top stories” and “Local news,” and adding a sidebar menu with links like “Testing,” “Treatments,” and “Prevention.” The overall vibe is that you’re on a site like WebMD or mayoclinic.org. Google achieves this effect with labels that look like they were written by a human and in some cases probably were. In any case, the company seems to have bet that clear and human-sounding products might now convey a more trustworthy form of “neutrality” than products that appear to be produced and overseen entirely by machines.

Meanwhile, as Noble argues, “algorithmic oppression is not just a glitch in the system but, rather, is fundamental to the operating system of the web.” And clear, humanoid voices can legitimate it just as efficiently — perhaps now more efficiently — as mechanical neutrality can. In their chapter on developing voices for companies and products, Metts and Welfle highlight Airbnb, which defines its voice as straightforward, inclusive, thoughtful, and spirited. To show how these qualities are expressed in practice, they offer a screenshot of an Airbnb page about “safety,” including a section on “Watchlist & background checks.” The section reads, in full, “While no screening system is perfect, globally we run hosts and guests against regulatory, terrorist, and sanctions watchlists. For hosts and guests in the United States, we also conduct background checks.”

This probably doesn’t feel inclusive to a formerly incarcerated person whose account Airbnb deactivated after a background check. And it might not feel straightforward to someone who wonders whether the background checking companies Airbnb works with (whose names it has declined to provide) will use biased algorithms in their risk evaluations. The language, while syntactically clear, will be opaque to many people with a stake in the information it pretends to convey. What it actually communicates is that Airbnb prioritizes those who comfortably assume they have no reason to worry that discriminatory practices would exclude them. In this respect, it is all too human.

Megan Marz has written about books, language, and technology for the Baffler, the Washington Post, and other publications.