During the year I spent running the Twitter account of a brain tumor research institute, my job was, naturally, to post about brain tumor research. Knowing nothing about brain tumor research, I did what I could: copy and paste lines from the researchers’ studies. Any time I wanted to change so much as a word, I had to google it. “Histone deacetylase,” I would write, having looked up what “HDAC” meant, still perfectly ignorant of its scientific significance.

It was always easy to identify which part of a study expressed its main idea. “We conclude,” it might begin. “More research is needed,” it might end. Recognizing and mimicking the form knowledge took didn’t require any understanding of its content. Not understanding actually made it easier to ventriloquize the knowledge I imagined the researchers were generating. I didn’t have to deal with the internal friction I might have felt if I’d been capable of discerning whether the research questions were worthwhile, the studies poorly designed, the scientists unduly influenced by their ties to pharmaceutical companies. Since I hadn’t dipped even a toe into their pool of knowledge, I saw a glassy, unperturbed surface of simple facts.

“There will be only answers,” the writer Marguerite Duras said in 1985 when a TV show asked her to make a prediction about the year 2000. “The demand will be such that there will only be answers. All texts will be answers… about [man’s] body, his corporeal future, his health, his family life, his salary, his leisure. It’s not far from a nightmare. There will be nobody reading anymore.” Duras, who proceeded to complain about screens, was probably thinking of TV. But no medium has transformed texts into answers — obliterating certain kinds of ignorance and camouflaging others — more systematically than search engines.

Searches — and there are billions every day — imply questions, even if they don’t take a question’s form. If I search for “glioblastoma,” I might be wondering: What is it? (The most lethal kind of brain tumor.) How long can a person live with it? (Usually not more than two years.) What does it feel like? (Google can’t really tell me, though my search turned up many lists of symptoms.) It pays to provide these answers: Thousands if not millions of webpages have surely been published for the primary or sole purpose of ranking as a top answer to popular queries. One of the most iconic examples is HuffPo’s extremely brief 2011 article, “What Time Does the Superbowl Start?” — a successful gambit others copied in later years. HuffPo’s article was called an act of “trolling” and an attempt to “game” the system. In fact, it was using the system perfectly. When the game starts, what the acronym stands for, how old the actors in that movie are: questions with a clear intent that demand no more response than a fact are what Google does best.


When the object of a search is less knowable, the results can conceal as much as they reveal. In mid-November 2016, Nikil Saval published an essay on the proliferation of overconfident responses to the election. “Writers have launched blithely into trivial essays on what the voters wanted, what the vote represented… But where is the evidence? How are mutely inexpressive votes… legible?” Statistics on how and why people voted — the kind you might find if you google — collapse public opinion into neat columns, but the conversations he’d had while door-knocking revealed a “morass” of “inchoate” ideas that wasn’t remotely coherent or conclusive: “The social world does not fit, in this instance, the statistical world.”

Google’s algorithms famously flatten the social into the statistical, and encourage you to rely on the same kind of logic. It can’t tell you whether you have a disease, for example, but it might answer your query by telling you how many other people have it. If the disease is rare, you might feel better, as though you’ve gotten a favorable answer, but you won’t actually know anything new about yourself. If you go to the doctor, they too might google it. A recent article in the Rhode Island Medical Journal argues that doctors’ ability to easily look up medical information causes overconfidence that leads to misdiagnoses. “Self-questioning,” the authors write, “morphs from ‘What do I know?’ to ‘Where can I find it?’”

Comfortingly for the searcher, this metamorphosis elides the possibility that the answer is “nothing.” But not-knowing, however uncomfortable or painful, is intrinsic to life. Science, art, religious practice, relationships with other people, attempts to understand politics or history: all arise from the kind of curiosity we ask Google to release us from. To the extent that it hides the unknown behind a scrim of facts, and encourages us to see the world’s plurality as something we can skim, Google also reduces our equipment for living.

As Google gorges on ever more data, the range of what it makes knowable should theoretically expand. “We’re in the early phases of moving from being an information engine to becoming a knowledge engine,” said Johanna Wright, a Google product manager, in 2012. She was being filmed for a video introducing Knowledge Graph, a database that uses other databases to provide answers right on the search results page, without your having to click anything. Knowledge Graph’s sources include Wikipedia; the CIA World Factbook; licensed data about sports scores, stock prices, and weather forecasts; and other “verified” organizations. It furnishes many of the boxes and panels that show up at the top or to the right of your screen when you google a famous person, a movie, or a common disease. Rather than show you where to find an answer, these boxes and panels give you the answer — or what Google thinks is the answer — directly.

Last year an investigation by the Markup found that Google was devoting 41 percent of the first page of its mobile search results to its own products, including answers delivered by Knowledge Graph and properties like Google Flights, Google Translate, and YouTube. This pushes competitors down or out: Search traffic to Genius.com plummeted after Google started displaying song lyrics on results pages; TripAdvisor laid off 200 people after losing traffic to Google’s competing services. “Google makes the most money when, long term, they can addict searchers to their platform,” the SEO expert Rand Fishkin told the Markup. “If Google can train you, don’t go to Genius.com, don’t go to TripAdvisor, don’t go to the restaurant’s website, just come to Google — always come to Google — then they win.” As long as they’re winning, the range of the googleable, in practice, contracts.

If search began by systematically reducing texts to answers, then Google is now further reducing those answers to a single definitive one. But it cannot do this with its own products alone. As SEO expert Pete Meyers points out, the human-curated Knowledge Graph “can never keep up with the nearly infinite questions that we can ask.” That is why, he conjectures, Google launched “featured snippets” in 2014. Featured snippets appear in a box, usually at the top of the page, that makes them look just as authoritative as their more vetted cousins. This gives the impression that Google, rather than pulling the answer out of its ether, has found you an expert. This text is sourced, however, from wherever Google’s algorithms find it.

The first featured snippet I encountered in the course of writing this essay was an answer to the query “how large is the search engine optimization industry.” A large, bold “$80 billion,” with a few lines in lighter and smaller font underneath, appeared in a box at the top of my screen. Under most circumstances, I would have been satisfied. Many people would be: In June 2019, for the first time, third-party data showed that a majority of Google searches conducted in web browsers did not result in any clicks. Mine did. I found that the snippet came from a consulting agency’s blog post, which contained this sentence: “SEO statistics by Forbes cite Borrell Associates to emphasize that by 2020, businesses in the U.S. will be spending as much as $80 billion on SEO services.” This heavily hedged statement seems factual, if you don’t really read it.

How many times have I taken for granted a “fact” that may not be one at all, or mistaken my knowledge of a fact for understanding? How much do I mistakenly think I know, because it’s been presented to me in a form I’ve come to identify with knowledge? In The Googlization of Everything (2011), Siva Vaidhyanathan cites a 2008 study showing that, as more journals started publishing online between 1998 and 2005, scientific literature as a whole cited fewer sources. Researchers, in his words, became more likely “to echo a prevailing consensus and to narrow the intellectual foundation” of their research. When I write emails in French to a French friend, I google many of the clauses I come up with in quotation marks. If there are thousands of results, I feel like I’ve gotten it right. If there are no results or only a few, I rephrase — even though that’s no sign I’m mistaken. Most possible combinations of words don’t yet exist, but I can’t resist the confidence and the shelter from embarrassment I get by limiting myself to those that do.

Maël Renouard begins his 2016 book Fragments of an Infinite Memory by recalling an evening when, walking down the street, he had the urge to google what he’d been doing at 5 p.m. two days earlier. Linda Besner, in a 2019 essay for Real Life, recounts a similar experience: “I was walking down the street when it crossed my mind to wonder if my grandmother had ever had a nose job, and I thought, I’ll google it when I get home.” Realizing instantly, like Renouard before her, that she can’t actually do this, she calls the phenomenon “ungoogleability.”

There are plenty of ungoogleable things that we google anyway. In Everybody Lies (2017), Seth Stephens-Davidowitz writes about the thousands of yearly searches for things like “people are annoying” and “I am sad.” He describes them as attempts to use search “as a kind of confessional,” a way of venting “uncensored thoughts into Google, without much hope that it will be able to help us.” But confession is often a request for help, made in the hope of recovery (James 5:16: confess your faults one to another, and pray one for another, that ye may be healed). Sometimes it is also an attempt at self-knowledge. The questions a confessional search implies are: Who am I? What is wrong with me? Can I be redeemed? Google can’t answer these questions, but it can reduce them to an answerable form.


The first page of my search results for “I am sad” includes: “7 Things to Do When You Are Really Sad”; “6 Powerful Happiness Tips”; “5 Ways to Feel Happy.” Google can’t solve the problem of being sad, but it can reconfigure the problem through a different logic, so that “sadness” seems less like an existential concern and more like a DIY home repair query. It can encourage you to forsake the ungoogleable for the googleable. Your sadness isn’t gone, but you have an answer — or at least you know there is an answer — which feels good.

Poets and philosophers have warned almost forever against this kind of comfort. Socrates famously claimed that his wisdom came from knowing he knew nothing. John Keats defined “negative capability” — a quality he attributed to men “of Achievement” — as the ability to stay “in uncertainties, Mysteries, doubts, without any irritable reaching after fact & reason.” Donald Barthelme, in his 1985 essay “Not-Knowing,” argues that it is “what permits art to be made.” Georges Bataille thought “non-knowledge” was inherent to the human condition. The “unknowable immensity” of the universe “infinitely eludes an individual who seeks it,” he writes in Inner Experience (1943). Knowledge is “the means [by] which man attempts to take himself for the whole of the universe.” Designed to reduce the world to the answers it can provide, Google belies (or tries to belie) the universe’s infinite elusiveness.

In April, an older relative posted an article to our family’s group text about a video in which two emergency medicine doctors question the seriousness of Covid-19. The doctors, who have since been censured for their commentary by the American Academy of Emergency Medicine and the American College of Emergency Physicians, argue that the disease is no more deadly than the flu. I spent hours in the group text trying to convince my family member otherwise. I googled furiously, sending article after article in an attempt to discredit the doctors. “I don’t think you understand the statistical argument they’re making,” she responded.

It’s true that, while I was certain the doctors were wrong, I wasn’t totally sure what was right. The pandemic made it impossible to ignore the fallibility of the authorities I was citing. In January and February, the New York Times and the Washington Post ran articles that downplayed the virus, only to cover it weeks later as the crisis it was. In March, the CDC said most people didn’t need to wear masks, before deciding in April that they should. Public health experts repeatedly failed to sufficiently explain the differences between low-risk and high-risk activities. Universities, medical journals, and pharmaceutical companies promoted study after study, and journalists reported on them. Under intense pressure to provide answers — compounded by the normal pressure to attract attention — many of these sources framed the results as more definitive than they were. One of the ways they did this was by failing to present each finding in the context of the others.


Among the innumerable reasons for this failure was the quantity of new information. Between May and August, an average of 400 new studies about Covid-19 were published every day. At this rate, even the people doing the research cannot possibly keep up. When they search, as they must, to make sense of what’s out there, there’s a good chance they will rely on Google Scholar. In October, two researchers published an article suggesting they shouldn’t: While Google Scholar, with its “efficient-slick” interface, is “perfectly suited” for “lookup searches” that have clear goals and don’t require much exploration, it “fail[s] miserably” at helping researchers synthesize what is known. Like the search the rest of us use, Google Scholar can make you feel you’ve learned what you need to know long before you have. “Unfortunately,” the authors write, it seems focused not on “what researchers need to accomplish in all their search tasks,” but on “making users satisfied (and not smarter) sooner.” The world’s answerers themselves face a glassy plane of answers. But there are systems, like PubMed, better suited for the kind of work they need to do.

In a sense, then, the question Duras was answering remains open. It was a strange one for a TV interview, as oblique as her response was categorical. “Men have always needed answers,” the interviewer began, “even if one day they prove to be false or only provisional.” What he wanted to know was, in the future, where will the answers be?