One day several months ago, I filled a glass with tap water from our sink and noticed that it tasted unusually good — a bit creamy somehow, and a bit savory in the manner of club soda, which is superior to seltzer because of the sodium. It seemed especially thirst-quenching, yet so tasty I kept drinking more of it. A brewer I know told me Colorado water has more mineral content in the spring months due to mountain runoff, which might explain the change in flavor. I loved that idea.

But the explanation didn’t hold up. My husband, John, thought our water tasted the same as ever. More strikingly, every liquid I drank started to taste better to me. Wines tasted richer and more buttery; cheap wines tasted like they’d been aged for years in oak. Bourbon tasted sweeter and creamier too, almost like coconut. Canned seltzer tasted especially great, like an indulgence instead of a substitute. This went on for weeks. The internet told me I might be pregnant or diabetic. I was quite sure neither was true. Instead I grew increasingly suspicious that I might have a brain tumor.


I mentioned the symptom to my brother in a text thread: “I think I have brain cancer? My palate suddenly changed.” He responded right away: “I had that! It was a virus. Everything tasted weird and I couldn’t handle spicy food at all. It lasted a few weeks.” It didn’t sound like what I had, but I was heartened nonetheless. But if it was a virus, why didn’t anyone else have it? I mentioned it to everyone I saw, hoping for a sign of recognition. “It’s not unpleasant,” I’d always say. Finally, about three weeks in, John and I were reading on the couch; he took a sip of water and literally smacked his lips. “Mmm,” he said. “That tastes delicious.” “Oh my god!” I grabbed his arm. “You have the virus!” John’s virus, if it was a virus, didn’t last as long, or maybe he just wasn’t as attuned to it as I was. In any case, it went away for both of us. Boxed wine tasted cheap again.

Years ago, when I was in grad school, I saw the philosopher Daniel Dennett deliver a lecture about memes — memes in the Richard Dawkins sense (“a unit of cultural transmission”). He talked about a kind of parasite, the Dicrocoelium dendriticum, or lancet liver fluke, that infects ants; it makes them “want” to crawl to the tops of tall blades of grass. (What does desire feel like to an ant?) But that is not the end goal for the parasite. Those ants high up in the grass are more likely to be eaten by grazing cows, and that’s what the parasite “wants.” This mechanism is called parasitic mind control: The fluke wants to be inside the cow; it thrives in the guts of the cow and then gets the reproductive benefit of being shat out into the pasture, where it can infect more ants.

Both parasites and micro-parasites (viruses and bacteria) can hijack our minds; they make us act weird. Toxoplasma, a parasite found in cat feces, makes mice less afraid of cats; this is an evolutionary strategy, making it easier for the parasite to get from the mouse to the cat. It can spread to people too, where it may increase risk-taking in general. One bizarre study found that people, presumably cat owners, with toxoplasmosis “are more likely to major in business.” An NBC News story suggested optimistically that the parasite “may give people the courage they need to become entrepreneurs.”

If true (and I doubt it), that would be an extreme case of a microscopic parasite altering the course of your whole life. But ordinary viruses change our behavior too. A 2010 study found that people became more sociable in the 48 hours after exposure to the flu virus — the period when they are contagious but not yet symptomatic. The infected hosts, researchers noted, were significantly “more likely to head out to bars and parties.” Even symptoms we think of as purely physical reflexes can be construed as behavior changes. In Guns, Germs, and Steel, Jared Diamond writes that “many of our ‘symptoms’ of disease actually represent ways in which some damned clever microbe modifies our bodies or our behavior such that we become enlisted to spread microbes.” The tuberculosis bacterium, for example, makes us want to cough, atomizing it into the air we breathe. According to the Mayo Clinic, this can happen when a tubercular patient “coughs, speaks, sneezes, spits, laughs or sings.” This makes me wonder if consumption ever makes patients giggly, or more likely to burst into song, despite the chest pain and malaise. A disease called kuru can cause what are often described as “pathological bursts of laughter.” It is not, however, spread by laughing. It was common among the Fore tribe in Papua New Guinea until the cause was discovered to be prions, spread by the practice of funerary cannibalism — eating your dead.

One of the creepiest behavioral changes caused by a virus is hydrophobia, a symptom of “classic encephalitic rabies,” also known as “furious rabies.” It’s not an exaggeration: People and animals infected with rabies become morbidly terrified of water. Or perhaps more accurately, they’re of two minds about water — they both want it and can’t stand the thought or sight of it. (It’s the opposite of my virus, then, which made beverages extra-appealing.) Here’s how Bill Wasik and Monica Murphy describe it in their book Rabid: A Cultural History of the World’s Most Diabolical Virus: “Present the hydrophobic patient with a cup of water and, desperately though he wants to drink it, his entire body rebels against the consummation of this act. The outstretched arm jerks away just as it is about to bring the cup to the parched lips. Other times the entire body convulses at the thought.”

Why does this happen? It’s not so you’ll die of thirst. The virus’s goal is not to kill you — though it does do that; once symptoms appear, close to 100 percent of rabies patients die — but to spread. The “sole mission” of a virus, according to Connie Goldsmith, author of Pandemic: How Climate, the Environment, and Superbugs Increase the Risk, “is to get inside a cell and turn it into a factory to produce new viruses.” Viruses, unlike bacteria and parasites, are not even alive, yet they too have “desires.” According to the Wikipedia article on rabies:

Saliva production is greatly increased, and attempts to drink, or even the intention or suggestion of drinking, may cause excruciatingly painful spasms of the muscles in the throat and larynx. This can be attributed to the fact that the virus multiplies and assimilates in the salivary glands of the infected animal for the purpose of further transmission through biting. The ability to transmit the virus would decrease significantly if the infected individual could swallow saliva and water.

This explains the telltale foaming at the mouth in rabid dogs, as well as their rage, which drives them to attack and bite. This rabid madness, and its spread through biting, gave birth to mythological monsters from zombies to werewolves and vampires. The association of vampires with bats stems from their acting as a “reservoir host” for rabies — bats can carry the virus without dying from it.

Rabies, like malaria, Zika, typhus, bubonic plague, and all flus, is a form of zoonosis, a disease that makes the leap from animal to human. That leap, the transmission, is called spillover. In his book Ebola: The Natural and Human History of a Deadly Virus, David Quammen calls zoonosis “a word of the future” — “destined for heavy use in the 21st century.” Dangerous infectious diseases persist only when they have a reservoir host. We were able to eradicate smallpox, Quammen notes, because it’s not a zoonosis; it only infects humans, and once we’ve cured them all, it has nowhere else to hide.

The reservoir host for Ebola is still not known, although where there are outbreaks of Ebola in humans, there are also dead gorillas. It seems likelier that, as with the lyssaviruses that cause rabies, the host is some kind of bat. When Quammen went hunting for reservoirs in Bangladesh, the epidemiologist Jonathan Epstein told him, “Keep your mouth closed when you look up.” You don’t want an Ebola bat flying overhead to shit in your mouth.


During the 1950s and ’60s, there was great optimism that the world would soon be rid of all deadly infectious disease. The U.S. spent huge sums of money on a campaign to eliminate malaria in the so-called Third World — an act of charity, in a way, since malaria was not a threat in affluent countries. The plan not only failed — perhaps we gave up too soon, or perhaps it was an impossible task — it actually made the problem worse. The campaign reduced local immunity to malaria, while the parasite evolved resistance to known treatments such as chloroquine. At the same time, heavy use of pesticides killed off many beneficial insects while the mosquitos became resistant to the chemicals. As Laurie Garrett puts it in The Coming Plague: Newly Emerging Diseases in a World Out of Balance, “almost overnight resistant mosquito populations appeared all over the world.” Rachel Carson, the author of Silent Spring, said at the time, “The insect enemy has been made stronger by our efforts.” After seeming to die down, cases of malaria resurged, now in a new, iatrogenic form — “created as a result of medical treatment,” as Garrett writes.

Malaria had become something strange and ill-defined. “What is malaria?” Kent Campbell, a doctor with the Centers for Disease Control, has asked. In many parts of Africa, it’s endemic and omnipresent but often asymptomatic. Previously, children with malaria either died or survived and became largely immune. Now children might survive but lapse into fatal anemia years later, requiring blood transfusions — not ideal when AIDS is epidemic too. Further complicating matters, when a child in Africa has a fever, it’s standard procedure to give the child antimalarials, but just because a child has malarial parasites does not necessarily mean that any given fever is caused by malaria. In this way, it’s like climate change: Every bad storm feels related to global warming, but storms don’t require climate change to exist. And, as Campbell says, “we cannot continue to treat every fever as if it’s malaria, because the roster of drugs is getting shorter.” Yet a malarial fever can lead to death, and malaria is still a top-10 cause of death in low-income countries.

To answer his own question about the nature of malaria, Campbell eventually concluded simply that “malaria is a disease that responds to antimalarial drugs.” He did not name a specific drug, since the drugs have to change. It made me think of a conversation I had years ago with an ex-boyfriend, a physics major, who told me that temperature is not as simple a concept as it seems. It is not synonymous with heat or energy, he said. Temperature, essentially, is what thermometers measure. I never really understood this, but I think about it often. Or maybe I should say, what I think about is the elegant way the construction reduces understanding to measurement.


There’s a social media phenomenon I’ve started calling the performative death wish. It happened recently when archaeologists found an unopened black sarcophagus in Alexandria. Some feared that the tomb might contain a deadly virus or unleash a curse — after King Tut’s tomb was opened, in 1922, a number of people associated with the excavation died. “I’m pinning all my hopes on the creature in the sarcophagus,” one woman tweeted. Another tweet went:

2012: oh no Mayan calendar says the world might end and we could all die

2018: PLEASE let the black Egyptian sarcophagus carry a curse that will collectively put us out of our misery

Whenever a story on the threat of an “extinction-level event,” like an asteroid or comet headed for Earth, is making the rounds, people quote-tweet it to add, “Finally, some good news!”

In this age of horrible news all the time, we get the joke instantly: ironic suicidal ideation. But there’s something kind of real behind it — the fantasy of the swift death, the instinct to just get it over with. Of course, the asteroid that killed the dinosaurs didn’t kill them instantly, unless they were at ground zero. It wasn’t “IPU annihilation,” to use philosopher Galen Strawson’s term: instant, painless, unexperienced. It took tens of thousands, possibly even hundreds of thousands of years of living on a sick and barren planet before they all finally went extinct. This makes the prospect of death by asteroid much less merciful.

Death may represent a kind of escape, but is this posturing really about wanting to escape, or is it about wanting to suffer? Lately, when I luck into an excellent bag of cherries or an especially luscious peach, I think, automatically: I don’t deserve this. When people say they’re quitting social media because it doesn’t make them happy, I think, Wrong reason. I think, We don’t deserve to be happy. It feels related to the death wish, my own version of liberal guilt run amok, morphed into a longing for punishment.


During the Black Death in Europe, bands of flagellants would roam from town to town, like a traveling theater troupe, putting on public performances of violent self-punishment. They would flog themselves with leather and iron whips while crying out to God, “Spare us!” As historian Barbara Tuchman writes in A Distant Mirror, “The flagellants saw themselves as redeemers who, by re-enacting the scourging of Christ upon their own bodies and making the blood flow, would atone for human wickedness and earn another chance for mankind.” In the absence of a better explanation, medieval people blamed themselves: “hardly an act or thought, sexual, mercantile or military, did not contravene the dictates of the Church … the result was an underground lake of guilt in the soul that the plague now tapped.” But the flagellants didn’t take all the blame; they also scapegoated the Jews, accusing them of poisoning the wells. In Basel in 1349, Tuchman writes, “the whole community of several hundred Jews was burned in a wooden house especially constructed for the purpose” — a medieval concentration camp.

Centuries later, we still have a tendency to interpret epidemics as punishment, divine or otherwise. When AIDS emerged in the early 1980s, it was seen by some as a natural and necessary corrective to gay liberation. People called it “the gay plague.” Televangelist Jerry Falwell said, “AIDS is not just God’s punishment for homosexuals; it is God’s punishment for the society that tolerates homosexuals.” Former senator Jesse Helms thought people with AIDS brought it on themselves through their “deliberate, disgusting, revolting conduct.” As Susan Sontag noted in AIDS and Its Metaphors, AIDS, in the U.S. at least, “is understood as a disease not only of sexual excess but of perversity,” which makes it easy to view infection as retribution.

When we don’t understand the cause of a disease or how to treat it, we resort to magical thinking. Ebola was initially blamed on sorcery or ezanga, a Bakola word meaning “some sort of vampirism or evil spirit,” as Quammen writes: “Ezanga could even be summoned and targeted at a victim, like casting a hex.” But the sorcery explanation fell apart the more the illness spread: “Sorcery does not kill without reason, does not kill everybody, and does not kill gorillas,” one Mbomo woman said — even sorcery has its logic. The apparent senselessness of a new epidemic makes it even more frightening, so that every plague is a double plague of contagion and fear. “The mystery of the contagion was ‘the most terrible of all the terrors,’” Tuchman writes, quoting a Flemish cleric. “Ignorance of the cause augmented the sense of horror.” Guy de Chauliac, a doctor who treated three popes in succession, wrote that he lived in “continual fear.” (Why is it so hard for me to imagine their fear? The centuries have a neutralizing effect; I imagine they accepted what they called “the great mortality” as a fact of history in the same way I do.)

The best medical science at the time of the Black Death came close to an approximate understanding of how the plague spread. “That the infection came from contact was quickly observed but not comprehended,” Tuchman writes. Some thought it was transferred by sight — the evil eye. This is why plague masks had crystal eyepieces in addition to beaks stuffed with aromatics, to protect the wearer against both malevolent glances and miasma, the foul air that was really the stink of death. One physician determined that infection was “communicated by means of air breathed in and out” — which was true. The plague took several forms and could be spread through the blood or through coughing, like tuberculosis or the flu. But since no one knew what germs were, “he had to assume the air was corrupted by planetary influences.” In 1348, the medical faculty of the University of Paris delivered a report on the cause of the pestilence: “a triple conjunction of Saturn, Jupiter, and Mars in the 40th degree of Aquarius.” As late as 1918, physicians named “cosmic influence” as a factor in the mysteriously deadly Spanish flu — influence and influenza in fact have the same etymology, a “flowing in” as of unseen, ethereal forces.

If the plague had been sent to punish people for their sins — Matteo Villani, a 14th-century historian, compared it to the great flood “in ultimate purpose” — you might think that the period following the plague years would be one of great austerity. If anything, the opposite was the case. Survivors of the plague did not become ascetic. Instead they may have sensed a baffling meaninglessness to their being spared — survivor’s guilt being a kind of miserable apprehension of one’s own good luck. “If the purpose had been to shake man from his sinful ways,” Tuchman writes, “it had failed.” People embraced “a more disordered and shameful life … behavior grew more reckless and callous, as it often does after a period of violence and suffering.” You could also say not much had changed at all, though a third of the world had died:

What was the human condition after the plague? Exhausted by deaths and sorrows and the morbid excesses of fear and hate, it ought to have shown some profound effects, but no radical change was immediately visible. The persistence of the normal is strong.

Or, as William McNeill writes in Plagues and Peoples, the plague on some level was “a routine crisis of human life.” Many at the time seemed to take this mass die-off in stride, “like the weather.” It’s paradoxical, how quickly we adapt to suffering.


A Distant Mirror, published in 1978, was so named because Tuchman felt the awfulness of the 14th century — for years, she says, historians tended “to skirt the century because it could not be made to fit into a pattern of human progress” — had clear parallels in the awfulness of the 20th century. (In my notes, I wrote, You thought that was bad? then crossed it out many times.) Her follow-up, in 1984, was called The March of Folly, which explores why people and particularly governments frequently act against their own interests: “Why do holders of high office so often act contrary to the way reason points and enlightened self-interest suggests?” Why, for example, “does American business insist on ‘growth’ when it is demonstrably using up the three basics of life on our planet — land, water, and unpolluted air?” Our inaction in the face of global warming seems a clear case of this. And we’ve had 40 years to get our shit together. As a recent New York Times feature noted, “Nearly everything we understand about global warming was understood in 1979.” Why can’t we or won’t we save ourselves?

A few weeks ago my husband handed me a book he must have seen on the new books shelf at our library. It was Pandemic, by Connie Goldsmith. Relevant to my interests, certainly, but, I pointed out to him, “This is a children’s book.” More accurately, it’s young adult nonfiction. I tossed it on my giant pile of plague books and assumed I wouldn’t get to it. As it happens, it was the last book I read, and it was surprisingly good and helpful. Goldsmith lays out how five global trends — climate change, disruption of animal habitats, increased air travel, crowding and megacities, and overuse and misuse of antibiotics — all increase the risk of a pandemic.


“Pandemic” sounds to me like automatic hyperbole, like “pandemonium,” but it’s fairly well defined in epidemiology: Unlike an “outbreak,” which affects a limited number of people in a limited area for a short time, or an “epidemic,” which affects a larger number of people in multiple areas at the same time, “pandemics affect many people in many parts of the world at the same time.” The Black Death is the most pandemic of pandemics, but the Spanish flu is a more recent example, and HIV could qualify as well. In Western countries, we almost think of AIDS as a solved problem, since antiviral treatments have dramatically improved both quality of life and lifespan for people infected with HIV. But about half the people who have contracted the virus in history have died from it, making it as deadly as Ebola, and its transmission is far from “contained.”

All five of Goldsmith’s risk factors are, in essence, our fault: “Scientists do not yet know what will cause the next pandemic. It could be a new bacterium that resists all available medications. Or it could be a mutated virus to which people have no immunity. What scientists and epidemiologists do know is that human activity is largely responsible for the spread of disease.” No wonder new and re-emerging diseases feel like punishment. Mosquitos and other “vectors” (usually biting and stinging insects that help carry diseases between humans and other animal hosts), for example, are getting a leg up from global warming. They like warm, wet environments, so as temperatures rise and flooding increases, their territory expands. More monstrously, “hotter temperatures make mosquitos hungrier,” and “warm air incubates the virus faster.”

Relatedly, ticks have become more of a problem in part due to suburban development in wooded areas where ticks live. In the U.S., Lyme disease is now the most common vector-borne illness as well as one of the fastest-growing infectious diseases. And it is not well understood; there is disagreement over whether chronic Lyme disease even exists. But many patients continue to experience symptoms after treatment. This could be due to persistent bacteria not killed by antibiotics, or to permanent immune damage that causes your body to respond to the infection even after it’s gone, like phantom pain. Goldsmith quotes epidemiologist Ali S. Khan: “We humans act like we own the planet, when really it’s the microbes and the insects that run things. One way they remind us who’s in charge is by transmitting disease, often with the help of small animals, including rodents and bats.” This is zoonosis as revenge, by the animal kingdom or mother nature writ large.


Back in 1981, a toxicologist named Marc Lappé wrote a book titled Germs That Won’t Die, warning of the microbe mutations he saw happening in hospitals. “We have organisms now proliferating that never existed before in nature,” he wrote. “We have changed the whole face of the earth by the use of antibiotics” — the medical Anthropocene. Critics at the time felt Lappé was grossly overstating the problem. Now so-called superbugs, or antibiotic-resistant bacterial strains like MRSA, kill about 700,000 people in the world every year, similar to the number killed by mosquitos, the deadliest animal by some margin, deadlier even than humans.

Antibiotic resistance is often blamed on people not taking their medications correctly, but it’s not that simple. Only 20 percent of antibiotics in the U.S. are used on people; the rest are for animals. Often these are the same antibiotics that humans take. Bacteria in animals then develop resistance to those antibiotics, and when they infect humans, the drugs don’t work for us. Worrisomely, new antibiotics are not an exploding area of medical research; we’ve had fewer new ones every decade since the 1980s, and most are just variations of existing drugs, which are unlikely to remain effective against already resistant bacteria for long. Goldsmith suggests that the bulk of pharmaceutical R&D budgets may go to long-term maintenance medications rather than those used to cure one-time infections, since the former are more profitable over time.

Many experts think the most likely culprit of a future pandemic is some version of the flu; flus are common, highly contagious, and dangerous especially when there’s a new strain to which people have limited immunity. There’s hope another pandemic on the level of the Spanish flu could be avoided through the development of a universal flu vaccine, which could be possible with enough resources and support. But to state the obvious, vaccines only work when we take them.

In Rabid, Wasik and Murphy note that “immediately upon the creation” of the smallpox vaccine in 1796, there were “scientists and laypeople” who believed the vaccine was “poison.” In other words, antivaxxers are as old as vaccines. Twentieth-century advances like the polio vaccine strengthened public support — “for two decades,” in the 1950s and ’60s, Garrett writes in The Coming Plague, “insurance carriers, politicians, drug companies, and the judicial system adhered to the basic principle that the rights of an immunized society superseded those of small numbers of individuals.” But a scare in 1976, when recipients of a new swine flu vaccine seemed to have higher than average incidence of Guillain-Barré syndrome, caused permanent PR damage. Guillain-Barré, which can lead to (usually reversible) paralysis and other neurological symptoms that require hospitalization, can occur after any infectious disease, including the flu. The ensuing panic would “haunt all vaccine efforts inside the United States for decades,” Garrett writes, though the increased risk was very small: “Approximately one additional case of GBS for every 100,000 people who got the swine flu vaccine,” according to the Centers for Disease Control and Prevention. Typical flu shots most likely do not increase the risk: “Studies suggest that it is more likely that a person will get GBS after getting the flu than after vaccination.”

The antivaccination movement now seems to be in another waxing period — there have been outbreaks of formerly very rare diseases like measles both in the U.S. and Europe. We can’t blame it on anything other than ignorance. We know the flu vaccine works (I’m tempted to write, “we” “know” it “works”), but most people don’t know how it works. I don’t just mean that people can’t explain it on a technical level; most of us don’t grasp even the basics. I’ve heard intelligent people I know say, “I never got the flu until the one year I got the flu shot. Never again!”

People treat the flu shot like a matter of personal choice. They think if they don’t get a shot and then they get the flu, that’s their own bad luck. But the flu shot, like other vaccines, is only truly effective when taken en masse; it reduces overall infection in a population so that the most vulnerable people — usually the elderly, but for some strains it’s children — are less likely to be infected. This is collective, social action — collective inoculation. Further, the folk idea that some years the flu shot “doesn’t work” is inaccurate. Flu shots always contain a mix of vaccines against several different strains that are believed most likely to be dominant that flu season. Vaccine makers don’t always get it right, but the range of effectiveness is more like 30 percent to 60 percent; it’s not zero or 100. Even in an off year, the flu shot increases your immunity, and you’ll probably be less sick and not for as long — and therefore less contagious — if you do catch the flu. (If you take nothing else from this essay, please start getting annual flu shots.)

The immunologist Anthony Fauci has said that “in some respects, vaccines are the victims of their own success” — meaning that, when used properly, they can almost eliminate incidence of an infectious disease. But then people stop dying of that disease, and it stops seeming like a threat. So we get lax about the “cure” — though often, these diseases can’t be cured, only prevented. This feels related to the false assumption that World War II had cured humanity of fascism forever. By Garrett’s account, “every problem seemed conquerable” in the decade after we stopped Hitler. Instead, white nationalism had just gone quiet for a while, a virus hidden in an unidentified reservoir host. In Siberia recently, long-frozen anthrax emerged from thawing permafrost, killing thousands of reindeer and a child. What other ancient plagues are in there, preserved cryogenically?

“Folly is a child of power,” Tuchman writes — the result of feeling invincible, maybe. We make stupid decisions because we think, having come this far (as a culture? as a species?), we’re indestructible. On some unexpected level, I almost fear we are. Even if we don’t succumb to the worst doomsday scenarios involved in climate change (a.k.a. “Hothouse Earth”), the climate is certain to get worse and less hospitable to humans in the near term. A lot of people, especially near coasts, especially near the Equator and in poverty-stricken areas, will be displaced, will suffer, and will die.

Yet it seems unlikely, when we remember the slow tapering off for the dinosaurs, that we’ll actually be wiped out in some kind of purifying “clean sweep,” that anti-fantasy. (Have we started to think of humanity itself as a plague on the planet? A friend of mine, when I mentioned that sperm counts have been dropping for decades, basically said, “Good riddance.”) There’s some evidence that reduced population and reforestation after the Black Death helped trigger a mini ice age; that wasn’t any fun for them, considering it led to famine. But if there were another “great mortality,” it might have a tiny bit of upside in the long view of history: A pandemic, asteroid, or nuclear war could all lead to global cooling. It could offset some of our graver errors and reset the planet. When you look at it that way, it’s almost as though we are acting with a higher collective intelligence — a hive mind employing folly as a strategy. But perhaps that’s too generous, to call it intelligence; perhaps it’s just a mechanism, like whatever makes the parasite that drives the ant suicidally up the grass blade “want” what it “wants.” We don’t know what we want, or what purpose we serve.