About a month ago, I took a part-time job teaching chess to middle schoolers on Manhattan’s Lower East Side. I am merely competent at chess and not highly experienced at teaching either. But a friend of mine who runs an after-school program had been looking for someone to fill this position for a long time. Chess has a well-documented history of attracting people with aggressive — some might say paranoid — dispositions, more often than not male. Perhaps it creates such types. Either way, as my friend discovered, these qualities don’t always make chess experts great with kids. When I asked what happened to the previous instructor, my friend made her hands into a steeple. “How can I put this in a way that’s helpful?” she said. “He was just too into chess.”
To make sure I don’t seem similarly obsessed, my students spend most of each hour-long class in unstructured play, following a very brief, very casual lesson. As they attack and rob each other’s pieces, I wander from table to table, doing my best to make sure they don’t attack each other. During this portion, I feel slightly superfluous. Should I be coaching them more, making more comments? I’m not sure. Instead of the chessboards, I watch their faces. Maybe my expectations are low, but I can’t stop being thrilled that they’re interested at all.
One day, though, the eighth graders, having just received the results of their applications to New York City’s highly competitive high schools, decided en masse not to show up. That left just me and Willie, a seventh grader who clearly wished he’d gotten the memo. I stuck to my usual lesson plan and pulled up a chess puzzle I’d found online. After a couple of false starts we found the answer, a move that pinned the enemy’s queen in front of its king. The site responded by shifting the king over a space, rather than the more obvious choice of sacrificing the queen to capture the attacking bishop.
Willie wasn’t impressed. “That’s so stupid,” he said.
“It does seem like a weird move,” I replied, not sure yet myself why the computer chose it. “Let’s try to figure out why it did that.”
“It’s a CPU,” he muttered. CPU stands for central processing unit, but in video games it means a player controlled by the computer. “Of course it’s stupid.”
“Actually,” I said, “CPUs these days can beat world champions.”
He rolled his eyes. “It doesn’t matter,” he said.
“Because people made the CPUs.”
When chess master Garry Kasparov lost to a computer — IBM’s Deep Blue — in 1997, some saw this as a death sentence for the game. Deep Blue’s victory, many felt, didn’t prove that it was intelligent so much as that chess wasn’t. The computer’s algorithm operated by making a deep and indiscriminate search into the tree of possibilities, an inelegant procedure known as “brute force search.” Advances in chip architecture and parallel processing had made this process fast enough to succeed against Kasparov, but as Noam Chomsky said, this was “about as interesting as the fact that a bulldozer can lift more than some weightlifter.”
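For readers curious what an “indiscriminate search into the tree of possibilities” actually looks like, here is a minimal sketch of plain minimax search, the family of techniques Deep Blue descended from. The toy game tree and scores below are invented for illustration; Deep Blue’s real system added hardware-level parallelism and an elaborate evaluation function on top of this basic idea.

```python
# A minimal sketch of "brute force" game-tree search (plain minimax).
# Leaves are numeric scores from the maximizing player's point of view;
# interior nodes are lists of child subtrees. The tree below is invented.

def minimax(node, maximizing):
    """Exhaustively search the whole tree and return the best
    guaranteed score for the player to move."""
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-made tree, two plies deep: the maximizing player picks
# among three moves, each answered by the minimizing opponent.
tree = [
    [3, 12, 8],   # opponent would answer with 3
    [2, 4, 6],    # opponent would answer with 2
    [14, 5, 2],   # opponent would answer with 2
]
print(minimax(tree, maximizing=True))  # -> 3, the best guaranteed outcome
```

The “brute” part is that nothing is skipped: every branch is explored to the bottom, which is why raw speed, rather than insight, was what the Deep Blue team needed most.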
An anonymous poster on Quora described their reaction to Deep Blue’s win: “I was once one of the promising chess players within my country,” the person wrote, but didn’t say which country. After Deep Blue’s win, “I got shocked. Because for me all the tactical beauty of the chess game, with all the genius of Alekhine, Tal and all of the other creative playing great masters was over … At that point chess was over for me, and I stopped playing, and never played afterward.” I wondered how many quit for similar reasons. Some, it seems, migrated to Go and poker, games which have been more difficult for AI systems to master, though this past year, world champions of both were defeated by computers.
But people still play chess, in the U.S. and elsewhere. In 2012, a YouGov survey found that 23 percent of Americans had played chess at least once in the past year, and that worldwide, an estimated 605 million adults play regularly.
Ironically, chess’s integration with computers might be what preserves it. Since taking the teaching job, for instance, I’ve spent an embarrassing amount of time on Chess.com, where I can start a match against a complete stranger with such eerie instantaneity that it’s as if the notification precedes my click. Because my opponents are algorithmically selected to be about as mediocre at chess as I am, it’s almost guaranteed to be a good match. I can also be confident that I’m playing against a human, not a bot — both because the site has additional algorithms to detect bots and because any bot would quickly beat me.
Sometimes during a match I’m stumped, or trapped, or uncertain, and in these moments I find myself resorting, with a faint sense of shame, to a feature in the Chess.com app called “Analysis,” which allows the user to test moves and map possible futures freely, something you’d normally have to do in your head. This greatly expands my capacity to explore the central question of chess: How are they trying to get me? Like other technologies of thought — writing, for instance — this one lets me indulge my paranoid imaginings to an extent otherwise unimaginable, while also offloading them onto my environment, which feels healthier psychologically.
Analysis doesn’t use anything that could be called AI, but as I’ve become aware of my own creeping dependence on software, I have become curious about what more serious chess players must be experiencing. Discussion in the chess community about how computers have affected the upper levels of play is ongoing and inconclusive, but there are a few consequences upon which everyone agrees: Chess masters now hire fewer assistants or “seconds” than they used to (they’ve been replaced by chess engines with names like Komodo and Stockfish); the international leaderboards are no longer dominated solely by Russians; using computers to cheat in tournaments has become a problem in recent years; and unprecedented access to resources and opponents has caused a proliferation of ever-younger prodigies, the recent youngest being 12 years, 7 months old.
Deep computer analysis has also made chess into more and more of a memory game. Whereas playing as many games as possible used to be the best way to build the vast store of patterns great players must hold in their heads, they can now accomplish this by studying massive, easily accessible databases of every chess game ever recorded. As a result, much of upper-level chess skill now comes down to forcing the opponent into a line of play that you’ve analyzed more extensively than they have. As masters memorize an ever-growing set of contingencies, draws become increasingly likely. A big worry today is that chess is headed toward “draw death,” a phenomenon similar to what happens in tic-tac-toe when both players know the best strategy.
Computer analysis discourages risk-taking; humans tend to have a psychological resistance to retreating, whereas algorithms don’t even possess the concept of “backwards.” As we learn from these machines, we also adopt their tendencies. The term “computer moves,” when it isn’t simply an accusation of foul play, is often used to denote moves that are far-sighted and counterintuitive. I’ve also seen it used to refer to moves that are tedious, uninspired, or oppressively safe. There seems to be a tinge of old-man nostalgia to this attitude: Sure, the kids these days can beat us, but where’s their sense of style?
Bobby Fischer, arguably the best American chess player of all time, was frustrated enough with the increasing centrality of rote memorization that in 1996, the year before Deep Blue’s victory, he introduced a variant of the game called Fischerandom — now known as Chess960 — in which the arrangement of the opening pieces is randomized. Less theory, more creativity, is the idea. Fischer is most famous for winning the 1972 world championship match against Boris Spassky, a victory that has since come to symbolize the defeat of Russian communism by scrappy American exceptionalism. That match, too, was framed as “man versus machine” — but in that case it was the Russian chess machine, a reputedly unscrupulous institution that had produced a series of prodigies like Spassky for propaganda purposes. Decades later, Chess960 would stem in part from Fischer’s enduring conviction that many Russian players, including Kasparov, had thrown tournament matches to manipulate the brackets. (As Chess960 was random, matches couldn’t be fixed.) Compared with Fischer’s publicly stated belief that the U.S. was run by a secret Jewish world order, members of which had, among other crimes, stolen millions of dollars’ worth of personal memorabilia from his storage room in Pasadena, it was actually one of his less delusional conspiracy theories.
Kasparov, too, since shortly after losing to Deep Blue, has been trying to promote a chess alternative: Advanced Chess, in which players have access to computers during play.
It goes weirdly unmentioned in most things I’ve read about computer chess that humans using computers can still easily beat both computer-less humans and humanless computers. Even at this advanced stage of AI development, hybridity continues to trump computation.
There’s a famous moment in Deep Blue vs. Kasparov that I find revealing. After staying up all night with his team trying to figure out a particular Deep Blue move, an exhausted Kasparov accused IBM of cheating. He didn’t say this flat out but instead declared that the move reminded him of Diego Maradona’s infamous “Hand of God” goal in the 1986 World Cup. The move was genius, incredibly far-sighted, far above any move that Deep Blue had played so far, so much so that Kasparov believed it must have been illegal: He was convinced that only a human could have made it. He demanded the readouts of the computer’s analysis. IBM refused. Read now, Kasparov’s remarks have an unintended implication: They suggest that Deep Blue didn’t just defeat the best human player — thereby fulfilling the research program outlined by the computer scientist Claude Shannon back in 1950 — it also passed a chess-based version of the Turing test.
Years later, it came out that Kasparov was kind of right. IBM wasn’t cheating, but the suspicious move wasn’t really a result of Deep Blue’s thought process; Deep Blue had selected the move at random, something it was programmed to do in the event of a certain malfunction. But experts believe that the move in question wasn’t as brilliant as Kasparov thought it was, either. Instead, it was weird and unexpected, which can be, in certain cases, even more devastating. A main reason human-computer hybrids do so well at Advanced Chess is that the human ability to make a strategically random decision is still unmatched.
We once used chess-playing as a barometer of intelligence and became disturbed when computers got smarter than us. Deep Blue’s defeat of Kasparov was understood as a “humiliation” not only for chess masters, but also for humankind more generally. With stakes this high, our subsequent downplaying of Deep Blue’s intelligence starts to look like a fit of existential sour grapes. But the reality is that even before Carnegie Mellon grad students started the Deep Blue project, the dream that computer chess could be a means for approaching general artificial intelligence — the sort of broad, multifaceted adaptability possessed by humans and many other animals — was already on its last legs.
John McCarthy, one of the founders of AI, had once considered chess “the drosophila of AI,” comparing it with the key role of the fruit fly in the study of genetic inheritance. But by 1989 this idea had fallen out of fashion, and McCarthy and others were ready to admit that the rapidly improving abilities of computers at chess revealed nothing positive about human intelligence. The race to make a computer chess champion was “as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races,” McCarthy lamented in 1993. Behind Deep Blue, a book about the project by one of Deep Blue’s creators, Feng-Hsiung Hsu, makes it abundantly clear that the Deep Blue team had zero interest in producing a system that would reveal something about human intelligence. Their attitude toward such efforts was derisive. Hsu declares that he initially picked chess over more lucrative work because he had an idea for improving the chip design and thought (correctly, it turns out) that making the world’s most powerful chess computer would secure him a place in history.
The popular emphasis on Deep Blue’s brute-force methods, while reassuring, obscures the years of human effort that went into tuning its evaluation function. This is the part of the program that determines whether a position that has been discovered is actually worth pursuing. Hsu and his confederates consulted a surprising number of grandmasters for this part of the programming, such that it seems fair to say that much of the intelligence Deep Blue did exhibit was really the coded knowledge of the chess masters who had worked on it. Reading about these geniuses who helped IBM plan openings and hardwire endgame strategies, I was tempted to see them as sad analogies to certain workers in factories, who must help program and train the robots that are replacing their colleagues and, eventually, themselves. But unlike those factory workers, these chess players weren’t actually hastening the demise of their profession.
One of the reasons that chess algorithms seem so strange, I think, is that we don’t normally think about automating the things we do for fun. Many of us view automation fearfully — apocalyptic scenarios of robot overlords, humans used as batteries, and so on. Such visions encode reasonable worries. The work that is my livelihood may be rendered superfluous by machine labor, leaving me to starve. Against this bleak horizon, it’s hopeful to me that chess’s automation seems to point not toward a dystopia of human replacement, but toward a future where humans and computers support each other, and where the ability to automate the things we do for enjoyment doesn’t, of necessity, diminish our joy.
Chess may be a trap, in the sense that it can quickly take over a certain kind of life; it may even be “stupid,” in the sense that it demands only a narrow swath of our mental faculties. But as chess obsessive Marcel Duchamp recognized, one thing it isn’t is commodifiable. He renounced his career as an artist in his early 30s to devote himself more fully to chess. In the chess community, his statements sometimes pass as inspirational, despite, on closer inspection, depicting chess as one part refuge from capitalism, one part ruinous quagmire. “I am still a victim of chess,” Duchamp declared in a 1952 interview, even as, in the next breath, he praised the game for being more impervious than art to the market forces that had frustrated him during his career. “It has all the beauty of art — and much more. It cannot be commercialized. Chess is much purer than art in its social position.” “Chess has no social purpose,” he said elsewhere. “That, above all, is important.”
At any rate, I’ve found little evidence that today’s serious players are much fazed by computer dominance, especially the younger ones. Magnus Carlsen, who became world champion in 2013, a few days before his 23rd birthday, now markets a chess-playing app called Play Magnus; he’s joked about punching walls in frustration when it beats him, but there’s no indication that he’s genuinely threatened. When asked, he responds, “We’ve known for a long time that computers are better, so the computer never has been an opponent. It’s a tool to help me analyze and to help me improve at chess.” Paradoxically, while Carlsen plays more “computer moves” than anyone alive, he claims to use computers sparingly for training. “I find it much more interesting to play humans,” he says.
It once was hard to imagine maintaining self-respect in a world where machines can better us at one of the few things we believe we do best. Now we live in that world. And yet, 20 years after Kasparov’s defeat, the only person who seems to have any real reason to be embittered is Kasparov. In a recent interview on his podcast Waking Up, Sam Harris remarked to Kasparov: “You will go down in history as the first person to be beaten by a machine in an intellectual pursuit where you were the most advanced member of our species.” When I think about having to field statements like that for the rest of my life, I can see why Kasparov seemed so anguished in his game against Deep Blue. Before Deep Blue, he had, incredibly, never lost a professional match.
Which reminds me of a line from the otherwise forgettable 1993 chess drama Searching for Bobby Fischer. “Maybe it’s better not to be the best,” the anxious chess prodigy says before his tournament. “Then you can lose, and it’s okay.”
In 1997, I wasn’t keeping track of Deep Blue and Kasparov. I was six. But I was already familiar with the experience of being defeated by a computer, because I was already familiar with Super Mario Bros. on the Nintendo Entertainment System. In video games, death was the condition of pleasure, even of meaning. As a child, it was one of the most stable, predictable sources of meaning I knew. And although it meant different things at different times — it could also be a source of fear, or humility, or frustration — death rarely meant that a game could no longer be meaningful to me. Not even when it was inevitable. The desires that games facilitated were more complicated and numerous than the drive to win. A game that refused victory wasn’t broken unless it also refused progress, discovery, exploration, charm.
So I can’t help feeling that to apply concepts like humiliation and shame to the act of losing to an algorithm is to engage in a weird sort of anthropomorphism: We imagine that we engage the computers as equals, that we actually care what they think of us when we lose. But it’s also to ascribe their unique brand of stupidity to ourselves, as if we too had only one definition of success.
It isn’t my intention to romanticize dependence on computers. I’ll admit I find it ugly and a little depressing that today’s chess masters must, like just about everyone else, spend their time hunched in front of screens. But I also find something liberating in the fact that by consistently beating us at our own game, computers have given us permission to lose at it.
After his pyrotechnic victory against Boris Spassky in the 1972 World Chess Championship, Bobby Fischer returned to his home in Pasadena and stopped playing tournament chess. Three years later, he forfeited his title to Russian grandmaster Anatoly Karpov by refusing to show up. “Bobby was always afraid of losing,” said Arnold Denker, a former U.S. chess champion. “I don’t know why, but he was. The fear was in him.” He had become the youngest U.S. chess champion at 14, but at 29, he could still barely hold a conversation. Many have conjectured — and the available evidence suggests they’re right — that Fischer couldn’t stand the thought of losing the only thing that had ever made him remarkable, and preferred to stop playing rather than risk no longer being considered the best. “Hating to lose, and having the myth destroyed, was a big part of him not playing,” said the chess pundit Shelby Lyman. The tragic remainder of Fischer’s life, from his overt fascism and racism to his conspiracy theories to his final searing hatred of the U.S., reads as a familiar story — the story of a discarded military veteran. He died with very few friends in a hospital in Reykjavik, less than two miles away from where he had defeated Spassky, as a result of a curable urinary tract blockage on which he hadn’t trusted doctors to operate.
I can’t help wondering whether his life would have been easier if the stakes of his war had been lower, if he had grown up under a silicon ceiling and never had quite as much to lose. Would he have found a different reason to play?
In Arthur C. Clarke’s 2001: A Space Odyssey, the intelligent supercomputer HAL 9000 is programmed to lose at chess and other games of skill 50 percent of the time, to keep up team morale. “Thank you for a very enjoyable game,” he says to Dr. Poole, in the film version, after checkmating him.
Why, I wonder, does that line always sound so creepy? Is it because we think HAL couldn’t really be enjoying himself, or because we think Dr. Poole couldn’t be? Later on, HAL calculates that the only way to clear up a contradiction between his directive to tell the truth and his directive to lie is to kill the people he would have to lie to. In the sequel, HAL’s creator explains all this and then, with a sigh, sums up the situation: “He became paranoid.”
But this seems an inaccurate use of the word. HAL’s conundrum wasn’t paranoia; paranoia was what we were going through, watching 2001: A Space Odyssey. And it’s what the crew was going through, asking all those questions about “mission” and “purpose,” forcing the computer to lie.
In Turing’s famous version of the “imitation game,” humans and computers are hidden away in separate rooms and communicate by teleprinter. In Deep Blue, the labor of programmers and grandmasters (not to mention the workers in factories producing computers) is hidden within the computer chassis. Something similar could be said about this essay. The work that went into it is also hidden. You can’t request the readouts. Your assessment of my intelligence will be an estimation of energy and input: how many ideas you think I’ve cribbed; how many times you think I’ve rewritten this paragraph.
Intelligence lives in a shadowy little box, and when you shine your light in the box, it vanishes. Turing’s insight was that the darkness and what’s inside it can never be told apart.