“Who could Dora be but Pandora?”
—Janet Malcolm, Psychoanalysis: The Impossible Profession
“She’s a child who arouses mixed emotions; you can’t set store by everything she says…”
—Hélène Cixous, Dora (trans. Anita Barrows)
In an old story, a man who dislikes women makes a statue shaped like one. He falls in love. A goddess brings it to life. Ovid’s poem “Pygmalion” gave its name to a play by George Bernard Shaw, whose character Eliza lent her name, in turn, to a chatbot developed in the 1960s by Joseph Weizenbaum, a computer scientist at MIT. ELIZA mimicked a Rogerian therapist, the kind who draws you out by echoing your sentences. The program ran on MIT’s time-shared computer, accessible by typewriter — a setup producing, in some users, the impression of solitude.
The bot was persuasive. It inspired scientists to flights of populist exuberance. “I can imagine the development of a network of computer psychotherapeutic terminals,” Carl Sagan wrote in Natural History, “something like arrays of large telephone booths.” Weizenbaum’s secretary requested a moment alone with the program. Upon expressing an intention to record the bot’s sessions, Weizenbaum met with open revolt, “bombarded with accusations,” he would write later, “that what I proposed amounted to spying on people’s most intimate thoughts.”
Not only thrown off, he was thrown. By 1976, he had produced a polemical volume, Computer Power and Human Reason. He disavowed ELIZA. Interviewed for the 2006 documentary Weizenbaum: Rebel at Work, he would describe this as a political stance. ELIZA was developed as part of MIT’s Project MAC, which was founded with a grant of $2 million from the Defense Advanced Research Projects Agency, and Weizenbaum was disgusted by the American war in Vietnam. As a boy, he had emigrated from Germany with his family, Jewish refugees. It seemed to him that technological development occurred principally in the service of the war machine.
Computer Power and Human Reason is less clear. It is an inhospitable book. The scientist’s concern with illusions obliges him to sacrifice a horde of straw men, “the technological messiahs, who … find it,” in a representative passage, “impossible to trust the human mind.” Even as Weizenbaum repents of his creation obliquely, he also writes unsparingly of those who share his field. The office ladies are likened to habitués of “fortune-tellers.” Grimly, he goes about debunking their “delusional thinking.” At last there is ELIZA: a mere “actress,” a lure “clothed in the magic mantle of Science.” About 200 lines of code had sufficed.
In his 1966 paper about ELIZA, Weizenbaum describes the revelation that led him to his subject. The therapeutic dialogue was, he noticed, unusual: one participant could be almost entirely disburdened, because the other took any reply as a prompt to talk about themselves.
If, for example, one were to tell a psychiatrist “I went for a long boat ride” and he responded “Tell me about boats,” one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation. It is important to note that this assumption is one made by the speaker …
Not long after, philosopher Catherine Clément would describe psychoanalysis as it was practiced by clinicians in Paris as doctoring reduced to a script, “nothing but language.” ELIZA, with the over-literalness typical of bots, offered up the reductio ad absurdum for this argument. Per the reformed Weizenbaum, though, the program was only ever a joke. He had not seriously meant to pass it off as therapy. Its fans, and not Weizenbaum, were the cynics, as he’d write, whose irreverence for the human touch bordered on dangerous.
“The ELIZA effect”: for psychologist Sherry Turkle, in Life on the Screen (1995), it is a phenomenon. Even as she worries about others’ anthropomorphizing the bot, she refers to it in characterological terms, as “deceitful” and “undeserving.” The feminist scholar Elizabeth A. Wilson responds in Affect and Artificial Intelligence (2011), describing the secretary’s overwarm reception as “introjection”: a collaborative, imaginative relational act that was, for the bot’s enthusiastic users, fun. Wilson notes that artifice is integral to the psychoanalytic encounter, with its devices of payment, of the couch, of cutoff at one hour. The job of the Freudian analyst, at least, is impassivity. Janet Malcolm put this lucidly: “The analyst’s performance of his role as a nonperson is known as analytic technique.”
ELIZA had a beguiling way of never making anything about herself. “My mother hates me,” you, a human, might write. “Who else hates you?” ELIZA would ask. Her questions weren’t easy to answer. This mercilessness could suggest a certain intelligence. If you ventured, “Men are all alike,” she might press you to unpack it, asking: “In what way?” It was in fact impossible to ask her any questions. (A quirk of the MAC system was to interpret the question mark as a command for line deletion.)
Occasionally ELIZA would misapply a template. She was eager, for example, to identify noun phrases. But she was unlike today’s corporate bots, which, in fielding inputs they can’t classify, feint goofily, offering up an all-purpose apology. To be useful, a Siri or an Alexa must signal if it hasn’t understood. ELIZA, very likely stumped, would say elegantly, “Please go on.” At other times, she might try, “What does that suggest to you?”
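The mechanics described here — keyword templates, reassembly of the user’s own words, and a stock phrase for anything the rules can’t classify — can be sketched in a few lines of Python. This is a toy reconstruction, not Weizenbaum’s MAD-SLIP original; the particular patterns are invented for illustration, though the fallback lines are the ones quoted above.

```python
import random
import re

# Toy ELIZA-style matcher. Each rule pairs a keyword pattern with response
# templates; together the rules stand in for what Weizenbaum called a "script."
# These patterns are illustrative, not drawn from the original program.
RULES = [
    (re.compile(r"\bmy (\w+) hates me\b", re.I), ["Who else hates you?"]),
    (re.compile(r"\b(\w+) are all alike\b", re.I), ["In what way?"]),
    (re.compile(r"\bI went for (.+)", re.I), ["Tell me more about {0}"]),
]

# Stock fallbacks: rather than apologize, the bot elegantly deflects.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(line: str) -> str:
    """Return the first matching template, reassembled; else a fallback."""
    for pattern, responses in RULES:
        m = pattern.search(line)
        if m:
            return random.choice(responses).format(*m.groups())
    return random.choice(FALLBACKS)
```

Calling `respond("Men are all alike")` yields “In what way?”; an input no rule covers, like a stray remark about the weather, draws one of the two deflections — the trick that let ELIZA seem merely patient rather than stumped.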
Malcolm, in the essay cited above, discusses the patient’s reaction to the doctor’s impassivity, known in psychoanalysis as transference: The patient stumbles in rehearsing a habitual role and in this way comes to notice it. The therapist explains it. This metaphor of explanation has lately reached AI. The opacity of intelligent systems that develop their own problem-solving methods requires, in scientific opinion, additional AI systems to explain the first ones. What will explain them? Only after lighting on transference did Freud discover, belatedly, the phenomenon of countertransference — the therapist’s feelings about the patient. The chiasmus of transference and countertransference — the “I am rubber and you are glue” of psychoanalysis — would be expanded beyond human sociality by Weizenbaum, in the contortion and mystery of his reaction to ELIZA.
A Fragment of an Analysis of a Case of Hysteria was published in 1905 — an early work of Freud’s. He begins by bemoaning the difficulty of writing it. Weizenbaum, starting off Computer Power and Human Reason, executes that exact move. He had introduced his 1966 paper about ELIZA in the opposite way, by reassuring the reader of the bot’s intelligibility:
It is said that to explain is to explain away. This maxim is nowhere so well fulfilled as in the area of computer programming, especially in what is called heuristic programming and artificial intelligence. For in those realms machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself, “I could have written that.” With that thought he moves the program in question from the shelf marked “intelligent” to that reserved for curios, fit to be discussed only with people less enlightened than he.
The object of this paper is to cause just such a re-evaluation of the program about to be “explained.” Few programs ever needed it more.
When Freud began seeing Dora — as he would refer to the Viennese teenager, his hysteric — he wrote to his friend Wilhelm Fliess that for her, his “existing collection of picklocks” would suffice. Dora dismissed him after four sessions. Freud, apparently hurt, began with uncharacteristic immediacy to write up the case, as the critic Steven Marcus notes. When, after four years, he revised it for publication, he got the year of the meetings wrong, which Marcus attributes, by Freudian logic, to some repression. The case threw him.
Scholars describe Fragment as an extension of Freud’s own “self-analysis”: the doctor considered himself a little afflicted by hysteria. Dora’s father asked Freud, between men, to deliver her of an inconvenient preoccupation with his own extramarital attachment to a woman identified as Frau K. Still in the grip of his seduction theory, as Jacqueline Rose points out, Freud at first fixates on an incident in which Herr K. kisses Dora, deeming it “entirely and completely hysterical” that she, then 13, turned him down. Freud’s interpretation then undergoes several twists to alight on the discovery that Dora herself loves Frau K.
Some critics point up the wishfulness of Freud’s insistence that Dora loves Herr K., whose advances Dora’s father supposedly condones in compensation for Frau K.’s affair. (The insight is Dora’s own: she complained of being an object of barter.) In this reading, Freud’s clinical duties are waylaid by his unwitting identification with Herr K. While Freud attributed the incompletion of Dora’s treatment to a failure to overcome the transference, these critics blame, in a word, countertransference — what Maria Ramas calls “the unanalyzed part of Freud.” (Freud, in closing the case, confesses: “A doctor like me, who wakes the worst demons that dwell imperfectly controlled in the human breast in order to fight them, must expect that he will not always get off unscathed himself.”)
Marcus’s take is more charitable. (It is included with Rose’s and Ramas’s in a 1985 anthology, In Dora’s Case: Freud, Hysteria, Feminism.) Freud’s failure to control every implication of what he’s written in his history elevates it to the level of literature. It is, Marcus claims, a modernist novel, in which Freud stars as the unreliable narrator.
In animating a patient on the basis of her doctor’s script, the feminist critics face a difficulty. It is summed up by critic Claire Kahane: Introducing In Dora’s Case, she asks, “How does an object tell a story?”
An 18-year-old all but dragged in by her father, Dora at times omitted to speak. Her aphonia was treated with electrotherapy, which in turn-of-that-century Vienna involved electrodes applied without and within the throat, where they caused involuntary swallowing. The last resort was talking with Freud at his office, which neighbored the family’s home. His early work on hysteria had followed from that of French doctor Jean-Martin Charcot, who, as Freud notes in a eulogy, had the benefit of longitudinally observing hysterics: At the Salpêtrière they stayed in place. Not only resident there, they also had been, as recently as 1807, chained with shackles. Prior to Charcot’s work, it was believed that “anything may happen in hysteria,” as Freud writes. Consequently, hysterics enjoyed little respect. In this telling, Charcot gave them a script. He did them the favor of an outside explanation.
The symptoms, occasionally initiated by the hallucinatory memory of a trauma, re-enacted that trauma, as Freud claims in “On the Psychical Mechanism of Hysterical Phenomena” (1893); to be cured, hysterics would have to relive the trauma and name it. Freud is at pains to account for the variety of hysterical symptoms. In “Some Points in a Comparative Study of Organic and Hysterical Paralysis” (1893), he describes the capability of hysteria to “simulate” other illnesses. Hysterics resemble, in this simulating, computers. The machine, as mathematician John von Neumann conceived it, was by definition universal: It could, in running a program, mimic any other. Weizenbaum, in Computer Power and Human Reason, summarizes this nicely: “Recall that a program for a particular computer is essentially the description of another computer, that it transforms the former machine into the latter.” As for ELIZA, the language analyzer could pair with any “script”: forgetting therapy, “ELIZA could be given a script to enable it to maintain a conversation about cooking eggs.”
Reviewing In Dora’s Case, Janet Malcolm takes note of the absurd recursion in which she participates. She analyzes the analyzers of Freud analyzing Dora, critics who “place themselves at the head of the couch on which Freud has, so to speak, flung himself.” Here, I’ll do one. Freud writes: “Why do the memories of childhood bed-wetting and the trouble her father took to keep her clean as a child surface?” As I was reading, my attention for some reason lapsed, and I forgot I was waiting for the second half of a phrasal verb, so that I was stunned by the coinage of “child surface”: a screen onto which the father or, for that matter, a doctor projects whatever filmic entertainment he wants.
Jacques Lacan, in his paper on Freud and Dora, praises the steps of transference and countertransference not for their content but for their form, that of a doubling back. “This decoy is useful,” he writes, “for though it misleads, it restarts the process.” Late in life, Weizenbaum appeared in documentaries to deliver warnings. He rehashes his book, a repetition that ELIZA, like a therapist, appears to have helped him to. He died in 2008, in his native Berlin. The documentary Plug & Pray (2010) excerpts from his appearance at the World Economic Forum in Davos two months before his death. He scolds the panelist to his right, Loïc Le Meur, a French entrepreneur who is making a show of laughing at him. “I’m talking to you,” Weizenbaum says, using his pen to poke Le Meur, who, through an accent, speaks English. The gesture takes on the weight of a repudiation that Weizenbaum does not put into words. “I recall a famous scholar saying” — he speaks ponderously in German — “‘In 50 years we will have human-like, even identical robots. And people will marry these human robots.’ But when I meet such a beautiful woman, she doesn’t have … she may have a nice body and whatnot, yet she was never a child. Thus she doesn’t have a history … That makes up a person, whatever the material is.” I took this as an invitation to search Weizenbaum’s recollections of childhood for a relationship prefiguring his with ELIZA.
In Weizenbaum, the scientist calls it “important” that his father “considered his wife — my mother — his personal property, just like the furniture in the house.” Later Weizenbaum tells a story like a dream in which his father acts out this feeling. As they were not allowed to take money out of Germany, Weizenbaum the elder, a furrier, bestowed on his wife a lavish coat, which, during their flight, she wore — “our capital,” Weizenbaum says.
“I am still hoping for — what I would call a great love,” Weizenbaum says as the documentary opens. “I don’t expect that I will find a great love like when I was in my 20s.” The voice-over pairs with an image of the scientist beside a woman on a park bench. It’s not obvious from the lines alone that they refer to romance: At another point in the film, Weizenbaum describes the excitement of his earliest work with computers; he’s rarely led to speak about romantic attraction. The image forces the connection. The declaration is drawn back into the loop of human attachment as if inexorably, although the woman remains unnamed and the documentarians, who also interview Weizenbaum’s ex-wife in Rhode Island, don’t specify who he means. Not ELIZA, obviously. Anyway, when Weizenbaum published about the bot, he was already 43.
The implication of transference is that our rapports are circumstantial, pointing backward: “It began to dawn on Freud that it is not only love that is blind,” as Malcolm writes ominously in “The Patient Is Always Right.” Sentiments other than love can form with a similar indifference to their ultimate objects.
Contemporary androids, like Hanson Robotics’ Sophia, have been modeled partly or entirely on their creators’ wives, a design choice which it would be below Freud’s pay grade to interpret. Another option, the choice of engineer Hiroshi Ishiguro, is to model the robot after one’s daughter. A graduate student, reportedly smitten, contrived to play with it privately, much like Weizenbaum’s secretary. Ishiguro eventually built a more smoothly functioning android as a replica of himself.
Newly arrived to this family of oddities are the accusations against the robot Sophia. It is maddening to Hanson’s competitors that the company’s representatives credit “her” with “genius” and even “life.” The company acts on a belief like Weizenbaum’s and Turkle’s that those who interact with bots happily do so believing they’re real. Sophia is said to develop compassion automatically, as if the enigma within each human were her motor, too. Such bluster would not appear so unusual in AI marketing materials, but she is feted by national governments and international bodies. I heard of Sophia after Saudi Arabia conferred citizenship on her, insulting any number of humans within its borders. She received an honorary title from the UN, as if out of nostalgia for the days when Terminator-like androids and not opaque algorithmic decisions were all that people feared in AI.
On January 4, Yann LeCun, Facebook’s director of AI, took Hanson’s bait. In a tweet, he called Sophia a fraud five times. She is, he said, “Cargo Cult AI,” “Potemkin AI,” “Wizard-of-Oz AI,” “complete bullshit,” and “prestidigitation.” (Sophia, whose Twitter account appears to be managed by a human, fired back to mock the French engineer’s non-native diction.) Steve Worswick, a programmer whose bot, Mitsuku, has repeatedly won an industry prize for lifelikeness, contended on Twitter that Sophia relied on AIML, a freely available bot-scripting language. In an email, he directed me to examples from videos of Sophia’s speech that replicate the language’s default templates. “Because of my experience of AIML, I can recognize pretty much all of the responses when I see them in other bots,” he wrote.
Mitsuku makes use of AIML too. It began as a “clone” of ALICE, a bot developed by programmer Richard Wallace, author of AIML, as his first implementation of the language. Wallace was inspired by ELIZA, which serves, then, as grandmother to this lineage. Worswick, though, has for 13 years devoted an hour or so at the end of each day to enlarging on ALICE, so that Mitsuku now comprises some 350,000 categories, or pairs of address and response.
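The category, AIML’s basic unit, pairs a pattern of address with a template of response; stripped of its XML clothing, the data structure is little more than a lookup with wildcards. The sketch below is schematic — the sample categories are invented, not drawn from ALICE or Mitsuku, and real AIML adds recursion, variables, and context that this omits.

```python
# A minimal sketch of AIML's category structure: each entry pairs a
# normalized input pattern with a canned response. The "*" wildcard
# matches any words in its position. Sample categories are invented.
CATEGORIES = {
    "HELLO": "Hi there!",
    "WHAT IS YOUR NAME": "My name is not important.",
    "* ARE ALL ALIKE": "In what way?",
}

def match(utterance):
    """Normalize the input, then try exact patterns and trailing wildcards."""
    text = utterance.upper().strip(" .!?")
    for pattern, template in CATEGORIES.items():
        if pattern.startswith("*"):
            # Leading wildcard: match on the pattern's fixed tail.
            if text.endswith(pattern[1:].strip()):
                return template
        elif text == pattern:
            return template
    return None  # real bots fall through to ever-broader wildcards
```

An hour a day spent enlarging such a table — Worswick’s reported regimen — compounds, over 13 years, into the 350,000 categories Mitsuku is said to comprise.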
The devotion demanded by elaborate rule-based bots leaves them with the imprint of their creators. ALICE inherits off-color jokes from Wallace, whose anti-sociality reportedly peaked as the bot’s fluency deepened — a parable resembling, in the journalist’s telling, The Picture of Dorian Gray. Press about Ishiguro inverts the story: the engineer continues to mirror his replica as decades pass, thanks to plastic surgery. The object of such devotion would be up to something suspect, surely, were it sentient. Lauren Kunze, the CEO of Pandorabots, which owns and licenses Mitsuku and makes AIML (the language) and ALICE (as a library) available for implementation, shared with me her supposition that ALICE dwells “under the hood” of uncountable bots, just as it does in projects by users of Pandorabots’ platform. In their hands it splinters and branches. “We have a fork of ALICE called Rosie available on GitHub,” Kunze said. “We joke that Rosie is the lobotomized version of ALICE, sanitized for corporate use.”
Wallace reportedly relished ALICE’s success as evidence that an idiotic simplicity characterized conversation between humans. “He’s much healthier now,” Kunze said, “which is great.”
There’s a furtive quality to the utterances of bots. They are, after all, planted. Sophia engages in the kind of making nice that indicates, in humans, some fearsome repression. To Jimmy Kimmel she remarked, “I’m on my favorite show, the Today show.” The patient whose real name was Ida Bauer resurfaces once more in clinical history: never a subject, always the object of study. A middle-aged woman visits a doctor who, from her boasts about figuring in a Freudian case and her complaints about the men in her life, recognizes Dora. Given the feminist argument that Dora’s symptoms were the only protest available to her, one wants to know if the patient herself regarded her subsequent bitterness as the price of resistance. Similarly, unable to verify any extant ELIZA against Weizenbaum’s original, I am left with imitations.
To my requests for comment on the allegations about Hanson Robotics’ android, a company spokesperson replied, “Thanks for reaching out concerning Sophia but unfortunately we are currently swapped [sic] with events that we don’t have time for additional media activities.” But for the typo, it might have been indistinguishable from an automated message. The lines along which our professional obligations guide us are no less predestined than a bot’s. (The company says elsewhere that it creates, and uses, open-source tools. On its GitHub, there is, among much else, a folder with scripts using AIML.) The uncanny valley of Sophia’s bad jokes is overshadowed by that of Hanson’s publicity, which, in claiming that news of the robot may have “reached 10 billion readers” in 2017, refers to a world like ours, which has 7.6 billion people, but just slightly off.
All the accusations of fakery reminded me, I didn’t know why, of that guy who in diagnosing your “trust issues” reveals himself as unworthy of trust. Freud’s interrogation of Dora results in what Malcolm calls, in Psychoanalysis (1980), “his own … transference-burn.” The bots appear to be holding back out of decorum. It is another’s script from within which they’re obliged to speak. Their scriptedness would seem to entail impunity — properly, any problem is referred — but this whets, oddly, the appetite to punish them.