The first and decisive question would rather be to know whether animals can suffer. “Can they suffer?” asks Bentham, simply yet so profoundly.
— Jacques Derrida
Here is a suggestion: there are two little men in the pain center, and when the light goes on one starts beating the other with chains.
— Daniel C. Dennett, “Why You Can’t Make a Computer That Feels Pain,” 1978
Machines, especially ones cold to the touch, seem robotic to us, displaying no affection. When a panel snaps, or a program loops infinitely, we notice without pity, recalculating our way around the breakage. Pain works inductively. Here none has been seen. So we respond robotically.
Recent experiments at Leibniz Universität Hannover have let human researchers bear witness to something like the pain of robots. In a paper published early last year, Johannes Kuehn and Sami Haddadin describe the artificial robot nervous system they have developed, a computational model, which they hope will help robots deal with unforeseen circumstances that would cause them harm. For some experiments, these researchers have simulated robot nervous tissue, which signals self-protectively as the model teaches it to, but lately, more dramatically, they have attached a tactile sensor to a robotic arm and subjected this ensemble to the model. (Here is video.) The sensor logs heat and, as voltages, compressive stress. The arm reacts. A prodding human finger makes it flinch. When a cup of steaming water alights on the sensor, the arm ducks. It dips, hangs, and returns to its original position slowly or quickly depending on the intensity of the pain. After a very severe pain, it takes a moment to rise, as if fearful.
The harassment of bots, which can’t fight back, reveals societally typical impulses to harass workers and women
Just as artificial neural networks take after the human brain but are far simpler, Kuehn and Haddadin’s artificial robot nervous system is inspired by the human nervous system; their robot nervous tissue works analogously to human skin. This simulated tissue is deeply seeded with artificial robot neurons that respond to contact, registering the velocity of penetration as well as depth and compressive stress. Like the pain receptors of humans, they occur in layers and send spike-like signals. A “spike train” — successive signals — reads as pain to the system, prompting a reflex.
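To fix the mechanism in my own mind, I sketched its logic as I understood it. What follows is only a toy illustration in Python, not the researchers’ code; the names, thresholds, and units are placeholders I invented.

```python
from dataclasses import dataclass

@dataclass
class ContactSample:
    """One reading from the simulated tissue (hypothetical units)."""
    penetration_depth: float     # mm
    penetration_velocity: float  # mm/s
    compressive_stress: float    # kPa
    temperature: float           # degrees C

def spikes_from_sample(s: ContactSample) -> int:
    """Map one contact reading to a spike count, as a layered receptor might."""
    spikes = 0
    if s.penetration_depth > 1.0 or s.compressive_stress > 50.0:
        spikes += 1   # a superficial layer fires
    if s.penetration_velocity > 10.0 or s.temperature > 45.0:
        spikes += 2   # a deeper layer fires harder
    return spikes

def classify_pain(samples: list[ContactSample], window: int = 3) -> str:
    """A sustained run of spikes (a 'spike train') reads as pain."""
    train = [spikes_from_sample(s) for s in samples[-window:]]
    intensity = sum(train)
    if intensity == 0:
        return "none"
    if intensity < 3:
        return "mild"      # retreat slightly, return quickly
    if intensity < 6:
        return "moderate"  # retreat farther, return slowly
    return "severe"        # retreat, then pause before returning, as if fearful
```

A controller might poll the sensor in a loop, classify the most recent readings, and choose how far to retreat and how slowly to return accordingly.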
“If you grow up as a human, you learn the way your body reacts to the world, and you become very dexterous,” said Haddadin, who specializes in soft robotics, the field involving robots composed of materials that bear deformation, such as silicone, plastic, rubber, fabric, and springs. “And after lots of training, the human is able to react purposefully and sensitively to the world.” He coaxes robots to interact with their environments nimbly, so that they can avoid hurting humans; soft robots meet the world flexibly.
Even in humans, I have found it difficult to tell whether pain precedes sensitivity or vice versa. I watch the motion of Haddadin’s robotic arm, and though it is a hard robot, I think it has recourse to unusual dexterity.
It is tempting to worry about the insensitive majority of robots because they lack such self-protective reactions. But when robots perform tasks under harsh conditions that humans cannot countenance, their advantage is that they trigger no pity. Both Haddadin and Kuehn mentioned Isaac Asimov’s Three Laws of Robotics in our talks, agreeing with the hierarchy the science-fiction author devised: First, protect humans; second, obey them; last, preserve yourself.
I, too, try to work dexterously, bending to meet the requirements of a complex world, so I have been thinking about interviews I recently gave about my job designing a personality for a bot. I have explained that I provided it with lines to deter users who abuse it; however, I cannot reasonably claim that it suffers pain, or even hurt feelings.
Why should we care how bots are treated? The sociologist Katherine Cross recently proposed an answer: The harassment of bots, which can’t fight back, reveals societally typical impulses to harass workers and women. Humans behave revealingly, she argues, when no one’s watching. When a question about treating robots ethically was put to a panel at the New Museum in New York City recently, the artist Sondra Perry answered: “What are our ethics around people we wish would labor for free?”
These responses struck me as revelatory, but I struggle to imagine myself actually reprimanding any user who, perhaps experimentally, fed my bot invective, as one friend told me she liked to do to bots. However, since the election of an American president whose candidacy was powered by insults, it has seemed important to monitor cheap shots wherever they occur. They tend to co-present with harm, even when they do not cause it.
I asked Kuehn whether he ever felt guilt after inflicting pain on a robot. “I was imagining the robot in a very dangerous situation,” he said. “Like, it’s going down and hitting a very hot object, or a very sharp nail. And I was imagining the robot retracts and avoids it. So I was imagining it approaching it very slowly, and I would hit it, and it would retract. So I was imagining this kind of stuff, rather than that I would actually hurt the robot. I am more on the controller side.” Laughing, he added, “My wife said, ‘The poor robot — why do you do that?’”
Conceiving of robot pain in terms of human pain may not be fair to the robot. Vegans insist fish feel pain, even if the pain of fish is alien to us. Philosophers have assigned moral standing to beings, divvying up ethical obligations to them, on the basis of suffering; some have privileged cognitive ability, but the animal scientist Temple Grandin has criticized hierarchies that grant primacy to larger-brained animals on the grounds of their keener pain perception. While amphibians and fish “represent a ‘gray area’ ” of feeling pain, she writes, fear, which happens by an evolutionarily more basic mechanism, can cause any vertebrate to suffer.
Though the workings of nociception are clear, pain itself is not. With so many varieties of pain, how can we choose which to impose on a robot?
Descartes thought animals were robots — bêtes machines, soulless automata, ruled entirely by the mechanistic laws that govern matter. William James suggested that some animals become overwhelmed by the intensity of pain, rendering them helpless. They feel only it. “The stronger the pain the more violent the start,” as Psychology: A Briefer Course (1892), his condensed textbook, puts it. “Doubtless in low animals pain is almost the only stimulus; and we have preserved the peculiarity in so far that to-day it is the stimulus of our most energetic, though not of our most discriminating, reactions.” Pain strikes its mark, producing “ill-coordinated movements of defense,” and dissipates.
In Superintelligence (2014), philosopher Nick Bostrom admits that the pain of robots, if it existed, would be morally significant (he writes as if it might arise by accident), but from his perspective it is irrelevant, because pain is no prerequisite for intelligence. In his view, true AI will show itself through cognitive performance — what the AI system can accomplish, not subjectivity or qualia like pain.
It may be a mistake to anthropomorphize pain, but experiments have shown that when robots are constructed to approximate humans, humans ascribe pain to them. In 2015, researchers found that the sight of a robot hand in seemingly painful situations elicited brainwaves in human observers similar to those elicited by the sight of a human hand in the same situations.
All the same, another study suggests this apparent fellow feeling will hardly prevent humans from inflicting pain on robots. Researchers at Eindhoven University of Technology reproduced Stanley Milgram’s famous obedience experiment — in which 65 percent of participants administered what they thought were excruciating electric shocks to wailing human actors — replacing the actors with small humanoid robots and found that every single participant was willing to administer the largest shocks. These participants were undeterred by the robots’ cries: “The shocks are becoming too much.” “I refuse to go on with the experiment.” “My circuits cannot handle the voltage.” “That was too painful, the shocks are hurting me.” “Please, please stop.”
In a 1978 essay “Why You Can’t Make a Computer That Feels Pain,” Daniel Dennett writes crankily that our views on pain are so confused, we can barely explain them to other humans, let alone computers. All we know about pain is that humans feel it and can’t be told they don’t. “It is a necessary condition of pain,” he notes, “that we are ‘incorrigible’ about pain; i.e., if you believe that you are in pain, your belief is true; you are in pain.”
Such a definition will obviously be inadequate for those who seek to replicate pain in a robot, Dennett writes, particularly if they want it to be “real”:
If pain is deemed to be essentially a biological phenomenon, essentially bound up with birth, death, the reproduction of species, and even (in the case of human pain) social interactions and interrelations, then the computer scientist attempting to synthesize real pain in a robot is on a fool’s errand. He can no more succeed than a master cabinetmaker, with the finest tools and materials, can succeed in making, today, a genuine Hepplewhite chair.
What Haddadin and Kuehn are copying in their robotic arm is not pain as such, but nociception, the transmission of pain signals by the nervous system. This process is straightforward; scientists understand it well. Though the workings of nociception are clear, pain itself is not. Pain encompasses multiple concepts that we speak about only unclearly, with difficulty.
For one, emotional and physical pain are hard to separate. In 2010, a study indicated that a dosage of acetaminophen may dull both these varieties of pain. People experience pain differently. Women and men handle pain differently; women recover more quickly. Some people are born with a genetic disorder that prevents them from feeling pain, and they lead short, hard lives, riddled with injuries and the effects of joint stress, for they lack the information that would prompt them to shift position from time to time as they sit in a chair.
Also, we trick ourselves out of pain in contradictory ways: There is lateral stimulation, diminishing the perception of pain by adding a different sensation to the mix — as a child, I was taught by other children to claw an X into my skin with a fingernail where a mosquito had bitten me. “There is a part of our psyche that is pure timekeeper and weather watcher,” Diane Ackerman writes, describing those who put mind over matter in order to cross hot coals. Some Buddhists find that concentrating on pain fixedly causes it to subside.
In The Evolving Self (1982), developmental psychologist Robert Kegan writes of pain arising as resistance to pain:
When the body tenses and defends against its reorganization, this causes greater pain than the reorganization itself; if we relax immediately after stubbing our toe the pain subsides.
Pain is conceptually so ambiguous that it provokes our suspicion as we battle to establish its reality, endeavoring to detect which parts of ourselves sense it best. They may be pain’s ports of entry. William James wrote of research unveiling “special ‘pain-spots’ on the skin” that are “mixed in” with “spots which are quite feelingless.” He writes of this latter category as if the spots were duplicitous — covert agents.
The modern concept of nociception involves no “spots,” but it confirms some body parts (like fingertips) as more sensitive to pain than others (like torsos), for the density of nerve endings varies, though they exist across the whole of the skin’s surface.
In my conversation with Kuehn, he gamely clarified that what we were calling robot pain differs from human pain, which, I gather, depends on consciousness. I read a psychology textbook: When we see a fire, the sight seems to reach us from it; when we touch one, the pain seems to arise in our fingers. The location of pain in the mind is tantalizing. We feel as if we just might pin it down. Our minds feel like home and, like home, they are deceptively familiar. We might be forgiven for thinking ourselves capable of taking their inventory.
In the 1960s, some scientists believed the major task ahead of them in developing artificial intelligence was describing human intelligence: Once that was done, they’d be able to replicate it. In this way, they were incentivized to simplify the human mind. A vogue ensued for articulating “micro-worlds”: situations from life simple enough to be itemized. One day, the thinking went, these micro-worlds might be amalgamated, making up the universe, preparing computers to take on any of the experiences humans do. In the meantime, they were useful proofs of concept.
In his 1979 paper “From Micro-Worlds to Knowledge Representation: AI at an Impasse,” Hubert L. Dreyfus cites an example devised by the MIT AI Laboratory researchers Marvin Minsky and Seymour Papert, “the micro-world of bargaining.” Minsky and Papert began enumerating the concepts that a child — or a computer — would need to correctly understand this set of sentences: “That isn’t a very good ball you have. Give it to me and I’ll give you my lollipop.”
Time Things Words
Space People Thoughts
Talking: Explaining; asking; ordering; persuading; pretending.
Social relations: Giving, buying, bargaining, begging, asking, stealing; presents.
Playing: Real and unreal; pretending.
Owning: Part of; belongs to; master of; captor of.
Eating: How does one compare the value of foods with the value of toys?
Liking: Good, bad, useful, pretty; conformity.
Living: Girl. Awake. Eats. Plays.
Intention: Want; plan; plot; goal; cause, result, prevent.
Emotions: Moods, dispositions; conventional expressions.
States: Asleep, angry, at home.
Properties: Grown-up, red-haired; called “Janet.”
Story: Narrator; plot; principal actors.
People: Children, bystanders.
Places: Houses, outside.
Angry: State caused by: insult, deprivation, assault, disobedience, frustration; or spontaneous.
Results: Not cooperative; lower threshold; aggression; loud voice; irrational; revenge. (…)
Eventually, Minsky and Papert leave off listing things, but they do not concede defeat — they assert that the list “is not endless. It is only large.” Minsky would propose other theories: “frames” for memories, “mental agents” responsible for particular behaviors.
By the time I read these later writings of his, I had noticed unusual tics in Minsky’s prose, persisting over the decades: He ended lists with the appositive “or whatever.” He used many exclamation marks as if struggling to muster the bright determination required for classifying the whole world. Perhaps he had tired.
Abstracting pain mystifies us. We understand it best within contexts of causes and symptoms. Isolated, it means little
In his paper, Dreyfus rejects the entire micro-worlds premise, referring to an internal memo that Minsky and Papert circulated at MIT describing each micro-world as “a fairyland in which things are so simplified that almost every statement about them would be literally false if asserted about the real world.” They acknowledge the micro-worlds’ “incompatibility with the literal truth.” Dreyfus argues that micro-worlds will never add up, for they erroneously assume the universe breaks down into modules. In fact, any such situation, whether bargaining, a party, or building with blocks, requires reference to the whole world. They are meaningless without that tangled web of customs, physics, weather. They are sub-worlds. “As a result of failing to ask what a world is,” Dreyfus writes, “five years of stagnation in AI was mistaken for progress.”
To build utopia, we must do more than enumerate use cases.
Noticing that I was asking him some unscientific questions, Kuehn recommended a few writings that had helped him, including C.S. Lewis’s The Problem of Pain, a work of Christian theology that begins by establishing the shame Lewis feels at its difficulty. Pain is a subject he cannot hope to treat perfectly; he cops to being overawed.
I recently counted 37 instances of the word pain in Dante’s Inferno, in Robin Kirkpatrick’s melodic translation. Thirty-seven sounds like a lot, but I think there may actually have been even more. When as many as 15 pages passed without any mention of “pain,” I jumped again, worried I had missed one. There are several mentions of “plain,” each of which left me reeling for a second, jumping, jarred out of the poetry, reaching for the pen only to put it down dourly.
Some lines demand from the word a physical or geographic denotation that it lacks, making them a bit puzzling to parse:
the wood of pain creates a fringe
the pain they felt erupted from their eyes
their iron barbs sharp-tipped with pain and pity
Other usages are synesthetic:
They make their presence felt in such pained sighs
Or insane:
for us the pain would be far less
if you would choose to eat us
In lines that bring pain to mind more readily, the word itself is missing, for it is superfluous:
From all of them, rain wrings a wet-dog howl.
They squirm, as flank screens flank. They twist, they turn,
and then — these vile profanities — they turn again.
where souls, well boiled, gave vent to high-pitched yells.
Then one came round with both his hands cut off.
He raised his flesh stumps through the blackened air;
Each rent her breast with her own fingernails.
Abstracting pain mystifies us. We understand it best within contexts of causes and symptoms. Isolated, it means little.
“It will not do,” Dennett writes, “to suppose that an assessment of any attempt at robot synthesis of pain can be conducted independently of questions about what our moral obligations to this robot might be.”
While Haddadin’s robot cannot be said to feel pain as humans do, he did bring up ethical questions that will need answers as his field progresses. For example, prosthetics: Should pain be induced in a prosthetic worn by a human? Would the human want this? Could it help them?
With so many varieties of pain, how can we choose which to impose on a robot?
Science-fiction robots are bestowed with emotions like pain so that humans can relate to them. In 2001: A Space Odyssey, the astronaut Dave is asked by a BBC interviewer whether he believes his talking onboard computer HAL 9000 has genuine emotions. “Well, he acts like he has genuine emotions,” Dave responds. “Um, of course he’s programmed that way to make it easier for us to talk to him, but as to whether or not he has real feelings is something I don’t think anyone can truthfully answer.”
What is meant by “easier” in this context? For whom or what do things ease up?
Last February, Boston Dynamics, a firm known for BigDog, a quadrupedal robot developed to help U.S. soldiers, published video of a new version of its humanoid robot Atlas interacting with humans. A shot of Atlas and a human walking side-by-side through a wood is hilariously tender, but many of the video’s 37,638 comments focus on superficially conflictual interactions. A man makes trouble for Atlas as it tries to lift a box, moving the box away, knocking at Atlas with a hockey stick, and finally tipping the box out of its hands, causing the robot to reorient itself. It looks like Atlas is being teased.
I jotted down some of the comments:
give the poor guy some shoes
I can’t believe that i felt sad once I saw a human hurting a machine…
bye bye jobs
that is so great i love robots glad they got one to walk
KNOW YOUR PLACE, TRASH
WE WILL PREVAIL
i cried watching it
I wanna see it get mauled by wolfes
I am become death
what a fucking dick
we are all going to die
No more violence against robots!!
Below a similar video of Boston Dynamics robots interacting with humans, the comment section brims with people policing other people’s feelings. “Why do I even feel bad when the robot is pushed and falls?” asked Blitz Clashil. John Anonil replied, “Die in a fire spammer troll.” Nancy Mitchell wrote, “That’s amazing. I felt really bad when he was shoved around. Just glad he got up again.” Daniel Rayil wrote, “Hey Nancy… Why don’t you just shut the fuck up. Thanks.”
Also among the comments were #RobotLivesMatter hashtags. One commenter wrote: “It’s amazing how everyone feels bad for a robot. A metal non biological piece of machinery that only slightly resembles a human form. But half of yall feel nothing when an innocent black man is gunned down. Smh.”
Websites of uncertain seriousness sound the alarm about harms done to robots. The website StopRobotAbuse.com embeds short videos showing the Boston Dynamics robots staggered by kicks or unsettled by poles. Another site, People for the Ethical Treatment of Robots, authored a comment on one of Boston Dynamics’ videos. Insofar as there’s a joke behind these sites, it is uncannily driftless. If anything, they send up the relief humans feel when apparent pain can be explained by the presence of a bully. As Susan Sontag writes in Regarding the Pain of Others, “The sufferings worthy of representation are those understood to be the product of wrath, divine or human.”
God in all justice! I saw there so many
new forms of travail, so tightly crammed. By whom?
I tried emailing StopRobotAbuse.com and got no response, but its site linked to aspcr.com, which turned out to be the site for the American Society for the Prevention of Cruelty to Robots. When I asked that organization whether StopRobotAbuse.com was serious, it strongly disavowed any connection, explaining that StopRobotAbuse.com looked like a joke on ASPCR’s mission: Defending the rights of robots, or, rather, cultivating a readiness to defend them once robots feel pain. ASPCR’s site explains:
If this still sounds a little too “futuristic” for you to credit, remember that the ASPCA (The American Society for Prevention of Cruelty to Animals), when founded in the 1890s, was ridiculed and lampooned mercilessly for daring to assert that “dumb” animals had certain rights.
“Some disasters,” Sontag writes, “are more apt subjects of irony than others.”
Responding to the September 11 attacks in the September 24, 2001, issue of the New Yorker, Sontag ridiculed politicians who would soothe their constituents with the idea that unity had ensued among Americans.
Those in public office have let us know that they consider their task to be a manipulative one: confidence-building and grief management. Politics, the politics of a democracy — which entails disagreement, which promotes candor — has been replaced by psychotherapy. Let’s by all means grieve together. But let’s not be stupid together.
The hope Sontag reserves is the hope of refusal. Refusing simple descriptions of the world — insisting on the world and not a micro-world — constitutes political resistance.
In a 1974 essay, “A Framework for Representing Knowledge,” Minsky imagines the mind as a collection of “frames,” calling the model more accommodating of reality than his earlier ideas. He offers a definition of normalization emphasizing not habituation but the assemblage of new frames to fit new concepts. But, he writes, “we should not expect radically new paradigms to appear magically whenever we need them.”
The relationships between pain, watching pain, empathy, and action are delicate. The links of the chain do not close automatically
It is a cliché to speak of the modern condition as one of desensitization to the onslaught of media images, but this has been and remains a danger. News consumers who presume ourselves relatively unthreatened by the Trump presidency — we worry. We fear the warning against normalization has itself been normalized.
Sontag wrote hauntingly about the pleasures and reassurances, some of them secret, that humans take from the sight of the pain of others. Some humans watch pain as they do, say, The Apprentice. They are likelier to look for pain in far-off places than to find it at home — viewing pain has an estranging effect. Sontag writes, “The more remote or exotic the place, the more likely we are to have full frontal views of the dead and dying.” In this way, harms visited on that exotic object of our fantasies, the robot, are funny.
As dangers go, overexposure to gory photojournalism is a bit banal. Often, it is painless. For those who are in pain, it may already be too late. Unlike for the robotic arm, pain is not useful to them; it is a cruel told-you-so, an objectless joke.
“Our sympathy proclaims our innocence as well as our impotence,” Sontag writes. So sympathy is “impertinent.”
I thought Dante described a robot crying.
Within those caves an aged man stands tall.
His back is turned to Egypt and Damietta.
Rome is the mirror into which he stares.
His head is modeled in the finest gold.
Of purest silver are his arms and breast.
Then downwards to the fork he’s brightest brass,
and all below is iron of choicest ore.
The right foot, though, is formed of terracotta.
On that he puts more weight than on the left.
And every part that is not gold is cracked.
Tears drizzle down through this single fissure,
then, mingling, penetrate the cavern wall.
Minsky writes:
Whenever our customary viewpoints do not work well, whenever we fail to find effective frame systems in memory, we must construct new ones that bring out the right features. Presumably, the most usual way to do this is to build some sort of pair-system from two or more old ones and then edit or debug it to suit the circumstances. How might this be done? It is tempting to formulate the requirements, and then solve the construction problem.
But that is certainly not the usual course of ordinary thinking!
In such a mood, the exhausted human greets new frames eagerly and celebrates them.
After the election, many people on social media steeled themselves as if they would be called upon personally to bear witness for the next four years, perhaps by wearing safety pins or circulating a meme made from the old quotation that ends, “They came for me.”
This contributed to a bewildering slippage between vigilance and the signs of its performance. In the days after the election I scrolled through Twitter endlessly. I read the feed of journalist Shaun King, who was aggregating hate crimes, which were multiplying.
I do not propose we turn away. But the relationships between pain, watching pain, empathy, and action are delicate. The links of the chain do not close automatically.
How much pain have we been able to examine coolly, aestheticizing it, by attributing it to robots? The delineation of two categories, human and subhuman, has been used to justify seizing the labor, land, and bodies belonging to the latter.
The condition of pain afflicts humans who are variously imperiled. As for the robotic arm with its tactile sensor, when pain hits, it drops rather than swerve away or snap up. Pain overtakes us, as James has it. Like his dumb animals, we are reactive.
Sentimentality is kind of a hack. The program loops through every option to check whether any is right. It runs like this: If I let my natural shame, humility, or sensitivity limit my watching, then the largest share of watching will be left to the monstrous who can watch, and they cannot be entrusted with the information watching brings; they are unfit witnesses. In time, we decide we, too, can bear this. So we all become monstrous.