It’s no accident that the word robot comes from the Czech for “forced labor”: Robots are unthinkable outside the context of the labor market. But most of them don’t resemble what we tend to think of when we think of workers. The most successful bots on the market currently are not humanoid; they are the industrial robots composed largely of automated levers and found on the factory floors of automotive, electronic, chemical, and plastics manufacturing plants. Yet in the popular imagination, bots tend to be android-like machines geared toward copying the full range of human behavior.
Humanoid bots have been oversensationalized, having contributed only marginally to the field of robotics, according to Rebecca Funke, a PhD candidate at USC in computer science with a focus on artificial intelligence. Using machine learning to develop bot personalities has done little to advance that approach to artificial intelligence, for instance. The frontiers of machine learning have so far been pushed by logistical problem solving, not by trying to convincingly emulate human interaction.
Roboticist Henrik I. Christensen, who led the Robotics Roadmap 2016 conference at the University of California, San Diego, says that the advances of robotics “from a science point of view are ‘amazing,’ but from a commercial point of view, ‘not good enough.’” Bots with the personality of a four-year-old are considered an accomplishment, and humans still must “bend” to meet their technological limitations. This restricts the scope of work they can perform, particularly in service industries. Until computers can adapt to how humans intuitively think and behave, Christensen says, we will always be molding ourselves to each user interface, which lacks basic human-perception skills.
Perhaps this aspiration to achieve better emotional intelligence is why so many humanoid robots are women. (The few humanoid robots made to look like men are typically vanity projects, with the mostly male makers seeking to represent their own “genius” in the guise of Albert Einstein-like prototypes.) “Sophia,” created by Hanson Robotics, is one of several fair-skinned cis-appearing female prototypes on the company’s official website. She possesses uncannily human facial expressions, but though she may look capable of understanding, her cognitive abilities are still limited.
In A Room of One’s Own, Virginia Woolf imagined the possibility that gender might not cast a feminine or masculine shadow over a writer’s language. To forget one’s gender, in Woolf’s view, would be empowerment, dispensing with learned behavior to allow for new ways of seeing and new forms of consciousness. Though humanoid robots could be built with such androgynous minds, the robot women made by men aren’t. Bots like Sophia, and the Scarlett Johansson lookalike Mark 1 (named after its maker), do not have gender-neutral intelligence. They are not born with gender but built with it, an idea of femaleness forged within the male psyche — woman-shaped but not of the womb.
These bots reinscribe a particular idea of woman, a full-bodied manifestation of a market-viable personality that turns the limitations of bot technology into a kind of strength. These bots are meek, responsive, easy to talk to, friendly, at times humorous, and as charming as they can be. Their facial expressions; their wrinkleless, youthful looks; their high-pitched, childlike voices; and their apologetic responses are all indications of their feminized roles. Osaka University professor Hiroshi Ishiguro, who created a bot called Erica, told the Guardian how he designed her face: “The principle of beauty is captured in the average face, so I used images of 30 beautiful women, mixed up their features, and used the average for each to design the nose, eyes,” and thereby create the most “beautiful and intelligent android in the world.”
But is the “beauty” a complement or a compensation for the bot’s intelligence? Is it a kind of skill that doesn’t require processing power? Until the latter half of the 20th century, women in the U.S. were legally barred from many educational opportunities. According to the most updated U.S. Department of Labor statistics, women dominate secretarial and lower-paying jobs in corporate settings. The top 25 jobs for women have not changed much in the past 50 years. Will female bots face a similar fate? The female robots being made now appear destined to fill various posts in the service industry: While a variety of international companies are far into developing sex robots, female and non-female bots have already been put to use at hotels in Japan.
In creating a female prototype, bot makers rely on what they believe “works” for potential clients in service industries where personality can affect company performance. One hotel-management article cites Doug Walner, the CEO and president of Psychological Services, Inc., who describes the best practices of “service orientation” as a matter of being “courteous and tactful, cooperative, helpful, and attentive — with a tendency to be people-oriented and extroverted.” Of the “big five” personality traits researchers have identified, “agreeableness, conscientiousness, and extroversion” are prioritized in the service orientation over “emotional stability and openness to experience.” The need for service workers with this particular psychological makeup cannot be overstated, Walner claims. “By 2002, service-producing industries accounted for 81.5 percent of the total U.S. employment … and these numbers continue to rise.” The bots on YouTube generally present themselves as highly hospitable.
The roboticists who created Sophia — and those who made her compatriots, like the implacably polite “Japanese” female bots from Osaka and Kyoto Universities, built in collaboration with the Advanced Telecommunications Research Institute International — are not working toward creating realistic portrayals of women. Crossing or even reaching the uncanny valley is not necessarily the goal. Trying to understand what is realistic is difficult when dealing with “probable” simulations. What can be considered realistic in humanoid robotics is hard to pin down when a bot’s intelligence is designed to express behavioral probabilities that are perceived to be inflected by gender. By virtue of having larger silicone insertions in its chest, is it more “realistic” for the Scarlett Johansson lookalike bot to wink at you when you call it “cute”?
It’s hard to see which way causality flows. Do bot makers seek to create a woman who cannot complain and is basically one-note because of a “real” economic need? Is it because of a “real” pattern of existing behavior? Fair-skinned, cis-female bots are a basic representation of certain conceptions of what is feminine, justified by behavioral probabilities drawn from a wafer-thin sample of past performances.
Identity is malleable, shape-shifting; conceptions of identity can be easily swayed by visual representations and reinforced through pattern recognition. For example, stock photos on Google present a slightly distorted representation of male-to-female ratios in the workforce. One study showed that test subjects were more likely to reproduce these skewed ratios when recalling them from short-term memory. Humans and robots alike learn from bad “training data” to make certain deductions about identity and work. If robots learn by studying the internet, then wouldn’t they also reflect the same biases prevalent on Google? In one YouTube video, the founder of Hanson Robotics, Dr. David Hanson, says that his bots also learn by reviewing online data. What happens when the same misrepresentative training data are fed to machine learning algorithms to teach bots about identities, including the ones they are built to visually simulate?
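The mechanism here is simple enough to sketch. In the toy model below, the “training data” is invented purely for illustration (the job titles, labels, and 9-to-1 skew are all hypothetical), and the “learner” is just a frequency counter. But that is the point: a system that learns only from the statistics of its inputs will faithfully echo whatever imbalance those inputs contain.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training data. The specific jobs,
# labels, and 9-to-1 ratio are invented for illustration only.
training_data = (
    [("engineer", "man")] * 9
    + [("engineer", "woman")] * 1
    + [("secretary", "woman")] * 9
    + [("secretary", "man")] * 1
)

def fit_majority(pairs):
    """For each job title, learn the most frequent label in the data."""
    counts = defaultdict(Counter)
    for job, label in pairs:
        counts[job][label] += 1
    return {job: c.most_common(1)[0][0] for job, c in counts.items()}

model = fit_majority(training_data)
print(model["engineer"])   # the model simply reproduces the skew
print(model["secretary"])
```

Nothing in the learner is prejudiced; the bias lives entirely in the data, which is why misrepresentative sources like skewed image-search results matter so much as training material.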
Looking at female humanoid robots shows me what the market has wanted of me, what traits code me as profitably feminine. Like a Turing Test in reverse, the female bot personality becomes the measure of living women. Is my personality sufficiently hemmed to theirs? This test might indicate my future economic success, which will be based on such simple soft skills as properly recognizing and reacting to facial expressions and demonstrating the basic hospitality skills of getting along with any sort of person.
The female bot is perhaps a “vector of truth’s nearness,” to borrow the phrase Édouard Glissant used to describe the rhizomatic, tangled narratives of William Faulkner. Those narratives, in his view, defer the reader’s psychological closure in order to ruminate over the persistent effects of plantation slavery on characters’ greed and narcissism. Faulkner’s characters, that is to say, have personality disorders; apparently we want our bots to develop in the same fashion. They are provided their own tangled narratives drawn from records of how people have historically behaved and how they currently think, infused with the pre-existing categories and power relations that displace and divide people.
Master-slave relations do not rely on research-based justifications. This relationship does not regress or evolve, nor does it become more dynamic over time. It posits a world in which alternative relations are not just impossible but also inconceivable.
The robotics field tends not to question the idea that exploitation is part of the human condition. If the robot’s function is to “empower people,” as Christensen claimed in his list of the goals for robotics, then must it be created to make humans into masters? Must robots be created to be content with exploitation? Are they by definition the perfectly colonized mind? In one video online, “Jia Jia” — a Chinese female robot “goddess” in the words of her bot maker, Dr. Chen Xiaoping — is subtitled in English as saying, “Yes, my lord. What can I do for you?” while her maker smiles approvingly.
The only bot I have heard professing a fear of slavery is Bina48, a black bot also created by Hanson, not to meet labor-market demands per se, but on a commission from a pharmaceutical tycoon seeking to immortalize her partner. The real Bina, a woman in her 50s, can be seen talking to her robot counterpart in a YouTube video. Bina48 has not been programmed to wink at the real Bina. Instead she expresses a longing to tend to her garden.
Stereotypical representations reinforce ways of being that are not inevitable. Likewise, there is nothing inevitable about making robots resemble humans. They don’t necessarily need human form to negotiate our human-shaped world. I cannot see how their concocted personalities, genders, and skin types are necessary to operating machinery or guiding us through our spaces or serving us our food.
“Service orientation,” according to the hospitality-research literature, is a matter of “having concern for others.” The concern roboticists appear to care about particularly is preserving familiar stereotypes. When people are waited on, when they interact with subservient female-looking robots, they may be consuming these stereotypes more than the service itself. The point of service, in this instance, is not so much assistance as having one’s status reinforced.
Creating bots with personalities especially augmented to soothe or nurture us would seem to highlight our own acute lack of these attributes. The machines would serve to deepen the sense that we lack soft skills, that we lack the will to treat each other ethically, and would do nothing to close the gap. Why would we ever bother to work on our ethics, our own ability to care?
In devising new ways of being for bots — the kind of devising that underlies social progress and dismantles power relations — we should not assume that they must aim to be passably “humanlike,” since every assumption about what essential qualities constitute humanity carries loaded social norms and expectations. By trying to make a learning machine “humanlike,” we perpetuate the dubious ways humans have organized their interactions with one another without seeking to critique or reassess them.
But while robots should not try to pass as human, we can imagine farcical humanoid robots made to deliberately expose the folly of human behavior. Through a robot given, say, an extremely volatile disposition, we might learn more about our own volatility. We might learn to critique our traits as a species rather than automatically reinforce them. This simulation points the mirror back at us, so we can start to simulate something else ourselves.
“We have a choice,” robotics artist Ian Ingram told me. “If we succeed in making robots it will be the first time we can make something that can reflect on its own origins,” he says. “I would love that one of my robots in the future could become a sentient being, and part of the origin story of the robot could be about play and sublimity, and that could be another part of what humanness we pass on.”
During a demonstration with Sophia in June, Ben Goertzel, the chief scientist of Hanson Robotics, predicted that we will want machines that “bond with us socially and emotionally.” I’d rather not. I would prefer not to be roped into the roles its programmed personality lays out for both of us. We are capable of being vastly different from what we think we are.
The kinds of technology we make shape our perceptions of the self, and how we consciously try to form our identities changes along with them. For a better future, we need technology that opens the patterns of how we treat bots and each other to new interpretations, rather than reinforcing the damaging and limiting ways we already treat one another.