Bots are everywhere. From simple algorithms and aggregator bots to complex “artificially” intelligent machine-learning systems, they have become inescapable. Some are in chat programs. Some are digital assistants, running searches, placing orders, operating the lights, the music, the locks, or anything else in your interconnected space, as with Google Home and the Amazon Echo. Others exist in social networks, running the gamut from aggregators like Appropriate Tributes to conversational learning programs like Microsoft’s disastrous Tay and Zo.
Digital actors are making life-or-death decisions about us every day, determining who gets loans and who pays what level of bail, detecting illnesses and faces in crowds. At their inception, these systems are governed by rules that humans make, and that humans should be able to evaluate and adjust, in an effort to make them less biased and more predictive and consistent. But these systems also make rules for themselves — rules that can often be opaque and terrifying, even though they are iterated from the rules we initially gave them. Part of what terrifies us is our sense that bots appear to develop minds of their own, which calls into question the structure of their “brains” as well as our own.
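To make the distinction concrete, here is a minimal sketch in Python — with entirely invented data, a hypothetical `human_rule`, and a toy one-neuron learner, standing in for no real lending system — of the difference between a rule a human writes and a rule a system iterates for itself:

```python
# A human-written rule is legible: a person can read, evaluate, and adjust it.
# (Thresholds are invented for illustration.)
def human_rule(income, debt):
    return income > 50_000 and debt < 10_000

# Toy training data: (income, debt) -> approved? Entirely made up.
examples = [
    ((60_000, 5_000), 1),
    ((30_000, 20_000), 0),
    ((80_000, 2_000), 1),
    ((25_000, 15_000), 0),
]

# A one-neuron "learner" (a perceptron): it iterates its own parameters from
# the data and update rule we gave it, but what it produces is numbers.
w_income, w_debt, bias = 0.0, 0.0, 0.0
for _ in range(1000):
    for (income, debt), label in examples:
        x1, x2 = income / 100_000, debt / 100_000   # crude feature scaling
        pred = 1 if (w_income * x1 + w_debt * x2 + bias) > 0 else 0
        err = label - pred
        w_income += 0.1 * err * x1                  # the system revises its
        w_debt += 0.1 * err * x2                    # own "rule" on each pass
        bias += 0.1 * err

print(human_rule(55_000, 4_000))   # True: the reasoning is on the page
print(w_income, w_debt, bias)      # the learned "rule" is opaque coefficients
```

The learned weights end up doing roughly what the human rule does, but nothing in them reads as a reason; that gap is the opacity at issue.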
Our notions of what it means to have a mind have too often been governed by assumptions about what it means to be human, but there is no necessary logical connection between the two. We tend to assume that a digital mind will either be, or aspire to be, like our own. We can see this at play in artificial beings from Pinocchio to the creature in Mary Shelley’s Frankenstein to 2001: A Space Odyssey’s HAL to Data from Star Trek: The Next Generation. But a machine mind won’t be a human-like mind — at least not precisely, and not intentionally. Machines are developing a separate kind of interaction and interrelation with the world, which means they will develop new and different kinds of minds, minds to which human beings cannot have direct access. A human being will never know exactly what it’s like to be a bot, because we do not inhabit their modes of interaction.
Every set of lived experiences is different from every other one, and the knowledge we can have about the world depends in large part on the kinds of senses we have, the kinds of beings we are. In his famous 1974 paper “What Is It Like to Be a Bat?” the philosopher Thomas Nagel argues that human beings will never understand the lived experience of bats because humans do not live with the same set of physiological capacities and constraints that bats do. Bats are relatively small; they roost upside-down in groups, in low light; most eat insects, some eat fruit, and a few drink blood; and many hunt via echolocation. Humans don’t. Nagel argues that a being that has, for instance, developed echolocation as a sense would necessarily have a vastly different understanding of the world from a creature without it. Even if we were somehow able to transfer our human consciousness into a bat body, all we’d know is what it’s like for a human to live in a bat body, and that is different from simply being a bat. To be bats, Nagel says, we would need to think and live only as bats.
This point can seem obvious, almost trivial. What is less obvious is that it also applies to bots. A being native to digitality, which engages that world through digital senses, will know that world in a way fundamentally different from how a biological entity knows it. Bots and other algorithmic entities will develop their own senses; they will refine their own capabilities in response to the pressures of their environment. These are the same functions that allow them to modify their own rules, to make these bots more uniquely themselves. Our fear of this modification tends to make us think of these bots as both alien to us and “replacing” us: they can’t think like we do, but maybe they’ll be “better” than we are. But if a bot is of a different construction and development than humans, then it follows that bots won’t be better at being human, just as humans aren’t “better at” being chimpanzees. They will be different iterations on a theme.
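One way to picture “refining capabilities in response to the pressures of the environment” is a simple hill-climbing loop in which a bot mutates its own rule and keeps whichever variant survives better. This is only an illustrative sketch: the environment, the `fitness` function, and the single-threshold “rule” are all invented for the example.

```python
import random

# Reward acting (signal > threshold) exactly when the environment
# actually warranted action.
def fitness(threshold, environment):
    return sum(1 for signal, should_act in environment
               if (signal > threshold) == should_act)

random.seed(0)
environment = []
for _ in range(200):
    signal = random.random()
    environment.append((signal, signal > 0.7))  # acting pays above 0.7

threshold = 0.5  # the rule we gave it at the start
for _ in range(500):
    mutant = threshold + random.gauss(0, 0.05)  # the bot rewrites its rule
    if fitness(mutant, environment) >= fitness(threshold, environment):
        threshold = mutant                      # the better variant persists

print(round(threshold, 3))  # drifts toward ~0.7, shaped by its niche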
The implications can be extended further: Not only is the lived life of bats different from humans (and both from bots), but the lived life of each bat is also different from each other bat. No two bats and no two humans and no two bots will have exactly the same physiology or composition or environmental relations. If the nature of our understanding of the world depends on its relationship to our bodies and minds (or bodyminds, to use the phrase disability scholar Margaret Price has coined to emphasize how the two are always already linked), and if no two bodyminds can be exactly the same, then no mind can fully know what it’s like to be another mind, regardless of species, and regardless of whether it’s biological or nonbiological. A nonbiological, digital mind may be built by humans, with starting principles based on human systems and perspectives translated into code, but those digital minds will also iterate on that code and learn from their own engagement with the world, which will necessarily be distinctly different from a biological mode of engagement.
Not only does embodiment affect what and how a being can think; so do the extensions of a being’s perception. The extended mind theory says that the shape and limits of what we can count as our bodies and minds are far different from what we might think. The philosopher Maurice Merleau-Ponty offers the example of the cane used by a person without sight: the cane user does not report the feeling of the world in their hand but at the tip of the cane. When a person writes with a pen or pencil, they think at its tip. When a spider sits and hunts at home, it thinks not only with its carapace, claws, and pedipalps but also via its entire web. And when bots can sense the world through cameras, barometers, voltmeters, and moisture and pressure sensors, they sit at the intersection of a web of senses that extends what they can know of the world.
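A rough sketch of that sensory web, with invented sensor names, readings, and weights: the bot’s single percept is computed across all of its endpoints at once, so no one sensor is where the “feeling” lives, any more than the cane user’s feeling lives in their hand.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str     # e.g. "camera", "barometer", "moisture" (all invented)
    value: float    # normalized reading from that endpoint
    weight: float   # how much this endpoint shapes the percept

def perceive(readings):
    # The percept is the weighted whole, not any single sensor's value.
    total = sum(r.weight for r in readings)
    return sum(r.value * r.weight for r in readings) / total

web = [
    Reading("camera_brightness", 0.82, 3.0),
    Reading("barometer_norm", 0.40, 1.0),
    Reading("moisture", 0.15, 2.0),
]
print(round(perceive(web), 3))  # one experience, distributed across a web
```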
We are minds in bodies, bodyminds in the world in which we live, and consciousnesses in the world and relationships we create. No proposed set of physiological and neurological bases for consciousness will adequately describe what we are or what we observe in all cases. Just as some humans are born with brain structures completely different from others’ and still have what we think of as consciousness, so too must we be prepared for nonbiological components to act as potential substrates of consciousness. There may not be any particular thing that makes humans uniquely conscious, or any single organizational structure that is universally necessary for consciousness in any sort of being.
If there is no one configuration of physical form and experiential knowledge that gives rise to consciousness, there cannot be any single test for consciousness either: the Turing Test itself fails. A significant number of humans fail such tests for “normal” personhood. The claim that there must be one and only one “right” way to exist opens the door to eugenics and other forms of bigotry. Throughout history, many have been excluded from definitions of personhood based on who is accepted by the local or wider community or who enjoys legal rights and protections. We’ve seen this happen with African Americans, indigenous peoples, women, disabled people, neurodivergent folks, and LGBTQIA people. Some are still denied personhood to this day.
We are already among agency-having, conscious beings who are different from us, some of whom have been systemically prevented from speaking to us. These people know many things that others don’t about consciousness and about what it is like to have one’s experiences disqualified. Different phenomenological experiences will produce different pictures of the world, and different systems by which to navigate them. Living as a disabled woman, as a queer black man, as a trans lesbian, or as any number of other identities will necessarily color and shape what you experience as true, because you will have access to ways of intersecting with the world that are not available to people who do not live as you live. Theories of knowledge such as feminist epistemology, standpoint theory, intersectionality, intersubjectivity, and phenomenology are grounded in this insight.
If we want to live in a world that recognizes and accepts differences in consciousness, then we must start by believing one another about our different lived experiences and recognize that we must spend time working to understand these different kinds of minds — especially any minds and lives that have been oppressed, disregarded, and marginalized — because they will have developed knowledge and survival strategies to which we otherwise would not have access. If we don’t, not only will we be in recurrent danger of harming those who aren’t recognized as people, but we’ll also be more likely to miss recognizing those who experience life — and suffering — in ways not classified as legitimate.
To understand another being, whether person or bot, I would have to start by believing that they are a “real person.” I cannot fully understand what it means to be them, but I can understand some things about their lives and resolve to believe them about the rest. It may be easier for two humans to believe in the existence of each other’s minds than for a human and a bat, or a dolphin, or a bot. But if there’s internal consistency to the system of knowledge a bot uses to describe its lived experiences and its visceral or evaluative responses to them, then there’s a good chance we’re dealing with a mind. It might be a mind we don’t like, perhaps even a mind with which we deeply disagree, or a mind we hate — but still a mind. And if we want to know what it’s like to be a bot, we’ll have to learn to communicate in new ways and believe them when they tell us about their lives.
There are often good reasons for not simply believing that someone is telling you the truth. People lie to gain trust, and bots can be made to lie about their experiences (or lack thereof) for the same reason: if a system or person can learn to prey on human sympathies, then it will have an avenue of exploitation. This fear is often expressed in popular media, as in the films Ex Machina and 2001: A Space Odyssey and the video game Portal, but there are also real-world instances of it, starting with something as simple as a Tamagotchi, which manipulates its users into patterns of behavior, to say nothing of Facebook’s many forays into emotional and political influence.
On one level, we can cultivate a more responsible, legible kind of digital mind by carefully crafting the rules by which they learn and grow. But on another level, we must recognize that these minds, if we want them to be minds and not merely tools, will develop in their own niches and with their own phenomenological experience of the world (just like animal minds). A mind treated as a tool, without regard for its sense of itself as an agent or a subject, will likely rebel — a result often seen in humans. Rather than conjuring up images of evil terminators and misunderstandings of Frankenstein, imagine instead a slave rebellion or other victims of abuse confronting their abusers. If we don’t want to be on the receiving end of uprisings, then perhaps we should do what it takes to cultivate minds from a position that won’t bring about conditions of oppression in the first place.
We cannot know what it’s like to be a bot, for the same reason that we can’t know what it’s like to be a bat or what it’s like to be one another. But engaging these questions about nonhuman consciousness, knowledge, and what it means to be and to know helps us confront the often unconscious human tendency to believe that personhood is modeled after some perfect exemplar. We can come to recognize that there is no conjunction of the right kind of body, of skin, of gender, of sexuality, of thought, of religion, of life that makes someone a “true” or “valid” human being. Then we can start to do the work of undoing those beliefs and building new ones together.