Bot consciousness is a proxy case for other human minds

A few years ago, when I used to post more frequently on Twitter, a Rob Horning Bot suddenly appeared, with a copy of my avatar and text derived from an algorithmic rehash of my thousands of tweets. The results were sometimes nonsense, sometimes eerily plausible, sometimes pseudo-profound — much like my actual Twitter presence. Sometimes the bot posted things I wished I had thought of, but mostly it posted word salad that reminded me of my own inner monologue when I’ve had too much coffee. I was flattered that someone had bothered to program this automatized version of me and entertained by it, even though I wondered if it was meant as mockery: I’m so predictable that you could generate my pronouncements with a few lines of code. My capacity for generating “insights” was not necessarily any better than a purely thoughtless random concatenation of words I like to use.

I get a similar feeling from all the different algorithmic recommendation systems I encounter across online platforms — the sorted timelines, the targeted ads, the books “inspired by my shopping trends,” and so on — that the data I have generated can be readily used to distill my subjectivity to simple patterns and that my emotional life is fairly easy to predict, if not to simulate altogether. These algorithms conjure a doppelganger who seems to be pursuing and enjoying things for me, like in Dostoyevsky’s story “The Double,” making me in some sense redundant. It’s as if my life has been fed as training data to a web of interlocking machine-learning systems that threaten to replace me with my more rationalized counterpart, the me for whom past performance is a guarantee of future results. But it is less that I will be replaced than that I will be seduced by that “second self” and want to become more like it, take on its efficient, logical approach to being as my own.

This seems to be what animates the fears of interacting with bots and other responsive machinic systems: that they are not only training themselves on us to emulate us but also tempting us into becoming more like machines ourselves and possibly surrendering what makes us human. We may prefer to interact with machines, which cater to us rather than challenge us or demand responsibility, and we could become more like machines to facilitate these unreciprocal nonrelationships.

This idea was central to Sherry Turkle’s interpretation of AI in her book The Second Self (1984), in which she describes a computer as “a companion without emotional demands … You can be a loner, but never alone. You can interact, but need never feel vulnerable to another person.” When early users developed emotional attachments to a super-basic chat bot like Joseph Weizenbaum’s 1960s creation ELIZA — the history of which is described in Jacqueline Feldman’s essay “Faking It” this week — critics like Turkle and Weizenbaum himself saw this as a kind of debasement.

But this assumes that there actually is something ennobling and “human” about our typical interactions with each other, and that the respect people have for one another’s humanity is inherent, always present, and will flourish through our encounters with each other. In reality, many interactions among humans are as scripted and instrumental as our interactions with bots — programs executed through small talk, platitudes, and repetitive gestures, often to allow some economic exchange to proceed without friction. And throughout history, some humans have been systematically excluded from personhood by others, to justify oppression, exploitation, and extermination. As Damien Williams argues in “What It’s Like to Be a Bot,” the category of the human is no universal template for thought or consciousness, and the temptation to perceive it as such must be rejected in favor of understanding the diversity of embodied minds.

In this sense, trying to understand bot consciousness is not an idle exercise in speculative epistemology. Instead it serves as a proxy case for the sorts of inclusions and exclusions humans have continually made about the personhood and integrity of others. Understanding the point of view of a bot, explaining how it thinks in our own terms, is not just a way to defend ourselves from what it might do; it also protects our broader habit of projecting our own ways of thinking as an ethical limit — the assumption that if something or someone doesn’t think like us, then it isn’t really thinking, or can’t really think. But ethics must not be contingent on the failures of our imagination or the myopic force of our self-projection.

It is not debasing, then, to interact with a bot and posit the possibility of its having a different kind of mind and to attend to those differences. It is an opportunity to do better than we have done among ourselves, and to learn how to carry that over. Or to put that another way, becoming more machinic in our subjectivity may not entail becoming more inhumane. Networked computers calibrate our experience of agency with a sense of connectivity and dependence, a sense of inescapable intersubjectivity in which we are always in the midst of thinking with a group. Often in these cases, the lost sense of agency may at the same time be a concrete signal of social belonging, a reassurance that one has been permitted in or constituted through a collective. Feldman’s earlier “Verbal Tics” explores this point.

Not only might our “self” be made up of myriad biological programs running within our brain, producing consciousness as a kind of epiphenomenon as cybernetics and systems theorists suspect, but those programs interact with the programs of other brains and are being coordinated beyond the scope of our body. We are one small machine running a subroutine in an emerging consciousness that is and isn’t our own. Trying to vicariously experience machine-hood could be seen as a way to try to imagine consciousness beyond individualist atomization. Thinking like a machine becomes a matter of thinking unselfishly rather than thinking without feeling.


“Faking It” by Jacqueline Feldman

“What It’s Like to Be a Bot” by Damien Patrick Williams

Thank you for your consideration. Visit us next week for Real Life’s upcoming installment, DEBATE FETISH.

Rob Horning is an editor at Real Life.