As Donna Haraway declared in one of the most prescient lines of A Cyborg Manifesto (1985), “the boundary between science fiction and social reality is an optical illusion.” Whatever boundary still exists between fiction and reality seems increasingly porous — they are hybrid, like the cyborg. They interpenetrate each other, not least at the level of how we see and misconstrue each other: As Haraway writes, “Social reality is lived social relations, our most important political construction, a world-changing fiction.” Fiction plays a role in shaping social reality, in establishing norms and alternatives for social relations. This means we must ask how the sorts of stories we tell and retell shape what we perceive as possible, as natural or unnatural, and so make specific futures real.
Drawing on that spirit, two recent books — Discognition by Steven Shaviro and Four Futures: Life After Capitalism by Peter Frase — speak to speculative fiction’s usefulness for imagining the present and its relation to possible futures. Shaviro describes science fiction as storytelling that “guesses at a future without any calculation of probabilities, and therefore with no guarantees of getting it right.” But this freedom to be improbable allows science fiction to examine more open-ended visions of the future. If we merely extrapolate from tendencies that already exist, we foreclose on possibilities of radical change. And although imagining a particular future may not necessarily increase its likelihood, it could broaden our “interpretive horizons,” which, as Robin James has explained, consist of the “implicit knowledges, emotions, habits, and intuitions” that make belief possible and shape the possibilities we imagine for the future.
Surveillance as represented in culture often presumes individual targets tracked by government agents, the point being to prohibit behavior rather than channel it
In the foreword to Four Futures, Frase echoes such sentiments, stating that he prefers speculative fiction “to those works of ‘futurism’ that attempt to directly predict the future, obscuring its inherent uncertainty and contingency.” Frase cites as an example of dubious futurism Ray Kurzweil’s The Singularity Is Near: When Humans Transcend Biology, which “goes from the general to the general” in its predictions, rather than going from the particular to the general, as science fiction at its best does. Another example of inadequate futurism is the “It Gets Better” video series: As Natasha Lennard writes in an essay on reproductive futurism, “the project’s videos are emotive and moving” but “something crucial is missing. Often, it doesn’t get better. Or it gets better if you can assimilate — want a family, support the troops, support monogamy, be a good citizen.” That is, a futurism that merely extends existing systems of oppression and control betrays the future rather than imagining it. As Lennard suggests, given the choice between no future and (unqueer) futurism, no future is preferable.
In Structural Fabulation, literature scholar Robert Scholes proposes that science fiction can offer “a world clearly and radically discontinuous from the one we know, yet returns to confront that known world in some cognitive way.” He argues that modern science has made us “so aware of the way that our lives are part of a patterned universe that we are free to speculate as never before.”
Yet despite the potential inherent in speculative narratives, the same handful of utopian and dystopian scenarios tend to circulate. As Sara Watson points out with respect to fictional representations of artificial intelligence, the limited number of stories, driven by the entertainment industry’s prerogatives rather than any effort to chart the ethical terrain, presents a “grossly oversimplified” picture of the technological possibilities. It is dispiriting to imagine that the radical promise and possibility of cyborgs and hybridity evoked in Haraway’s manifesto could be diluted, obscured, and made banal by being presented in warped, oversimplified form in clichéd narratives.
Watson’s argument applies not only to AI but to common narratives about another contemporary worry: surveillance. In 2014, Zeynep Tufekci decried our continued reliance on outdated Orwellian analogies and panoptic metaphors: “Our understanding of the dangers of surveillance is filtered by our thinking about previous threats to our freedoms,” she wrote. “But today’s war is different. We’re in a new kind of environment, one that requires a new kind of understanding … We need to update our nightmares.”
Judging by the way surveillance is often represented in culture, it can seem as though our ideas of what surveillance looks like and how it works have not changed much since both the fictional and the real 1984: cameras mounted on buildings, human guards watching from towers, phone mouthpieces surreptitiously bugged, and so on. These representations presume individual targets tracked by government agents and figure the point of surveillance as prohibiting behavior rather than channeling it toward certain ends. They fail to account for surreptitious commercial tracking, as it manifests in grocery rewards programs, across websites, and within our phones. They don’t look at how entire populations are tracked rather than specific individual suspects. They don’t dramatize the way citizens end up in police bodycam databases or on terrorist watch lists. And they don’t account for the intimate surveillance of family members, as with the keyloggers that might be installed by our domestic partners behind our backs.
Naming culprits for surveillance-fiction fatigue is almost too easy. Black Mirror epitomizes the general problem. The technology in Black Mirror “functions as metaphor,” Adam Rothstein says, “like any social monster, representative of our current unknowns.” The same could be said of the show’s function in popular discourse: less fantasy than “media dream,” the show often serves as a convenient, drop-in metaphor — “It’s like something out of Black Mirror” — for any number of tech anxieties. But in reanimating our paranoid fears, convincingly and to maximum dramatic effect, the show often only manages to affirm suspicions, co-conspiring in the same old nightmares.
Minority Report (2002), adapted from a Philip K. Dick short story, is another oft-cited reference point, but it tends to be heralded less for its look at the ethical quandaries and consequences of pre-emptive policing and more for its depiction of multi-touch screens and gestural interfaces. These were a product of director Steven Spielberg’s 1999 “idea summit,” which convened a think tank of engineers and futurists to articulate how 2054 might look based on the cutting-edge technologies of the day. As Christian Brown pointed out, the film popularized the idea that the ill-conceived interaction design it drew on was actually a good model for future development. But, Brown notes, “there’s a huge gap between what looks good on film and what is natural to use.”
Similarly, Minority Report has contributed a misleading template for predictive surveillance in the clichéd but highly filmable story of a person more or less falsely accused. In the world of the film, predictive policing is a matter of mutants semi-mystically foretelling specific crimes committed by specific individuals. Predictive policing as it is practiced today, by contrast, generally takes an epidemiological form, with big data used to target populations in “hot spots,” as this Science article details.
Surveillance is typically used to extend discrimination, not guarantee uniform treatment
These kinds of representations of surveillance have an effect that extends beyond fiction, influencing the way the subject is addressed in informational contexts. Consider this list of radio shows and podcasts that have recently discussed surveillance: WNYC Note to Self’s “The Privacy Paradox” project (a practical how-to guide with expert/industry interviews); Science Friday’s recent “Price of Privacy” segment with Note to Self’s host, Manoush Zomorodi, and ProPublica’s Eric Umansky (a one-off, directed interview); Motherboard’s “Guide to Defending the Future” live show (an open-ended panel discussion); and Theory of Everything’s just-concluded “still more adventures in surveillance” miniseries (a mixture of social commentary, interviews, and creative nonfiction). Each applies a different programming format to the issue, but the narratives driving the conversations still revolve around the same two familiar themes: the need to safeguard our personal privacy, and the risky aspects of increased visibility. As important and rational as these concerns are, how many more friendly reminders to install Signal or Privacy Badger do we need?
These conversations draw from underdeveloped ideas about how surveillance works now and may work in the future. They tend to assume a certain kind of subject, who faces the same kinds of threats from surveillance as it is popularly conceived. Missing from these discussions are more apt metaphors and narratives for understanding mass surveillance: how it works, how it affects everyday life, and for whom. As Nathan Jurgenson points out, attempts “to identify the one overarching metaphor to encompass all watching” overlook not just the ways various forms of surveillance work together but also how the surveillant gaze is unevenly distributed. Surveillance, he suggests, more often takes the form of sorting people into categories rather than subjecting individuals to heightened scrutiny. Indeed, as Simone Browne argues in Dark Matters: On the Surveillance of Blackness, such social sorting has its roots in “the history of branding in transatlantic slavery,” which reframes how we might understand passports, identification documents, and credit bureau databases as mechanisms of surveillance. Browne’s linking of surveillance technology with white supremacy, colonialism, and slavery makes plain how many “dystopic” representations of surveillance are actually too optimistic: They reinforce a fantasy of objective neutrality in which everyone is equally watched and equally imperiled. But surveillance is typically used to extend discrimination, not guarantee uniform treatment.
Too much of the discussion of surveillance presumes a “private individual” in the libertarian sense — the atomic, law-abiding, and therefore innocent citizen, whose reasonable desire for privacy puts them at odds with corporate and state monitors. There are precious few instances where surveillance is treated as a social issue involving groups and populations, and power dynamics more nuanced than “the big and powerful are watching.” What are the speculative surveillance narratives that are being overshadowed?
Robin James finds a more relevant metaphor for understanding contemporary surveillance in acoustics. In “Acousmatic Surveillance and Big Data,” James argues that the metaphor of acousmatic harmonics is “particularly appropriate” for representing “NSA-style dataveillance” — that is, the dynamic search for patterns of relationships in big data sets. Acousmatic sound is sound heard without seeing its source; what is signal and what is noise emerges gradually rather than being anticipated in advance. So, as James points out, “when President Obama argued that ‘nobody is listening to your telephone calls,’ he was correct. But only insofar as nobody (human or AI) is ‘listening’ in the panoptic sense … Instead of listening to identifiable subjects, the NSA identifies and tracks emergent properties that are statistically similar to already identified patterns of ‘suspicious’ behavior.” In other words, whereas panoptic narratives focus on specific people being targeted, an “acousmatic” approach would help us see surveillance’s suffusion throughout social life. It is a constant listening for rhythms in the metadata din, in order to divine communicative norms and track the inevitable deviations from them.
Just as we should reconsider conceiving of surveillance as primarily listening to specific individuals, we should stop thinking of data as something that is taken from us. As Jenny Davis points out in “We Don’t Have Data, We Are Data,” the discussion of captured data should be extended beyond individualist notions of personal privacy and private property. “Data is the currency for participation in digitally mediated networks; data is required for involvement in the labor force; data is given, used, shared, and aggregated by those who care for and heal our bodies,” she writes. “We live in a mediated world, and cannot move through it without dropping our data as we go.” Recognizing that we cannot live without generating data would help move us past the narratives that blame individuals for not better protecting themselves and toward narratives that explore the different degrees of complicity and evasion we must continually orchestrate.
Between privacy and control, our pious retelling of outdated surveillance narratives leaves too little to the imagination. As PJ Patella-Rey argues in “Social Media, Sorcery, and Pleasurable Traps,” “the model of surveillance is no longer an iron cage but a velvet one — it is now sought as much as it is imposed. Social media users, for example, are drawn to sites because they offer a certain kind of social gratification that comes from being heard or known. Such voluntary and extensive visibility is the basis for a seismic shift in the way social control operates — from punitive measures to predictive ones.” The metaphor of an inviting stage that entices and rewards performers’ free disclosure troubles the conventional figures of who is watching: It is rarely just the omniscient state, the prison guard, or the inconspicuous informer but also the sorts of audiences we actively cultivate.
These examples suggest the sorts of fictional narratives that might help us better understand surveillance now and in the future. Rather than Black Mirror, Minority Report, and 1984, we need more narratives in the vein of The Handmaid’s Tale, which foregrounds the gendered experience of watching and being watched; Ghost in the Shell, which takes for granted the ways the embodied self is conditioned by networked society; or Southland Tales, which carries the tension between celebrity, spectacle, and pervasive surveillance beyond narrative’s ability to represent it.
In surreal times, speculative fictions and narratives don’t merely frame (and constrict) our collective imagination. As sociologist and sci-fi/fantasy author Sunny Moraine writes, “it’s not just escapism,” particularly for marginalized people. “It’s daring to imagine worlds in which we and our experiences are real, and they matter.” The worlds described in speculative fictions “don’t exist apart and separate from the world we live in,” she writes. “They’re a form of claims-making on reality.” Speculative fiction “allows us to make a way out of whatever unbearable moment we seem to be stuck in. It doesn’t give us a finish line. It gives us the race.”
As the boundary between the present and future blurs, Moraine’s notion of “claims-making” through storytelling provides a mechanism for shaping the future. This might stand alongside other strategies as a means to re-enchant the boundary between science fiction and social reality. Writing oneself into stories, as Haraway reminds us, “is about the power to survive, not on the basis of original innocence, but on the basis of seizing the tools to mark the world that marked [cyborg authors] as other.” As Haraway insists, stories are tools to “reverse and displace the hierarchical dualisms of naturalized identities.” Although articulating a future in fiction may not be enough to make it real, by writing it we can better grasp that its separation from reality is illusory: neither as stable nor as impervious to radical change as we imagine.