The worthiest human efforts are those intellectual pursuits that specifically seek the uninterrupted delimiting of infinity into convenient, easily digestible portions.
—D503, in Yevgeny Zamyatin’s We
Critics may worry about algorithmic surveillance, but my university students often complain to me that their computational companions — Spotify, Netflix, Google Maps, Amazon, FitBit, or Apple Watch — don’t know them as well as they should. My students are frustrated, they tell me, when they get a Spotify or Netflix recommendation they don’t like or a seemingly inaccurate number of steps from their fitness trackers.
Despite all the negative attention tech companies have received recently, this attitude may still be more pervasive than skepticism. Whether we admit it to ourselves or not, it is often taken for granted that computational products should be able to measure us accurately, not only in physiological but psychological terms. Hence the wearable thermal device Embr Wave is marketed as being able to control your moods by changing how hot or cool you feel. Medibio, a “mental health technology company,” claims to be “identifying the link between physiologic measures and mental health,” focusing primarily on information captured during sleep — which the new Apple Watch can gather — to purportedly determine a user’s overall levels of depression, anxiety, and stress, as well as their likelihood of suffering from bipolar disorder and schizophrenia. And Fast Company is eager to report the promise that an algorithm “can tell if you’re depressed just from your Instagram posts.”
With sleep tracking this becomes particularly pronounced. Sleep tracking technologies can tell us how many hours we slept and the relative “depth” of that sleep based on, for example, heart rate and skin conductance as well as the frequency, duration, and intensity of our movement — but they aspire to tell us about our interior lives through data. Apparently, we are in the golden age of the “smart mattress,” with older firms like Sleep Number competing with startups like Eight and others to sell beds with embedded sensors that connect with an app to help users optimize their sleep with data-driven insights about when to go to bed and when to wake up. One might come to expect a tracker to accurately assess how one’s sleep felt, the same way some expect spot-on recommendations from Netflix.
I know this from my own experience wearing a FitBit, which claims to track steps, exercise, heart rate, and sleep patterns to help users stay on track with fitness goals and lead healthier lives, under the presumption that self-knowledge is only possible through self-quantification. “Know yourself to improve yourself,” its website proclaims. I had worn it for about six months, until I broke a rib playing basketball and stopped. (I was exercising less and felt sheepish about my sudden lack of data.) But after not wearing it for a few days, I began to miss it. I’d wake up and think, I need to know how tired I am — which I know makes no sense. But FitBit’s measurements of my quantity and quality of sleep became a kind of empirical assurance, a metrical safety blanket that said, “Yes, you can feel this tired,” or “You should feel spry and alert.” Does it know better than I do how much sleep I got, or how tired I should feel? Does it matter if these measurements are actually accurate? What would “accurate” even mean when it came to how I felt? How did I come to implicitly trust it over my own consciousness?
But can sleep really be characterized by how long we’ve slept or whether it was “deep sleep” or “light sleep” according to, for example, your Sleep Number “SleepIQ score”? For many of us, sleep is defined by our dreams. And so far, tech companies have yet to figure out how to quantify them and, like everything else they measure, turn them into monetizable data.
That’s not to say no one is trying. Recent research has explored new ways to manipulate and record dream states. A Fast Company article from April describes an MIT-designed device that extends the hypnagogic phase between sleeping and waking — a state that, as one of the researchers claims, “holds applications for augmenting memory, learning, and creativity” — and prompts users to narrate their dreams before going into deep sleep.
More sophisticated is the clinical neuroimaging work being done in the U.S. and Japan. The Gallant Lab at the University of California has been working on this front for over a decade, declaring in 2011 that “quantitative modeling of human brain activity can provide crucial insights about cortical representations and can form the basis for brain decoding devices.” That is, fMRIs can be used to essentially transcribe what the brain is seeing. The researchers claim that different kinds of brain activity correlate with visual representations of different objects. If we know what the brain is doing when it sees a specific thing, like a bird, then we can infer that when similar brain activity is present, a subject is imagining, seeing, or dreaming of a bird. Operating on those assumptions, the lab developed machine-learning algorithms to re-create imagery from the brain patterns recorded while subjects watched it, drawing on footage from YouTube. In 2017, using a similar process, researchers from Kyoto combined fMRI data with neural networks to produce images that they billed as having been “decoded from brain activity” and then posted on Twitter.
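The inferential logic described here — match new brain activity to previously labeled activity — can be caricatured in a few lines of code. This is emphatically not the Gallant Lab’s actual method (which models cortical responses to movies, not toy vectors); it is a hypothetical nearest-neighbor sketch of the correlational assumption the paragraph describes, with made-up numbers standing in for activity recordings.

```python
# Toy caricature of correlative "brain decoding": each recorded "brain
# state" is a feature vector labeled with what the subject was viewing;
# new, unlabeled activity gets the label of its nearest recorded state.
import math

def nearest_label(observed, labeled_states):
    """Return the label whose recorded activity vector lies closest
    (by Euclidean distance) to the observed activity vector."""
    best_label, best_dist = None, math.inf
    for label, vector in labeled_states.items():
        dist = math.dist(observed, vector)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical activity recorded while subjects viewed known objects:
training = {"bird": (0.9, 0.1, 0.2), "face": (0.1, 0.8, 0.3)}

# New activity -- a subject imagining or dreaming, on the lab's
# assumptions -- is "decoded" purely by proximity:
print(nearest_label((0.85, 0.2, 0.25), training))  # prints "bird"
```

The sketch makes the essay’s point legible: the “decoding” never touches the dream itself, only the resemblance between one measurement and another.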
Developing enough correlative data to “accurately” visualize the contents of our dreams seems pretty far off. However, as brain-scanning machines get smaller and startups begin to “innovate” with these new developments (with the requisite TED-talk-cum-business-pitches), it seemed plausible enough for CNN to ask, “How close are we to video-recording our dreams?”
The effort to record dreams suggests that companies and researchers will seek ways to turn even our innermost sanctum into data, on the idea that it can be “better” known externally than experientially. Even if it remains a dream itself, the project of supposedly objective dream capture still accomplishes another important goal: reinforcing the idea that we need trackers and data companies like FitBit, Medibio, and Eight to tell us who we really are. How did we come to think that machines could know us better than we know ourselves, even in our dreams?
It is strange that we would rely on data and machines to reveal ourselves to us, given that even the most rudimentary measurement technologies are often flawed. For example, Gabi Schaffzin recently described his experience with a Zozosuit, a spiffy spandex body suit that is supposed to generate personalized clothes sizing but which resulted for him in jeans that were comically ill-fitting. When this kind of service fails, the difference between the potentially inaccurate “computed” version of ourselves and our physical body is obvious (though Zozo is eager for feedback to get better). But when it comes to sizing up our mental states, our sleep quality, our stress level, or the nature of our dreams, it can be harder to see how the computed self differs from our qualitative but amorphous sense of ourselves. If products like the Zozosuit eventually work, who’s to say that Medibio doesn’t, or that a dream-reading smart pillow won’t? In all these cases, the products work on us rather than for us, selling us a fiction about subjectivity: that we are completely machine-readable, or at least aspire to be.
Any kind of digital tracker, whether it is measuring our waist size or our dreamscape, sells users on the idea that they can and should be understood by computational products and services. This is part of what I call a computable subjectivity: an understanding of one’s entire mental and physical life as being completely legible to computational systems.
Tracking devices and other surveillance technologies emanate a perceptual field of sorts, made up of memos, R&D, beta testing, and marketing materials as new products move toward realization in the consumer space. The fantasy worlds in which these technologies were incubated — in the minds of VCs or founder-innovators — migrate into reality, pressing on the contours of our lived experience. The futurological discourses in which companies like FitBit, Medibio, and Zozo engage produce worlds that we then are encouraged to inhabit; they become, to borrow the title of a new story collection, “economic science fictions.”
That is not to say these product concepts are detached from reality but rather to suggest that the economic and social science fictions within which we live are not necessarily written down. They appear as networks of disparate artifacts and utterances: the TED talk, the patent filing, the industry-insider listicle, the ad campaign. This ideological haze, as Mark Fisher writes in the introduction to Economic Science Fictions, is “not necessarily falsehoods or deceptions — far from it. Economic and social fictions always elude empiricism, since they are never given in experience; they are what structures experience. But empiricism’s failure to grasp these fictions only indicates its own limitations.”
We can’t rely, in other words, on established methods of scientific observation to illuminate the connective tissue between technofetishist futurology and our everyday lived experience. Ideology can’t be tracked as data any more than our dreams can. The Zozosuit and the dream-reading smart pillow are part of the “transpersonal fictional systems” that Fisher argues generate the individual subject as “something like a special effect,” positioning computable subjectivity and machine-readability as aspirational ideals. They promise the possibility of optimizing every aspect of ourselves — our sleep patterns, our eating habits, our Netflix queues, our dreams — and implicitly making each more productive.
Not only is the computable subject ideally suited to be the entrepreneurial self par excellence, but it is, as Fisher argues, an effect of the hegemonic fiction of capitalist realism, the feeling that there is no alternative to capitalism. When our imagination is dominated by these ideas, we seek what sociologist Ulrich Beck calls “biographical solutions to systemic contradictions.” In other words, instead of trying to change the system, we try to adapt ourselves and our lives to it. We accept that risk must be privatized, so that individual consumers can optimize their individual lives in their own self-interest, and no collective or collaborative alternatives are possible.
The subjectivities and lived experiences that emerge from these systems are not a result of some premeditated conspiracy. We may be becoming more susceptible to a certain kind of incisive biopolitical control, but not because some specific evil genius has cooked up new wearables. In a sense, we are ourselves both the evil geniuses and an effect of them: Our subjectivities emerge from the economic science fictions we inhabit, reproducing and reinscribing their neoliberal ideologies and establishing the boundaries of our imaginations and experiences, perpetuating the cycle.
Our dreams might be the final frontier of capitalism’s march toward colonizing, quantifying, and capitalizing every aspect of everyday life. The dream may also then be the only ground left on which we can make a stand: Dreams are a space of unknowing, a space of confusion and non-reality outside the effective virtualities that render us machine-readable and wanting to be even more so. Dreams might be a space of autonomy from which we can draw inspiration to move beyond capitalist realism. But only if dreams themselves remain beyond the grasp of the computable subjectivity and its ideological machinery can they offer something other than the reductive vision of tracking, and of total availability to the technologies that intend to tell us who we really are.