Recently Netflix has been testing the option of letting viewers watch prerecorded programming at 1.5x speed (a feature that already exists on YouTube) so that users can consume more content in less time. There is no word on whether they are working on the viewer’s ability to accelerate live programming.
I’ve availed myself of the variable playback speed option when I’ve had to watch mandatory training videos at work or recordings of panel discussions or lectures. But I don’t understand why anyone would want to accelerate something they are watching for entertainment and not for information. Isn’t the point of entertainment to surrender to someone else’s pace? To pass time rather than fight against it? Maybe the point is that there is no entertainment, only information.
This Vice column by Bettina Makalintal tries to make a populist case for watching TV at high speed, pitting viewers against Hollywood “big shots” who want to make you do things their way and watch “good” content. “Variable playback is for bad stuff, not ‘art,’” she argues, “and it’s for all of us who watch it even though we know it doesn’t really deserve our full attention.” If the social pressure to keep up with certain shows exceeds their actual entertainment value, you can close that gap by speeding the shows up. In fact, this experience of efficiency and “gaming the system” may in itself be more pleasurable than the content. Once we collectively intuit this, we can then do one another the favor of only talking about bad and boring shows so that we will all feel authorized to watch only the sorts of things that demand we fast-forward through them. With variable speed playback we can give ourselves a kind of self-aggrandizing “executive summary” of shows without having to endure them. Only chumps fall for duration.
If sufficient numbers of people are watching shows on fast speed, the producers of these shows will of course begin to take this into account and optimize them for that modality of viewing. Plots and characters will be further simplified, points of emphasis will be awkwardly drawn out to try to make them register with viewers. (MTV’s late 1990s experimental soap opera Undressed offered something like this; its structure of inundating viewers with brief hypercompressed scenes achieved accelerated playback before the fact.) As shows are designed to be sped up, it will reinforce the ambient sense that it is “wrong” to consume content at its ordinary speed, that one is failing to optimize oneself as is expected as an information processor. It’s easy to imagine platforms changing their default playback speed to 1.5x or 2x.
The feeling of falling behind will become more acute, threatening us on every screen. It will be palpable in the experience of watching people speaking in ordinary “slow” voices. It will be the sound of inadequacy, of not producing enough consumption data to be a relevant consumer, to be a fluent social participant. It will be the sound of tedious, unmediated communication.
Belonging to any culture requires that we have certain reference points in common. But the individualism and personalization dominant in our particular culture militate against any such commonality. So economizing on our investment in the common culture (and conceiving it as the content that demands the least amount of concentration and focus) makes some sense: We can participate in the zeitgeist even as the zeitgeist is solipsism.
Netflix content that we could profitably watch at double-speed will necessarily be vapid: Its purpose is not to communicate complex ideas or even allow us to fulfill vicarious or aspirational fantasies but to serve as a vacant placeholder, a token with which we can signal our noncommittal commitment, our will to engage at speed without bogging down over substance. This allows us to participate in the common culture at a different level, the common culture of accelerating ourselves, of becoming more efficient information processors.
In his recent book Automated Media, Mark Andrejevic addresses accelerated viewing as a species of automated consumption, in which the interface partly consumes content for us as a way of getting us up to speed with the pace of production. “We can perhaps feel this pressure in the role that automated systems play in our daily lives,” he writes, “the incitation to relentlessly accelerate our communicative activity to overcome the frustrating limits of our sensorium.” It is incumbent on viewers to keep from becoming “points of friction” in the system that takes data about what they are doing to produce new content based on the revealed preference for more of it.
Automated production and consumption, in theory, form “a self-stimulating spiral,” Andrejevic writes, powered in part by algorithmic recommendation systems that do our discovery and, essentially, our desiring for us, feeding us putatively novel content that mimics our taste profile while allowing us to experience “curiosity” without the time or effort involved with being curious. The work of “wanting” to do something is construed as failing to want everything. It betrays a meager imagination. Specific desire is wasteful and inefficient, a form of friction rather than an end in itself — as though building anticipation and situating the social meaning of a practice isn’t intrinsically part of being able to enjoy anything. Instead, anticipation is treated as an eliminable inefficiency, merely ornamental foreplay.
With the desire for particular content displaced onto algorithms and disavowed, the next logical step is to make the actual consumption of that content, now a rote formality, as expedient as possible. Otherwise there can be no way to aspire to desiring the totality, to realize the fantasy of complete access and make good on its theoretical possibility. What’s the point of the massive libraries of content available on streaming platforms if you aren’t even trying to ingest all of it? “The attempt to master all available content — to become fully aware of all that’s out there — pre-empts the act of experiencing it,” Andrejevic notes. “Pre-emption is, in other words, the antithesis of experience.”
This rejection (or transcendence) of experience is part of what Andrejevic calls the “cascading logic” of automation: “automated data collection leads to automated data processing, which, in turn, leads to automated response.” This culminates in the abolition of subjectivity; behavior is constricted to questions of consumption and externally administered through techniques of manipulation instead. Tech companies realized they could track what we do, amassed enough data to extract patterns about our behavior, and then began making decisions about what would make us most profitable to them. Now they are systematically enclosing our environments with an assemblage of sensors and other surveillance mechanisms to enforce those decisions and compel our behavior to fit those patterns. “Complete specification does not enhance the subject,” Andrejevic notes, “it liquidates it.” Algorithms that purport to know us better than we know ourselves are designed to annihilate us.
One obvious expression of desubjectification is the example Navneet Alang discusses in this column: algorithmic text completion — e.g. Google’s “Smart Compose,” which tries to write your emails for you, so you don’t have to be psychically present or “hold appropriate space” to communicate. The pretense in part is that this kind of automation is a form of self-care that frees you to do higher-level things, have the “real conversations” full of deep self-expression, in the same way fast-forwarding through junk content would theoretically save you time to watch more rewarding material later. But it’s not clear that “later” ever comes; rather “saving time” becomes an alibi for postponing that high-level, self-actualizing material forever and instead “doing” more and more of the rote consumption with the assistance of automation. In this sense, acceleration and efficiency become modes of procrastination.
The processes behind automated communication can be usefully estranging: AI text completion can work as a kind of defamiliarization process, when you provide it with poetic prompts and approach what it produces not as a practical time-saver but as a probe into the deep strangeness of ordinary language. There is a “social average” component to the text that machine-learning-driven engines produce that makes it decidedly bizarre when its output is stripped of the contextual social relations that routinely govern language use. For instance, I could make poems all day with this AI text generator — the fact that the AI can’t really “try” to say something allows me to read the text into poetry — to see intention where there can’t be any. It helps me see my own will to intentionality more clearly.
When text completion is adopted as streamlining and inserted into social relations as an expedient, however, it becomes more problematic. Rather than estrange language and refresh our relation to it, it impoverishes communication and sociality more generally. Alang makes the point that automated text completion generates a “centripetal” force that standardizes and simplifies language across unprecedentedly global populations, making for the communicative equivalent of Marc Augé’s nonplaces: conversations that “work” with the stark, bland efficiency of airports or fast-food chains.
Yes, you could also say that the centripetal force also structures an opposite, centrifugal force: The algorithmic routinization of language at one level sparks new language forms at another, which can be seen in the language games of online microcommunities and the pockets of “weird” that ubiquitous connectivity can facilitate. But that sort of escape can be fugitive when the systems imposing standardization are so strong and intrusive. Algorithmic text completion intervenes in how we think, making us absent where we are expected to be present, at the moment we are ostensibly speaking. Smart Compose is smart because it renders us dumb. It assures us that we don’t need to be the speaking subject behind our words; Smart Compose allows the “langue” (the universe of language in its general use) to literally speak us into being as a statistical average.
Autocompletion is touted as being ideal for work contexts, which suggests that we have been so demoralized by the relations of production that we would rather be objectified by them, let them speak us, than try to sustain our subjectivity within them in hopes of exercising some agency, skill, control. Perhaps the hope is that automating the “work self” frees time for subjectively inhabiting some other creative self — that it could somehow produce time for leisure, for enjoyment on terms other than efficiency. But efficiency under capitalism inevitably serves further acceleration: more work in less time, not more freedom once “the work” is done. Its effect is to make more work (and more exploitation) possible. There is no “freeing up time for workers” under capitalism.
“Saving time” with Smart Compose ensures further objectification within work processes, more and more emails automatically spoken through us, less and less hope that it is worth thinking about what we do to live: Better to adhere to templates that codify and automate the fulfillment of social expectations according to what the data apparently proves are accepted practices. The same is true on the consumption side, with accelerated playback. Consumption is reduced to the work of information processing and participation in capitalist circuits of value creation. Better to deplete its content to get through more of it than to insist that quality isn’t simply quantity. That battle has been resolved: The only thing worth wanting is everything.
The elimination of the subject at the level of media consumption, Andrejevic argues, plays into a larger project of social deskilling, reducing communication to the sheer instrumentality suitable to the mechanized pursuit of profit and authoritarian control.
To make information processing as efficient as possible, the point of the content of information needs to be suppressed and abstracted: signal vs. noise, rather than something experiential or interpretive in its particulars. “Wanting” to do something — desire, subjective purpose, curiosity, etc. — impedes the industrialized process of forcing more of that something (some organization of information) to happen on capital’s terms. Andrejevic points to AI “mastering” human strategy games to illustrate how “intelligence” can be ideologically restructured to exclude desire.
Examples of automated “intelligence” tend to sidestep the reflexive layer of subjectivity in order to focus on the latest computer achievements: the fact that machines can now beat us in chess, Go, and some computer games. But there is little talk about whether the machines “want” to beat us or whether they get bored or depressed by having to play creatures they can beat so easily when there are so many other things they could be doing. That such observations seem absurd indicates how narrowly we have defined human subjective capacities in order to set the stage for their automation. We abstract away from human desire to imagine that the real measure of human intelligence lies in calculating a series of chess moves rather than inventing and popularizing the game in the first place, entertaining oneself by playing it, and wanting to win (or perhaps letting someone else win).
This constrictive reinterpretation of “intelligence” has alarmed some futurists and scientists working in AI. In To Be a Machine, Mark O’Connell’s book about transhumanism, he cites Stephen Omohundro’s paper “The Basic AI Drives,” which begins:
Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.
A chess-playing machine will logically consume all the world’s resources in an effort to remake it as an endless chess tournament. (Bitcoin miners aim at something similar.) In short, because AI’s strictly functionalist orientation is toward “maximizing its utility function,” it is intrinsically evil. It lacks the will or capacity to do anything errant or gratuitous; it is compelled by its purpose to try to dominate.
The AIs of Omohundro’s paper could be construed as technological reconceptions of the rebel angels in John Milton’s Paradise Lost, forever damned by the same logical compulsion to be evil. In A Treatise on Christian Doctrine Milton describes the consequences of sin as a “spiritual death” that consists of the loss of “right reason.” This manifests as a “deprivation of righteousness and liberty to do good, and in that slavish subjection to sin and the devil, which constitutes, as it were, the death of the will.” Or perhaps the automated perfection of desire.
Sin is its own punishment because it compels sinners to further sin: It is the opposite of freedom. An AI, then, is sinful by definition — without will and characterized ontologically by a “slavish subjection” to its programmed purpose. AI’s relentless and limitless pursuit of self-improvement through rigid, utilitarian conceptions of rationality condemns it to predictability: It will always do the selfish thing that maximizes utility along a single axis; it can’t conceive of doing something for others without that effort being reconceived as a form of utility that accrues to itself. AI forever lacks, to use Milton’s idiom, “grace.”
In a sense, social deskilling — making humans more like AI and more interoperable with it — is aimed at the elimination of the human capacity for grace, such as it is. I don’t know much about Christian theology but have the vague sense from long-ago graduate-school seminars that Milton’s belief was that humans must rely on God to experience grace and participate in its free-ranging goodness. Satan’s temptation is toward treating “free will” and agency as a form of self-reliance, which turns out not to be agency at all but the base compulsions of self-aggrandizement. Our development and implementation of AI has become a similar distortion of agency, a systematized rejection of genuine free will in favor of programming, predictability according to what humans can conceive and maximize — which from a theological point of view is not very much.
Automation deprives people of choices by claiming to fulfill them in advance, or by making the stakes of those choices seem beside the point. It tries to make swapping our will for superior processing capacity seem inevitable. “The automation of communicative processes envisions a surpassing of the pace and scale of human thought and interaction, which is why the technological imaginary tends toward post-humanism,” Andrejevic argues. “If automated systems can outstrip both human physical and mental capacities, avoiding obsolescence means merging with the machine.” This is not a humble concession to the machine’s superiority so much as the ultimate hubris. With enough surveillance and data capture in place we can assume a godlike totalizing perspective and automate the world in accordance with it.
The impulse to watch things or listen to things or read things at inhuman speeds indulges the same fantasy about becoming a machine and not needing to wrangle with interpretation or ambiguity or multiple simultaneous and contradictory possibilities. Instead, escape subjectivity into a perfectly comprehensible and operable world — into divine objectivity.
From a Miltonic perspective, automated decision making abrogates the freedom to choose good, which effectively guarantees evil. In Areopagitica, he famously wrote:
I cannot praise a fugitive and cloistered virtue, unexercised and unbreathed, that never sallies out and sees her adversary, but slinks out of the race, where that immortal garland is to be run for, not without dust and heat.
To act on a contempt for or impatience with content as such and a desire to get on to the capital value, the usefulness, the leverage, the effect or augmentation implicit in having consumed a thing with not the taste of it in the moment in mind but the effect of the nutrients in the abstract; to reject enjoyment or supplant it with momentum, with a sense of scaling ourselves to absorb the whole; to void interpretation in favor of operationalism; to seek frictionless communication and consumption; to pursue the pleasure of efficiency instead of the uncertain satisfaction of meaning making; to surrender responsibility over what we do and desire; to exterminate the subject position and indulge the desire to be a machine, to be done with subjectivity and its unpredictable social integuments and reciprocities — all this is to give up on the possibility of being virtuous. Virtue is supposed to be its own reward, but we’re not seeing the metrics for it.