February 15, 2019

The Sea Was Not a Mask

A few weeks ago, YouTube announced that it was adjusting its content algorithms to “begin reducing recommendations of borderline content and content that could misinform users in harmful ways.” In some quarters, this was hailed as another step toward tech companies acknowledging their editorial responsibility for what their platforms facilitate, and abandoning once and for all the defense that they are neutral conduits. In this thread, Guillaume Chaslot, a former YouTube engineer who now leads an algorithmic transparency advocacy group, proclaimed that this, the latest in a perpetual series of algorithm tweaks, should be considered a “historic victory.”

Before, Chaslot writes, YouTube’s algorithms would oversample a user who had fallen down a conspiracy theory rabbit hole and was thus watching video after video about how the earth was really flat or how lots of people are really lizards. The algorithm treated such a person as a “jackpot,” “a model that should be reproduced,” so it would analyze their behavior to identify sequences of content that might produce compulsive viewing in others whose on-site behavior suggested they were similarly groomable. Here’s how Chaslot charted the resulting “vicious circle” (a toy sketch of the loop follows the list):

1/ People who spend their lives on YT affect recommendations more

2/ So the content they watch gets more views

3/ Then youtubers notice and create more of it

4/ And people spend even more time on that content. And back at 1
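To make the arithmetic of that circle visible, here is a deliberately crude toy simulation in Python — not YouTube’s actual system, just a sketch in which every number (user counts, hours, conversion rates) is invented — showing how a small minority of heavy watchers can dominate an engagement-weighted signal from the very first step:

```python
import random

random.seed(0)

# A toy population: 5 "power users" watching 40 hours of conspiracy
# content apiece, and 95 casual users watching 2 hours of mainstream
# content apiece. All numbers are invented.
users = [{"hours": 40, "taste": "conspiracy"} for _ in range(5)] + \
        [{"hours": 2, "taste": "mainstream"} for _ in range(95)]

def engagement_weights(users):
    """Steps 1-2: popularity weighted by watch time, so heavy watchers count more."""
    weights = {"mainstream": 0.0, "conspiracy": 0.0}
    for u in users:
        weights[u["taste"]] += u["hours"]
    return weights

for step in range(8):
    w = engagement_weights(users)
    # Steps 3-4: recommendations follow the weights; a fraction of exposed
    # casual users convert and watch more, feeding back into step 1.
    for u in users:
        pick = random.choices(list(w), weights=list(w.values()))[0]
        if pick == "conspiracy" and u["taste"] == "mainstream" and random.random() < 0.1:
            u["taste"] = "conspiracy"
            u["hours"] += 5
    share = sum(u["taste"] == "conspiracy" for u in users) / len(users)
    print(f"step {step}: conspiracy viewers {share:.0%}, "
          f"signal share {w['conspiracy'] / sum(w.values()):.0%}")
```

The only point of this contrived setup is its first printed line: before anyone converts, five percent of the users already supply just over half the engagement signal (200 of 390 hours), which is the asymmetry Chaslot’s step 1 names.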

It’s not clear from this why certain people become hyper-consuming models for others, or what symbiosis exists between these “power users” and the conspiratorial content that seems to sustain them. Does more “extreme” content compel the most compulsive viewing, or are we only concerned with compulsive viewing when the content has antisocial overtones? In other words, when YouTube fine-tunes its algorithms, is it trying to end compulsive viewing, or is it merely trying to make people compulsively watch nicer things?

When algorithm tweaks are assimilated to the project of content moderation, the implicit logic is that with the right adjustments, algorithmic recommendation systems can protect users as well as hook them. The algorithmic recommendation process is not flawed, just incomplete: it needs only to be made more robust, so that it feeds users “good” rather than “bad” material. The idea that YouTube shouldn’t force-feed users content at all is, of course, not considered. After all, YouTube’s algorithms are its core product, not the videos people upload. The assumption built into YouTube (and Netflix and Spotify and TikTok and all the other streaming platforms that cue up more content automatically) is that users want to consume flow, not particular items of content. Flow, not content, secures an audience to broker to advertisers. Pivot to video.

Back in 1974, Raymond Williams, in Television: Technology and Cultural Form, singled out “planned flow” as “perhaps the defining characteristic of broadcasting.” Flow, in his view, must be understood as different from the mere programming of discrete units of content; flow is a matter of braiding different content streams to “revalue” the intervals between units so that breaks are tolerated and the units themselves are blurred together, no longer really interpretable or comprehensible in isolation. This allows an audience to be “captured.” Here’s Williams describing how binge-watching happens:

it is a widely if often ruefully admitted experience that many of us find television very difficult to switch off; that again and again, even when we have switched on for a particular ‘program’, we find ourselves watching the one after it and the one after that. The way in which the flow is now organized, without definite intervals, in any case encourages this. We can be ‘into’ something else before we have summoned the energy to get out of the chair, and many programs are made with this situation in mind: the grabbing of attention in the early moments; the reiterated promise of exciting things to come, if we stay.

That compulsivity is so pervasive as to almost seem inescapable — from “page-turners” to bingeable shows to endlessly refreshable scrolls to autoplaying music and autopopulating playlists. It is usually depicted as a selling point, a proof of quality — you can’t put it down! — but that shouldn’t disguise the fact that what’s being sold is surrender: Engage with this thing so you can stop worrying about what to engage with. That is flow.

When platforms today deploy recommendation algorithms, their purpose is to produce flow. (They thereby become broadcasters.) Flow, fundamentally, is a trap — a “persuasive technology,” as anthropologist Nick Seaver details in this paper, that can condition prey “to play the role scripted for it in its design.” Traps work, he argues, by making coercion appear as persuasion: Animals aren’t forced into the trap; its design makes them choose it.

Coercion and persuasion, then, can’t be cleanly distinguished. Seaver highlights how tech companies draw from this to generalize a behaviorist model of how human psychology works, operating in what is experienced as a “blurry middle” between coercion and persuasion. That blurry middle seems a lot like the blurred experience of content in the midst of flow. We are neither forced to consume more nor choosing to consume more; we both want the particular units of content and are indifferent to them. We are both active agents and passive objects.

Williams seems totally flummoxed by the idea that discrete content is increasingly irrelevant to flow: “though useful things may be said about all the separable items … hardly anything is ever said about the characteristic experience of the flow sequence itself,” he writes. “It is indeed very difficult to say anything about this.” He stops short of suggesting that this is by design: that consumers are distracted from the compulsive nature of the general broadcasting experience by the superficial variety of discrete programs. But if we understand flow as a trap, then that would seem to follow. Flow works by disguising its compulsory mechanism in the details of its content, which is nothing more than bait from the system’s perspective. And personalized flow, such as streaming services now promise, offers even more of a distraction, as users are invited to decode recommendations as quasi-astrological divinations of their true personality, their “real” desires.

But algorithms don’t reveal people so much as produce them, as Chaslot’s description of making “jackpot” users suggests. I am always wondering whether the tech companies’ “discovery” of behavior-manipulation techniques and implementation of them at scale can make that Pavlovian understanding of human psychology more powerful, more accurate. That is, can we be behavioristically trained to be more susceptible to behaviorism? Seaver details how data scientists found that measuring people’s expressed preferences was a dead end because these were too variable — people liked or disliked different things over time, or the same things to different degrees. It was easier to change the entire infrastructure in which people’s preferences lived, and then, rather than assume people are capable of forming and expressing their tastes independently, you could treat captured behavioral data as revealed preferences (much as conventional economics interprets consumer behavior in the market). Behaviorism is assumed and made effectual by the measurement methodology. But does that mean it now actually works on people more intrusively? Is behavior reshaped by what and how we choose to measure, by which measures are made widely accessible and implementable? If social media metrics are any guide, the answer seems to be yes.
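To make that methodological move concrete — this is a schematic sketch with invented data, not any company’s actual pipeline — compare two ways of measuring the same hypothetical user; the volatile self-reports are simply dropped, and the behavioral log alone is read back as “preference”:

```python
# One hypothetical user measured two ways; all figures are invented.

# Expressed preferences: ratings solicited at three different moments --
# exactly the variability that made self-reports look like a dead end.
stated_ratings = {"true_crime": [5, 2, 4], "gardening": [1, 4, 1]}

# Captured behavior: what the platform logged without asking anything.
watch_seconds = {"true_crime": 7200, "gardening": 90}

def revealed_preference(log):
    """Read logged behavior back as preference, economics-style."""
    total = sum(log.values())
    return {topic: secs / total for topic, secs in log.items()}

print(revealed_preference(watch_seconds))
# ~ {'true_crime': 0.988, 'gardening': 0.012}: the stated ratings, with
# all their drift, never enter the model at all.
```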

Shoshana Zuboff’s line on surveillance capitalism is that tech companies aspire to use data collection and analysis to produce consumers who are wholly predictable from the outside, who have no meaningful subjectivity or decision-making capabilities and are just raw resources waiting to be drained of their value. In a way, I want that to be true: I like the idea that I fundamentally have free will and can reject behaviorist manipulation if I am vigilant and choose to. I like the idea that tech companies, in order to oppress us, would have to overcome some inherent human will to self-determination. I am flattered too by the idea that my consumer choices are not the result of any brainwashing but are reflective of my unique personality.

But I worry that all this blinds me to the trap, to the ways coercion increasingly appears as persuasion. In producing flow, tech companies are not so much unilaterally controlling consumers as confusing them about what is under their control. “Participatory culture” becomes at the same time a means of hyperindividualization — of isolating people in their personalized experience. The means of manipulation rely not on overriding personal agency but harnessing it, turning it inside out, making its diminishment seem like its apotheosis.

If flat-earthers and the like are characteristic examples of “jackpot” subjects, then it seems that the sort of subjectivity that algorithmic flow inculcates — the role that trap’s design scripts for us — is that of a “power user,” someone who understands continual video consumption as a definitive (if not defiant) act of self-fashioning. In order to fall into the binging trap and not try to elude it, you have to see it at some level as a supreme act of agency. You have to learn to enjoy behaviorist manipulation; you have to interpret manipulation as persuasion.

Seaver quotes Alberto Corsín Jiménez, who argues that “traps are predatory, but they are also productive.” As Seaver puts it, traps are “arrangements of technique and epistemic frame designed to hold particular kinds of envisioned agents.” Traps enable our agency to become perceptible to us through the process of structuring and restricting it. Flow allows us to experience our agency without exactly exercising it, blurring the line between the two.

***

The question I eventually want to get to here is whether certain kinds of content are especially suited to this blurring. How do we become addicted to the spectacle of our consumption, as an emblem of our own singularity? Does it take particular kinds of content? Do certain kinds of antisocial content make that spectacle more potent and compulsive? Does pursuing information that other people reject or that seems hidden or secret intrinsically make the pursuer aware of their own agency, of their ability to redraw the epistemic frame? Does flat-earth conspiracy theory lend more of a sense of pseudo-agency than, say, binging clips of bands performing on the Old Grey Whistle Test?

Chaslot is pointing to a simple mechanism: the power users feed more behavior to algorithms, so more of their preferred content gets sent out to others, who then have an increased likelihood of becoming like them. But what makes those power users originally prefer conspiracy content? Is that even the right question to ask?

A lot of theory on conspiracy theory centers on believers’ reacting against a modern world made up of complex systems that no individual can master or comprehend. Fredric Jameson described conspiracy theory as “the poor person’s cognitive mapping in the postmodern age,” “the degraded figure of the total logic of late capital.” By that account, the serial consumption of conspiracy content dramatizes for a consumer their attempt to understand how the System — the one massive, all-encompassing system — really works. (It’s easier to believe in a flat earth than to believe in the end of capitalism.) People keep watching videos because the flow feels like agency in a world that denies it to individuals.

The hero’s quest for the total picture resembles what Mark Andrejevic, in “Framelessness, or the Cultural Logic of Big Data” (academia.edu link here), describes as the collective project of tech companies to surveil the entire world and interconnect everything in it. What Seaver described as happening to the individual — i.e., they are situated in a context where their behavior can be measured and therefore dictated — would in this case happen to everything. It would be a trap for the entire world.

This project, in Andrejevic’s view, is less about imposing a frame that makes individual agency legible than facilitating a ubiquitous surveillance that could do away with frames altogether. If “the truth” is always undermined rather than established by subjective perspectives in recounting it — if it is always the singer, and not the song — then you can solve for objectivity by extinguishing subjectivity. No more points of view: Instead, build an apparatus that can purport to record every possible thing from an omniscient “unbiased” perspective, and through that model, predict what is “supposed to happen” in an unbiased world.

It should be obvious that the totalizing ambition of “framelessness” is self-defeating. Andrejevic notes that “the scope of monitoring expands alongside the functions enabled by smart objects,” which multiplies the different kinds of correlation one could investigate or deem explanatory. The representation of the world continues to get more and more granular in a process of infinite subdivision (as a moment in time or a unit of space can always be further divided, Achilles’s paradox–style); at the same time, potential information expands exponentially as the different kinds of data collected can be combined to synthesize new ones. Every piece of data raises more questions than it answers, as Kate Crawford explored in this essay.

Yet framelessness is the rationale for increased surveillance and interconnection, as if it only needed to become comprehensive enough for people to believe in it. Then it would effectively be true. Framelessness isn’t about totalization; it’s an engagement trap.

The framelessness trap works by convincing people that objectivity is possible through collecting more information (as if that didn’t merely multiply the subjectivity problem: who is collecting it? how are they framing it?). Andrejevic suggests that “we might describe the contemporary media moment — and its characteristic attitude of skeptical savviness regarding the contrivance of representation” — a.k.a. the preponderance of declarations of “fake news!” — “as one that implicitly embraces the ideal of framelessness (and its associated aesthetic of immersion).” But I don’t think everyone who is skeptical of objectivity claims implicitly wants or believes in framelessness. That wouldn’t be true, say, of advocates of standpoint epistemology, described here by Sandra Harding. It seems more to apply to the technocratic engineering mind-set, or to people in privileged positions who are tired of standpoints because they would prefer never to have their own challenged. Those people especially, it seems, would seek an epistemic closure that looks like total objectivity, total open-mindedness. This, I think, is what’s on offer in the blurring of agency and control in algorithmic flow. Algorithmic flow posits a framelessness for one. This lets a person feel as though they care deeply about objectivity through their progressive immersion in a hyperindividualized frame.

This setup allows one to discredit not only specific types of inconvenient expert knowledge but the entire project of working toward shared knowledge through the resolution of disputes rather than a radical, anything-is-possible skepticism. In Andrejevic’s words, “these are two faces of the contemporary information environment: the generalization of suspicion punctuated by the selective suspension of disbelief.”

In an interview with the New Yorker, William Davies argues that in order “to understand the mentality of the nationalists, or the populists, there has to be some appreciation of the fact that there is hostility toward the very institutions that might potentially resolve disputes in some sort of consensual way. I think what they want to do is to damage the very instrument through which we settle disputes at all.” This echoes a point frequently made about contemporary Russian-style political sabotage: the propaganda is not meant to convince anyone of something specific; it is meant to convince people that no one should be trusted about anything. The fact that the specific content of propaganda itself is arbitrary is, again, reminiscent of flow — a mode of experiencing that sustains a degree of distance and indifference to the qualities of particular content.

Andrejevic makes a similar point:

There is a significant disjunction between the use of ‘fake news’ by partisans on the left and on the right: the former use it to resuscitate a ‘reality principle’ while the latter use it to dispense with one altogether.  

Whenever journalists are blamed for failing to be objective — or whenever they claim their own objectivity — a frame of framelessness has been imposed that can easily be turned against the very possibility of journalism. It seems better to stress that all media representations of reality are productions rather than recordings of reality; they are collaborative efforts to produce a particular perspective that ideally serves some socially useful purpose. These productions are what make politics possible: They posit different ways of conceiving reality that allow people to argue over them, or build consensus for them. This can be characterized as moving toward objectivity, but it isn’t the same as having achieved it once and for all. That is not possible. When someone insists there is an objective way to represent reality that brooks no argument, they are trying to insist there should be no politics, that people should instead be administered through a process of comprehensive monitoring.

Andrejevic discusses this in terms of narrativity: a story is only possible when different perspectives are possible and new information can be produced or emphasized. Telling a particular sort of story about reality posits a future direction, and certain kinds of alliances and investments. But the framelessness fantasy posits stasis; it is, Andrejevic suggests, the Freudian death drive in action.

Chaslot depicted his emblematic flat-earther as someone whose situational depression drove them to isolating self-harm. Tweaking the algorithms in an effort to reprogram that person’s consumption diet seems to reiterate and reinforce the problem: how we are caught up and subjectivized by systems we can’t comprehend, that we can’t situate ourselves within. It is much easier to imagine where you stand on a flat earth, even if it means looking over the edge.