April 19, 2019

Harvester of Eyes

Facing intensifying criticism that its algorithmic recommendation system invariably feeds viewers “extreme” antisocial content — conspiracy theories, far-right propaganda, and other hateful materials — YouTube, according to this Bloomberg report, is refining some of its metrics, complementing “watch time” with something it calls “quality watch time.” Obviously the company has this situation under control.

According to Bloomberg, “quality watch time” is supposed to “spot content that achieves something more constructive than just keeping users glued to their phones” and “reward videos that are more palatable to advertisers and the broader public.” But naturally, YouTube provides no specifics about how it intends to turn something qualitative into a quantity, let alone explain how this new metric, uniquely among all attention metrics ever devised, will avoid producing the kind of incentives the company is seeking to curb. As John Herrman details here, would-be qualitative metrics like ratings have a tendency to incentivize fraud.

Maybe YouTube imagines that if it keeps secret the details about what “quality” amounts to, it can keep people from gaming the algorithm — because as the history of search engine optimization has shown, that strategy always works. Yes, it’s prudent to assume that what “quality” means on YouTube is something that its engineers decide by fiat rather than a matter of viewers’ taste or experience. “Quality” isn’t what people appreciate; it’s what YouTube figures out a proprietary method to measure. And YouTube, apparently, assumes that people universally agree on what “quality” means, as though that made it safe to build a metric around it.

Of course, it makes perfect Silicon Valley sense to try to solve the problems of one attention metric with another attention metric, as Ben Grosser notes. YouTube’s problems derive from the very concept of measuring and monetizing attention, and no better way of doing it can fix the underlying problem, which is that companies shouldn’t do it at all. An attention metric produces what it pretends to measure and attempts to reshape the subjectivity of those caught up in its mechanisms so that they prefer what is measurable and understand that as “quality.” YouTube is explicitly not trying to achieve some sort of representation of existing consumer behavior; it is inventing a metric and loading it into a cybernetic system so that it can manufacture and control that behavior, and make selling it more predictable and profitable. Why would it voluntarily invent metrics that would do anything but increase profits? Its goal with this metric is to make “quality” and profitability seem more organically aligned and politically palatable.

YouTube’s efforts here fit a long-established pattern in the use of statistics to produce normative behavior rather than simply represent the behavior of populations. In The Taming of Chance, historian Ian Hacking describes the “statistical fatalism” that arose in the 19th century as new methods of calculation and measurement emerged:

According to that doctrine, if a statistical law applied to a group of people, then the freedom of individuals in that group was constrained. It is easy to regard this as an epiphenomenon, an oddity accompanying the early days of statistical thinking. In fact it betrays an initial perplexity about the control of populations on the basis of statistical information. Statistical fatalism was the symptom of a collective malaise. We read a metaphysical worry about human freedom, at times well nigh hysterical. We can hardly credit it as a specimen of rational thought. Exactly. The knot was not metaphysical but political. The issue that was hidden was not the power of the soul to choose, but the power of the state to control what kind of person one is.

Statistical findings can be deployed rhetorically to appear to have the force of laws, positing what outcome is “supposed” to occur based on how the data was originally collected and processed. This allows political initiatives to appear objective, cloaked by statistical “facts” about what is average and “normal.” “Words have profound memories that oil our shrill and squeaky rhetoric,” Hacking writes. “The normal stands indifferently for what is typical, the unenthusiastic objective average, but it also stands for what has been, good health, and for what shall be, our chosen destiny. That is why the benign and sterile-sounding word normal has become one of the most powerful ideological tools of the 20th century.”

Hacking is mainly talking about bureaucratic efforts to use statistics to standardize populations and guide them to conformity. But what YouTube is trying to do is similar, only the attention statistics it uses are not published in tables but implemented algorithmically. The norms are not disseminated by institutional authorities interpreting the statistics with some plausible pose of scientistic objectivity, but experienced through the feedback loops initiated through interacting with the platform.

Attention metrics would seem to derive from the economistic assumption of “revealed preference”: what people do reveals what they want. What “glues us to our phones” is what we like and is therefore good. From this perspective you can safely ignore any conceivable obstacles that might stand between what people want and what people can get, and you can also ignore what people say they want, which is unreliable noise relative to what they do. You just need a monitoring system in place to capture behavior, and the internet has become precisely that.
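To put that assumption in the bluntest possible terms, here is a minimal sketch in Python (the data and names are hypothetical, not any platform’s actual schema): preference is simply read off captured behavior, and whatever viewers say about their experience never enters the calculation.

```python
# Toy illustration of the "revealed preference" assumption described above;
# everything here is invented for the sake of the example.
watch_log = [  # (video, minutes watched): the only input the logic admits
    ("conspiracy", 42), ("essay", 3), ("conspiracy", 37), ("music", 12),
]
stated = {"essay": "loved it", "conspiracy": "couldn't look away, felt awful"}

totals = {}
for video, minutes in watch_log:
    totals[video] = totals.get(video, 0) + minutes

# "Revealed" preference: whatever glued the viewer to the screen longest.
print(max(totals, key=totals.get))  # prints "conspiracy"
# Note that `stated` is never consulted; on this model it is just noise.
```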

Algorithmic recommendation systems thus operate under the assumption that people don’t know what they want, or at least that their decision-making process is too inefficient for the amount of choices on offer. They work from the assumption that revealed preference has been suspended; that consumers are operating in an environment where their choices must be guided, not patiently waited for. So even if you believe attention metrics tell you anything about what users regard as “quality,” that dimension is evacuated once you use those metrics to feed the algorithms. At that point the algorithms, if they are effective, override revealed preference in favor of a predictive model that approximates a user’s choices probabilistically and renders them essentially superfluous. What users watch from that point on merely fine-tunes the predictions, confirming or modulating those assumptions — it’s not “revealing” what the user prefers independent of that system. It’s just a data point to help the system function in ways that its operators prefer.
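To make that loop concrete, here is a toy sketch (the numbers and the update rule are invented for illustration; nothing here describes YouTube’s actual system). The recommender ranks videos by predicted watch time, observed behavior only fine-tunes the prediction, and mere exposure nudges the viewer’s taste toward whatever gets recommended:

```python
# A toy cybernetic loop, under the stated assumptions: predictions drive
# recommendations, recommendations shape behavior, behavior updates
# predictions. The viewer's prior taste is never consulted directly.
import random

predicted = {"essay": 4.0, "vlog": 5.0, "conspiracy": 6.0, "music": 3.0}
taste = {"essay": 7.0, "vlog": 4.0, "conspiracy": 2.0, "music": 5.0}

LEARNING_RATE = 0.3   # how fast predictions absorb observed watch time
HABITUATION = 0.2     # how much mere exposure inflates "taste"

for step in range(30):
    # Recommend whatever the model predicts will be watched longest.
    pick = max(predicted, key=predicted.get)
    # Observed watch time reflects the viewer's current taste, plus noise.
    observed = taste[pick] + random.gauss(0, 0.2)
    # The data point merely fine-tunes the prediction, as described above.
    predicted[pick] += LEARNING_RATE * (observed - predicted[pick])
    # Exposure reshapes the viewer: repetition manufactures preference.
    taste[pick] = min(10.0, taste[pick] + HABITUATION)

print(max(predicted, key=predicted.get))  # the loop's manufactured favorite
print(taste["essay"])  # the viewer's actual favorite (7.0), never surfaced
```

The point of the sketch is only that, inside such a loop, “what users watch” stops being independent evidence of preference: in these made-up numbers, the viewer’s genuinely favorite option (the “essay”) can go permanently unrecommended while the system congratulates itself on rising watch time.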

The problem with attention metrics, though, goes beyond their misuse in algorithms that contradict their logic. The qualitative dimension of attention will always have a kind of negative-theology component to it: Anything that can be measured obviously becomes quantitative and misses the point of capturing “quality.” They are structural opposites. When the actual lived experience of “paying attention” is reduced to a set of numbers, what is lost, by definition, is the qualitative experience of that attention. That is, “quality watch time” is an oxymoron, and not just because of the high percentage of garbage on YouTube.

To understand what “qualitative experience” is, you would have to listen to how people describe it and respect its unique particularity. “Quality” does not scale. But tech businesses are particularly dependent on the idea that anything and everything can and must scale. So their existential mission is to turn quality into quantity wherever they can and induce people to accept that however they can, whether by promising them convenience or competitive advantage, or by attempting brute-force brainwashing.

Businesses organized around measuring attention make the de facto proclamation that the quality of attention is irrelevant; it can be resolved into quantities, into metrics. If attention can be counted rather than experienced in different ways, then it is all fungible, all the same, regardless of how you might weight or discount this or that form of it. On this view, attention is something that can be efficiently maximized, and people’s experience of attention can be streamlined and homogenized in pursuit of that maximization. Whatever YouTube uses or tries to capture as a proxy for attention “quality” will become just another number subject to maximization; it will become another end in itself that diverges from “quality” as it becomes increasingly targeted. This means that the more precise the effort to measure quality becomes, the more perverse and elaborate the incentives to game those measures will become. The “better” the attempts to measure quality are, the more that “quality” itself is precluded.
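The dynamic described here is often glossed as Goodhart’s law (when a measure becomes a target, it ceases to be a good measure), and a toy sketch can show the divergence. Assume, purely for illustration, a creator with a fixed effort budget, a proxy score that cannot help rewarding gameable signals alongside real quality, and gaming that is cheaper per proxy point than quality:

```python
# Goodhart's-law toy: hill-climbing on a measurable proxy for "quality"
# raises the proxy precisely by squeezing out the thing it stands in for.
# All numbers are invented for illustration.
BUDGET = 10.0  # fixed effort the creator can spend per video

def outcomes(f):
    """Return (actual_quality, proxy_score) when fraction f of effort
    goes to gaming the metric rather than to real quality."""
    quality = (1 - f) * BUDGET / 3.0  # real quality is expensive to produce
    gaming = f * BUDGET               # gaming the measure is cheap
    proxy = 0.5 * quality + 1.5 * gaming
    return quality, proxy

f = 0.0  # start with all effort spent on real quality
for step in range(10):
    _, current = outcomes(f)
    _, shifted = outcomes(min(1.0, f + 0.1))
    if shifted > current:  # the metric rewards the shift, so it sticks
        f = min(1.0, f + 0.1)
    quality, proxy = outcomes(f)
    print(f"step {step}: gaming={f:.1f} proxy={proxy:5.2f} quality={quality:.2f}")
```

By the final step the proxy score has risen roughly ninefold while actual quality has fallen to zero: a sharper, more precisely targeted proxy only sharpens the incentive to feed it.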

YouTube’s algorithms don’t favor the bizarre and hateful sorts of videos they favor because they lack information about quality; it’s that the data they have points toward increasingly manipulative content. Because the company measures attention as quantity and not qualitative experience, it can only optimize for “quality” as a form of manipulation that obviates that experience. The more information the “quality”-seeking algorithms process, the more specifically they articulate the sorts of content that can generate it on their terms, as an analytic construct that has nothing to do with how human beings, who are the only ones capable of qualitative experience, describe it. This leads to YouTube content creators who confess to being guided entirely by the algorithm. (This is feeling increasingly tautological, which drives me to try to say the same things in more and more laborious ways to clarify what becomes more and more slippery. I’m sorry about that.)

This kind of content produces a particular kind of viewer who perpetuates the cycle, who responds behavioristically and embraces one-dimensional attention metrics to describe not only their own behavior but also that of the people paying attention to them. YouTube thereby screens for viewers who respond to the most quantifiable version of “quality” that it can produce; it doesn’t respond to a pre-existing audience demand for a certain kind of quality. Instead, its audience comes to resemble its content (as is starting to be claimed about TikTok, which is driven entirely by algorithmic recommendation); the two become mutually reaffirming in a system that aspires to hermeticism.

With viewers and content creators united in producing a system that maximizes “attention” independent of quality (or that seizes on every proxy for quality and transforms it into quantity), it is left to the advertisers, of all people, to care about ineffable, immeasurable quality, the kind that is capable of sustaining the individual character of brands. It is no accident that “quality watch time” is meant to highlight content “palatable to advertisers.” The metric won’t placate advertisers by finding the good content so much as by trying to clear the space of “identity” or “personality” so that brands can occupy it. If consumers just behavioristically respond to content, they don’t have tastes of their own but must try to convey tastefulness by proxy through brands.

Advertisers, then, have to appear tasteful in choosing what to sponsor, even as consumers are denied a similar opportunity by recommendation algorithms. The algorithms that place ads have to make advertisers appear as sentient as possible while they make consumers as thoughtlessly mechanistic and programmable as possible. Brands want to be seen as the source of something inexplicably powerful that magically coheres in their trademarks and imbues consumers with that quality. If brands are perceived to be synonymous with the algorithmically generated junk their ads are programmatically matched with on YouTube, then there would be no point in advertising.

How advertisers feel about quality comes to replace how viewers themselves might feel about it. Ads “experience” quality in lieu of us. Both YouTube and its advertisers agree that viewers must be seen as manipulatable and controllable — that their preferences are ultimately malleable and should be made to be more so. The interference of what they “think” should be minimized as much as possible. YouTube’s metrics strategy differs from the logic of advertising, though, which manipulates people by allowing them to believe they are exercising their agency while being fed choices. YouTube instead offers the convenience of not having to choose. It seems as though these don’t contradict but complement each other. The convenience of not choosing becomes the ultimate expression of consumer choice.

A famous adage about advertising, often attributed to retailer John Wanamaker, complains that “half the money spent on advertising is wasted, but the trouble is I don’t know which half.” That captures not why advertising is a scam but why it works: It taps into associations that consumers experience as qualitative — as not fully explicable by numbers, or logic, or overt causality. Advertising achieves “quality watch time” in that it fails to foreground its mechanisms. It works when it leaves some immeasurable impression that doesn’t show up as uniform, individual-level compulsion, even if it appears in crude aggregate sales statistics. The point is that advertising works through these imprecise gestures and becomes less effective the more mechanistic or measured or tracked it is. That exposes the man behind the curtain and destroys the illusion on which it depends.

“Let a man propose an antistatistical idea to reflect individuality and to resist the probabilification of the universe,” Hacking writes in The Taming of Chance, “the next generation effortlessly coopts it so that it becomes part of the standard statistical machinery of information and control.” Oddly enough, advertising (despite itself and many of its practitioners) is one of those antistatistical notions, and attention tracking is the effort to co-opt it. “Could not a more articulate, wilder, euphoric backlash preserve some of the ancient freedoms of chance?” Hacking asks. It seems impossible that ads, in their illogical free-associative approach to human behavior, would be a site for that sort of optimism, a means by which people can still conceive and articulate their freedom in the very heart of the concerted effort to manipulate them. But it also figures that advertising would not have become the predominant form of social discourse without harboring some traces of utopianism. It only makes sense that something that speaks to us all should speak to us of hope.