April 26, 2019

Moving the River

A few days ago, Data & Society posted the transcript of a talk by danah boyd about YouTube as a news source and how the platform’s vulnerabilities can be exploited. She notes how propaganda is created to fill the “data voids” associated with search terms that suddenly become popular in the wake of a news event, and describes the resources being committed to reverse-engineering recommendation algorithms so that anyone watching videos with basic, mainstream information will be served recommendations for propagandistic content. “If you watch a health organization’s video and then you follow the recommendations you’re given or allow auto-play to continue, within two videos you will almost always be watching a conspiracy video,” she says. “Why? Because the communities that are trying to shape these connections understand how to produce connections.”

This thread, from former YouTube engineer Guillaume Chaslot, gets at the same idea: that YouTube’s recommendation engine is vulnerable to manipulation, especially if the attackers have the resources and the sophistication to mount a coordinated attack. The armies of manipulators provide the requisite tokens of engagement so that the algorithms will feed propaganda to the disengaged masses.
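To make the mechanism concrete, here is a deliberately toy sketch of how an engagement-weighted ranker can be gamed. YouTube’s actual ranking system is not public, so every signal, weight, and name below is an invented assumption, not a description of the real thing:

```python
# Illustrative only: a toy engagement-weighted ranker, not YouTube's
# actual (non-public) system. All names and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    watch_time_hours: float  # total hours watched
    likes: int
    comments: int

def engagement_score(v: Video) -> float:
    # A ranker optimizing "engagement" blends signals like watch time,
    # likes, and comments; these particular weights are made up.
    return 1.0 * v.watch_time_hours + 0.5 * v.likes + 2.0 * v.comments

organic = Video("health explainer", watch_time_hours=500, likes=200, comments=50)
targeted = Video("conspiracy video", watch_time_hours=300, likes=100, comments=20)

# A coordinated campaign supplies the "tokens of engagement" boyd and
# Chaslot describe: brigade accounts watch, like, and comment on cue.
targeted.watch_time_hours += 400   # scripted autoplay sessions
targeted.likes += 300
targeted.comments += 100

# The manipulated video now outranks the organically popular one.
ranked = sorted([organic, targeted], key=engagement_score, reverse=True)
print([v.title for v in ranked])  # ['conspiracy video', 'health explainer']
```

The point of the toy is the asymmetry: ordinary viewers produce these signals incidentally, while a coordinated campaign produces them deliberately and at scale.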

I find this a more satisfying explanation for why YouTube’s algorithms tend toward conspiratorial content than the idea that there is something inherently more “engaging” about “extreme” videos that the algorithms merely pick up on and regrettably amplify. But this portrait still conceives of the average YouTube viewer as someone who passively nods along to whatever the algorithm serves up. It’s implicit that whatever YouTube recommends, people will automatically watch — or at least enough people for the propaganda to effectively change the climate of opinion. At YouTube’s scale, moving the needle requires only a marginal improvement in the positioning and circulation of content: shift even a fraction of a percent of billions of daily views and you have redirected millions of impressions.

A similar logic of scale underlies other would-be deployments of artificial intelligence, as in the potential use of robots described in this TechCrunch article to expedite corporate hiring processes: It doesn’t matter if the robots badly misread any particular candidate, because it’s assumed that there will be enough applicants to keep the margin of error small. Though VCV, the developer of the applicant-screening software described in the article, “claims it can help eliminate human bias from the hiring process with preliminarily screening of candidates, automated screening calls, and by conducting these robo-video interviews with voice recognition and video recording,” the point is ultimately to save time, money, and affect in HR departments. Implied in the implementation of such software is a mandate not to treat every applicant fairly but to find a suitable worker as efficiently as possible. Robots and AI are deployed in the process VCV promises in part because they spare employees from having to dirty their hands with the dehumanization the process entails. When demonstrably unfair AI applications are nonetheless thought to “work,” this is what’s meant: They are saving people the discomfort of confronting what they’re doing and the kinds of decisions on which their position in the hierarchy depends. AI systems eliminate human guilt, not human bias.

Something similar may be at work with YouTube’s recommendation algorithm, in that it absolves viewers of their sense of responsibility for anything they may end up watching, and thereby permits it. Whether this means that people really want to watch the “extreme” content and are glad to have the algorithm as a facilitator and an excuse, though, I’m not sure. “Want” or desire may not be relevant words in this kind of context. When videos autoplay by default, does that indicate that a person wanted to keep watching, wanted to “learn” more? It may instead indicate a kind of lapse in executive function, an inability to muster resistance in that moment. Or maybe it indicates a positive desire to be able to drift along; maybe that opportunity for surrender, for relief from choice, was more important than the content itself: the delicious satisfaction of a decision averted. You can consume “convenience” in choices being made for you whose stakes may not seem all that high; you can experience momentum that requires nothing of you, that for once does not deplete. Unending content streams can have the effect of propelling us through time, and this may be what grips and engages us more than any particular information contained in any particular post. When I find myself stuck on Twitter, it usually is like this; I feel pulled along by inertia.

When, out of a spirit of due diligence, I installed TikTok on my phone earlier this week and spent some time trying to use it, that flow of momentum was mainly what I took away. I don’t think I lingered on any one post for more than a second, but I was mesmerized by the Chat Roulette–like opportunity to keep flicking and having some other random person suddenly appear, often in the awkward, half-readied pose of someone preparing but not yet doing something they think will be funny. Perversely, I didn’t want to give them the satisfaction of my sticking around to find out. I treated TikTok as a kind of test that I could pass by proving that I find no one but myself interesting, no matter how well their attention traps are laid. I didn’t want to let it waste my time, but still I found myself swiping and swiping for several minutes at a time, as if I were training myself to become (even more) addicted to my own disdain.

Maybe this was the famed TikTok AI learning my preferences — it somehow deduced the sort of content that I was sure to delight in rejecting. Is there any difference, from the algorithm’s point of view, between stuff you love and stuff you love to hate? Not that I had strong feelings about anything I saw there — it just seemed like lots and lots of people trying sweetly and earnestly to amuse someone, but I don’t think I stuck around long enough for that someone to assume my outline.

Karen Hao of MIT Technology Review wrote a newsletter this week focused on TikTok, declaring in the title that the app “is replacing our free will with algorithms.” No hyperbole there. “Whereas Netflix, YouTube, and Facebook’s news feed all use recommendation algorithms to push users a curated list of options,” Hao explains, “TikTok removes choice from the experience altogether. Its algorithms give viewers only one video to watch at any given moment and use subtle cues like how much time you spend on it, whether you like it, or even how quickly you swipe away to determine what video you get to watch next.” (This is more or less the same pitch that these VC investors made for the app; I quoted them when I wrote about TikTok before.)
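For concreteness, here is a toy sketch of the loop Hao describes: one video at a time, with implicit signals (dwell time, an explicit like, how fast you swipe away) silently updating what plays next. TikTok’s actual model is proprietary; every category, weight, and function name below is invented for illustration:

```python
# A toy version of the single-video feed: show one clip, read implicit
# signals, pick the next clip without offering the viewer a menu.
# TikTok's real system is proprietary; everything here is hypothetical.

import random

# Hypothetical per-category preference weights, learned from behavior alone.
prefs: dict[str, float] = {"dance": 1.0, "pranks": 1.0, "cooking": 1.0}

def update_prefs(category: str, watch_seconds: float, clip_seconds: float,
                 liked: bool) -> None:
    # Dwell time relative to clip length is the core implicit signal;
    # a fast swipe-away (low completion ratio) counts against the category.
    completion = min(watch_seconds / clip_seconds, 1.0)
    prefs[category] += completion - 0.5        # reward watching past halfway
    prefs[category] += 0.5 if liked else 0.0   # an explicit like is a bonus

def next_video() -> str:
    # No search, no list of options: the system alone decides what plays,
    # sampling categories in proportion to inferred preference.
    categories = list(prefs)
    weights = [max(prefs[c], 0.01) for c in categories]
    return random.choices(categories, weights=weights, k=1)[0]

# Simulate a viewer who swipes away from pranks instantly but lingers on dance.
update_prefs("pranks", watch_seconds=1, clip_seconds=15, liked=False)
update_prefs("dance", watch_seconds=14, clip_seconds=15, liked=True)
print(prefs)          # dance weight rises, pranks weight falls
print(next_video())   # now most likely "dance"
```

Even this crude loop never asks the viewer what they want; preference is inferred entirely from what the thumb does.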

Knowing this going in, I was far more interested in interacting with the hidden system that was evaluating me, taking my measure, than I was in the people dancing or doing pranks in the videos. Every time I lingered on something, I would think, What does the AI think now? The video content was just a means to the end of eventually revealing myself to me, or at least revealing the algorithm’s propensities. The longer I stayed on the app and swiped, the more I would get to know myself — seems like a great formula for “engagement,” though not with anything beyond my own ego.

It’s never clear where an interest in something “for its own sake” stops and an interest in “what that thing says about me” begins, but in a delivery system powered by instantaneous feedback, that blurring becomes even more pronounced. How rapidly one experiences their tastes changing in response to the app must be part of the pleasure it gives — it certainly seems to be the case in many of the Road to Damascus paeans to TikTok that have been published, where a writer invariably describes how they learned how badly they needed to see a genre of content they hadn’t even known existed before.

Interacting with TikTok was prompting me to try to see myself as the app does, and by extension to reimagine myself in terms of the pleasures it presumed I was deriving from its content. Would I eventually actually want what it had to offer? Would I see that as my own desire? Or would I still just be desiring myself through the lens of those recommendations? Is there even any difference between those things? When I wrote about TikTok before, I had already primed myself to come to this sort of conclusion. What remains unimaginable to me, still, is that I would actually want to watch these videos for their own sake, without the algorithmic intrigue. So I remain convinced that the point of TikTok is to teach us to love algorithms over and above any content, and to prepare us to accept an increasing amount of AI intervention into our lives. It seems designed to program users with a form of subjectivity appropriate to algorithmic control: where coercion is merged with an experience of “convenience” as one’s desires are inferred externally rather than articulated through one’s own conscious effort.

In Escape From Freedom, Erich Fromm writes, “The automatization of the individual in modern society has increased the helplessness and insecurity of the average individual. Thus, he is ready to submit to new authorities which offer him security and relief from doubt.” If AI companies have their way, they will be the new authorities. Hao quotes venture capitalist Connie Chan making the claim that TikTok parent company “Bytedance is ushering in the era of AI consumer apps.” What that means apparently is that the model of force-feeding users a series of recommendations while disaffording the ability to search the archive on their own terms will become standard in more and more interfaces. Hao mentions that it is already being applied in dating, tutoring, and — surprise, surprise — hiring. Basically, AI is useful anywhere decisions need to be made and disavowed: “Security and relief from doubt.” TikTok points to how that attitude toward decisionmaking can be embedded in the logic of an interface and thereby be expanded, creating new markets for AI systems. That is, apps like TikTok and YouTube can train users to enjoy not making decisions, and to enjoy accepting the decisions made on their behalf precisely to the degree that they feel convenient, bespoke-tailored to us. We can swipe on or not, but the more instantaneous that decision is, the more the app has helped us, the more we are recognized by it, the more we feel seen.

Fromm claims that modern man has “in a measure lost his identity. In order to overcome the panic resulting from such loss of identity, he is compelled to conform, to seek his identity by continuous approval and recognition by others. Since he does not know who he is, at least the others will know — if he acts according to their expectation.” It’s easy enough to replace “others” with “algorithms.” I need TikTok (and YouTube and so on) to tell me who I am, to assuage my deep sense of self-estrangement. Whereas earlier social media forms foregrounded the promise of recognition by others, the AI consumer apps will emphasize the “continuous approval” and automate it. Neatly enough, this leads back to the “automatization of the individual in modern society” and further intensifies the sense of helplessness and insecurity that makes the social apps feel so necessary.

But at the same time, the performers on TikTok don’t come across as helpless and insecure. They come across more as people following through on an idea. That seems inseparable from the simultaneous experience of the app handling the consumerist choices for you. It’s as if the app is saying, don’t worry about what to watch, just think about what to do. I realize my use of TikTok is especially contrarian, in stubborn opposition to any of its “social” features: I don’t post anything or follow anyone, I don’t like or comment on anything, I don’t look at any of the trending hashtags. I deliberately didn’t want to let it teach me how to love it, which would have at some point entailed making something for it. Instead I enjoyed the idea that I could spurn its love; I loved my performance of refusal.

It’s probably telling that I tended to just read whatever text was on a post and move on to the next one, as if it were a labor-intensive and extremely mundane version of Twitter. I’m not sure if my tendency to read rather than watch TikTok was a matter of my being not an especially visual person, or a matter of just being old. danah boyd began her talk by telling her audience that if they don’t consume news primarily on YouTube, that’s because they are old. It is undeniably true that I am old. Watching video of any sort tends to make me feel even older, like I don’t have the time left to waste watching something unfold at the coercive pace of duplicated time. This doesn’t make me feel special or superior; it makes me feel fossilized. Not only do I no longer understand how most people live and see the world, but the way I live, the way I’m oriented to what seems to me to be reality, must be increasingly unimaginable and unsympathetic to most other people.

Out of that anxiety, I chose to consume TikTok asocially, as a kind of protective measure. I don’t see the other users in the clips as friends or peers; I see them as coldly as the algorithm does. They tended to bore me because they didn’t offer me any usable information; they were just snippets of some people living life — people more or less doing nothing. Nonetheless, this made me feel guilty about having so little curiosity about these other people, and I wanted to blame the app for engendering that in me. But then I started to take that guilt as proof that I was resisting what I think AI is mainly meant to achieve, namely insulating people from any guilt they may feel about seeing others as means and not ends.

So I leaned into it and began to wonder if maybe TikTok’s users were some sort of everyday heroes. I was tempted to see the “doing nothing” on display there as illustrating Jenny Odell’s sense of the phrase. In How to Do Nothing she defines “doing nothing” as a kind of resistance to the capitalist ethos of productivity in the attention economy. In the introduction, she advises refusing “a frame of reference in which value is determined by productivity, the strength of one’s career, and individual entrepreneurship” and instead “embracing and trying to inhabit somewhat fuzzier or blobbier ideas: of maintenance as productivity, of the importance of nonverbal communication, and of the mere experience of life as the highest goal.” Could that be what is going on with TikTok, and am I too deluded and committed to the status game to see it? Shouldn’t I celebrate all those people “merely living”?

But then again, of course, TikTok is the attention economy; it’s a training ground for conceiving of attention as something that is always already captured by a media platform, something that requires an app to give it form, something that is, above all else, measurable. TikTok has the same sorts of metrics to incentivize content production (followers, comments, “hearts”), and it has the same goal of maximizing time on app, as any of the other platforms. The algorithms it deploys have the same intention of making all content into “doing something” in the most basic possible way, that of being counted.

In her list of fuzzier ideas, Odell goes on to recommend “recognizing and celebrating a form of the self that changes over time, exceeds algorithmic description, and whose identity doesn’t always stop at the boundary of the individual.” Here is where TikTok and I fail together. TikTok promises a self that conforms to algorithmic description and seems to grow the more you subject yourself to it. If it changes over time, that change is contained, foreordained. I bring the expectation to social media that they will materialize my identity, make it tangible to me, allow me to experience it, and the platforms do their part with metrics and recommendations that help reify the self.

Maybe there is a fuzzier, blobbier way to use them. There is no shortage of clips in which people literally represent themselves as blobs, as warped or distorted as in fun-house mirrors, in a kind of repudiation of a strict representation of the self in favor of something different. And there are lots of augmented reality plug-ins that effect these sorts of distortions, that make the image of the self into raw material to be deformed and reshaped imaginatively and disposably. In some ways these are lures to draw users in and route their sociality through the apps that permit these sorts of gimmicks, and in some ways they depend on how identity is otherwise anchored within the platform, grounding these distortions in something normative and permanent. But it still feels wrong to think of it in terms of panicked conformity, as Fromm’s framework would suggest. I don’t think I have ever felt it myself, but I like to think there is a kind of momentum that is fuzzy, that captures the Deleuzean idea of an arrow in flight but without a trajectory. I want to float somewhere without being carried away.