January 25, 2019

Understanding Makes the Mind Lazy

Earlier this week, Instagram felt obliged to try to quash a rumor that had been circulating on the platform that the company’s algorithms made it so that only 7% of any given account’s followers would see that account’s posts. This wasn’t the first time: this sort of rumor has recurred ever since platforms switched to algorithmic sorting. It speaks to a conflict of interest between platforms and the people who rely on them for distribution. Platforms don’t trust users not to game their algorithms, and in turn, users don’t trust the platforms’ algorithms not to cheat them out of attention. Algorithms seem almost designed to foster a low-trust environment that, paradoxically, makes any conspiracy easier to credit. In such an environment, there is no basis for believing anything except whatever is convenient or self-serving.

The Instagram rumor and its refutation illustrate the conundrum that John Herrman described in this New York Times column: that platforms have to act as though their algorithms work and don’t work at the same time, and this equivocation fosters a paranoia about how algorithms work. If Facebook, for example, comprehensively denied claims that operatives used the platform to influence elections, “it would be tantamount to admitting that the systems we’re not allowed to know about — and the metrics we aren’t allowed to see — might not be quite as valuable, or as worthy of trade secrecy, as Facebook needs us to think they are.” But this isn’t strictly a matter of telling advertisers that algorithms work while telling the general public and regulators that they don’t, really. It’s about negotiating the incoherent aspirations of algorithmic targeting, which purport to predict what people want based on who they are while also allowing manipulators to change what people want.

Last March, Procter and Gamble cut $200 million in digital ad spending in part because online metrics seemed unreliable. Yet the company remained committed to “transforming our industry from the wasteful mass marketing we’ve been mired in for nearly a century to mass one-to-one brand building fueled by data and digital technology.” This is another manifestation of the same conundrum: As ad metrics become more transparent, it becomes more obvious how they are gamed and what they miss. The more data you bring into marketing, the more waste you will discover, and the more unanswered questions you will generate. The perfectly targeted ad is like an oasis that never moves off the horizon no matter how doggedly advertisers and tech companies march toward it. (Case in point: Personalized ads are coming to TVs, targeting viewers through cable-box surveillance.) The point of advertising, after all, is not to nail down what people are, as if that were static; it’s to shift currents of demand, to alter behavior patterns. But the logic of data profiling uses the past to repeat it as the future.

To make that contradiction cohere — the idea that more data can reduce waste and improve targeting rather than constantly telescoping the bull’s eye — Facebook must obfuscate if and how its manipulation methods work, and be content to foment paranoia among users about algorithmic control. In that space of mystery, users and advertisers alike are licensed to believe anything and doubt anything. As Herrman puts it, “On Facebook, where every user’s experience is a mystery to all others — and where real-world concepts like privacy, obscurity and serendipity have been recreated on the terms of an advertising platform — users understandably imagine that anything could be happening around them: that their peers are being indoctrinated, tricked, sheltered or misled on a host of issues.”

This mystification is not an unfortunate side effect; it’s the value Facebook adds. Users are isolated from each other so they can feel as though they are the implied subject of all the discourse they experience on the site — so that they can be targeted in “one-to-one brand building” campaigns. Users get to feel important, singled out, worthy of decoding, and at the same time they get to interpret whatever they read through the lens of “Why did the algorithm choose this for me? What does this say about me and my tastes?” But that works only through an effort of disavowal: You have to feel that the algorithm is right enough to cater to you but not powerful enough to control you (even while it controls all those “indoctrinated peers”).

Facebook’s aim is to facilitate that disavowal, so it continues to deepen the mystery about its algorithms while seeming to be in the process of dispelling it. Instagram, in its efforts to allay users’ apparent concerns about algorithmic content suppression, resorted to some evolutionary language — “your feed is personalized to you and evolves over time based on how you use Instagram” — as if the algorithm were an organic entity subject to a process of natural selection rather than corporate programming. It also claimed that “we never hide posts from people you’re following — if you keep scrolling, you will see them all.” In other words, if you don’t see all the content you want, it’s not a ranking algorithm’s fault — you’re just not scrolling hard enough.

These explanations are not meant to quell users’ paranoia; they just muddy the waters while implicitly reminding everyone to try harder to beat the algorithm that is always a step ahead of them. Trying to game the algorithms is not some niche form of “coordinated inauthentic behavior” perpetrated by rogue actors. It’s just what normal social media use consists of. Opaque algorithms incentivize more posts, new techniques for posting, and new methods for generating engagement and capturing attention. That experimentation may do as much to drive platform participation as effective content targeting.

In this London Review of Books essay about Brexit, William Davies offers this description of accelerated finance:

The mentality of the high-frequency trader or hedge fund manager is wholly focused on leaving on better terms than one arrived, with minimum delay or friction in between. To the speculator, falling prices present just as lucrative an opportunity as rising prices (given the practice of ‘shorting’ financial assets), meaning that instability in general is attractive. As long as nothing ever stays the same, you can exit on better terms than you entered. The only unprofitable scenario is stasis.

In a sense, platform paranoia is akin to market volatility; it reflects and promotes a high-frequency trading of sorts in various propositions, accelerating cycles of belief and skepticism as we churn through a much higher volume of information. Advertising is more likely to be effective amid these conditions, where it seems that everybody, not just marketers, is being manipulative and deceptive.

But the paranoia that platforms foment is not limited to ideas about how their algorithms operate or the “fake” materials they circulate. It spreads to users’ attitudes toward themselves: We become paranoid about who we are based on what the algorithms are showing us. How we are targeted is always incomplete and inaccurate, but these inaccuracies in themselves can still drive and reshape behavior. Being targeted itself affects the targets, regardless of what is targeted at them, or whether anything hits. “The only unprofitable scenario is stasis.”

According to a recent Pew survey, “about half of Facebook users say they are not comfortable when they see how the platform categorizes them, and 27% maintain the site’s classifications do not accurately represent them.” These classifications, of course, are drawn not strictly from what you do on Facebook but from a broad field of on- and offline surveillance, in which nothing observed about you can possibly be inaccurate or fail to factor into who you “really” are. As Pew describes it:

These categories might also include insights Facebook has gathered from a user’s online behavior outside of the Facebook platform. Millions of companies and organizations around the world have activated the Facebook pixel on their websites. The Facebook pixel records the activity of Facebook users on these websites and passes this data back to Facebook. This information then allows the companies and organizations who have activated the pixel to better target advertising to their website users who also use the Facebook platform. Beyond that, Facebook has a tool allowing advertisers to link offline conversions and purchases to users — that is, track the offline activity of users after they saw or clicked on a Facebook ad — and find audiences similar to people who have converted offline.

The indistinct boundaries around the surveillance make it seem that anything is possible, and all paranoia is warranted. After all, expanded surveillance is used to generate synthetic insights about people, which are used to reshape their experience, their opportunities, how they are treated in socioeconomic situations where those “insights” can be “leveraged.” Thus Herrman can write as if he doesn’t know what his own behavior has consisted of: “I can’t even guess why Instagram started showing me a bunch of photos of a certain breed of dog or why it’s suddenly serving me ads for meal kits. I know how these things make me feel, but Facebook knows how they made me behave — knowledge it won’t soon share.” In this construction, your behavior is not what you remember doing or how you felt; it’s what Facebook manufactured about you, who it sold that information to, and how it was used against you.

Platforms have a lot invested in that distinction between “how things make me feel” and “how I behave,” as if they were the only intermediary that could make the magic connection between the two. They want to sell control over that connection, the moment at which your feelings become actions in the world. (Advertisers understand that link between feeling and acting entirely as a matter of “conversion rates” — when you actually buy something.) If platforms are working as they intend, they prompt us to experience that divorce between feelings and behavior as natural and to see feelings not as behavior in their own right but as some sort of speculative condition in search of algorithmic confirmation. Perhaps if our feelings are recognized by 7% of our followers, they become real.

If surveillance dictates what counts as behavior, it creates a safe space for feelings where they can be bracketed off from the world and we can enjoy them without consequences. So all the surveillance that platforms facilitate can have this inverse effect of proving a kind of invulnerability — the more they target me, the more they don’t get how I really feel, and the more it becomes clear that my “real feelings” escape the net. We’re not even sure what counts as behavior from the algorithm’s point of view, so we are liberated from caring about it. Our feelings can be whatever we want them to be, and since they transcend the algorithm, we can enjoy them without having to be responsible for them.

Because the surveillance is general and open-ended — taking in and assimilating all users — it allows individuals to disappear into the mass. Regardless of how I might feel about, to return to Herrman’s example, ads for meal kits, Facebook knows not how I specifically will behave but how a certain proportion of an amorphous audience will respond. At that moment, I cease to be the subject of my own feed as an individual but am assimilated into an indeterminate collective subject, a statistical construct whose behavior is utterly predictable at a certain scale, where it doesn’t matter which specific individuals act but only that a predictable percentage of them will. Facebook targets me as a set of probabilities, which means my self is always being recalculated as something new. It’s always in the present tense.

The recent “10-year challenge” meme — in which people post before and after pictures of themselves from 10 years ago and now — prompted a conspiracy theory that it was a scheme to get users to inadvertently supply the data needed to train facial-recognition apps. Max Read, among others, pointed out that companies already have such data in abundance, and that no conspiratorial thinking is required to understand their explicit business models. It’s not an unreasonable position to start by assuming that anything that’s connected to the Internet is a conspiracy to collect your data.

In a column for the Washington Post, Philip Kennicott took the criticism in a different direction, seeing the meme as part of how Facebook brings about the “destruction of authentic memory.”

When we remember our lives authentically, we ask a fundamental question: Why did I remember this thing, at this moment? The “Why now?” question gives memory its meaning. Facebook randomizes and decontextualizes memory and detaches it from our current self. And why would I want to know what I looked like 10 years ago? This communion with lost time should steal upon us in its own, organic fashion, not at the bidding of other people, or according to the algorithm of a rapacious and amoral corporation.

This is on a par with arguments that recommendation algorithms destroy serendipity, or that taking too many photographs interferes with having real experiences. The assumption is that there is some organic way of experiencing life that a new form of mediation is colonizing and destroying; our brains will stop working to form memories because we’ll be entirely dependent on our comprehensive video life logs that will show us what really happened. That seems to be a potential danger with this product that purports to “capture life from your baby’s point of view” and “record those moments so that your baby can see them when they’re an adult.”

Since, according to contemporary ideology, our only “real behavior” is what can be captured through surveillance (it records “how I behave” rather than my mere “feelings”), it makes sense that there would be means for extending that surveillance, taking some control over it, and making it more comprehensive so we can become more real. We filmed every moment of your life because we love you! But it remains true that the more you capture, the more you define what has been left out, and the more you structure that void as the really real, the authentic authenticity. There is no total representation of a person’s life, nothing that could preclude the ability of memory to intercede. Memory works in the present on the materials in our consciousness not to reproduce a past as it was but to create something new out of it. Claiming that some forms of memory are inauthentic is just an aesthetic judgment about another person’s life choices.

If we are inclined to be lazy about what we remember or who we are or what we desire, then that laziness will find its various crutches, including new modes of communication and new purported technological conveniences. And the way convenience is so widely touted as a virtue may prompt more people to be so inclined. Convenience is a basic form of social control; it’s easy to chart the path of least resistance. It’s easy to predict selfishness. Companies invested in predicting our behavior will do what they can to make us all more selfish. But no matter how much documentation or data a company may have, it can’t make us remember only what it can exploit and it can’t target us so accurately that we cease to be capable of anything new. Facebook can’t make us experience an involuntary memory, let alone pre-empt “organic” ones. It can force-feed us madeleines, but we could still end up with nothing but a stomachache.