September 6, 2019

Vanity metrics

In recent months, several social media platforms (Instagram, Twitter, Facebook) have announced that they are experimenting with doing away with public-facing like counts. Purportedly, this is out of concern for users’ well-being and a desire to encourage healthier conversations, though I think we can dispense with those motives out of hand. If these platforms were concerned for human well-being, they would abolish themselves. They are experimenting with doing away with like counts because they think it’s possible that they will make more money that way.

Josh Constine suggests in this TechCrunch piece that ditching likes is meant to get users to post more, which seems to be the consensus view. The posts with large like counts, he writes, “can make other users self-conscious of their own lives and content. That’s all problematic for Facebook’s ad views. Facebook wants to avoid scenarios such as ‘Look how many Likes they get. My life is lame in comparison’ or ‘why even share if it’s not going to get as many Likes as her post and people will think I’m unpopular.’”

That assessment of platforms’ motives seems plausible enough, but it should not be misunderstood as their being unhappy about posts with large like counts. If anything, platforms want posts with large like counts to dominate feeds unobtrusively while they find other ways to placate people who post things the content-filtering algorithms suppress. Eliminating likes points the way toward social media becoming merely a more surveillant form of television, where users consume mainly the algorithmically force-fed content, with most user-generated content being a comparatively small sideshow.

Removing like counts will do nothing to undermine those accounts that get high like numbers. As Taylor Lorenz has argued, getting rid of like counts helps rather than harms established influencers, who don’t depend on likes for the proof of consumer engagement that brands demand of them. This Ad Age post states it pretty bluntly: “We already knew that vanity metrics — including the like count, comments or follower counts — don’t serve as true engagement metrics.” It’s all about the conversions. Kaitlyn Tiffany pointed out that Instagram has no desire to cripple the influencer industry, which is intrinsic to its retailing aspirations. “It’s all part of Instagram’s stated goal to turn the feed into a ‘personalized mall,’” she writes.

Of course, people didn’t go to malls to open their own stores; by the same logic, people shouldn’t use Instagram to try to become influencers in their own right. Going to the mall wasn’t strictly about buying things, though; it was also about hanging out and sustaining a certain fantasy about being seen among all the status signifiers and emblems of abundance. Hiding like counts makes that sort of aspirational vicariousness more salient and accessible. One can float one’s suspension of disbelief on a series of one’s own unmetricized posts. But to sell aspirational fantasy, you ultimately have to keep the dollar stores out. The space for daydreaming can’t be interrupted by the dismal evidence of fellow dreamers’ delusions.

Discarding outward-facing metrics that give users a quasi-objective sense of a post’s popularity is a similar move to cancelling the chronological timeline. As John Herrman put it in a New York Times piece about eliminating like counts, “What you see on Twitter and Instagram already depends on a mixture of signals — things you’ve liked in the past, how much time you’ve spent looking at a particular user’s content, whether you communicate privately with a given user and whether you have an affinity for some topic or another — not just chronology, likes or retweets. Those signals are all metrics too, of a sort, invisible to us but very much legible to the platforms themselves.” Increasing that invisibility is now to the platforms’ benefit, Herrman argues, because they no longer need to drive user growth with vanity metrics. Instead, platforms now gain power by seizing further control over how the attention of already captured users is directed. The implicit message in getting rid of likes is that users don’t need to see them to draw any conclusions about the content they are seeing or pursuing. They aren’t supposed to be “pursuing” particular content at all but assuming an increasingly passive stance toward what a platform chooses to show them.

With algorithmic sorting, this is usually billed as a necessary convenience — in their supreme benevolence, algorithms rescue us from the torment of too much information by vigilantly watching over us to learn our desires (potentially hidden even from ourselves) and show us only the things that we really want to see. In practice, though, this means platforms show us the things they are paid to show us (ads, promoted content, etc.) along with the sorts of content that make us tolerate it, that convince us there is no point in trying to search for anything different. In other words, algorithmic sorting is meant to make us indifferent to wanting particular things. It teaches users to enjoy passivity as an end in itself, as a kind of pure convenience in the abstract. Ah, the joys of being pre-emptively freed from decisions I hadn’t even yet considered!

The information overload that algorithms supposedly save us from is actually incentivized by them. With algorithmic filters in place, we can adopt an indiscriminate attitude toward information — why should I have to do any presorting? Consider, for example, this post by Will Oremus, in which he contends that algorithmic filters have helped “fix” Twitter by letting users follow more people, which in turn helps us better fine-tune the algorithms. “I used to limit my follows to 1,000 … Now I follow more than 2,000 and don’t give it a second thought, because I know Twitter isn’t going to show me everything they tweet,” he writes. “If anything, following more people should now increase the signal-to-noise ratio in your timeline, because it gives Twitter’s ranking algorithm more tweets to choose from. If someone is tweeting stuff that doesn’t interest you, the algorithm just stops showing you their tweets.” That is placing a lot of faith in the algorithm to guess correctly — for it to know something you don’t yet know. You are just a data provider, not a data processor; you participate in Twitter as a sensor, not even as a consumer. “I find myself less likely these days to see an annoying tweet and consider unfollowing its author and more likely to forget that I follow that author altogether,” Oremus continues. The usefulness of Twitter here is to perpetuate the fiction that there are no courtesy follows, and that a follow equates to actually caring about what someone says. But the algorithm is there to nullify the underlying promise of that vanity metric and make it an empty trophy. “In the meantime, Twitter feels fresher, because I’m following new people and encountering different worldviews,” Oremus writes. But that claim undercuts itself: the algorithm’s whole job is to filter out whatever doesn’t already match his interests, which is precisely the diversity he credits it with providing.

Oremus’s paean to New and Improved Twitter suggests that Twitter is at its best when it provides information without users having to take responsibility for that information’s source. It sounds like a veiled confession that Oremus has let the algorithmically driven interface ease him into a more passive attitude — we consume ourselves surrendering to the flow. He accepts that “following” should be a signal to Twitter but not so much to the person being followed, who is obviously less important than the platform itself to the user’s “fresh feeling.” As Herrman put it, “Likes and retweets used to be translated into signals for people. Now they just provide signals for the machines.”

Why shouldn’t Twitter effectively delete most of the content of people I follow from my feed? It already tries to slip in posts from people I don’t follow, under the pretense that a few people I follow have liked them. What I asked to see is demoted to just another signal for consideration, and not necessarily the most important one. Twitter is more interested in sowing doubt in me about what I really want to see: It wants me to follow people not out of a desire to read what they post but out of a desire to get to know myself better through the medium of its algorithms (and its advertisements). From this perspective, Twitter works better for me not when I interact with other people but when I interact with it alone, teaching it to unearth the content that keeps me engaged. My goal in being on Twitter should be to find ways to stay on Twitter.

But who wants that? A major problem with Twitter (beyond the hate it facilitates) is that it urges us to recognize ourselves in our responsiveness to certain kinds of bait. Algorithmic filtering makes that pathetic self-realization even more intense. We get the memes we deserve. This seems far worse to me than following too few people or somehow missing the “good” content.

I know it is in Twitter’s interest and not mine to show me an algorithmic timeline because of the literally hundreds of times I have had to switch the Twitter phone app from the deceitfully named “Home” (Silly user, you don’t get to choose your own home here! We will tell you where you will live!) to “Latest Tweets.” Inevitably that option to switch back will be removed, under the pretense that no one really wants it anymore. In reality, that will mean that those users who weren’t already converted to passivity have either surrendered to the interface’s recalcitrance or left the platform altogether. Basically it will go the way of Instagram. Earlier this week Frank Pasquale linked to a Reddit thread complaining about Instagram’s algorithmically sorted feed: “It would be an excellent decision for instagram to allow the common sense option of viewing the feed either with the algorithm or in traditional chronological order,” one poster writes. “Furthermore, I don’t necessarily want to see the ‘best’ posts most often anyway.”

While that option might be “excellent” for users, it would not be for Instagram, which has more to gain in training its users to fetishize its algorithm. Instagram wants to guarantee its advertisers that certain kinds of “best” content get seen the most and are therefore the most valuable sort of content to advertise against. When platforms insist on showing users the algorithmically selected “best” content, they are not doing those users a special favor; they are not necessarily considering their interests at all. The fact that the algorithms draw on user data to “target” them doesn’t mean this is for their benefit. (In general, most people don’t want to be shot at.) The goal is to impose a particular notion of what “best” is: The sorting is a form of meta-advertising. Eventually users are supposed to accept that whatever they happen to see on a platform is “the best content for them,” and if that includes ads for the NFL or for White Claws or chicken sandwiches or Donald Trump, then so be it. What’s best is best.

The comment that Pasquale highlighted in that Reddit thread argues, “If you’re not self promoting and spamming and doing XYZ to your account, you’re basically a nobody. And these companies know that, so they only promote bigger people.” What this suggests is that the point of algorithmic sorting is to manufacture the “bigger people” and discourage the “nobodies” from cluttering up the field too much. They should “engage” in their little corner of the platform with their puny like counts but not expect to compete with the real players, the brands and the influencers in cahoots with them. Algorithmic sorting rewards and produces scale: a few users with big follower counts and interactions, and many millions who are content to be followers, presumed to be beyond the need for vanity metrics and far more delighted to just obey the feed and accept its flattering attention to their needs. Scale, after all, is the entire point of platforms: They exist to sell the opportunities to leverage it, and that is all. “Digital well-being” and “healthy conversations” are at best means, not ends.