Each day, Facebook’s 1.45 billion daily active users share 2.5 billion pieces of content, including 300 million photos. On Twitter, 100 million daily active users send 500 million tweets per day. On Instagram, 500 million daily active users upload 52 million photos per day. YouTube’s users were uploading 300 hours of video every minute as of 2015, a number that has surely grown as its user base has swelled to 1.8 billion, a 20 percent increase since just last year. Growth — in total active users as well as how active those users are — is the business plan for every major social media company. When these companies decide how to moderate their platforms, it is with these unfathomably large numbers in mind.
All the large-scale platforms are built on the premise of letting users post whatever they want without prior review, with moderation deployed to quell concerns after the fact. But that doesn’t mean content moderation should be seen as an afterthought. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, researcher Tarleton Gillespie argues that moderation is central to what social media platforms offer and how they differentiate themselves. “Social media platforms emerged out of the exquisite chaos of the web,” he writes, and they offer to tame it with legible templates, orderly protocols, and content curated based on who and what they deem most relevant to users. How they bring order to chaos — the rules they develop for content as well as the means, efficacy, and consistency with which they enforce them — goes a long way to establishing what it feels like to “be” on a platform. “Moderation is, in many ways,” Gillespie asserts, “the commodity that platforms offer.” That is, content moderation is a large part of what makes a platform feel like a specific sort of place, even as it suggests this place is less a “where” than a “how.”
But the spirit of the rules is often in conflict with the platforms’ broader ambitions to sustain user growth and participation. As much as platforms may want to maintain quality control, leveraging network effects has been a more important priority, which suggests that scale rather than moderation is mainly how a platform offers value. If a platform achieves enough scale, there will be no meaningful alternative to it, and it can then use its leverage to reshape users’ expectations. As technology reporter John Herrman writes in a recent New York Times Magazine column, “As with Facebook, and to some extent now Amazon, there is no overarching pitch to [Google’s] users beyond: Where else could you possibly go?” Gillespie makes a similar point: “The longer a user stays on a platform and the larger it gets, the more she is compelled to stick with it, and the higher the cost to leave.” You can’t take your friends with you.
By design, then, the amount of content to be moderated is vast, and moderation is spotty. “Platforms are filters only in the way that trawler fishing boats ‘filter’ the ocean,” Gillespie writes. “They do not monitor what goes into the ocean, they can only sift through small parts at a time, and they cannot guarantee that they are catching everything, or that they aren’t filtering out what should stay.” While every platform posts a version of its content-moderation rules, “there are real differences between the rules these companies post and the decisions they end up making, case by case, in practice.” Regardless of how logical or comprehensive the written rules may seem, they end up being enforced inconsistently at best and inscrutably at worst. This suggests that content moderation, like airport security, is more a gesture of “security theater,” as Bruce Schneier has called it, than a failsafe system. It’s not so different from enforcement of the legal code: the laws that are enforced — and against whom — tell us more about the state’s priorities than the code as written. So too with content moderation.
Discourse on platforms is refracted through a hidden lens, and the rules can only be inferred from what’s projected into one’s own feed, which will of course differ for every user. But that doesn’t mean that what a user sees on a social media platform is random. There are no accidents — only a platform’s priorities. Suppressing or amplifying any particular type of content — be it political dissent, hate speech, breastfeeding photos, violent threats, sex-work logistics, misleading news, or anything else — is a matter of will, not capacity. It is not a technical problem: no sophisticated code or technology is required to simply remove content. It just requires resources, usually human ones.
But platforms tend to balk at making these decisions. Alex Jones’s InfoWars, a media platform that specializes in far-right conspiracies and hoaxes, launched in 1999 and garnered more than 2 million followers on YouTube, plus nearly a million each on Facebook and Twitter, despite regularly publishing items plainly in violation of their terms of use. The platforms hosted this content nonetheless, until recently, when outrage over (among other things) InfoWars’ claims that the families of victims in the Sandy Hook shooting were “crisis actors” who never had children in the first place began to mount. After some waffling, Facebook eventually banned InfoWars, with YouTube following suit quickly thereafter. It doesn’t seem a coincidence that these bans happened on the same day — nobody wanted to go first. Other platforms were less consistent: Though Apple took down a half dozen InfoWars podcasts, the InfoWars iOS app is still available in the App Store. Twitter defended its decision not to ban InfoWars on the grounds that it had not violated any of its policies. When CNN demonstrated how that wasn’t true, Twitter removed just a few posts.
Platforms have generally wanted to appear neutral, as though that weren’t a political position in its own right. As Gillespie notes, platforms like to brag about the galaxies of content they make available, but they keep relatively quiet about what and how much they remove, “in part to maintain the illusion of an open platform and in part to avoid legal and cultural responsibility.” They would like to avoid perceptions of bias, as these could curtail user growth. Gillespie cites Steve Jobs’s justification of banning an iPhone app that simply counted down the days left in George W. Bush’s presidency: “Even though my personal political leanings are Democratic, I think this app will be offensive to roughly half our customers. What’s the point?”
Such a stance is very easy to manipulate in bad faith, though. The opacity around content moderation shrouds the logic for decisions; anyone can claim bias, as no one can be sure whether or not they are being moderated. On Twitter, conservatives have complained of “shadow banning” — that is, letting a user post but hiding their posts from followers — and received favorable coverage despite the fact that their complaints appear to be unfounded. Conservative commentators like the Gateway Pundit and Diamond & Silk have accused Facebook of suppressing and censoring their content with flimsy supporting evidence at best (and much more substantial evidence to the contrary). But the platforms have tended to appease this community rather than the many users who would like the platforms to do more to address hate speech and harassment.
Summarizing work by information studies scholar Sarah Roberts, Gillespie says that companies are cagey about how they moderate in part “to downplay the fact that what is available is the result of an active selection process, chosen for economic reasons.” What makes sense economically for these companies, which profit mostly by selling ads and targeting data, is content that yields trackable “engagement,” regardless of what kinds of behavior or beliefs users are engaging in. The incentives for fomenting conflict are obvious and plentiful. Hate-watches, ironic shares, baited clicks, and angry comments all register as more engagement than a smile, a nod, or a verbal recommendation. “All these companies began with a gauzy credo to change the world,” longtime tech reporter Kara Swisher wrote in a recent New York Times op-ed. “But they have done that in ways they did not imagine — by weaponizing pretty much everything that could be weaponized. They have mutated human communication, so that connecting people has too often become about pitting them against one another, and turbocharged that discord to an unprecedented and damaging volume.”
The platform has become a battleground, and the larger it becomes, the higher the stakes for the disparate tribes fighting over it. For evidence of this tribalism, look at the replies to any political tweet. Below any @RealDonaldTrump tweet, for example, whether it has any political content or not, will be a series of responses from users across the political spectrum arguing not with but past one another, aiming instead at the presumed infantry of adversaries behind their interlocutors (and assuming that supporters of their own viewpoint will come to their aid if need be). The perceived connections with other like-minded users are inseparable from the camaraderie-building attacks on the “enemy.”
Platforms start off, Gillespie notes, with “users who are more homogenous, who share the goal of protecting and nurturing the platform, and who may be able to solve some tensions through informal means.” On a small message board, a few hundred or even a few thousand users may come together to discuss a given topic (knitting, junior hockey, brutalist architecture, plastic surgery, you name it), with some degree of homogeneity among them, as with any interest group. Harmony is fostered not just by the niche topic but also by community standards that users can engage with directly via accessible human moderators. There are still arguments and harassment, but problem users can be reprimanded directly by a moderator and talk things out until they either fall in line with the community standards or leave entirely.
But as the user bases of all-purpose social media platforms grow, they also diversify, and what was once a relatively monolithic bloc splinters into “whole communities with very different value systems … who look to the platform to police content and resolve disputes.” Sometimes that means just flagging or reporting a post, but it can also mean directing a mob at the user who posted it. Users can now unite into a front and battle other subcommunities on the platform. At a certain point, a platform becomes too pluralistic to be governed holistically. The cracks and blemishes of dissent may be acceptable on the small scale of a niche forum, but they become structurally unsound on a larger one. A coffee mug with a chip in it is idiosyncratic; a coffee shop with a chip in it is unfit for occupancy.
Social media platforms often assume that their users are not generally bad actors, posting on the site in good faith, despite the incentives for mobbing, conflict, or other attention-seeking practices built into the interfaces and business models. Though their massive networks are their great strength, platforms tend to see only the nodes when dealing with abuses, rather than the myriad connections between them. Moderators typically determine the permissibility of posts without knowing the identity of the poster, much less the relevant context for what they have posted. In their supposedly neutral attempt to recuse themselves from arbitrating intent, platforms ensure misinterpretation. It’s like deciding sarcasm doesn’t exist and trying to follow an episode of Seinfeld.
The compulsion to view infractions as perpetrated by individual actors in a vacuum is drawn from a law enforcement paradigm, but a more apposite one, as Swisher’s rhetoric of “weaponization” suggests, is that of warfare. That’s not to hyperbolize the discourse on social media into the equivalent of munitions. Rather, the warfare paradigm recognizes that the essential tensions on social media are between communities, not individuals. No algorithm or well-trained army of moderators can conform to the contradictory standards of different communities in direct conflict with one another. The rules they enforce, then, reflect not any one community’s standards but the platform’s desire to serve itself.
“For large-scale platforms,” Gillespie notes, “moderation is industrial, not artisanal.” The people and programs moderating are no longer from the communities they moderate. They work through thousands of flagged posts a day, each divorced from context. Aside from the volume — which is unrelenting — moderators are “compelled to look at the most gruesome, the most cruel, the most hateful that platform users have to offer … child porn, beheadings, animal abuse.” Desensitization to this content becomes necessary for psychological survival. To not be destroyed by the work, moderators must learn to compartmentalize it in a way that separates them further from the communities whose behavior they’re monitoring, making any reflection of those communities’ standards impossible.
Gillespie suggests that platforms should moderate more aggressively and offers five broad recommendations: design for transparency, distribute the agency of moderation among users, protect users as they move across platforms, reject popularity metrics, and diversify the engineers and entrepreneurs who make decisions. But what incentives do platforms have to improve moderation? Without data showing the commercial value of improved moderation, no platform will invest in those changes, and with the misguided belief that any form of moderation can serve “neutrality,” no platform will ever summon such data. But communication is never neutral, and there is a constituency for virtually any content or ideology imaginable. A given community wants moderation only insofar as it can be weaponized against adversarial communities.
Platforms would prefer not to foreground their interventions, settling instead into the background as “social infrastructure.” Technology “is most consequential precisely when it fades from notice and assumes a taken-for-granted status,” L.M. Sacasas writes in the New Atlantis. “Accidents and malicious use, in fact, often have the effect of foregrounding technologies and systems that have become invisible to us precisely because of their smoothly functioning ubiquity. We may be momentarily discomfited by the newly perceived fragility or vulnerability of the technologies upon which we depend; rarely, however, do we reconsider the nature and extent of our dependence.”
It’s also possible that tech companies are blinded less by profit incentives than by their certainty that they know what’s best for us. In Antisocial Media, Siva Vaidhyanathan argues that “if Zuckerberg were more committed to naked growth and profit, and less blinded by hubris, he might have thought differently about building out an ungovernable global system that is so easily hijacked. Facebook’s leaders, and Silicon Valley leaders in general, have invited this untenable condition by believing too firmly in their own omnipotence and benevolence.” These leaders claim to have constructed their visions on the foundational axiom that connecting people is necessarily good in and of itself. Tech execs see themselves as benevolent gods of their platforms, with subjects made in the image of their aspirations. They assume users are open not only toward their immediate communities but toward the world in general, eager to encounter difference and new experiences. This worldview obfuscates the companies’ role in furnishing communication battlegrounds.
It may be that sheer scale endangers the high ideals of having open platforms, and that a historically unprecedented inundation of speech warps the overall experience of communication in ways that we are not suited to handle. “When the human condition was marked by hunger and famine, it made perfect sense to crave condensed calories and salt,” Zeynep Tufekci writes in Wired. “Now we live in a food glut environment, and we have few genetic, cultural, or psychological defenses against this novel threat to our health. Similarly, we have few defenses against these novel and potent threats to the ideals of democratic speech, even as we drown in more speech than ever.”
A platform is worthless without users posting on it. Facebook, Twitter, and others have recently taken significant hits to their stock prices on revelations of slowed user growth. Users create the value on which these platforms’ stock prices are based, and users bear the costs of the toxicity on those platforms.
The crisis social media users contend with stems from an outdated fantasy about how people want to use these platforms, and how people behave more generally: as solipsistically self-interested, infinitely rational, and divorced from any larger movement or context. Recognizing the sociality of the conflict is the first step toward making these spaces healthier for good-faith discourse. Such a shift would require platforms to surrender their neutral pose. With billions of users at stake, social media companies are loath to make major changes to their foundations, but if they don’t, they risk watching whole structures crumble under their own weight.