In The Shock Doctrine, Naomi Klein famously demonstrated the power of world-altering crises like 9/11 and Hurricane Katrina to “shock” people into a kind of stupor that prevented them from pushing back against the “godlike” power of corporations to impose social austerity, deregulate economies, and pursue revanchist development. These crises were generally distributed by television news. According to the film and media scholar Mary Anne Doane, writing in “Information, Crisis, Catastrophe,” live television is tuned to the “explosiveness of the present,” meaning it organizes itself around apparently important events, blurring the line between seemingly banal information and real, traumatic catastrophe.
If television, which broadcasts to more or less general audiences, is tuned to catastrophic events that are large in scale and widespread, social media platforms disseminate crises at a different scale. Not only do they direct general information to mass audiences; they also target information to individuals based on data and algorithmic sorting processes. News events are relevant — but so is everything from viral videos and posts, to fights or spats between individuals and groups, to photographs that update your status and comments that update your opinion. Data is drawn from individuals living their lives down to the day, hour, and even minute.
Our “addiction” to social media platforms is not just about feedback loops that “hack” into our dopamine system
Social media, at least according to its founders, began as an almost utopian site for engagement with networks of friends and acquaintances. But now platforms like Facebook and Twitter have become feel-bad sites of crisis, full of content that stokes feelings of impotence and despair in the face of widespread corruption. So many stories about scams large and small have been circulating that cultural critic Jia Tolentino went as far as calling the summer of 2018 “grifter season,” and the Guardian designated 2019 “the year of the scam.” More pressingly, many are dealing with ongoing financial precarity, and we see stories reminding us to worry about technological automation rendering us redundant. The future of life itself stands in the shadow of environmental catastrophe.
It can feel as if these very real crises have begun to cross-pollinate, metastasizing into some meta-crisis that is not only everywhere at once but impossible to isolate. Social media play a key role in that confusion. Indeed, the dominant feeling for social media users today might be one of anxiety. Troubling material is unceasingly circulated and shared by social media users, especially on feed-based platforms like Twitter and Facebook that blur the line between “news” and “social” media. And yet we still use them — even if they are being supplemented by other kinds of platforms that emphasize “stories” and images instead of feeds, links, and text.
That the internet can be an unhappy, anxious place is hardly a secret. There is no shortage of warnings about the dangers of “tech addiction” or about the connections between social media use and anxiety about one’s self-worth. But our so-called “addiction” to social media platforms is not just about feedback loops that “hack” into our dopamine system. Social media platforms use our ambivalence about attention and our own agency to their own benefit even as they seem to cater to us.
It might seem like these platforms should have reason to avoid or downplay items that generate negative feelings, and that our ongoing engagement would be better secured by the rush of the good feelings we were supposed to have in the allegedly warm and fuzzy world of social media. But that is not the case, and the platforms know it. Instead, they mobilize our negative feelings to give us the impression of agency.
Crises, like sparkplugs, spur us into action: gathering information, waiting for updates, searching for opinions. This process keeps us forever suspended, forever updating, and forever in “crisis mode.” When platforms show us things that make us feel bad and anxious, it is not because they are working defectively but because they are working correctly.
The links between attention, discomfort, and platform capitalism may seem obvious. One could suggest that our behavior is manipulated in the same way that a smoker’s is: Just as their day is structured around cigarette breaks, a user’s day is structured around going online. This is effectively the argument that behavioral scientist Nir Eyal makes in Hooked: How to Build Habit-Forming Products (2013), which has become something of a manifesto for advertisers and tech companies alike. Eyal argues that people can be hooked on products and digital spaces by associating them with internal emotional triggers, like loneliness or a fear of missing out. For Eyal, what distinguishes his “hook model” from other feedback loops is the “ability to create a craving” structured around the dopamine rewards a user apparently feels when using social media. The corollary of this craving-and-reward system is a predictable form of desire that, when it goes unfulfilled, triggers feelings like “I am the only lonely user of social media,” thereby establishing a drive for reward in the form of platform interactions.
This straightforward model treats the negative feelings we associate with reports of crises, scams, and manipulation not as threats to user well-being, imperiling their satisfaction with a product (not to mention their health more generally), but as drivers of ongoing engagement. This advice has worked so well for platforms that Eyal has recently written a sequel aimed at teaching users how to break the habits he taught companies how to inculcate.
But Eyal’s model relies on the assumption that users are either blind to the hook model of social media or that, like addicts, they will continue to log on even if they understand the consequences. Certainly, there is truth to the suggestion that our distaste for social media plays as big a role in capturing our attention as our appreciation or taste for anything we discover through them. But these triggers don’t work behind our backs; they work precisely because our attachment to uncomfortable, negative feelings is complex and ambivalent. And this ambivalence can sustain a kind of complicity. We become entangled with the crises that circulate, unsure of our own role in them and driven by a confused desire to experience some form of emotional release.
Platforms mobilize our negative feelings to give us the impression of agency
To get more information about us (and thus more informational capital), platforms need more of our attention. Rather than manipulate our unconscious, platforms engage our sense of agency directly. They give us the impression of control through an unending series of updating decisions: checking notifications, liking, retweeting, and upvoting, engaging in “conversation” with others, or simply refreshing our newsfeeds.
People like Eyal and UX designers often discuss these mechanisms in the abstract as “dark patterns” that manipulate us into doing things we didn’t mean to do, at the expense of our free will. But it is not the case that users of social media platforms are paralyzed and inactive. Instead, the user (via their engagement with platform infrastructure) becomes dependent on code for action. At the same time, this infrastructure conveys the false impression that code depends on the user for action. Although many critiques of technology and social media claim that “compulsive” platforms nullify our sense of agency and alienate us from an idealized “real life,” it may be more accurate to say they flatter us into thinking that we are in control.
Social media platforms like Facebook and Twitter, then, repay our ongoing attention through a refracted sense of agency. By having the option to like, retweet, or comment, to refresh my newsfeed or to tweet out myself, it seems like I am the only one in control of my Twitter. The content I see is structured for me by algorithms based on my usage patterns, making me the implicit focus of my feed, regardless of what the tweets I see are about. My attention habits are being foregrounded for me by the platform, as if they overrode any other logic for information consumption. But Twitter as a whole is not a personalized vacuum — it is a social medium. The affective experiences of any individual user have their sources and consequences in the behavior of other users. This tension between individual agency and social structuration underlies the crises that beset us on these platforms. We are situated in an environment where we are provoked to respond and experience immediate results, even as our response only obscurely affects the experiences of other users, in ways we cannot dictate. Platforms thus instantiate an experience of agency that is simultaneous with an experience of a lack of agency.
This ambiguous condition mirrors the affects that cultural and literary theorist Sianne Ngai calls “ugly feelings” in her 2005 book of the same name. Historically, scholars have tended to interpret unambiguous feelings like anger, fear, and happiness as the primary drivers of our actions, but for Ngai it’s the ugly feelings — ambivalent emotions like envy, irritation, and anxiety — that are “perversely functional.” Ngai argues that ambiguous and ugly feelings are non-cathartic, because they “foreground a failure of emotional release.” This failure prompts a kind of “suspended action”: exactly the kind of obstructed agency we often feel at the mercy of endlessly updating platforms and algorithms. To feel irritation is to feel a kind of ongoing, weak anger that does not come with the emotional release of an outburst of fury, since we may not know what, exactly, we are irritated about. The suspended and even disorienting feelings of irritation or anxiety drive an unceasing desire to act in some way to overcome the confusion these feelings cause.
Because ugly feelings are confusing, and because that confusion motivates a desire in us to “feel better,” negative emotions are actually productive of action — a productivity perfectly suited to information-gathering, capital-accumulating platform corporations. Ngai argues that “insecurity about one’s place during periodic innovation, fear of losing recently gained privileges, and anxiety over being ‘left behind’ translate into flexibility, adaptability, and a readiness to reconfigure oneself.” This perverse functionality is manifest in a “desire to overcome” obstruction, impassivity, and suspension. So we say to ourselves, “With one more ‘refresh’ this algorithm may recommend me something decent.” Perhaps, on Instagram, I have seen pictures of yet another influencer on vacation at the top of my feed. I am irritated because I had logged on while at work in the hope of seeing friends posting “stories” about their day. Maybe I feel envy that someone else is on vacation, irritation at their privilege, or anxiety about my own salary. Regardless, rather than closing Instagram, I attempt to overcome those unpleasant feelings by refreshing my feed. In this instance, I have functionally overcome suspension; the only problem is that what arrives is another, newer feeling of suspension. On platforms, every update, regardless of content, mainly serves the purpose of necessitating further updates.
Negative emotions are productive — perfectly suited to information-gathering, capital-accumulating platform corporations
Just as algorithms are concerned only with their own reproduction — that is, the update — ugly feelings “operate” in the same way. Anxiety, irritation, and envy are remarkably resistant to resolution. Ugly feelings mirror the platforms that enable them. In the grip of endless updates, every advent of the “new” can seem to offer the possibility of overcoming ugly feelings even as each update re-creates them.
Anxiety, envy, and irritation are thus tailor-made for feed-based social media platforms. Our desire to overcome them powers a feedback loop that continually presents new crises with every refresh. Social media platforms therefore become a kind of pharmakon — not only the poison, but the remedy, suspending us in a cycle of perpetual ambivalence. It becomes perversely gratifying to use social media as a coping mechanism for the very anxiety it instantiates. As we experience negative emotions or feel unsure about our agency, the protocols of constant updates sustain an uncanny feeling of interest. Even if I know my agency is subordinate to a social media platform, my ability to update its feed nevertheless feels like the best opportunity to reclaim control. Nothing holds our attention better than our own discomfort.
If crises on television produced feelings of helplessness from the barrage of information, on social media platforms crises revolve around the update. Like our machines, we are creatures of habit. But every so often, ruptures happen, and habits and routines are broken, and something changes. In her 2016 book Updating to Remain the Same: Habitual New Media, Wendy Hui Kyong Chun identifies these as “crises.” “In a networked world, there are two operational modes: habitual programmed repetition (machinic and human) and critical exception.” Crises are these critical exceptions.
Crises capture our ongoing attention because they are, well, interesting. In Our Aesthetic Categories, Ngai defines the feeling of “interest” as a “not quite” emotion well-suited to the information-saturated conditions of platform capitalism. As she notes, interest is marked by ambiguity (we don’t have to know why we are interested in an object to be interested in it) and does not necessarily come with a positive or negative valence — it is only different from what is ordinary. Because of how the update-oriented systems of personalized social media feeds are populated, everything tends to appear as acutely different and highly “interesting” without being entirely unfamiliar.
To be interested in something is, according to Ngai, a feeling of “not-yet-knowing” that compels the interested party to return to that thing for another look. Perhaps a political YouTuber has just posted an inflammatory video that has garnered attention. I may not yet know whether I want to watch it, watch how others are watching it, or log off out of disinterest or exasperation. But regardless of my eventual actions, I experience my initial interest as agency, albeit one that operates by extending its indecisiveness and opening itself up to more updates. This is a confused agency that manifests as our ongoing, anodyne attention. As a result, nothing is ever complete, no task ever finished, no crisis ever overcome.
Chun offers this formula for the platform model of crisis: “Habit + Crisis = Update.” Crises cut through the mundane (with “interesting” material) to produce the possibility of real-time empowerment and engagement that television was unable to offer. Paired together, habit and crisis make their own routine: the update. Every notification that pops up on my phone or in my feeds functions as both update and crisis warranting further updates. It is not just that notifications create a sense of urgency; the habit of checking itself becomes urgent. This is a trap that sustains attention and engagement by blurring the user’s agency with the agency of code. The updating rhythm of the feeds appears to us as the temporality of our habit, and our habitual engagement in turn provides the data that sustains the timely algorithmic provision of relevant information.
In this way, the continuous reality of human experiences is broken up into an overwhelming succession of updates that never end — a scrolling wheel that continues into infinity. It is not surprising, then, that our sense of social media is one of disenchantment, that our feelings become negative, and that we continue to habitually use social media in spite of these realizations.
Paired together, habit and crisis make their own routine: the update
Viewing social media use through the lens of habit, crisis, and update offers an alternative to Eyal’s behavioral framing. The reason for our habitual use is not the quasi-biological assertion that we are “addicted,” nor is it the steady flow of positivity that tech executives and marketers, with increasing futility, seem determined to hold onto. Instead, our habitual social media use is grounded in ugly feelings. These negative feelings give us a sense of control in the midst of a barrage of updates by spurring in us a desire to find cathartic solutions to perpetual problems. Feelings like anxiety and irritation give us the impression of control over impotence and despair, while envy, even though it originates in wanting something we do not have, seems like control over our desires. Put together, our acknowledgment of and responses to these emotions give us the impression that we are evading the kind of behavioral manipulation Eyal describes. Ugly feelings appear to offer control over our experiences, a way to wrest the possibility of emotional release away from the autonomous algorithm and back onto ourselves.
Platform infrastructure, as it extends in time and in space, seems to go on forever. No one can index the forever. We may want to use platforms to engage with “newsworthy” events or to stay away from the news; we may want to blindly update ourselves or to critically engage with the constant stream of information; we may want to share “fake news” or to combat it. Unfortunately, we’re all subject to the same platform infrastructures. We are, in different ways, attempting to index the forever. It really can seem as if we live in “cursed times,” or that everything is a scam. But this is no curse, no higher power or malevolent user hoodwinking us all. This is social media working according to plan.
It is as if the sailor in the midst of a storm imagined there was only one wave in the sea, and as they passed each wave’s crest, they were incredulous at the sight of another wave moving toward them. Even once the sailor realizes that the sea holds many waves, there is never a last wave to crest until they make landfall. Similarly, the endless scrolling capacity of algorithmically powered feeds makes it functionally impossible to achieve the “perfect” resolution of our anxious curiosity while simultaneously projecting the fantasy that it could be possible, if only the user continues to struggle to overcome suspension and impassivity.
In recent years, scholars like Siva Vaidhyanathan have called for Facebook and other tech giants to be regulated as one way to overcome our manipulation. But that might be a band-aid for an already massive wound. As it stands, the only way to overcome obstructed agency and impassivity may be to disengage from a platform and make landfall.