The Mismanaged Heart

The empty status box is waiting to sell us on ourselves

Over the past few years, technology has put itself on first-name terms with me. Logging on to a public wi-fi provider, I receive the message “Welcome back, William!” as if it were a homecoming. “We care about your memories, William,” Facebook tells me. “Recommended for you, William” is the first thing I see when looking at Amazon. “William, William, William.” Silicon Valley appears to have imbibed Dale Carnegie’s How to Win Friends and Influence People.

This one-to-one chumminess coming from companies that view their potential market as the entire human race is, at the very least, ironic. The rote conviviality contrasts with traditional etiquette that insists on the use of family names to demarcate degrees of familiarity, and it also departs from bureaucratic procedure, which replaces names with numbers to suggest objectivity. Instead, it makes it clear that in the digital age, it doesn’t especially matter what we want to be called or how familiar we want our technology to be with us; it can unilaterally assume a familiarity with us that is anything but objective. Amid the reams of data I leave in my daily wake, “William” is little more than my own preferred avatar.

As the reach of data analytics grows, so the ability to treat each individual uniquely and warmly grows too. The logic of data analytics is that surveillance capacity increases the potential for personalized services. In practice, this means generating more and more automated friendliness to mask tech companies’ increasing indifference to anything that would inhibit their operating at scale. Within these platforms, abstraction becomes the condition of intimacy. A superficial informality conceals the underlying mechanics of indiscriminate rationalization.

But to view platform conviviality purely as a veneer would be to miss the distinctive cultural logic at work here. Sociologists have long been fascinated by the informal etiquette of Silicon Valley. AnnaLee Saxenian’s landmark 1994 study, Regional Advantage, showed how the Valley benefited from a degree of cultural openness that Massachusetts’s more traditional Route 128 business cluster could not match. Others, like Manuel Castells and Fred Turner, have looked to the longer history of the Bay Area to show how networked computing was inflected by the ethos of West Coast counterculture from its origins in the 1960s. The informal dress codes and working environments of such companies as Google have since become a cliché, though an increasingly pernicious one, as it becomes clear how little separation this leaves between working and nonworking life. The latest utopia, as Benjamin Naddaff-Hafrey detailed in an essay for Aeon, is the “campus” workspace, which the employee need never leave.

As tech companies have become fixated on constituting and exploiting social networks, cultural diversity and informal sociability are increasingly regarded as crucial sources of competitive advantage. The conviviality of smart devices and platforms is consistent with this ethos. If the function of informality is to erode the distinction between work and leisure, then informal rhetoric is a necessary feature of platforms that want to mediate and capitalize on all aspects of our lives, including work, family, and social life. The great promise — and threat — underpinning this is that we will never have to “take off one hat and put on another” but will have a single casual identity that is recognized in every institution we enter. When a device or platform addresses me as “William,” it is offering to support (and exploit) the identity that I carry into work, leisure, family life, and anywhere else, insisting that it be the same wherever I go. But if informal networks don’t allow the possibility of legitimate escape, they can become suffocating.

As feminist scholar and activist Jo Freeman argued in “The Tyranny of Structurelessness” in the early 1970s, a dogmatic faith in informal networks shrouds unspoken power dynamics: “When informal elites are combined with a myth of ‘structurelessness,’ there can be no attempt to put limits on the use of power. It becomes capricious.” Freeman was challenging her contemporaries in the New Left, but her article can be read as a prophecy of the new style of flexible management that would become known as post-Fordism. From the 1980s onward, workplace practices were redesigned to depend less on explicit hierarchies, in which instructions and rules were imposed on employees from above, and more on the ability of individuals and teams to adapt to clients’ demands. Work became more varied and individuals assumed greater responsibility, but only rarely with commensurately greater reward. Managerial authority became internalized within the anxious, sometimes precarious, worker. The informality of digital platforms serves this ongoing process of nudging users into relentlessly administering themselves.

If familiar modes of address help users over work-life boundaries, the way platforms pose questions further fosters a spirit of voluntarism. Totalitarian regimes have often been depicted through chilling scenes of bureaucracy run amok, with officials requesting information in dispassionate, almost inhuman tones. But tech companies have discovered that minor rhetorical adjustments can yield significant expansions in data collection, facilitating what Shoshana Zuboff has described as surveillance capitalism. Rather than ask coldly, “What is your date of birth?” platforms simply offer to help “celebrate your birthday!” Rather than demand “your full address,” they invite you to identify a certain location as “home.”

It is no wonder that data collection now far outstrips what the 20th century bureaucratic state was capable of. Often this expansion is explained merely as a matter of ubiquitous digitization — now dubbed the “internet of things” — and endlessly rising processing power. But the rhetorical turn toward conviviality has also played a critical role, allowing surveillance to be administered and experienced as a form of care.

For this reason, it’s important to reflect on how this rhetorical turn actually works to engage us. When Facebook and Twitter ask, “How are you?” or “What’s on your mind?” what is really going on? Taken literally, these questions seem to demand some sort of empirical report or fact. “What’s on your mind?” could in theory be heard as a request for specific, concrete information, just like the question “What’s your date of birth?” Contemporary neuroscience might respond to “What’s on your mind?” with a brain-scan chart.

But this would not be a normal social response. Someone who replies to “How are you?” with a data-driven answer like “7 out of 10” or “23 percent better than Thursday” would not seem to have understood the question, despite those answers being empirically more detailed than socially appropriate answers like “Fine, thanks,” or “Not bad.” In social life, thoughts and feelings are not usually represented as facts but performed in various verbal and nonverbal ways. The language of psychology, Wittgenstein claimed, could never be scientific in the manner that, say, medicine was scientific: “What’s on your mind?” is a categorically different sort of question than “What is your blood pressure?” It is primarily relational, not empirical. Such questions, Wittgenstein argued, should be considered in terms of what they do socially, not what they seek to represent scientifically.

That empty status box that greets the social media user might equally (and perhaps more literally) be accompanied by the injunction “please express yourself now.” But the way Facebook puts it — “What’s on your mind?” — tries to suggest sociality, a connection. It is an attempt to make the question actually convey “I care about you” or “Just be yourself.”

Sociologists, following the early 20th century work of Max Weber, sometimes assume the world is becoming increasingly “disenchanted” by a scientific, bureaucratic logic that privileges quantities over qualities, calculation over feeling. The vast new calculative capacities of data analytics seem to confirm this view that everything is ultimately measurable. But this overlooks how platforms strive to sustain convivial codes and conventions of self-expression while making numerical calculations retreat from view. One of the central questions of post-Fordism is how to weld together the quantitative mechanics of business with the emotional enthusiasm that produces engaged employees and satisfied customers. Since Weber’s day, sociologists like Eva Illouz have looked at how capitalism has come to employ more emotional tactics to regulate human behavior through advertising and cultural cues. Arlie Hochschild’s classic 1983 work, The Managed Heart, looked at how flight attendants use friendliness and care as part of their work. Platform conviviality plays a similar role.

Unlike psychiatrists and clinical psychologists, with their expert yet clunky affect scales, a digital platform that asks “How are you feeling?” specifically doesn’t want a number by way of response. The convivial approach is a means of getting around our defenses, to get at data that might be sold as more accurate and more revealing. In that respect, questions such as “How are you?” perform a methodological function analogous to the one-way mirror used to observe focus groups. To users interacting in real time, the question sounds like an opportunity for dialogue, just as Wittgenstein argued. But to the owner and controller of the platform, it generates data — perhaps not of the brain-scan variety but still of a sort that can be studied, analyzed, and evaluated. When we express how we are, platforms hear this as a statement of what we are.


Despite the concern about Big Data and the “quantified self,” it bears remembering that for the majority of us, our orientation toward the world is becoming less empirical, not more. We have less need to be preoccupied with details: We no longer need to know how to get to a restaurant but merely how to have a conversation with Google Maps or Yelp — platforms that are already deeply familiar with us, our habits, and our tastes. We express a desire for a given experience — in this case, a meal — but we no longer need to develop our own rational approach to accomplishing it.

Without an empirical, outside view of the logistics it takes to procure our meal, we are less likely to be able to provide a critical evaluation of it afterward. Instead, in keeping with the on-demand promises of apps, we are more likely to express how we’re feeling as we eat it or to share a photo of it in real time. The user is becoming submerged in the constant ebbs and flows of experience, expressing feelings as they go, but scarcely worrying about the facts and figures.

Likewise, when social media offer nonverbal means of responding to their questions about how we feel — memes, emojis, emoticons, Facebook reactions, reaction GIFs, etc. — they keep us closer to immediacy, to real time. They are an efficient, impulsive alternative to the old standards of customer feedback, foreclosing on the time in which a user developed critical distance and a more deliberate response.

Social media’s new forms of emotional language can save the user from having to find a more objective or dispassionate perspective. They work similarly to mood-tracking apps like Moodnotes and Gottafeeling, which randomly and colloquially interrupt users (“Just checking in, how are you feeling?”) in hopes of getting spontaneous data on their emotions. Such methods are leaking from digital spaces into cafes, restrooms, and waiting areas where we can press a smiley, a neutral, or a frowning-face button to log feelings about our “experience” as it is happening. The government of Dubai is rolling out such physical interfaces across the city, creating what it calls “the world’s first, city-wide, live sentiment capture engine.”

This is wholly unlike post hoc numerical evaluations, such as customer satisfaction surveys. With “sentiment-capture engines,” an experience does not garner evaluative feedback after the fact but is instead “fed forward” (to use Mark Hansen’s suggestive phrase) for future analysis. This points to a clear divide between two different types of social and commercial knowledge: one views individuals as trusted reporters and critics of an objective reality; the other treats them as leaving a data trail of subjective feelings, which becomes the objective reality that only machines can grapple with.

The second kind of data is integral to businesses that trade in “moments,” whether they are algorithmically driven social media or any of the other companies that hope to operate in the “experience economy,” selling real-time feelings and mood adjustment as the product itself. And it is not merely companies that want this data. Academics have gotten in on it as well, with the rise of “digital methods” in social research, such as data mining Twitter’s public APIs. The scale and secrecy that surround much large-scale corporate data analytics represent a major threat to the public vocation of social research; this “crisis of empirical sociology,” as it has been dubbed, will be exacerbated as more academic researchers are drawn to the private sector, either for financial reasons or because they are attracted by the unprecedented quantities of data that platforms have to offer. Companies like Facebook have been courting data scientists for some time.

With the rise of sentiment capture, the users doing and feeling things, and the analysts processing what those users do and feel, increasingly dwell in different worlds, with diminishing overlap or friction between the two. Wittgenstein wrote that “every game has not only rules but also a point.” Platforms are able to express one point for their users, which is convivial, and another point for their owners, which is empirical. On one side, the sharing and expression of experience is, as Wittgenstein described, a relational phenomenon completely understood only by those who participate in it. On the other, it is an empirical phenomenon known only to the person — or algorithmic interpretive system — who does not participate in it.

The conviviality of the focus group is achieved through comfortable chairs and maybe alcohol. As the mood in the group becomes lighter, more sociable, it generates ever greater insights for those who are watching. But what’s most interesting about this methodology is this: The more decisively the mirror divides observer from observed, the more seemingly authentic is the knowledge that results. Digital platforms, likewise, produce this sharp divide, extending what focus-group marketers (and behavioral scientists) began but 20th century bureaucracies, typically operating by a panoptic logic of enforcing discipline through overt surveillance, largely missed.

One of the defining features of traditional bureaucracies, as Weber saw it, was that they seek to monopolize the information they accrue to secure their power and authority. In the early years of the 21st century, there was some hype emanating from business schools about a “post-bureaucratic” age, in which “open data” platforms would release government data to the public, granting them a view inside administrative functions. New forms of accountability would arise, thanks to the radical transparency made possible by digitization. The idea exerted particular sway over David Cameron’s U.K. government from 2010 onward, resulting in a wide-ranging “open data” initiative meant to transfer power from civil servants to citizens.

This optimistic vision rested on the assumption that individuals — especially when acting as citizens — have a primarily empirical orientation toward the world. It assumed that people want to know what is going on, that they want data about performance, that they demand the numbers from inside the belly of the beast.

For those who do adopt this stance — because they are investigative journalists or activists or professional skeptics — this post-bureaucratic turn indeed represents new possibilities for transparency. But for most of us, the era of platform-based surveillance represents a marked decrease in transparency, when compared with 20th century state bureaucracy.

The grammar of the old bureaucracy is transparent — “Tell me your full name” — even if the records are not. You know what it wants to know. The convivial alternative — “Hey, William, what’s going on?” — represents a new opacity, where everything feels relational and immediate but becomes the object of knowledge for someone else or something else. In the post-bureaucracy, we don’t know what they want to know, or when we’ve finally told them everything.

William Davies is author of The Happiness Industry: How the Government and Big Business Sold Us Well-Being (Verso, 2015). He blogs at www.potlatch.org.uk.