Here to Help

Social media platforms have been proactive in suicide prevention, which suggests the vast power they hold over life and death

The datafication of our lives is here: Phones, “smart” technologies, and other forms of information capture turn our behavior into data with increasing proficiency, with and without our explicit permission, codifying everyday life in systematic ways that both reflect and shape social dynamics. In this way, data train algorithms that in turn train us.

Since tech companies own the data they collect and face few regulatory constraints, they can more or less do what they want with it, and the largest companies enjoy virtual monopoly status that minimizes their accountability. Because of the ubiquity of these companies and their centrality to social life, they have evolved into hybrid business entities and civic institutions. The concerns around corporate data practices have accordingly moved beyond matters of strictly personal privacy. Our individual willingness to generate data through social media participation matters less than our collective, compulsory datafication.

Our status as datafied subjects in overlapping state, commercial, institutional, and corporate databases points to an emerging structure of data governance, in which algorithms model our behavior and constantly recalculate who they think we are and what sorts of permissions, exclusions, and opportunities should be extended to us.

How are we to have a voice in this governance? What expectations and limits should be placed on algorithmic control? Are the corporations that collect and process so much of our data equipped to address the political questions that arise? How can we organize to establish and promote alternative agendas?

What’s necessary to address such questions is not just data policies or better personal privacy hygiene but a data politics. To illustrate the stakes of such a politics, we can look at a concrete example of data governance: Facebook’s newly introduced suicide prevention protocol — an AI-driven feature that uses pattern recognition to identify suicidal users and immediately link them to mental health resources.

The topic of suicide has deep roots in the history of sociology. Emile Durkheim famously wrote about suicide in the 19th century, demonstrating that even the most intimate of personal acts are tied to social forces. (Hence the need for the discipline of sociology.) Traditionally, those forces have included such social institutions as the family, the government, schools, and organized religion. These institutions provide the densely woven patterns that structure and justify individual behavior, grounding the norms that often become invisible in their seeming common sense.

Today, social media must be included among these institutions. When corporations build infrastructures that support personal, social, and professional life, those companies come to bear the power and responsibility that institutional standing entails. How these companies, so central to social life, respond to a problem like suicide gives a glimpse into the changing social order of a datafied era.

Mental health and self-harm are not new issues for large social media companies. Up to now, they have tended to approach these issues by facilitating peer-to-peer user support and directing users to outside resources. Reddit, for example, hosts a SuicideWatch board on which community members listen to and advise one another, and Tumblr has #PostitForward, a mental health and wellness channel aimed at ending stigma through users’ shared stories and positive, validating interactions. Instagram has a special section within its help pages dedicated to advice about what to do if you see a concerning post, with links to resources for people experiencing mental health troubles themselves. In Australia, Twitter is rolling out a new partnership with the mental health organization ReachOut, meant to help users cope with the disturbing content that can barrage news streams. Interventions across these sites include information about and links to external sources for mental health care, but they don’t generally involve direct action by platform employees.

But what about algorithms? With its vast user base and deep social permeation, Facebook has enormous administrative capabilities that far surpass those of other platforms, and its data trove teems with potential insight about the condition of the public body. That trove is usually thought of in terms of ad and content targeting, but the kinds of stories that extracted data tell extend far beyond mathematical predictions about what users are likely to buy or click on. The data that Facebook collects can also unearth bullying, bigotry, religious extremism, and intimate-partner violence. And it can attempt to predict users’ proclivity for self-harm or suicide.

People share their lives on Facebook, and that includes expressions of mental anguish. While it would be nice to think that such expressions will always be met with unstinting support and empathy from friends and loved ones, recent history indicates that our networks might not be so reliable. Social media users have a shaky track record when it comes to helping one another; a version of the bystander effect at times seems to come into play. Livestreams have hosted multiple deaths in which viewers were unable to intercede. Just last month, 40 people shared but did not report the livestreamed rape of a 15-year-old girl. Inaction around these instances of self-harm and sexual violence makes a strong case for systematized, data-driven intervention.

That is what distinguishes Facebook’s new suicide prevention tool, which builds suicide monitoring directly into the platform at the level of code rather than depending solely on users flagging posts and reporting concerns. Its machine-learning algorithms train themselves to recognize certain key words and patterns of interaction as indicative of emotional distress, triggering both an automated and a personal response. It is a technology that exhibits both care and control, stewardship and intrusion.
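
To make that mechanism concrete, here is a minimal sketch of how pattern-based flagging could work in principle. Everything in it is an illustrative assumption: the phrases, weights, threshold, and function names are invented for the example, while the system described above trains machine-learning models on far richer interaction signals than a fixed keyword list.

```python
# Toy sketch of pattern-based distress flagging. Illustrative only; not
# Facebook's model. A real system would use trained classifiers over
# interaction patterns, not a hand-written keyword list.

import re

# Hypothetical phrases treated as weak signals of distress (assumed weights).
DISTRESS_PATTERNS = {
    r"\bI (can.?t|cannot) go on\b": 0.6,
    r"\bno reason to live\b": 0.8,
    r"\bgoodbye everyone\b": 0.4,
    r"\bI want to disappear\b": 0.5,
}

FLAG_THRESHOLD = 0.7  # assumed cutoff for routing a post to review


def distress_score(post_text: str) -> float:
    """Sum the weights of any distress patterns found in the post, capped at 1.0."""
    score = sum(
        weight
        for pattern, weight in DISTRESS_PATTERNS.items()
        if re.search(pattern, post_text, flags=re.IGNORECASE)
    )
    return min(score, 1.0)


def route_post(post_text: str) -> str:
    """Return the action a platform might take for a given post."""
    if distress_score(post_text) >= FLAG_THRESHOLD:
        # In a real deployment, this branch is where an automated prompt with
        # mental-health resources and/or human review would be triggered.
        return "flag_for_review"
    return "no_action"


if __name__ == "__main__":
    print(route_post("Had a great day at the beach!"))          # no_action
    print(route_post("I feel like there's no reason to live"))  # flag_for_review
```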

The way these facets are intertwined suggests the complexity of the politics of data governance. To engage data politics effectively, we must not only be critical but also constructive. So rather than immediately condemn the privatization of public health efforts, it is important to acknowledge what Facebook gets right. By instituting protocols that address the problem of self-harm, Facebook demonstrates its willingness to assume responsibility for the safety of the de facto civic space it maintains. With Facebook’s capacity and propensity to collect data, it would seem deeply irresponsible if the company didn’t enact data-based safety measures against self-harm. The tool was developed not out of preemptive paternalism, but in response to a clear problem developing on the platform, and if it prevents even just one suicide, the entire apparatus may well be justified. Facebook’s willingness to allocate resources to suicide prevention is an unambiguously positive step with regard to corporate social responsibility and will most certainly help some people experiencing serious pain.

Given its size, Facebook must be regarded as a bellwether for corporate responsibility with big data. CEO Mark Zuckerberg seems to recognize this and has taken to delivering polished speeches about the need to “develop social infrastructures” geared toward “supporting,” “protecting,” “informing,” and “engaging” communities in meaningful ways. Similar themes predominate in his recent “Building Global Community” manifesto, which reads less like a corporate prospectus than a political platform.

In the New York Times, Farhad Manjoo depicts Zuckerberg as struggling to reconcile new statesman-like ambitions with his engineering roots. But Zuckerberg need not become a politician, participate in any electoral process, or sponsor any legislation to indulge his political drives. Facebook can shape society through its data practices and the application of its engineering prowess. Governance, for Facebook, is a matter of algorithms and code.

Facebook’s suicide prevention feature serves a clear purpose and clearly serves a digitally connected public. But a thoughtful data politics also demands we recognize an ontological tension in datafication: Data represent thinking, feeling, active subjects within digital systems, and they also serve in aggregate as a lucrative, alienable financial product. This warrants close interrogation of how a company’s design features and policy protocols address that tension and the competing incentives that stem from it. Such close attention will inevitably reveal unintended consequences of corporate initiatives that companies — and users — may not have imagined.

Facebook could have the most benevolent intentions in its suicide prevention protocols, but the company’s altruism is inextricably bound up with its business model. The same mechanism — data collection and analysis — is at once its means of responsible, responsive governance and a highly effective technique of capitalist exploitation. And initiatives like the suicide prevention tool help rationalize the company’s data collection as a matter of care rather than profit.

This coincides with how Facebook generally tries to represent itself — as connecting the world to make it a better place, not surveilling users in order to subject them to ever more exacting conventions of control and monetization. The suicide prevention feature normalizes the company’s aggressive personal data monitoring practices, regardless of purpose. To support the capacity to intervene in rare and extreme cases of self-harm, Facebook needs to collect everyone’s data, all the time.

Emotional surveillance thus becomes compulsory and translates into an additional cost of social (media) participation: Facebook users trade their feelings for access to the platform, and Facebook personnel may intercede at any moment to express alarm. This blurring of care and invasiveness mimics how Facebook has justified earlier changes to its design and terms of service, like its “real name” policy, which unites a concern for personal “integrity,” “authentic connection,” and harassment prevention with the company’s desire for data that can be reliably linked with offline behavior.

The algorithms that work to prevent suicide also classify users’ emotions for marketing and other commercial purposes, as shown by an in-house Facebook study demonstrating the capacity to identify “moments when young people need a confidence boost” and connect that data to advertisers’ content. The algorithmically deduced propensity for self-harm thus becomes another data point in a reputational profile that may not only flood at-risk users with commercial offers for mental health services, but single them out in their perceived vulnerability for intrusive, deceptive, and/or manipulative marketing techniques.

Once Facebook intervenes on a particular user, that person essentially takes on the label of “troubled” or “mentally unwell.” In this sense, the algorithm not only indicates one’s pre-existing mental state but can generate mental health outcomes. Social scientists know that mental-illness labels can have devastating effects on the sense of self, as people internalize the negative meanings associated with labels and experience stigma at the hands of others. For instance, being labeled mentally ill can result in fewer job prospects and lower work evaluations, dismissal of one’s concerns by medical professionals, decreased self-esteem and sense of efficacy, and general experiences of social exclusion from both strangers and intimates. Interventions that demarcate a person as unwell could thus cause psychological distress as well as alleviate it.

The compulsory submission to monitoring on Facebook may also dissuade users from expressing themselves during difficult times. Under Facebook’s watchful eye, people may be careful to maintain affective neutrality to avoid an algorithmic “mark” and intervention from the Facebook team. As Amanda Hess pointed out in a 2015 article at Slate, “For people who use social media to reach out when lonely or depressed, getting caught by a robot can end up feeling more alienating than supportive.” Rather than open channels of communication, those in distress may elect instead to keep their feelings to themselves.

Among social networks, systematized intervention could bring about complacency. When people notice that their friends are in trouble, they may remain quiet, secure in the belief that the system will provide the necessary care. They may even decide that they lack the expertise to make such an assessment, questioning their subjective experience as a valid barometer of what a friend needs. The danger here is a shift in dependence from nuanced affective connections to impersonal expert systems that discredit gut feelings and intuitions in favor of predictive quantifications.

Of course, these troubling elements of Facebook’s suicide prevention feature don’t indicate that the feature — or the company — is entirely pernicious, any more than the benefits of the tool give Facebook carte blanche to quantify, classify, and intervene in how we express our feelings. Rather, they act as critical points of interest in the construction of a politics of data governance, with issues that go beyond any one platform. Which stories should data-collecting companies be permitted to tell about us? What stories are they telling about us behind our backs? Are there stories we should insist upon? Stories we should resist?

The capacity to collect and analyze data with robust breadth and sharp precision has arrived, and computational capacities will continue to develop, fostering powerful roles for large social media companies and the people who run them. These dynamics are rife with contradictions and antagonisms, pushing and pulling. The antagonisms within data governance needn’t be resolved — they may indeed be irresolvable, like so many other political questions — but they must be revealed in order to guide how we navigate the messy amalgamation of competing values around stewardship, privacy, autonomy, and care. The politics of social media design go beyond the personal and have ramifications for how we will all get along, not only with companies like Facebook, but with one another.

Jenny L. Davis is a lecturer in the School of Sociology at the Australian National University and co-editor of Cyborgology.