Safety in Numbers

Facebook’s Safety Check prompts us to relate to global crises mainly on individualistic terms

In early February 2017, Kellyanne Conway appeared on MSNBC’s Hardball With Chris Matthews to claim that the mainstream media had neglected to provide proper coverage of the Bowling Green Massacre, an attack on the Kentucky city that never actually happened. Conway later said that she misspoke in trying to refer to the arrest of two Iraqi men in 2011, but her admission couldn’t deter the deluge of memes, tweets, and Photoshopped pictures of plaques commemorating the nonevent. On Facebook, users checked into Bowling Green and updated their statuses to indicate that they were “marked safe” from the massacre, a tongue-in-cheek nod to Facebook’s Safety Check feature, whose official use was initially triggered by devastating real-world crises.

Safety Check was developed to allow Facebook users to notify friends that they were unharmed during times of emergency or disaster. The feature was rightly well received for providing another means of communicating in places or at times when other modes of connection were unavailable or unreliable. However, by explicitly and institutionally entering into life-and-death matters, Facebook takes on new responsibilities for responding to them appropriately. Safety Check has not merely provided relief to anxious users while raising awareness and directing aid toward tragedies; it has also raised more complicated issues. The system has been subject to bad-faith usage, proving that tech companies still struggle to fully understand their human user base, and it is inextricably bound up with Facebook’s nonhumanitarian objectives. The complicated responses to Safety Check have forced Facebook to reckon with its potential role in perpetuating and reinforcing global inequities.

Safety Check demonstrates social media companies’ naked ambition to seamlessly integrate themselves into people’s lives, using crises as pretexts, regardless of their tenor or severity

Developed after Facebook engineers observed how social media was used after the 2011 earthquake and tsunami in Japan, Safety Check launched globally in October 2014. Initially it was to be deployed exclusively in the event of natural disasters, and only the company itself could activate it. This first occurred in December 2014, following Typhoon Ruby’s catastrophic strike in the Philippines, and the service gained major global traction following the April 2015 earthquakes in Nepal.

After the coordinated ISIS attacks in Paris in November 2015, Facebook decided to deploy Safety Check for a non-natural disaster. Almost immediately, the company received criticism for what was perceived to be a Western bias in its recognition of tragedy. The day before the Paris attacks, ISIS bomb blasts in Beirut left 43 dead, but Safety Check had not been activated. This suggested to some critics that the company valued some lives — those of Westerners — more than others. In response, Alex Schultz, Facebook’s vice president of growth, assured users that the company would make Safety Check “better and more useful” in times of crisis.

As part of this effort to improve the feature, Facebook developed Community Help, on the principle that some form of action and assistance could follow from Safety Check’s indexing of where and how people respond to crisis. After marking themselves safe, users can post to offer or request shelter, food, or housing, mimicking the ad hoc efforts that independent groups have made in response to previous disasters and crises. It’s a markedly efficient way to organize and distribute material aid to those who need it most, and it points to the real value-add of Safety Check: its ability to reach a relatively wide audience in a relatively short period of time with minimal effort. Safety Check also connected with Facebook’s potential for encouraging and supporting philanthropy. Within a few hours of activating Safety Check for the Nepal earthquake, Facebook promised to match up to $2 million in donations to the International Medical Corps through its fundraising widget. Crisis-mapping efforts made use of social media posts to monitor locations and localize relief efforts, demonstrating that technology-driven approaches can be useful when directed at specific outcomes.

At the same time, though, Facebook also tried to become “better and more useful” by backing away from full responsibility for when Safety Check is triggered. In November 2016, the company made activation a matter of users’ behavior on a case-by-case basis, taking the decision out of employees’ hands. In its current iteration, the Safety Check deployment process begins when Facebook receives alerts from the global crisis-reporting agencies iJET and NC4. (The criteria iJET and NC4 use to identify a global crisis, a Facebook employee stated via email, were proprietary information that could not be shared.) Following the initial alert, Facebook scans for posts about the incident using keyword searches and geolocation data. Once a certain threshold of people posting about the incident is passed, Facebook activates Safety Check. This is meant to eliminate any sense of false urgency and to qualify the severity of an event through data metrics: If enough local users are discussing the event, the logic goes, then it’s worth activating a tool to help them signal their safety.
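In rough schematic terms, the process Facebook describes amounts to a simple threshold check layered on top of keyword and location matching. The sketch below is a hypothetical illustration of that logic, not Facebook’s actual code; aside from the agency names already mentioned, every class, function, and threshold here is an invented assumption.

```python
# Hypothetical sketch of the activation flow described above.
# The data structures, threshold, and matching logic are illustrative
# assumptions, not Facebook's actual implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class CrisisAlert:
    """An incident report from a crisis-reporting agency (e.g. iJET or NC4)."""
    keywords: List[str]   # terms associated with the incident
    region: str           # affected area named in the alert


@dataclass
class Post:
    """A user post, with a region inferred from geolocation data."""
    text: str
    region: str


def should_activate(alert: CrisisAlert, recent_posts: List[Post],
                    threshold: int = 1000) -> bool:
    """Return True if enough local users are posting about the incident."""
    matching = sum(
        1
        for post in recent_posts
        if post.region == alert.region
        and any(kw.lower() in post.text.lower() for kw in alert.keywords)
    )
    return matching >= threshold
```

On a model like this, the judgment call Facebook once made internally is replaced by whatever value the threshold takes, which helps explain how the feature’s activation count can balloon without any corresponding rise in genuine crises.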

However, Facebook’s multiple iterations of Safety Check highlight the complexity of relying on the “social” in social media technology. Since Facebook shifted to its quantitative, community-based approach, Safety Check has been activated 335 times; in the two previous years, when Facebook initiated the response according to different internal criteria, it was deployed only 39 times. This doesn’t reflect an uptick in crises but instead indicates how markedly different the thresholds of Facebook’s community-driven conditions turn out to be.

One potential result of this is that unnecessary panic and paranoia can be stoked by what are ultimately much-discussed but minor inconveniences. At the same time that Safety Check and other technology-oriented solutions like Google Person Finder assuage people’s fears that their loved ones could be dead or missing, they also risk fomenting disproportionate fear in more stable conditions. The community-driven notifications built into Safety Check deployment can spread alarm about an incident well beyond the actual danger it poses. When an LIRR train derailed at Brooklyn’s Atlantic Terminal in January, for instance, Facebook prompted users to mark themselves safe in the aftermath even though the incident caused only minor injuries and no deaths. While some users were probably just relieved to know that their friends and loved ones were safe, others may have been thrown into a needless state of fear and concern, raising questions as to whether the feature’s efficiency can also cause harm.

Safety Check recontextualizes any event as mainly of personal interest, or not, over any broader sociopolitical ramifications — how they occurred, who was most affected, and who received aid

Safety Check’s efficiency, regardless of the severity of a crisis, is helping make the feature appear indispensable, which ultimately benefits Facebook’s bottom line. Increased engagement on the platform, whether or not it occurs in the midst of an earthquake or wildfire, means increased profit for Facebook, a company whose market value exceeds $300 billion. Praise for its social initiatives should be tempered with at least some degree of skepticism, particularly when those initiatives are contingent on situations of precarity and uneven development. From this point of view, Safety Check demonstrates social media companies’ naked ambition to seamlessly and inextricably integrate themselves and their associated commercial incentives into people’s lives, using crises as pretexts, regardless of their tenor or severity.

Capitalizing on fear spurred by a lack of credible information is essentially the same strategy that Kellyanne Conway deployed. When she invoked the false crisis of terrorists at Bowling Green, she was trying to call into question the mainstream media’s, and by extension America’s, values while distracting people from the injustice of the president’s travel ban. Why weren’t we more concerned for the civilians of Bowling Green? Safety Check’s handling of global catastrophe allows for similar subterfuges of misdirected moral outrage: When earthquakes struck Nepal again in May 2015, an apparent glitch in Facebook’s system gave users as far away as the United Kingdom the option to mark themselves safe. American and UK media outlets rushed to decry the tastelessness of those who elected to mark themselves safe though they were nowhere near the disaster, with BuzzFeed compiling a list of outraged users threatening to unfriend the “sick” people who “disrespected” the tragedy in Nepal.

Regardless of whether these users were disrespectful, it might be more important to ask why Facebook’s algorithms encouraged them to “check in.” Are safety notifications beholden to the same algorithmic systems that curate Facebook’s news feed and target advertisements, page suggestions, and friend suggestions according to the company’s assessment of users’ interests? Should they be? To what degree are these systems intermeshed, and how does participation in Safety Check affect the way Facebook and third parties catalog our lives?

Safety Check maps crises as a series of visible but inherently isolated events. It incorporates them into users’ feeds not according to their social or national or humanitarian significance but by the individualized context supplied by a particular user’s network of friends and acquaintances. This recontextualizes any event as mainly of personal interest, or not — it is rendered more immediate or more relevant on the basis of personal connections rather than any other gauge of its significance. This kind of social organization, in which tragedies are made to seem to strike individuals rather than groups, can implicitly discourage consideration of the broader sociopolitical ramifications of an incident — how it occurred, who was vulnerable, who was most affected, and who received aid and attention. It impedes a broader understanding of how tragedies can be politicized.

If Facebook’s algorithmic sorting increasingly polarizes users, it would also slant their response to events they learn about through the site, even those that would seem to affect everyone and require nonpartisan action. The Safety Check model, in theory, allows individuals to effectively choose not only which events but also whom to care about — and have the rest filtered out algorithmically. This raises the possibility not only of fostering apathy but of making explicit, on a broad scale, something closer to contempt.

Who really cares whether you are safe or not? Facebook currently saves you from asking that question and assumes most people in your social network do. How much does it matter to you if a person from your hometown you haven’t spoken to in years has marked herself safe while on a hiking trip in Thailand? What if future iterations of Safety Check allow us to curate whose safety updates we’d want to subscribe to? How devastating would it be to shut that feature off on a person and make one’s ultimate indifference explicit? I don’t care if you live or die.


Shortly after the attacks in Paris, a poem by blogger Karuna Ezara Parikh went viral. It decried the apathy of “a world in which Beirut, reeling from bombings two days before Paris, / is not covered in the press.” The popularity of the poem, which also asks readers to pray “For a world that is falling apart in all corners, / and not simply in the towers and cafes we find so similar,” indicates the degree to which people have conflated their algorithmically sorted newsfeeds with the larger world.

The algorithms that made “fake news” possible allow us to evade the conscious moral choice to care about only the things we’ve already deemed relevant. Facebook just assumes we feel that way. Similarly, Safety Check combines with Facebook’s algorithmic platform to potentially produce an individually tailored worldview about whose safety is truly significant. Its community-driven notifications and its potentially beneficial Community Help platform nonetheless allow Facebook to absolve itself of responsibility for evaluating the severity of crises, much as it has attempted to absolve itself of responsibility for the deceptive and inaccurate content its algorithms help distribute.

The impact Safety Check has on the lives of Facebook users is potentially enormous, yet like the rest of the company’s services, it is structured as if convenience and practicality were the governing factors — as if the more nebulous and inefficient feelings of anxiety, concern, disgust, and indifference never entered into our social behavior. Facebook’s dogged efforts to augment and expedite our relationships with one another and with the world may be well-intentioned, but they draw on oversimplified notions of our incentives and are ultimately indifferent to anything but individual behavior. For all its community aspirations, Facebook still misunderstands the nature of the “social” it wants to impose on us all.

Tausif Noor is a freelance writer in London.