The Safety Dance

Automated tools that try to calculate “brand safety” reproduce the whiteness of mainstream content

In 2018, Scarlett London posted an ad for Listerine on her Instagram. Fully made up and dressed in hot-pink pajamas, she posed with heart-shaped balloons and, confusingly, a plate of tortillas styled as pancakes, balanced atop a duvet with her face printed on it. Though admittedly excessive, the image is pretty typical of everyday glamour shots and home-spun sponsorships on Instagram. But this particular post was picked up and mocked widely on Twitter and Reddit, and London faced a litany of abuse and death threats. The reaction also harmed London’s livelihood: the backlash caused the post to be flagged by software that brands use to assess the riskiness of the influencers they work with, and London’s “brand safety” score ticked down in this algorithmic influencer-management tool.

Brand safety is supposed to mean the avoidance of extremist and offensive content. In practice it is underpinned by a conservative morality: an avoidance of sexuality or nudity, profanity, or anti-social behavior. When, in March 2019, a promotional email from a brand-safety software vendor purported to identify the “least brand-safe influencers of the month,” a mommy blogger named Sarah Barnes turned up among the usual suspects of prank and gaming YouTubers. Barnes had fewer than 8,000 subscribers, and her channel contained a very benign selection of pavlova recipes, school uniform hauls, and videos of her family fruit picking. When I asked the vendor why Barnes was on the list, they replied that “Sarah hasn’t posted on her YouTube account for some time now so as we’re not able to monitor her activity closely on that channel as well as on Instagram we will suggest some caution (although on first glance she seems squeaky clean!)” But identifying Barnes as one of the least brand-safe influencers of the month is hardly just “some caution.” Her reputation and income were likely harmed by an influencer-ranking system about which she had little awareness or recourse.

Influencer marketing is a form of cultural production that is growing fast. In 2020, marketing spend on influencers increased 73 percent. This is partly because influencers are perfect for producing advertorials in a pandemic: They are experts at weaving together compelling content alone, from their own domestic spaces, often doing their own styling, filming, and editing. Influencers — often women and often young — are those who successfully blend documentation of their ostensibly “real” lives with advertorials and brand sponsorships via social media platforms. They work in a highly competitive attention economy, and while a tiny A-list earns millions each year, a much longer tail of aspirational creators is lucky to earn pennies. This distribution is shaped by existing forms of cultural discrimination — Black influencers, for instance, are systematically under-represented and are paid less — though this may be obscured by the intimate and personal nature of influencer work. Discrimination can be concealed in brands’ subjective hiring decisions, which hinge on nebulous ideas (if not alibis) like brand fit and brand safety.

Because influencing is a “pink ghetto” profession, those studying social media have often failed to tease out its political economic implications. It is often overlooked that proto-influencers originated many of the practices that now underpin social media’s hugely lucrative advertising economy. For example, mommy blogger networks developed sophisticated advertising infrastructures that predate Google’s purchase of DoubleClick in 2007. Beauty and fashion bloggers developed hyperlinked forms of in-text advertorial that heralded a valuable affiliate marketing economy, foreshadowing shoppable Instagram links by more than a decade. Influencers thus helped create the foundation not only for how we experience advertising on social media platforms but also for how we live our lives and work. In particular, the style of compulsory online authenticity and passionate work that animates influencer cultures now permeates the workplace and job search in general. This authenticity is a specific style of self-branding that hinges on an impossibly consistent self-presentation, a spectacularly good self. Journalists, for example, are expected to build personal brands that consistently display wit and expertise between bylines. Dunkin’ Donuts, GameStop, and Walmart now explicitly groom their employees into becoming social media stars, incentivizing TikToks that showcase the “fun, behind the scenes” side of their jobs.

The circumstances that affect influencers and their “brand safety” can be seen as harbingers of the kinds of scenarios that could affect us all. The emerging tools that help manage “brand safety” risks discriminate against influencers in a multitude of ways, but these technologies and techniques are already being employed more widely in recruitment and beyond. As brands scrutinize influencers with AI, employers are approaching hiring in similar and similarly biased ways: Amazon, for example, had to scrap its recruitment tool because it favored male applicants over women, and tools that allow parents to scrutinize babysitters’ social media are more likely to flag Black women as “disrespectful.”

Influencers may be seen as the canary in the coalmine for a growing spectrum of employment and employee-management practices. And the automated tools that monitor and assess them, that try to predict their “scandals,” present a picture of the sort of opaque and unbounded scrutiny we may all face, never knowing when some seemingly ordinary aspect of our lives will be turned into an unaccountable mark against us.


Given that an influencer’s job is to marshal attention, it should be no surprise that many of them find themselves embroiled in scandals. This may be by apparent design, as with YouTuber Logan Paul’s 2018 trip to Aokigahara, Japan’s so-called “suicide forest.” Or “TanaCon,” a Fyre Festival-like fiasco that YouTuber Tana Mongeau created after a top YouTube convention wouldn’t give her “A List” Featured Creator status.

But not every influencer scandal is a calculated bid for attention. An influencer’s value often depends on consistent lifestyle messaging, which is often underpinned by a chaste kind of morality: if an influencer’s self-brand hinges on not drinking or on particular dietary choices, those choices are expected to be upheld, and scandals occur when such messaging is undermined by an inadvertent display of hypocrisy. When the vegan vlogger Rawvana was caught in another influencer’s video eating fish, she faced a swift community backlash described by the Cut as “a total personal branding meltdown.” Other scandals have derived from supposedly spontaneous happenings being exposed as marketing events, as when the “surprise engagement wedding” scavenger hunt conducted by @fashionambitionist was ultimately revealed to be a “meticulously planned marketing stunt.” This was scandalous not because it revealed an influencer’s commercial orientation — sponsorship is part of how influencers establish their legitimacy — but because of the breach of relatability. As Crystal Abidin points out, even influencers peddling glamour must work to create an impression of humility and of the ordinariness of their everyday lives through strategically managed peeks into the backstage — what she calls “porous authenticity.”

Some influencer scandals are no more than moral panics. Influencers are accused of pandering to or corrupting their audiences through overt displays of sexuality (ostensibly leaked nudes), greed (releasing exploitative merchandise), or attention-grabbing pranks and stunts. British beauty vlogger Zoella’s YouTube channel has been attacked for creating a “pandemic of insecurity” among girls through its emphasis on cosmetics, and her ghostwritten book’s popularity has been attributed to “declining teenage literacy rates.” Sometimes the “scandal” is a sponsor backlash against influencers trying to wield political influence. In 2017, L’Oréal dropped Munroe Bergdorf, a Black transgender influencer and activist, for publishing a long Facebook post on structural racism in which she commented that “all white people are racist.” (Contrast this with the mommy bloggers who incorporate QAnon messaging: the Atlantic describes them as “the women making conspiracy theories beautiful.”) Theories about politicians’ and celebrities’ roles in pedophile rings are nestled within images of picture-perfect families posed in glamorous interiors, augmented with vague captions about preventing child trafficking. These dog whistles blend in effectively with the pastel hues of Instagram, and brand-safety tools rarely flag them as scandalous.

Brands have, of course, long paid celebrities for endorsements, hoping some of their aura will rub off on products. But this can work both ways: if a celebrity endorser misbehaves or speaks out on contentious issues, it reflects on the products as well. This is the premise behind “brand safety.” Standard spokesperson contracts usually feature a morality clause that allows advertisers to terminate relationships if the “talent” commits any act that causes “public disrepute, contempt, scandal, or ridicule, or which shocks or offends the community or any group or class thereof.” For celebrities, the degree to which their behavior was offensive or shocking could only be tracked somewhat indirectly, but for influencers, the impact of a scandal can be measured directly, in, for example, the loss of followers.

Unlike pre-social-media celebrities, who often operated under the auspices of larger media companies that ultimately vetted their behavior, influencers are typically under a different sort of management, working through media platforms and negotiating their own advertising deals. They often lack institutional support, an established code of conduct, or the sorts of industry-wide organizations that could develop one. Attempts at organizing influencers have fallen short: The Internet Creators Guild, which launched in 2016, offered the closest thing to a (notably non-union) successful model, but it was shuttered three years later because big creators did not want to pay the membership fees or publicly share their rates, which would have made industry standards and inequalities more visible. Ultimately, the ICG (and later attempts at unionization) struggled to connect with influencers, who tend to work alone, often in isolation, geographically distributed. Isolation and distribution have also plagued organizing efforts for gig workers, such as those contracted by ride-sharing and food-delivery apps. Yet progress is more common in those industries, which are wider-reaching and not subject to celebrity attention economies. Influencers are trying to win attention within bitterly competitive markets but lack the professional management that could help them navigate the traps, pitfalls, and cutthroat practices that come with a career in the public spotlight. So no one knows what an influencer might do next.

But “risk” in this context largely functions as a euphemism for attributes that lie outside a very specific norm. Non-risky influencers — who are hired more frequently, and paid more — tend to be white, beautiful, and heterosexual, with long-term boyfriends whom they will eventually become engaged to, marry, and have children with. Then, of course, they become mommy bloggers. This is the expected lifecycle of the influencer; the A-list beauty vloggers of the mid-2010s have gracefully aged out of beauty and fashion verticals, posting less and dropping followers. They have been replaced with savvy TikTok ingénues like the D’Amelio sisters, who have inherited their million-dollar Procter & Gamble sponsorships.

Generically, “risk” also means content that falls outside historically stable and recognizable verticals like makeup, fashion, travel, and cooking. Success in each of these genres is also contingent on whiteness. Murali Balaji has demonstrated how Black creatives are often commodified by music-industry intermediaries in narrow and stereotypical ways, marketed through a “paradigm of otherness.” Similarly, self-brands on Instagram are rationalized and limited: Black influencers are often shepherded away from the “mainstream” toward working with products hinged on Blackness, like natural hair care. These are positioned as niche markets and pay influencers lower rates. Black influencers are also pushed out of predominantly white fashion and beauty influencer ecologies because the products don’t come in their skin shade or don’t work on their hair. Thus an ostensibly “DIY” and participatory culture is constrained to whiteness by the limited range of the commercial professionals funding it.

The perception of the “risks” of using influencers has created an opportunity for intermediaries who promise to help marketers mitigate them. Among these are influencer talent agencies, which function like their counterparts in modeling or entertainment and seek to discipline and shape influencer markets. Agencies typically take a contracted 10 to 20 percent cut of influencers’ earnings — possibly the most transparent financial arrangement influencers will encounter. But only a tiny fraction of aspiring “content creators” will ever secure such representation. The participatory ethos and low barriers to entry on social media platforms may make it seem as though anyone can win the influencer lottery, but not surprisingly, agencies tend to be interested in familiar types, like the “can-do girls” with blonde hair and shiny teeth who are domestically skilled (at complex French braids and cookie baking), likable, a little goofy — Disney princesses crossed with your best friend’s older sister.

This approach to talent selection mirrors long-held attitudes about what constitutes “risk” in media businesses. Content made by white creators becomes wrapped up with feelings of safety in ambiguous ways. Media-industry middlemen have long shaped and rationalized the production of media according to market logics, as scholars like Anamik Saha have shown. Amanda Lotz has described how fuzzy and subjective “feelings,” informed by “perceptions of what advertisers desire,” play into the decisions made by media-company managers, decisions that cannot later be reverse-engineered to uncover the moment where racial discrimination occurred. Socialization, industry talk in corridors, and the “hidden curriculum” of working in creative industries all contribute to such snap decisions, which dictate much of what we see and consume through media. They tend toward conservative estimates of what the public wants, or of what audiences can afford. As Keith Negus details in Music Genres and Corporate Cultures, managers and other intermediaries — often white, highly educated, and male — who slot artists into markets help dictate what content is perceived as commercially safe, and the biases of their subject position become encoded as industry wisdom. Often what is seen as non-risky boils down to what has worked before.

There is no record of when these decisions occur or what data went into them, which makes them even less transparent and accountable than those made by automated software and black-boxed algorithms. Nevertheless, the assumptions behind those assessments work their way into automated tools, another emerging intermediary between influencers and brands. One such tool, CreatorIQ, claims to algorithmically diagnose who is “brand safe” for advertisers by analyzing influencers’ published content across all social media platforms, their interactions with other influencers, follower numbers, previous brand work, and press coverage, reducing messy diagnoses of risk to supposedly objective numerical scores. Another, called Peg, uses natural language processing to scan YouTube content for influencers’ use of profanity. When I looked at this tool in 2019, the word queer was encoded as profanity, meaning the brand-safety scores of those who used it may have been marked down. There are already examples of how the demonetization of content by LGBTQ+ YouTubers has pushed queer creators either to diversify away from this genre of content or to leave the platform.
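
To make the mechanism concrete, here is a minimal sketch of the kind of keyword-based scoring described above. It is not Peg’s or CreatorIQ’s actual code; the word list, the per-hit penalty, and the function names are hypothetical, chosen only to show how a single mislabeled term can quietly depress a creator’s score.

```python
# Illustrative sketch only: a naive keyword-based "brand safety" scorer.
# The blocklist and weighting here are hypothetical, not any vendor's code.

import re

# Hypothetical blocklist. Note the mislabeled entry: treating "queer" as
# profanity penalizes ordinary LGBTQ+ self-description.
FLAGGED_TERMS = {"damn", "hell", "queer"}

def brand_safety_score(posts):
    """Return a score in [0, 1]; each flagged word deducts 0.05."""
    hits = 0
    for post in posts:
        words = re.findall(r"[a-z']+", post.lower())
        hits += sum(1 for word in words if word in FLAGGED_TERMS)
    return max(0.0, 1.0 - 0.05 * hits)

posts = [
    "So proud to speak on a queer creators panel this week!",
    "New pantry organization video is up, link in bio.",
]
print(brand_safety_score(posts))  # 0.95 -- marked down for "queer" alone
```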

Another popular tool, AspireIQ, allows brands to search for influencers based on the similarity of their content to previously successful advertorial campaigns. This becomes a kind of reverse cool hunting: instead of identifying and redeploying trends that the kids are into, brands are now hunting content that emulates their paid advertorials to amplify and extend them. Other software helps brands identify who has spoken about them in a “noncommercial” way, so they can reward what they read as an authentic passion for their products with paid work. As a result, “voluntary” inclusion of products in influencers’ content is rising.
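
To give a rough sense of how such similarity search might work, the sketch below ranks creators’ captions by their textual closeness to a brand’s past advertorial copy. This is not AspireIQ’s actual method; the sample data and the TF-IDF and cosine-similarity approach are assumptions for illustration.

```python
# Illustrative sketch only: rank creators by how closely their recent captions
# resemble past advertorial copy. Sample data and method are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_campaign = "Cozy fall mornings with our pumpkin spice oat latte #ad"
creator_posts = {
    "creator_a": "Slow cozy morning routine: oat latte, journaling, candles",
    "creator_b": "Ranking every energy drink so you don't have to",
    "creator_c": "Fall baking day! Pumpkin bread recipe on the blog",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([past_campaign, *creator_posts.values()])

# Compare each creator's caption with the campaign copy (row 0) and rank.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for name, score in sorted(zip(creator_posts, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```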

Such tools sometimes aspire to a Palantir-style approach of integrating streams of data and seeking out patterns that could not only examine the “legitimacy” of influencers’ current practice — “verifying” their metrics and making sure their followers are actual people from particular demographics — but also predict influencers’ future behavior and the likelihood they will jeopardize a brand’s value. As Forrester’s industry report notes, all these tools “have access to the same social media APIs” — that is, they are drawing from the same reservoir of posts and other content — so they must differentiate themselves with promises of accurate algorithmic guesses and inferences. This amounts to an approximation of the subjective and value-laden decisions about “brand safety” made by previous intermediaries.

But brand safety is not an objective data point that can be isolated and measured. Algorithmic tools to assess influencers may merely codify pre-existing biases about who poses a “risk” and why, producing datafied dog whistles and offering statistics to help brands back up discriminatory decision-making. They often disproportionately harm marginalized people (particularly Black people) through a spectrum of what Avriel Epps-Darling describes here as “technological microaggressions.” To take just one recent example, Twitter’s image preview was found to systematically center white faces over Black ones. The racist tendencies of image-processing software are also well documented. One piece of influencer software used image processing to measure the face shapes that received the most social media engagements, noting that heart-shaped faces “performed best” in cosmetic campaigns. Face shape, of course, is heavily correlated with racialized perceptions.

The whiteness of mainstream content on platforms is multiplied when tools’ algorithms explicitly reward encoded whiteness. Creators who aren’t paid by brands can’t work; they cannot produce culture, and this shapes what we meaningfully have access to on social media platforms. To understand influencer culture, and what we see on platforms, we must consider the growing “soft” censorship that shapes what we consume as much as direct forms of platform moderation. 

Beyond this narrowing of cultural horizons, algorithmically approximated “brand safety” also foreshadows how hiring and recruitment tools can scan the social media presentations of everyone, not just influencers, working to impose a hegemonic morality. They suggest how we all may be monitored for profanity and backlash, for evidence of the proper levels and kinds of consumption. As “authenticity” becomes economically significant, we all may be obscurely assessed for our “fakeness.” This becomes more urgent as our social lives grow ever more contingent on social media platforms; like influencers, our personal brands then unfurl across more and more of the web. In addition to monitoring how platforms scaffold our interactions and experiences, we must pay attention to the black-boxed intermediaries that surveil and categorize how we participate according to decades-old commercial logics, trading these categorizations without our awareness or recourse. How brand-safe are you?

Sophie Bishop is a Lecturer in Cultural and Creative Industries at the University of Sheffield. She uses feminist political economy to study content production on the internet.