June 21, 2019

Self Checkout

I am one of those recalcitrant people who refuse to use self-checkout machines. If there is one human operating a register, I will wait in that line, no matter how long it is, placated by the serene feeling of self-righteousness that settles on me. They were trying to trick me into paying for this stuff twice! I will think to myself. Once with money and again with my labor!

If an absence of alternatives forces me to the self-checkout, my childish response is to immediately try to break the system. I’ll spread items out on the scanner or place them unexpectedly in the bagging area and start mashing buttons until the machine makes enough noise to bring over the attendant. This faux act of resistance gives me a moment of bratty satisfaction, but then I remember that it merely adds to the stress of a worker who is already overburdened and isn’t responsible for management’s decision to make them serve a half-dozen customers at once. This makes me angry all over again at how retailers exploit customers’ tolerant good nature as a way to reduce labor costs.

Alexandra Mateescu and Madeleine Clare Elish argue in this Data and Society paper about automation that “retail experiments, like self-checkout or customer-operated scanners, tend to rely on humans to smooth out technology’s rough edges. In other words, the ‘success’ of technologies like self-checkout machines is in large part produced by the human effort necessary to maintain the technologies, from guiding confused customers through the checkout process to fixing the machines when they break down.” Their broader point is that automation doesn’t eliminate human labor; it often disguises and devalues it, making store employees feel increasingly interchangeable with machines and with each other.

But customers too are expected to “smooth out technology’s rough edges” by cooperating with deliberately under-resourced systems. Everyone is supposed to obey the self-checkout machine’s instructions and perform uncompensated “shadow work” (to use sociologist Craig Lambert’s term, cited by Elish and Mateescu). This is not just a matter of clumsily scanning barcodes; it is also a mild form of affective labor, what Arlie Hochschild labeled “emotion work.” Customers have to bridge the gap between the norms of human customer service and the company’s imposition of inhuman staff shortfalls without completely losing their patience or simply taking what they couldn’t find a reasonable way to purchase. Who would be shocked if some took a self-checkout line as a license to steal?

That course of action would merely provide an alibi for the next trend in retail: blanket store surveillance to facilitate fully automated checkout (and far more robust data collection). “As media scholar Joseph Turow states, in the last 20 years, supermarket chains have been inundated with the message from technology companies that they ‘will succeed only if they figure out how to trace, quantify, profile, and discriminate among shoppers as never before,’” Elish and Mateescu write. “Consumers are increasingly positioned as valuable not only because of spending power but also as sources to mine for valuable data. By contrast, the future envisioned for frontline retail workers is either one of obsolescence through automation or a rechanneling of the workforce into higher-skilled positions.”

A similar situation is playing out on a much larger scale with content moderation for social media platforms. Tech companies have long relied on and tried to leverage the human decency of their users, recruiting them directly for shadow work in the form of flagging posts that violate the terms of service and otherwise enforcing and abiding by “community standards.” Lots of affective labor is involved not only in creating content and augmenting its value but also in generating and sustaining the affective ties that drive people to use social media in the first place. The companies count on most people cooperating, giving their “real” identities and phone numbers, managing their own privacy, posting “authentically,” and not creating hostile environments. In return, the platforms “trace, quantify, profile, and discriminate” among their users like no industry ever before while refusing to disclose the extent of that monitoring. They also deploy that data in undisclosed ways to shape user behavior and auction off the “insights” derived from analyzing it and concatenating it with other data flows. That is, they rely on the kinds of consideration from users that the companies themselves don’t practice.

Of course, not all social media users are so compliant. Many are deliberately inconsiderate and seek to exploit the system that is trying to exploit their expected good behavior, posting lots of content that intentionally violates platforms’ terms of service. Some of it is meant to test the limits of the system, to find its vulnerabilities as a company prioritizes scale over responsible growth. Some of it is likely prankish opportunism. Much of it is to harass, offend, and repel other users, or to disseminate hate while connecting and indoctrinating people into fringe or terroristic movements.

Like the shoppers who see self-checkout as a license to steal, these users sense that their obedience is being assumed out of efficiency rather than enforced, which they interpret as an opportunity to profit from the gap that’s been left open. The willingness of others to play by the rules makes them suckers, the natural targets of users unscrupulous enough to follow the companies’ lead.

The behavior of these users puts pressure not on the “system” or “the algorithms” so much as on the undervalued and largely invisible workers who do content moderation. Earlier this week, The Verge published another entry in the series of exposés of the content-moderation business and its working conditions. Sarah T. Roberts’s forthcoming book Behind the Screen also promises to shed more light on the experiences of these workers. Companies like Facebook contract this work out, Casey Newton argues in the Verge piece, because they expect to eventually replace the workers with AI: “If you believe moderation is a high-skilled, high-stakes job that presents unique psychological risks to your workforce, you might hire all of those workers as full-time employees. But if you believe that it is a low-skill job that will someday be done primarily by algorithms, you probably would not.” Like the grocery store employees, these workers are seeing their labor devalued and invisibilized by the pretense of automation, which is also meant to authorize the companies’ pursuit of scale beyond what their workforce can responsibly accommodate.

Platforms, as Roberts emphasizes, made the business decision to allow users — imagined as those who are automatically polite and play by the rules — to upload content and expect immediate dissemination. They have prioritized user growth, data collection, and advertising over providing a safe, reliable service to users. It’s no wonder that some users will adopt the same attitude and the same goals: try to spread unwanted messages as far as possible.

Viewing content moderation as a “low-skill job” that could eventually be automated away is bad-faith wishful thinking on tech companies’ part. Much as there will always be people who deliberately find ways to derail self-checkout machines, there will always be people with far more incentive who are willing to work hard at tripping up content-moderation algorithms, devising ever more ways to be malicious. As Roberts notes, “the vast majority of social media content uploaded by its users requires human intervention for it to be appropriately screened — particularly where video or images are involved. Human screeners are called upon to employ an array of high-level cognitive functions and cultural competencies to make decisions about their appropriateness for a site.” The guidelines are constantly changing, because users are constantly finding new techniques for being in violation, and no AI will ever be able to anticipate them.

What AI may eventually be able to do, however, is detailed in this ACLU report on the emerging industry of video analytics, which uses machine learning to process vast amounts of surveillance footage and extract data and actionable material from it:

While no company or government agency will hire the armies of expensive and distractible humans that would be required to monitor all the video now being collected, AI agents — which are cheap and scalable — will be available to perform the same tasks. And that will usher in something entirely new in the history of humanity: a society where everyone’s public movements and behavior are subject to constant and comprehensive evaluation and judgment by agents of authority — in short, a society where everyone is watched.

This takes the principles of the cashierless store and extends them to society. Some will find this “convenient” while others will find it suffocating and prejudicial. And if content moderation and self-checkout stations are any indication, it will likely rely on legions of hidden human workers who will man A Scanner Darkly-style holo-scanners until their brains turn to mush and they are discarded, just as the traumatized content moderators are quietly ushered out the door now.

The ACLU report posits that widespread use of video analytics will “generate significant chilling effects,” especially when combined with facial recognition technology. The report describes a scenario in which you are talking with an old friend on the street but then remember the ubiquitous surveillance and tone your enthusiasm down. “Probably nothing happens — but you have checked yourself, and your freedom to have some unrestrained, freewheeling fun has been curbed.”

This suggests a different kind of self-checkout, in which we evacuate the category of the self in response to pervasive monitoring. Because the systems will be trained to identify deviations from normal patterns of behavior, those patterns themselves will become depleted, minimized, to provide the smallest possible point of comparison and thereby escape suspicion. The ACLU report suggests that tracking of this type “turns us into quivering, neurotic beings living in a psychologically oppressive world in which we’re constantly aware that our every smallest move is being charted, measured, and evaluated against the like actions of millions of other people — and then used to judge us in unpredictable ways.”

This future seems all the more plausible when you consider, as noted above, tech companies’ established ambition to “trace, quantify, profile, and discriminate among shoppers as never before.” One response will be to try to break the systems with extreme deviance. Another will be to try to check out of oneself, focusing one’s emotion work on ensuring that there is no emotion to detect. Then one’s inner life will match the behavioristic assumptions of automated systems, which presume that everything we want to do can be anticipated externally. The robot eyes will come to life to watch over the lifeless.