March 29, 2019

The House We Live In

At the Atlantic earlier this week, Sidney Fussell reported on Airbnb’s policies toward hosts installing cameras to observe their customers and the platform’s apparent ambivalence about enforcing them. As Fussell notes, Airbnb hosts are permitted to have cameras installed in living rooms, common areas, and outdoor spaces, but not in bathrooms and sleeping areas. They are also required to disclose the cameras to any guests and secure their consent. But if the number of camera-sniffing tips, apps, and devices is any indication, many Airbnb customers are not confident that the company’s policies are all that effective. What counts as disclosure has also been called into question: In January, an Airbnb guest complained on Twitter that the company told him that if a camera was displayed anywhere in a photo of a rental property, that counted as notification. To top things off, his host gave him a negative review as a customer and wondered what he was hiding. In another of the cases Fussell details, a guest discovered hidden cameras in the bedroom he was renting; he removed their memory cards and kept them as evidence that he had been made the victim of a crime. When he reported it to the police, however, they accused him of being a thief.

These examples suggest that Airbnb is not exactly raising people’s faith in one another. As Frank Pasquale sarcastically wondered, “Whatever happened to ‘community’ & ‘trust’ built up in a warm & fuzzy ‘sharing economy?’” In the early days of platforms, some apologists seemed to believe that since these platforms allowed strangers to trade with each other as strangers, they were somehow broadening the reach of trust and community rather than diluting or nullifying them. Of course, ever since eBay, scaled-up commerce platforms have basically turned the idea of trust into its opposite, making it reliant on a variety of rating schemes and ever more intensive surveillance, even as they provide a haven for all sorts of predation and grift.

Platforms don’t want to build trust; they want to monopolize it. The whole point is to allow for encounters that don’t require interpersonal trust, because platforms supply an infrastructure that makes trust irrelevant. Platforms posit human behavior not as contingent and relational but as individuated, behavioristic, and predictable. Collect enough data about people, the logic goes, and you can control and contain their interactions with each other in the aggregate over time.

Cameras in Airbnbs (and in Ubers and Lyfts, and so on) are merely part of this infrastructure; they are, in effect, another way to rate people. The logic of platforms demands them. No wonder people are still willing to rent even the properties that disclose them. That logic also makes a paragraph like this from Fussell’s piece sound reasonable enough: “Of course, hosts have plenty of reason to train cameras on the homes they rent out to strangers. They can catch guests who attempt to steal, or who trash the place, or who initially say they’re traveling alone, then show up to a property with five people.” The presence of cameras is likely interpreted by many Airbnb customers not as egregious but as simply necessary. And Airbnb has no incentive to discourage cameras and every reason to encourage them — their presence supports and intensifies the air of distrust that platforms rely on. You need Airbnb not because you trust strangers but because you don’t. Ultimately you trust the platform brand (as a proxy for the reputational schemes it administers), not the person, just as in any other form of mass-market commercialism.

In a sense, the cameras certify the mutual distrust in operation between parties, authorizing each to be at their worst, to try for whatever they can get away with. Within the world of platforms, no one should be naïve enough to expect anything but the worst from people, absent the protections that increased surveillance, data collection, and platform-controlled ranking can provide. This militates against a sense of collective responsibility. People are instead oriented ethically to the platform itself and not to the people they interact with — they need to remember to rate people, not to try to get along with them or work toward mutual understanding. In other words, a platform isn’t matching people in hopes that they will cooperate or get along. It is matching people in hopes that both parties will become more dependent on the platform to mediate their encounters.

It is in a platform’s interest that people find they can’t get along, can’t communicate, can’t resolve their issues. This strengthens their demand for a third-party mediator. So a platform will do what it can to make its rating systems put parties at cross-purposes. Then cooperation or cordiality appears not as prosocial behavior but as strategy. And the performances are not for one another but for the ratings and the cameras, and for the future encounters that we hope those will secure for us.

Since there is no collective responsibility in any platform-mediated encounter, each person can be ranked as though they were fully accountable for whatever happens. The logical extension of this is for each individual to have their own reputation score, independent of the context of any particular encounter with particular others. Airbnb, Fussell points out, now uses “risk scores to flag suspicious behavior,” an approach that transforms past data into horizons of possibility for those scored. This model of security, as Louise Amoore details in The Politics of Possibility, doesn’t seek to avert catastrophe so much as establish risk screening as a permanent enabling condition, facilitating an increased volume of exchange with little regard for the human cost at the level of the individual — not merely the ripped-off customers and clients but those screened out from participating in the first place. “To manage risks ahead of time is to enroll modes of calculation that can live with emergence itself, embrace and reincorporate the capacity for error, false positive, mistake, and anomaly,” Amoore writes. When an Airbnb host puts hidden cameras in a rental’s bedrooms, that is ultimately just regarded as more information for a system geared toward always improving its efficiency (lower that host’s score!), not a reason to question the integrity or viability of the system altogether.

Automated scoring systems that amass unspecified amounts of data and process it opaquely and indeterminately with the aid of machine-learning algorithms are well suited to laundering various forms of bias, prejudice, and discrimination. As Amoore puts it, “The political decision to prevent someone from boarding a plane, to detain them at a border or a railway station, to deport them on suspicion of posing a threat, to freeze their assets is increasingly obscured in a computational judgment that is ever more possibilistic and difficult to challenge.” The same goes for similar implementations in commerce platforms. (Amoore points out that state risk-assessment systems were derived from customer-relationship-management data-mining software developed for retailers.)

Because discrimination is baked into reputation scoring, the pervasiveness of platforms extends that discrimination in a more tenacious and obfuscated form. They allow for prohibited forms of discrimination to be carried out by other means, much as Facebook’s advertising platform has been shown to do.

But the problem with platforms is not merely a matter of algorithms, whether that’s tainted historical data reinscribing a history of prejudice or opaque structures allowing for masked discrimination. Discrimination plays more overtly into the atmosphere of distrust platforms foment and rely on. If the presence of cameras in Airbnbs, etc., implies a mutual distrust among parties to a quasi-social transaction in one sense, it may also unite them against the unannounced but often presumed targets of surveillance in a racist society: racialized others used to rationalize systems of tracking and control. As Simone Browne documents in Dark Matters, many surveillance techniques were developed as part of maintaining slavery and retain that legacy in their implementation.

Cameras in Airbnbs indicate not merely a desire to protect property but an implied willingness to sort people into those who are confident that they “have nothing to hide” (i.e. have not been subjected to direct scrutiny on the basis of some prejudice) and those who have reason to fear being subject to prejudicial suspicion.

As Chris Gilliard writes in this essay about a misguided classroom exercise in eavesdropping, “Being surveilled has been, and continues to be, the de-facto state of existence for marginalized populations in America … Privacy for marginalized populations has never been, and will never be an abstract. Being surveilled, whether by private actors, or the state, is often the gateway to very tangible harms — violence in the form of police brutality, incarceration, or deportation. And there can be more subliminal, insidious impacts, too.” Cameras in Airbnbs should be interpreted in this light, in terms of who will see them as abstractly menacing at worst, and who will see them as directly threatening.

Some Airbnb guests may see such cameras as a feature, a positive sign that the property owner is committed to keeping certain people out — i.e. the cameras send a signal similar to the one sent by an establishment refusing to accept cash. This signal might give some guests a sense of being on the same side as the host despite their otherwise misaligned incentives. The cameras communicate who supposedly belongs. Renting an Airbnb that has cameras installed is almost a way of flaunting one’s privilege, of saying, Not only do I have nothing to hide, but I am eager to be observed. I know the camera loves people like me. (I wonder if the willingness to install surveillance devices like the Amazon Echo in one’s own home also reflects this attitude.)

Gilliard quotes media studies professor Larisa Kingston Mann, who writes, “At a certain point we should be able to recognize that human dignity requires being allowed to just be without being watched.” The existence of reputation scores (and the platforms that administer and profit from them) suggests a different view of human dignity, by which being watched allows certain people to be while others are exposed and objectified. Platforms are organized around the abolition of forms of commercial privacy and the rejection of norms of market anonymity. “The norm-shifting involved around privacy works to benefit tech companies who profit immensely from labeling extraction as ‘sharing’ and ‘community,’” Gilliard argues:

Until we can come to better terms with the disparate impacts of privacy harms, the privileged will continue to pay for luxury surveillance, in the form of Apple Watches, IoT toilets, quantified baby products, Ring Doorbells, and Teslas, while marginalized populations will pay another price: Surveillance, with the help of computer data, deployed against them — in the form of ankle bracelets, license plate readers, drones, facial recognition, and cell-site simulators.

The cameras in Airbnbs are situated on the same axis. They stage a moment in which one recognizes oneself as a beneficiary of surveillance or its target. And in that process of division is also a process of expropriation, of extraction. The trust that might have been built, that might have been felt, that might have endured and opened up different conditions is instead pre-empted and inverted, and made into another confirmation that reputational identity is individual and zero-sum.