
The Domino Effect

How machine logic infects our tastes

Americans order a lot of pizza: A 2014 survey by the U.S. Department of Agriculture suggested that one in eight Americans eats pizza on any given day. Nowadays it feels as if there are as many options for having a pizza delivered as there are available toppings. If picking up the phone fills you with anxiety, Pizza Hut offers a pair of sneakers with a built-in button for ordering pizza. “Host Hungrybot in your Twitch channel to let your fans order pizza delivery right from your stream,” a startup targeting eSports fans trumpeted. Twitch is one of the few digital realms left untouched by Domino’s, which now offers a series of apps, chatbots, and even the option of tweeting an order using the pizza emoji. Some of these ordering options may exist primarily as marketing gimmicks, but their aggregate effect remains notable: Any interface to which you have access can likely be used to order pizza.


This in part stems from pizza’s popularity, but taste is only a small part of the story: The delivery pizza is highly adaptable to the logic and formatted language of communication interfaces. The typical consumer’s mental model of a pizza — dough with sauce, cheese, and toppings baked in an oven — is quite similar to a machine’s conception of pizza, which is quite similar to how a pizza is actually made. The algorithm for pizza is not complex. Ordering a pizza through a chatbot or within a Twitch stream is possible because all parties in the transaction are imagining the same simple process and speaking from the same restricted phrasebook.
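As a rough illustration (with hypothetical field names, not any chain’s actual schema), the whole of that shared mental model fits in a few lines of code:

```python
# A minimal sketch of the "restricted phrasebook" a delivery pizza requires.
# Field names and defaults are hypothetical, not any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class PizzaOrder:
    size: str = "medium"          # one of a short, fixed list
    crust: str = "hand-tossed"
    sauce: str = "tomato"
    toppings: list[str] = field(default_factory=list)

# Customer, chatbot, and kitchen all picture the same structure, so any
# interface that can fill in these four fields can place the order.
order = PizzaOrder(size="large", toppings=["mushroom", "pepperoni"])
```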

Because it is streamlined to be easy to assemble, delivery pizza (and not the Verace Pizza Napoletana-certified kind) is well-suited for digital abstraction. The fast-food burger and the burrito have undergone similar transformations, along with plenty of other foods desirable not only for their taste but because they are rationalized and efficient, capable of individual customization without requiring any special trust in the person preparing them at the other end of the interface. Do these interfaces make it simpler to satisfy our tastes, or do they subtly simplify them?


After a bad day at work, you return home to find a turnip, some lettuce, and a desultory chicken breast. That problem, making a meal out of whatever happens to be on hand, was the basic premise of the British cooking show Ready Steady Cook: Members of the public would throw together bags of groceries for a few pounds, and chefs would then make a serviceable meal out of those ingredients. The show ran on that premise for 16 years and 1,895 episodes. Beyond their knife skills, what the chefs on Ready Steady Cook really offer is improvisational intelligence: the ability to come up with solutions to new problems on the spot.

Improvisational intelligence, or the appearance thereof, is a dream of consumer technology. DARPA hired a jazz musician to help teach an AI system to improvise. IBM engineers fed Watson, of Jeopardy! fame, the entirety of Bon Appétit’s archive, combined with insights into human taste and analysis of what ingredients tend to be used together. “With Watson’s help, I cooked some eggplant fritters that made convenient use of every sad, wrinkling root in my refrigerator’s crisper,” Alexandra Kleeman wrote in the New Yorker. But Watson is not in your kitchen yet, and may never be; instead, its example is used to show what the current range of culinary companions cannot do.


The Allrecipes skill for Amazon’s Alexa claims to “quickly [find] recipes that match your desired dish type, ingredients you have on hand, your available cooking time, and/or your preferred cooking method.” But it’s just an interface over a simple dataset — the recipes written and documented by contributors to allrecipes.com — and the appearance of improvisational intelligence is purely a function of the search terms a user enters. JULIA, a chatbot that aims to be “your new BFF in the kitchen” by demonstrating the improvisational power of a master chef, can only answer questions one ingredient at a time — it can provide a recipe for turnip, lettuce, or chicken breast, but not all at once. You can feel your new BFF querying a database in the background. The app regularly responds to queries with “I don’t think I’m qualified to answer that yet,” before linking to a page of tips about how best to chat with the bot.
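To make that background query visible, here is a toy sketch of a bot that can only look up one ingredient at a time; the data and matching logic are hypothetical, not JULIA’s actual implementation:

```python
# A toy single-ingredient recipe bot, sketching the limitation described above.
# The recipe data and matching logic are hypothetical, not JULIA's actual code.
RECIPES = {
    "turnip": ["roasted turnip wedges"],
    "lettuce": ["wilted lettuce salad"],
    "chicken breast": ["pan-seared chicken breast"],
}

def reply(query: str) -> str:
    # Treat the whole query as one ingredient; no attempt to combine ingredients.
    matches = RECIPES.get(query.strip().lower())
    if matches:
        return f"How about: {matches[0]}?"
    return "I don't think I'm qualified to answer that yet."

print(reply("turnip"))                           # finds a recipe
print(reply("turnip, lettuce, chicken breast"))  # falls back to the stock apology
```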

Generally, predictive services are not so much predictive as reliant on someone else having been in your position before — if you search for the contents of your fridge, odds are someone else has already cooked those items together, but you will have to sift through their results yourself. The appealing complications of actually cooking a meal remain messy, inconvenient, and human. Apps encourage us not to improvise and trust ourselves, but to itemize and analyze our ingredients as data for machines to process; to think of ourselves as a component of the machine itself. These tools simplify our lives on the condition that we simplify ourselves for them.


Ordering in is meant to outsource problems — and labor — to other parties: You pay for other people to buy ingredients, prepare them, and bring them to your door. That workforce is largely invisible, and interfaces like those employed by Seamless or UberEats are designed to conceal the labors of the unknowable number of people involved in preparing your food, making the process appear as little more than a hand-off at the door. These apps make a contradictory promise: to simplify the multipart process involved in creating a meal to a series of clicks, while offering enough options to satisfy an infinite number of cravings. You agree to meet the interface somewhere between what you want and what it knows how to offer, until you want what it knows how to offer.

To narrow thousands of options down to a usable interface, food delivery apps ask that you filter results by category and encourage you to declare your preferences — even the Domino’s Twitter ordering mechanism depends on a customer having registered a preferred order that can later be triggered with an emoji. All these shortcuts and conventions add up to a rudimentary language: a series of words and tics that carry across ordering interfaces. Grubhub’s lingo is virtually indistinguishable from that of Seamless, with which it merged in 2013, and insofar as juggling multiple delivery apps is a nuisance for users and restaurants alike, this may presage further consolidation. Nevertheless, the fact that these linguistic conventions emerged with the bare minimum of centralized coordination reflects the flattening effect of culinary interfaces on how we talk about food.
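As described, the emoji mechanism is less magic than a lookup: the tweet simply points at an order the customer has already registered. A hedged sketch of that flow, with hypothetical account data rather than Domino’s actual system:

```python
# Sketch of an emoji-triggered order: the emoji carries no information beyond
# pointing at a preference registered in advance. The account data and trigger
# flow here are hypothetical, not Domino's actual API.
SAVED_ORDERS = {
    "@hungry_customer": {"size": "large", "toppings": ["pepperoni"]},
}

def handle_tweet(author: str, text: str) -> str:
    if "🍕" in text and author in SAVED_ORDERS:
        order = SAVED_ORDERS[author]
        return f"Placing your saved order: {order['size']} with {', '.join(order['toppings'])}."
    return "No saved preference on file; the emoji alone isn't enough."

print(handle_tweet("@hungry_customer", "🍕"))
```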


Food delivery interfaces have the habit of reducing epicurean decisions to practical ones, as if buying a meal were no different from buying a mop. But food isn’t just about ingredients; it’s about desire. If rationalizing food consumption were desirable in more than theory, Soylent would be more than a niche taste. The persistence of ideas like “comfort food” speaks to food’s emotional resonance; “What do you feel like having for dinner?” may be the most common question in relationships for this very reason. We consume the associations we’ve made with a meal: memories of the times we ate it, of the people we prepared it with, and of those who prepared it for us.


The language of app interfaces makes the complex processes of desiring and preparing food feel like completing a personality test: You simply check off a series of preferences and receive the right meal for you, as if the app’s magic alone had summoned it. This same sort of abstraction holds for other sensory experiences: Spotify’s playlists can be said to flatten the context in which a song was created, trading the artist’s ideas, intentions, cultural cues, and environs for preset “moods” generated by the service, while offering artists paltry remuneration. These vitamin-like “moods” then impose categories of experience onto the listener. As we learn to map our desires onto these interfaces, we absorb their vocabulary; human experience is delimited, incrementally, by the limitations of our machines.

These interfaces appropriate both experience and effort, repurposing unseen human labor as machine magic. Food apps, with chatty text, friendly logos, and humanesque voices, are gussied up as our friends in the kitchen — always friends, and never domestic help. No one is working for you, only empowering you to make your own decisions, based on your own tastes, as your tastes slowly shift in a direction that suits the logic of a database. Taste and labor are linked in this domain: Abstracting away the reality of labor creates a permission structure in which you’re more comfortable asking for what you think you want. Perhaps what we want is just to avoid the reality of human contingencies, the reality of other people.

David Rudin is a writer who lives in Montreal. He is highly caffeinated.