Towards the beginning of the pandemic, when there was a global shortage of sanitizer and my hands felt like agents of death, I was targeted by an Instagram ad that seemed to be selling me human touch. In it, two people gently touched hands in front of an exposed concrete wall — a performance of naïve, exploratory intimacy, as if the two were aliens newly transplanted into the bodies of a straight Aryan humanoid couple. The contact of their fingertips created a series of high-pitched, breathy notes, like a synthesized woodwind instrument.
The company behind this ad is Playtronica, a studio working to “bring interactivity into the everyday world” by making objects (and people) sonorous in response to touch. Founded in 2014, Playtronica has released two devices: the TouchMe, which enables the user to “turn human skin, water or flowers into a musical instrument” with notes that change depending on the area and intensity of contact; and Playtron, which connects to a computer and up to 16 objects (the website suggests fruit) that can be used as a piano keyboard. “The world we live in is afraid of touch and interaction,” states another advertisement, “so we decided to highlight it.”
The ad irritated me. Behind its tweeness, there was something patronizing about the suggestion that my curiosity about the physical world was lacking — that it was so dire, in fact, that I couldn’t register the experience of physical contact unless it was literally translated into sound. Beyond that, I resented the implication that my interactions with the world were somehow incomplete without Playtronica to draw out its hidden signals and frequencies. The more I thought about it, the more it bothered me.
Playtronica’s devices register different pitches depending on contact pressure, but it makes no difference if they are connected to a human being, a courgette, or a dead bird
Playtronica is part of a wave of new “synaesthetic” technologies that aim to facilitate multisensory interaction between different entities. Some of these — such as “Zoolingua” or “No More Woof” — attempt to translate animal thoughts into language. Others translate plant thoughts: There’s “PlantWave,” a product by the record label Data Garden that converts the “biorhythms” of plants into sound, and the slightly more eccentric company “Music of the Plants.” A fair few center on converting touch into sound: Australian arts organization Playable Streets has roaming exhibits ranging from “Reach out sounds” (a similar concept to Playtronica’s “TouchMe”) to “Dead petting zoo,” which connects visitors to Australia’s extinct megafauna through touch-activated music.
Nearly all of these projects have the stated objective of reconnecting or reengaging us through technology to a world from which technology has disengaged us. (“With PlantWave,” says an ad, “your phone becomes a way to connect to nature, rather than be separate from it.”) The utopia they conjure is a Fantasia-like vision of childlike re-enchantment in which the animate and inanimate alike are made responsive to human input in a way that is immediately interpretable.
The definition of “interaction” that these technologies entail — in which non-verbal entities respond to us in a language we determine for them — is an extraordinarily narrow one that nonetheless has wide traction in the realms of architecture and urban design. An ideal of “re-engagement” or “reconnection” is envisaged as a continuum where data passes effortlessly between human and non-human worlds. At the heart of this ambition is something that we might call the “synaesthetic fallacy”: the technological promise that all data is translatable into other data, and that these processes of translation are impartial and organic.
But the ambition of perfect communication between entities is a trap. The interaction of machines and human bodies can be exciting terrain precisely because it is fraught with complex processes of negotiation. Ambiguity and complexity are engagement: when we fail to recognize this, and instead prioritize the generalized efficiency of conversions between different sorts of data, we edge towards the commodification of all our interactions.
Playtronica was founded in Moscow in 2014 by theater producer Sasha Pas. The original product was conceived for children, but the target demographic shifted when it became clear to the founders that children were not “natural improvisors” and were not generating the kind of creative content that the studio sought. Playtronica’s devices belong to a much wider group of hardware and software known as MIDI controllers, built around MIDI (Musical Instrument Digital Interface), a communications standard originating in the early ’80s that enables musical technologies from different manufacturers to speak to one another.
Like any MIDI controller, Playtronica’s hardware is essentially a notation device, with the same relationship to sound as a computer keyboard has to words. It doesn’t tap into a melon’s hidden frequencies so much as (literally) instrumentalize the melon. Playtronica’s devices are designed to register different pitches depending on the pressure of contact, but it makes no difference if they are connected to a human being, a courgette, or a dead bird. Nonetheless, the studio actively markets itself as facilitating a deeper knowledge of the object’s essence. “The language of Playtronica is usually inspired by synaesthesia,” co-founder Vincent De Malherbe told an interviewer from Metal Magazine. “Synaesthesia is an unusual experience in which the perception of reality causes other sensations. In this situation, a person can perceive a sound by watching a color, touching water or a wooden texture for example.”
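The mechanics are worth spelling out, because they are prosaic. Here is a minimal sketch, emphatically not Playtronica’s actual firmware, of what any MIDI touch controller does under the hood: it is written in Python with the mido library, and the read_touch_level() function is a hypothetical stand-in for whatever capacitive or pressure sensor the hardware exposes. A raw contact reading is mapped onto a note number and sent down the wire; nothing in the loop knows or cares what closed the circuit.

```python
# Illustrative sketch of a generic MIDI touch controller (not Playtronica's code).
# Assumes the mido library; read_touch_level() is a hypothetical sensor stand-in.
import random
import time

import mido

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave


def read_touch_level():
    """Stand-in for a hardware read. On a real board this would sample a
    capacitive or pressure sensor; here it returns a random 0-1023 value,
    because the downstream code is identical either way."""
    return random.randint(0, 1023)


def level_to_note(level, scale=C_MAJOR):
    """Map a raw 0-1023 reading onto the scale: firmer contact, higher pitch."""
    index = min(level * len(scale) // 1024, len(scale) - 1)
    return scale[index]


with mido.open_output() as port:  # default MIDI output port
    for _ in range(16):
        level = read_touch_level()
        if level > 50:  # arbitrary contact threshold
            note = level_to_note(level)
            port.send(mido.Message("note_on", note=note, velocity=80))
            time.sleep(0.1)
            port.send(mido.Message("note_off", note=note))
```

The “synaesthesia” lives entirely in the mapping table chosen by the programmer: swap C_MAJOR for another list of notes and the same melon yields a different mood.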
The promise of perfect synaesthetic translation via technological means has always been seductive, perhaps because processes that seem synaesthetic are inherent to the way technology functions: a stream of code becomes an image, a color, a video clip or set of noises. When I was younger, I used to sit in front of the family computer in the basement for hours playing music and watching Windows Media Player generate shapes, colors and patterns that corresponded to the sound I was hearing. I didn’t think of it as creative visualization; I believed I was witnessing an actual correlation, one that knit the world together and gestured towards a deep mathematical order.
The term synaesthesia — which takes its name from the Greek for “perceive together” — was coined in the early 19th century. Interest persisted into the early 20th century, and then it was largely forgotten until the 1970s. In the past decade, synaesthesia has become something of a buzzword in tech circles, often surfacing alongside the vague word “interaction.” From 2012 to 2014, a lab at MIT ran a research project called “Digital Synaesthesia” which looked to “evolve the idea of human-computer interfacing and give way for human-world interacting.” Starting with the idea that technology can give us access to data that cannot be registered by the “natural” sensory spectrum of human beings, it aimed to build an interface for translating this information into sensory data — in effect giving users a level of technological “attunement” to frequencies in the external world that they would not otherwise be able to detect.
When technological experiences are described as synaesthetic, the effect is one of both mystification and naturalization. Synaesthesia, crucially, is a subjective phenomenon: one synaesthete might experience the number seven as the color blue, while another might experience it as a sensation like cold wind on the back of the neck. Its application as a tech buzzword, however, suggests instead a one-to-one correlation (orange = sonorous bloop). Translations and shorthands are adopted as equivalences.
In an article for Real Life, Meredyth Cole wrote that synaesthesia, which was once considered “a rare phenomenon, a poetic gift,” is today “the language of the internet.” Unable to transmit the sensory experiences of touch, taste or smell, the internet must achieve sensory provocation by other means: sound, visual effects and language. As Cole argues, though, these translations are never complete. Communication and interaction are not supposed to be seamless or direct; they form a game of interpretation that is always haunted by an infinitude of possible meanings.
In writing this, I have thought a lot about a recent conversation I had with filmmaker Jenny Brady, whose work focuses on the politics involved in all communicative events. She pointed out that much of our understanding of communicative diversity and technological mediation between different senses and lexicons comes from differently abled thinkers and inventors: for example, textual descriptions of images, text-to-speech software, and EEG technology that converts brainwaves to speech. Within critical disability theory, these technologically mediated processes are never positioned as impartial; they are always treated as relational and value-laden, unfolding at the intersection of many different ways of knowing.
Within the framework of the smart city, the “synaesthetic” processes by which touch becomes a sound are the same processes by which data becomes profit
By contrast, the contemporary use of synaesthesia as a metaphor in interaction design implies the existence of an objective set of sensory correlations that can be brought out by technology alone. The idea that we need gadgets to “translate” the world around us is used to justify the omnipresence of gadgetry in every facet of life, lending both scientific legitimation and an aura of magic to the idea of the interactive or animate city. The more we rely on technology to reveal objective truths about the world around us, the more we cement the idea that our own presence in the world, or engagement with it, is not real or valid until it is registered as electronic data.
This attitude underpins the drive for smarter cities, in which the urban environment is increasingly figured as a giant playground of smart and chatty objects. These encourage engagement on terms designed to generate data, and therefore capital. Within this paradigm, a line is drawn between the inert and the interactive: interactions that are logged or signaled in some way are considered valuable, while more intangible and complex moments of contact lose value. Possessed by the animating life-force of technology, ticket machines, pedestrian call buttons (and, increasingly, garbage cans and lampposts) reaffirm and register human presence.
The notorious Sidewalk Toronto, for example — an initiative of Sidewalk Labs, sister company to Google — outlined a plan for the public realm which would “enable open space to be activated more of the time” and “make space more responsive.” The project met with massive community resistance: aside from the privacy concerns associated with Sidewalk Labs’ vision for a city imbued with sensors (park benches, for example, that would count how many people sat down on them per hour), the automation of many of the functions of municipal government signaled the risk of reduced accountability.
Such projects cloak themselves in the language of synergy: an intimacy between the built environment and the people living in it that is so profound it becomes a sort of shared consciousness. In reality, though, it is a convenient way for private interests to seize greater control over the city space through digital overlay. Within the framework of the smart city, the “synaesthetic” processes by which touch becomes a sound, or a sound becomes a number, are the same processes by which data becomes profit. As Sidewalk Toronto’s lack of transparency became apparent, suspicion grew, and the project was eventually abandoned.
While going through Playtronica’s YouTube channel — which includes a combination of promotional material and user-generated content: a sonic massage; musical performances using jellies, pineapples, and a watermelon (the latter of which became a viral summer hit) — I kept thinking about the work of Korean-American artist and “father of video art” Nam June Paik. I had been to see the retrospective of Paik’s work at the Tate Modern in London before lockdown, where a whole room was dedicated to his collaborations with Charlotte Moorman, also known as “the topless cellist.” Paik and Moorman’s experiments in “action music” — which lasted over 25 years, until Moorman succumbed to breast cancer — were strange, irreverent performances notorious for their nudity, genuine bodily risk, and the occasional presence of Paik’s Robot K-456: a hulking mess of hardware that walked, spoke in memes and defecated small white beans. Sometimes Moorman played the cello, and sometimes she played improbable instruments of their own invention: a stack of televisions, a military practice bomb. In what has become the most famous image of their collaboration, Paik and Moorman staged an interpretation of John Cage’s 26’1.1499” in which Paik himself became the cello, kneeling between Moorman’s knees with his face to her belly and a single, taut string stretched across his back.
Technological ubiquity under neoliberalism largely flattens the natural complexities of communication, narrowing down ways to interact and to know
Thinking through Paik and Moorman’s work helped clarify what, exactly, it was about Playtronica that irritated me. It has to do with a wider technological reconfiguration of what constitutes the ideal interaction between human and object, and between human and human. One video from Playtronica’s feed begins with simple text — How To: Music on Humans — and goes on to feature a “Human Flute” in which the neck of a flute is drawn in marker pen on a woman’s throat, her eyes just out of shot. Unlike Moorman and Paik’s cello, which is mainly a work of mime that also foregrounds the minute sounds of the body and the awkward non-sound of the bow against a piece of ordinary string, the human flute, when “played,” produces a series of synthesized notes, literalizing and completing the substitution of human for instrument.
There is a latent violence to this action of hand to throat (“touch gently,” the video warns); but unlike in Paik and Moorman’s work, which embraces the easy slippage between violence and tenderness, instrumentalization and intimacy, the Playtronica video deliberately neutralizes this charge, opting for a minimalist, block-color, sans-serif neatness. Though Playtronica’s TouchMe ads employ an aesthetic of eroticism, they staunchly avoid the friction that would be necessary to generate any real erotic charge. “We feel comfortable in minimalist aesthetics; it concentrates your mind and senses, keeps the attention focused,” explains Playtronica founder Sasha Pas in an interview for Metal Magazine. “It helps to make the statement clear, especially when technology can lead you to an endless variety of options and solutions.”
In an essay about the ongoing pursuit of communication with plants, Rahel Aima writes about human-plant interaction technologies that facilitate a kind of “leavesdropping.” Despite the voyeurism involved, this endeavor sometimes comes from a real desire for a deeper connection with the environment. Other times, it veers towards exploiting the marketability of the millennial Swiss cheese-plant aesthetic. Playtronica tends towards the latter. The music it generates isn’t some amplification of the plant’s natural voice, which — if we could hear it — may not sound so pleasing to human ears (as Aima writes, “a 2019 study in Tel Aviv suggests that plants emit high-pitched sounds when cut or otherwise stressed”). The plant itself becomes a prop for music conceived by humans with the assistance of technological interfaces: it is merely hardware for human sound, designed to approximate the mood that a specific human might associate with a tropical fruit or an elegant Monstera leaf.
Writing about these collaborations, Sophie Landres argues that Paik and Moorman’s performances parodied the goal of perfect human-machine synthesis, highlighting the complex negotiations at play when humans, other humans, and things interact. For Landres, the substitutions that take place in their work (a human for a cello, a cello for a human) are simultaneously operatic and farcical, and the interaction between music, body and instrument is a zone of immense unpredictability and sometimes danger. Playtronica, by contrast, takes it for granted that “making the statement clear” — minimizing the proliferation of meaning around a particular moment of contact — is a good thing. It ignores the fact that the story of technological ubiquity under neoliberalism is largely one of the flattening of the natural complexities of communication, a narrowing down of ways to interact and to know.
The success of technologies like Playtronica points to a real desire for communication between human and non-human worlds, something that we seem more ready to believe in when it involves technological mediation. There is a famous TikTok dog called Bunny with five million followers, for example, who “talks” to her owner, Alexis Devine, by pushing a series of buttons that correlate to different pre-recorded audio commands. In one video, Bunny pushes the buttons corresponding to “mad,” “ouch,” “stranger,” and “paw,” prompting Alexis to discover that the source of the dog’s discomfort is a foxtail lodged between her toes. It is worth asking what we lose when we substitute the fiction of Bunny “talking” for the more complex reality of the negotiations that take place between Bunny and Alexis Devine, which are fascinating precisely because they arise at the intersection of two distinctly different ways of being. They are able to touch and influence one another because of — not merely in spite of — the fact that they can never completely merge.
In their essay “On Touching — The Inhuman that Therefore I Am,” philosopher of science Karen Barad refutes the notion from classical physics that, due to electromagnetic repulsion, two entities can never fully touch. Looking to quantum physics, Barad finds in the “perversions” of electron behavior a new explanation for touch. They draw on Feynman’s version of quantum field theory, in which electrons are constantly interacting with themselves and with the void, and thus co-constituted by it. In this model, the electron’s status is determined by “an infinite set of possibilities involving every possible kind of interaction with every possible kind of virtual particle it can interact with.” (Feynman was actually a grapheme synaesthete, meaning that he associated letters and numbers with colors.) Touch, Barad argues, is never pure or innocent, but instead the gateway to “a cacophony of whispered screams, gasps, and cries, an infinite multitude of indeterminate beings diffracted through different spacetimes.” This cacophony is a part of us; try as we might, “we cannot block out the irrationality, the perversity, the madness we fear, in the hope of a more orderly world.”
Barad’s emphasis on the infinitude and indeterminacy of the minute sub-atomic interactions that constitute touch has parallels with Roland Barthes’ theory of “the grain of the voice.” The grain of the voice, for Barthes, is “the body in the voice as it sings, the hand as it writes, the limb as it performs” — in which can be discerned “friction between the music and something else.” In the Metal Magazine interview, Pas cites Barthes’ essay as an inspiration for Playtronica, referencing a collaboration at Paris Fashion Week, and missing the point entirely: “Imagine that touching, stretching and folding the dress can change the sound characteristics, thus proposing… a new system of emotional relationships between humans and objects.”
When we react to an iMessage with an exclamation point or a heart, the relationship of the symbol to our original emotion is not synaesthetic; it is abstract and complicated
A dress already has a voice, even without technological appendages. No object is wholly silent. The string in Paik and Moorman’s human cello, for example, does not sing to us as a cello string should: Instead, something else emerges, a muffled sound forged between the bow, the string, Moorman’s arm, and the tensile strength of Paik’s grip (the kind of sound you feel in your teeth, like biting down on a popsicle stick). Rather than “illuminating” an object’s innate and individual characteristics via an electronic appendage, Playtronica speaks over them, reducing very intricate webs of relational interactivity to a causal link (touch = sound).
In its aim of “making the statement clear,” Playtronica does what many technologies do: it orders the world while simultaneously obscuring the processes and criteria of this ordering. The more our interactions with the world are mediated by technologies that gloss over the frictions and ambiguities by which we acquire meaning for each other, the more we drift towards commodification, because it is through ambiguity that we resist quantifiability. When we react to an iMessage with an exclamation point or a love heart, the relationship of the symbol to our original emotion is not synaesthetic; it is abstract and complicated.
Technology has always offered shorthands for communicating, something that originated from necessity (smaller pieces of information were easier to transmit than larger ones); but as this process migrates away from screens and into the built and natural environment, it becomes more difficult to distinguish between the shorthand and the referent: between the data-portrait of our commute home, for example, and our actual experience of it. The danger is that slowly we are habituating ourselves to the idea that technology possesses some special ability to traverse ontological, experiential or linguistic boundaries, translating seamlessly between all kinds of data (visual, aural, emotive, etcetera). If data corresponds almost directly to capital, then this process ultimately makes our experience ever more quickly and easily monetizable.
On his blog, theorist Steven Shaviro takes up Marx’s notion of commodity fetishism, writing that “commodities actually are alive: more alive, perhaps, than we ourselves are. They ‘appear,’ or stand forth, or ‘shine’ (the word Marx uses is scheinen) as autonomous beings” — they seem to possess the ability to act and feel independently of us. Animating the world via technological appendages doesn’t increase our engagement with it; instead, it bolsters processes of commodification, because, according to Shaviro, the commodities possess more agency than we ourselves do. Emphasizing interactivity for interactivity’s sake renders the world meaningless until it is activated by technology — then technology (and capital) becomes the dominant life force.
The technologies that evoke the synaesthetic fallacy expedite the easy translation of all experience into data, and all data into capital. At the same time they mystify this process, cloaking it in the language of scientized magic. The synaesthetic fallacy is wielded as a tool for re-branding the interfaces that serve the data economy as essential mediators of a broken relationship between screen-obsessed humans and the external world. It seduces us into the illusion that tech can function simply as a neutral translator.
As a consumer good, Playtronica usefully concretizes the tendency to reach for technological solutions to problems that technology generates — namely the supposed disconnection wrought by digital media, and the corresponding muting of the power of human touch. Our relationship with technology, though, is a chaotic one; in fact all interaction is by definition chaotic. Any form of play that does not center this chaos — which instead envisages human-machine interaction as a seamless, unbroken circuit — will always reduce the human to the level of the object, rather than elevating the object to the level of the human. In attempting to make the world more animate, we risk de-animating ourselves.