
Nameless Feeling

Nothing else needs to be said or thought when you can appeal to vibes


The opening track of Frank Ocean’s 2012 album Channel Orange begins with the PlayStation’s ethereal start-up music, building into a sample of the frenetic character-selection-screen music from the arcade classic Street Fighter II. The single “Thinkin Bout You” follows, with Ocean apologizing for his room being in such a mess before launching into depressive lyrics about unrequited love. There is no necessary connection between all these things, but the image that together they conjure still feels familiar: Ocean gaming away his heartbreak, the bright primary colors of the TV reflecting off his dazed face, piles of dirty laundry strewn about. This feeling is so palpable that there is an entire subgenre of “slowed and reverbed” remixes devoted to further amplifying it.

The lush atmospheric setting Ocean stages invites the listener to build on it with even more connections and associations from their own experiences and to find their own feelings in the images that arise. Some might call this concatenation of elements a “vibe”: something that’s difficult to pin down precisely in words but that’s evoked by a loose collection of ideas, concepts, and things, identified by intuition rather than by any prescribed logic.

The concept of vibes seems inescapable at the moment. The strangeness of our not-quite-post-pandemic situation is neatly captured by the phrase “the vibes are off.” “Vibe checks” are common on TikTok and Twitter. Tinder has a new feature called Vibes that aims to match people through assorted cultural tastes and opinions. A year ago, Nathan Apodaca went viral on TikTok for drinking cranberry juice while skateboarding and singing along to Fleetwood Mac’s “Dreams.” In a recent New Yorker piece, Kyle Chayka used this and other examples to argue that the popularity of vibes is a “rebuke to the truism that people want narratives.”

The idea of “vibes” discourages the more difficult work of interpretation, foregrounding the idea of affect as inexplicable, ineffable

But even in Apodaca’s TikTok, one can posit a deeper story and not just a set of vaguely related feelings. Maybe it’s not just a carefree guy vibing away but someone who’s decided to move on from a lover who has scorned him: “Well, who am I to keep you down?” Would the video carry such resonance if it weren’t paired with the defiant example of independence provided by Stevie Nicks’ lyrics? Fixating on the vibe alone can obscure a more powerful underlying explanation.

While seemingly open-ended and allowing for an infinite recombination of elements, the idea of “vibes” is reductive. It discourages the more difficult work of interpretation and the search for meaning that defines human experience. It diverts attention away from narrative and moral implications in favor of foregrounding the idea of affect as inexplicable, ineffable — a matter of chance correlation of elements rather than something that requires deliberate causal explanation. The vibes framework may hone our abilities to identify settings as “cozy” or “cursed,” but it gives no instructions on how we might build or avoid them in our lives. As an analytic, vibes don’t connect feeling and consequence; as such, the framework is symbiotic with passive modes of media consumption.


Whereas philosophers, psychologists, and the like search for models of human cognition and behavior, the field of artificial intelligence aims to turn such models into working tools. As the salience of vibes as a way of (not) explaining experience has grown, so too have the applications of machine learning and neural networks. This parallel may not be a mere coincidence. As Peli Grietzer has pointed out, neural networks behave in ways similar to vibes, capturing patterns in media and culture, online or otherwise. Both are perspectives that focus on associations across vast amounts of data or impressions. Ultimately, neither is completely fulfilling: The associations on either side are difficult to explain, and without further analysis they point toward no new directions, resulting only in cultural dead ends.

In his famous 1950 paper “Computing Machinery and Intelligence,” Alan Turing argued for the possibility of a machine that could convincingly imitate a human in a dialogue — that is, a machine that could pass what we now call the Turing test. As his primary paradigm for modeling intelligence, Turing relied on strict logic and reasoning rules, an approach known as symbolic AI that served as the dominant conception of artificial intelligence for several decades, both among researchers and in popular science fiction. It required that AI systems be programmed with decision-making rules and logical principles in advance; then, in any particular situation, the system would deduce the correct action from those rules.
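
To make the paradigm concrete, here is a minimal sketch of the symbolic approach (the rules and facts are invented for illustration, not drawn from any actual system): knowledge is hand-coded in advance as implications, and behavior is pure deduction over them.

```python
# A toy symbolic-AI "agent": all knowledge is hand-coded in advance as
# implication rules, and acting means deducing conclusions from those rules.
# (These rules are hypothetical, chosen only to illustrate the approach.)
RULES = [
    ("cloudy", "rain likely"),
    ("rain likely", "carry umbrella"),
    ("mission at risk", "refuse request"),
]

def deduce(facts):
    """Forward chaining: apply rules repeatedly until no new conclusions follow."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in RULES:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(deduce({"cloudy"}))
# {'cloudy', 'rain likely', 'carry umbrella'}
```

The brittleness is visible even in this toy: any situation not anticipated by a rule produces no conclusion at all.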

But in practice, the rules proved inadequate: too rigid, too porous, always incomplete. Recall that HAL 9000 from 2001: A Space Odyssey did not open the pod bay doors for Dave because its programming had been hard-coded with the logic to reject anything that could jeopardize its mission. Much as HAL proved unreliable, the symbolic AI paradigm proved ineffective in actual development for building assistive tools at scale. The world is too dynamic, with too many exceptional cases, to be captured in such a static way. No team of engineers could possibly anticipate all the scenarios in which Dave might fall into conflict with HAL, or program HAL’s “thinking” accordingly.

A different approach to AI research was needed. Machine learning, and neural networks specifically, became the new paradigm, able to take advantage of the increasing availability of both cheap data and cheap computing. Neural networks are effective because they can find complex correlations in large datasets automatically, with far less manual engineering effort. Instead of relying on engineers to ascertain and code general reasoning principles and ironclad causal relationships in advance, as with symbolic AI, neural networks aim to do the “right” thing as often as possible in a probabilistic, statistical sense over a given dataset. They are trained by defining a loss function: a number meant to quantify the degree to which the model is doing the “right” or the “wrong” thing in a given situation. The training process then adjusts the network’s parameters to minimize this loss on average over the dataset. As more data and computational resources are funneled in, this process iteratively reduces the average loss and improves the model’s average predictive performance.
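
In code, the whole paradigm reduces to a surprisingly small loop. The sketch below is a toy (a linear model on synthetic data, not any production system), but the mechanics are the essential ones: a loss function scores each prediction, and the parameters are nudged downhill to shrink that loss on average over the dataset.

```python
import numpy as np

# Toy training by loss minimization: synthetic data, a linear model, and
# plain gradient descent. Real neural networks are vastly larger, but the
# training loop has this same basic shape.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # 1,000 examples, 3 features
true_w = np.array([2.0, -1.0, 0.5])                 # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=1000)   # noisy targets

w = np.zeros(3)                                     # parameters start arbitrary
for step in range(500):
    pred = X @ w
    loss = np.mean((pred - y) ** 2)                 # how "wrong" we are, on average
    grad = 2 * X.T @ (pred - y) / len(y)            # direction of increasing loss
    w -= 0.05 * grad                                # so step the other way
    if step % 100 == 0:
        print(step, round(loss, 4))                 # the average loss falls steadily

print(w)                                            # w converges toward true_w
```

Nothing in the loop knows why the pattern holds; it only knows that following the gradient makes the numbers come out better.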

Within all the sensory data that saturates our experience, it becomes more appealing to extract some salient features and then mix and match them in inchoate ways

What the neural network “learns” is emergent rather than deduced. For example, it may notice a pattern that if it’s cloudy, then people are more likely to carry an umbrella. But it would not be able to explain that this is because cloudy implies rain and rain implies umbrella. Instead it effectively identifies a “rainy” vibe through correlations of an initially arbitrary set of parameters. Despite such limitations, neural networks have been successful beyond researchers’ wildest dreams in domains as diverse as computer vision, board games, and protein folding. The AI researcher Rich Sutton has argued that such examples indicate that “minds are tremendously, irredeemably complex,” simply beyond explanation. The sentiment echoes that of the meme “no thoughts, just vibes.”
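
The umbrella example can be made literal. In the toy simulation below (all the probabilities are invented), rain mediates the link between clouds and umbrellas, but a purely statistical learner never needs to represent it: the observed conditional frequencies alone reproduce the “rainy” vibe.

```python
import numpy as np

# Invented probabilities throughout: clouds cause rain, rain causes umbrellas,
# but the "learner" below only ever sees (cloudy, umbrella) pairs.
rng = np.random.default_rng(1)
cloudy = rng.integers(0, 2, size=5000)
rain = cloudy & (rng.random(5000) < 0.7)       # hidden mediating variable
umbrella = rain | (rng.random(5000) < 0.1)     # plus some baseline umbrella use

# The "model" is just conditional frequency: correlation, with no causal chain.
print(umbrella[cloudy == 1].mean())   # ~0.73: the learned "rainy" vibe
print(umbrella[cloudy == 0].mean())   # ~0.10: baseline
```

The correlation is real and useful for prediction, but the middle term — rain — appears nowhere in what was learned.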

In our present technological era, humans have also needed a new framework to avoid drowning in the daily firehose of entertainment, media, and information. Given this setting of increasing complexity, it becomes more appealing to use an associative concept like “vibes” as a simplifying framework for understanding or self-expression. If we can’t make sense of all the sensory and conceptual data that saturates our experience, at least we can extract some salient features and then mix and match them in appealing and inchoate ways. Explanations are unnecessary; it’s seen as enough to just recognize a desired mood or feeling.

At the same time, to make the deluge of data more manageable, nearly all popular apps and internet services use machine-learning techniques to surface content that’s been predicted to be most relevant to individual users — a process often referred to as “the algorithm.” Relevance, under these conditions, has a specific meaning that’s ultimately given by the loss function: It could correspond to the probability of clicking a link or an ad, or to some other simplistic proxy for “user satisfaction,” such as likes or retweets. On video platforms, optimizing for a metric like watch time does result in increased watch time, but this is only good insofar as watch time is a perfect proxy for quality, which it isn’t. The same optimization thus produces more negative side effects as well, like encouraging conspiracy theories and disinformation.
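
Stripped of the surrounding engineering, “the algorithm” in this sense is little more than a sort over predicted engagement. The sketch below uses hypothetical items and scores, invented for illustration, to show how a proxy metric like predicted watch time, once adopted, quietly becomes the definition of relevance.

```python
from dataclasses import dataclass

# Hypothetical feed items and scores: the ranking logic knows nothing about
# the items except a model's engagement estimate.
@dataclass
class Item:
    title: str
    predicted_watch_minutes: float   # the proxy standing in for "relevance"

feed = [
    Item("friend's cooking video", 3.2),
    Item("breathless conspiracy documentary", 11.4),  # outrage holds attention
    Item("local news clip", 1.8),
]

# "The algorithm," reduced to its essence: sort by the proxy, surface the top.
for item in sorted(feed, key=lambda i: i.predicted_watch_minutes, reverse=True):
    print(f"{item.predicted_watch_minutes:5.1f}  {item.title}")
```

Whatever maximizes the proxy rises to the top, whether or not it is what anyone would call quality.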

For all the hype that surrounds them, neural networks can’t reflect or explain anything deeper about cultural or societal phenomena, any more than sharing a favorite character from The Office can predict long-term compatibility with a Tinder match. These systems can only instrumentalize taste; they turn any expression of self into a reductive data point meant to generate more data at the same level. They presuppose that “liking” just means more “liking” and that this is as deep as our desire can be. As with vibes, these metrics carry no context or narrative; they can tell you nothing about how or why something might be desirable, only that it vaguely seems desirable because it resembles other things that are. This opacity encourages users to disregard the possibility of understanding their desire at a deeper level, of probing it, developing it, attenuating it, or even negating it if need be. The vibe induced by machine learning remains a passive experience, one that only seems more real to the degree that it is inexplicable.


Although the practical success of neural networks is undeniable, their most powerful applications are in domains where the rules are set in advance and don’t change over time — where the goals are clearly defined. The way you win a game of chess or Go is fixed and unambiguous. Protein folding is constrained by the laws of physics and chemistry. But the same is not true for culture. The “rules” aren’t fixed, and it’s not clear how to “win.” As a result, the influence of machine learning and of data- and metric-oriented thinking on culture may produce a certain stasis, repeating existing patterns and goals over and over again.

The vibes are off because they focus only on feelings and emotional connections that have already existed

Consider vibes-based music categories like hyperpop and PC Music, which serve as an avant-garde of the moment, mixing a wide variety of other genres with hyper-specific cultural references and inside jokes. The spirit of the genre can already be gleaned from the title of A.G. Cook’s recent album 7G, a reference to 5G conspiracy theories. One of its tracks, “2021,” diagnoses the zeitgeist of this year as an endless repetition of the same concepts and ideas:

Everything you do, it’s been done, done, done before
Everything you say, yeah, you said that yesterday

Although the sounds of PC Music are often high-energy and maximalist, they are at the same time deliberately artificial and infused with irony, suggesting an underlying dissatisfaction and depression. These masked bad vibes point toward urgent questions: How do we break out of this loop? How do we escape this cycle of political deadlock, Covid lockdown, and the dread of climate catastrophe? How do we create new art forms that aren’t just remixes or nostalgic revivals of existing ones? PC Music can pose the question but can’t become the answer; it can only manifest the problem in a heightened, intensified form.

The vibes are off, but they’re off fundamentally because they focus only on feelings and emotional connections that have already existed. They don’t provide or imagine pathways to new futures; they allow only for an understanding of what feels good or bad based on experiences that have already happened, things that have already been seen.

In other words, “vibes” are similar to the approximations that machine-learning systems use, and the two feed off each other. The situation is precisely encapsulated by Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” Content systems optimized by machine learning amplify the repetitive quality of internet content by identifying and recycling the same topics that generate interest and controversy, and the tendency spreads elsewhere in culture, as in the continuous, unnecessary reiterations of movie franchises like Star Wars or The Matrix. The vibes are gamed until they become stale, and an increasing facility with vibes makes this trend all the more evident.

Some may worry that powerful new neural-network models for generating text and images will replace workers and artists. But this can be true only if beauty and creativity are measurable by one-dimensional metrics, if art and human endeavors are static forms whose rules and objectives do not change, if we reject the possibility of meaning and principle and are content with just vibes. Whether things change or evolve remains up to us. We are beginning to see what it’s like when things don’t.

Ludwig Yeetgenstein is a software engineer. You can follow him on Twitter.