There’s a tab saved in my phone’s browser cache that I can’t bring myself to close. Like that rare archaeological artifact perfectly dated by its own inscription, it reminds me every time I scroll past that there was once a world in which Hillary Clinton was given a 73 percent chance of winning the U.S. presidency as late as 9:15 p.m. on election night. I remember my dread, tracked by reddening charts like a fever, and panic-inducing live forecast needles, one of which ran code that included a function its programmers had named “jitter.” An onrushing, Lovecraftian “thing that should not be” was being calmly measured in real time, despite the dawning sense that no one had any idea what was going on.
It’s a world that feels somewhat alien with the benefit of nine months’ distance. During that time, we’ve come to understand much more about bots, fake news, filter bubbles, psychometric targeting, and the click economy. And yet the more we learn about the nuts and bolts of how our feeds overheated in the lead-up to the election, the more difficult it becomes to know how to proceed.
Social media encourage a situation in which lies that push harder in the direction of your beliefs will always be more compelling than the truth
One thing the election made clear was that concerns about how the public receives information can’t be reduced to solutions like knowing which sources to trust, or distancing events from their interpretation, or broadening the capacity of fact checkers to keep pace with the volume of claims. Political statements circulate today untethered from any consensus on truth because the feeling they offer us — of candidates, of conspiracies, of grievances — is more potent than what fact can offer. Social media encourage a situation in which lies that push harder in the direction of your beliefs will always be more compelling than the truth.
This structure of feeling benefits from a complicated assemblage of financial incentives and algorithmic filters, none of which was necessarily designed to deceive or divide. And yet the internet — once assumed to be an inherently democratizing force that provided greater access to information, to one another, and in turn to a more robust public sphere — seems instead to have left us with an ever deeper sense of social fragmentation. Antidemocratic sentiment is flourishing online.
But this is not new. We don’t have to look too far back for another moment when dangerous new forms of political feeling seemed to correspond to novel forms of communication that were assumed to encourage the opposite. After World War I, it became imperative to understand not only how masses of people had been driven by their governments to violence and destruction on a scale never before possible, but also the networks through which these masses understood themselves to be part of a collective whole, a public. Out of this environment came early theories of media and communication from thinkers across a variety of fields, among them journalist Walter Lippmann, philosopher John Dewey, linguist Charles Kay Ogden, and literary critic I.A. Richards. These theories weren’t necessarily in conversation with one another, but they shared a desire to understand a moving target: the politics emerging from emerging media.
With emerging media once again conspicuously reshaping political discourse, it’s worth revisiting some of these 1920s theories of communication and their concern with the basic structure of information, deliberation, and consensus in a mass-mediated public. What happens to an individual’s sense of the public when newspapers, film, and radio expand its scale from local communities to an international stage? How do changing forms of communication affect the things people try to say to one another? And how easily can new mechanisms of collective belonging be co-opted by racist and nationalist politics?
These thinkers pair uncannily prescient diagnoses of a world on the precipice of another war with equally bizarre proposals for restoring peace. But their writings offer instructive lessons for our present, as feeling has yet again replaced fact in the establishment of political realities.
Walter Lippmann received his education in the manipulation of public opinion as an officer with the American Military Intelligence Branch during World War I. A journalist who had founded the New Republic and would later win a Pulitzer for his work as a nationally syndicated columnist, Lippmann was at constant odds with another journalist cum propagandist, George Creel, who ran the rival Committee on Public Information. Both agencies were tasked with drumming up public support for the war, though they differed in their methods. The success of their efforts informed two influential books by Lippmann on what would come to be called mass communications: Public Opinion (1922) and The Phantom Public (1925).
In these works, Lippmann argued that mass communication and “the manufacture of consent” had made informed, participatory democracy impossible. Democracy, in Lippmann’s view, relied on the presence of what he called “the omnicompetent citizen,” an individual voter capable of reasoned views on everything from matters of economics to ethics. But after the war, Lippmann and others began to see this sort of citizen as a myth, an impossibility. Witnessing firsthand how easily a small cadre of government-backed journalists could influence public opinion, Lippmann began to rethink the mechanisms behind democratic consensus.
Using a metaphor from printing press technology, Lippmann argued that “stereotypes” structure our understanding of reality. (Lippmann’s writings in fact gave us the modern usage of “stereotype.”) The ordinary citizen, just like the printing plate cast from a mold to allow for repeated impressions, by necessity relies on the reproduction of stock images and value-laden assumptions about the world. “We are told about the world before we see it,” he writes, “and those preconceptions, unless education has made us acutely aware, govern deeply the whole process of perception.” These “pictures in our heads” form a “pseudo-environment” in which democratic citizens participate in lieu of the actual world.
So for Lippmann, the proliferation of newspapers, cinema screens, and radio sets did not in fact create a better-informed citizenry. Modern media, each of which uniquely distilled the complexity of things down to a more manageable sense impression, enabled instead a pseudo-environment that was simply easier to manipulate.
The ordinary citizen by necessity relies on the reproduction of stock images and value-laden assumptions. These “pictures in our heads” form a “pseudo-environment” in which democratic citizens participate
Rather than relying on the whims of gullible citizens, Lippmann suggested an alternative vision of democracy managed by scientific expertise and technocratic efficiency. While light on the specifics, one of his proposals involved a massive expansion of the president’s cabinet, giving each cabinet secretary their own “permanent intelligence bureau” that could dictate policy independent of Congress. These managers and the “intelligence workers” they oversaw would make decisions far better than the public ever could, given its inherently limited awareness of its own self-interest.
Lippmann’s proposals found echoes across a surprising range of political ideologies in the U.S., from Thorstein Veblen’s argument that engineers were best capable of governing in an advanced industrial society and should form a “soviet of technicians,” to the short-lived 1930s Technocracy, Inc. movement, whose bombastic radio broadcasts, black uniforms, and red armbands eventually alienated its followers during the rise of fascism across Europe.
The force of these ideas was not lost on John Dewey, a polymath who served as president of organizations as varied as the American Psychological Association, the American Philosophical Association, and a student socialist group called the League for Industrial Democracy. Much like Lippmann, Dewey saw modern technology revealing, as he put it, “the fact that man acts from crudely intelligized emotion and from habit rather than from rational consideration.” But he disagreed with Lippmann’s technocratic fix, countering that a public governed exclusively by administrators and executives is no democracy at all but rather “an oligarchy managed in the interests of the few.” In a book-length response to Lippmann, The Public and Its Problems (1927), Dewey attempted to outline a different set of conditions necessary to preserve modern democracy.
Dewey described World War I as the result of local communities dissolving amid a mass-mediated public in which they could no longer see themselves or their values reflected. Held together by a steady pulse of news and information rather than “a common interest in the consequences of social transactions,” this public was far too large, too multifaceted, to serve as a tangible background for the individual life and began to collapse under its own weight.
But where Lippmann saw a political philosophy rendered impossible in practice, Dewey saw a starting point. What if we understood democracy in the broadcast era not as a contradiction in terms but rather as a framework for improving the circuits of public information? The ideal of a truly participatory democracy gave Dewey a series of guideposts for rethinking the structure of mass media so that it better fit the prerequisites of a healthy public: The public needs a means of communication through which it can find and identify itself as a collective whole. It needs to participate in its own representation from below rather than from above, to make a portrait out of the collective activity of all its individual members. And it needs a means of understanding precisely how that exchange among its individual members is facilitated.
“The essential need, in other words, is the improvement of the methods and conditions of debate, discussion, and persuasion,” Dewey wrote. “That is the problem of the public.”
Instead of a public managed by technocratic experts, Dewey emphasized the forms of expertise, the “embodied intelligence,” belonging to everyday citizens on their own terms. In a well-informed democratic public, it is not necessary that each member possess the “omnicompetence” required to carry out the operation of every aspect of the polity. In an early plea for something that we might today call information literacy, Dewey argued that it is more reasonable to endow people with “the ability to judge of the bearing of the knowledge supplied by others upon common concerns” — in other words, to evaluate the expertise of others.
Using a technology, for instance, doesn’t mean you need to know how to build that technology or even understand how it works. “A mechanic can discourse of ohms and amperes as Sir Isaac Newton could not in his day. Many a man who has tinkered with radios can judge of things which Faraday did not dream of.” Similarly, the act of scrolling through a social media feed sits atop decades of accumulated forms of social and technical expertise. The challenge, as Dewey saw it, lies in finding a way for that isolated experience of receiving information about the world to inform a critical distance from the picture of things being presented. That is, the context in which a story appears — in a newspaper, in a tweet, as a news alert — should offer a lens on the story itself.
Dewey believed that public discourse more closely aligned with a shared sense of meaning could ensure democracy’s survival. But others were far more skeptical about the prospect of meaning ever being accurately communicated from one mind to another, let alone shared among an entire public. These thinkers would argue that words could not be relied on as a means of communication and proposed far more radical changes to the structure of language itself in hopes of preventing further global conflicts.
In 1903, Lady Victoria Welby, an important early figure in semiotics, published What Is Meaning? Studies in the Development of Significance, which introduced a new science that she named “significs.” At the core of significs was her contention that linguistic confusion was everywhere. The moment a thought left a person’s mind as language, that thought was mangled by misleading metaphors and imprecise forms of expression. This problem was compounded by a rapidly changing modern world that outstripped language’s ability to describe what was happening: “Language displays a disastrous lack of power to adapt itself to the growing needs of experience,” she wrote.
During the rising tensions that made the outbreak of World War I seem inevitable, an undergraduate named Charles Kay Ogden began a long correspondence with the 73-year-old Welby, whose work had captured his attention and would direct the future course of his career. Ogden warned in one letter that “differences in language make war possible” but added his hope that some form of “symbolic language could unite sense and meaning.”
After the war, he attempted to fulfill that promise in a collaboration with literary critic I.A. Richards, The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism (1923), a book that drew on Ogden’s correspondence with Welby as a starting point. In that work, Ogden and Richards argued that words are imprecise instruments bearing only a tenuous relation to the things they represent. By clarifying the socially agreed-on ties between words and their range of meanings, they proposed that there would be less room for unintended misunderstanding or deliberate misprision when sharing information about the world.
Today, it’s almost as if the public is constituted anew every time we refresh our feeds through a series of invisible algorithmic decisions
Ogden and Richards’s effort to restore words and things to some prelapsarian state of synchrony continued with their creation of “Basic English,” a controlled language consisting of 850 words and, instead of verbs, 10 “operators” that require minimal conjugation. This “system in which everything may be said for all the purposes of everyday existence” was designed to minimize the ability of governments ever again to manipulate the public through propaganda and other abuses of language. But Ogden and Richards also held up Basic English as a new international standard, especially in China, where Richards actively promoted it during his many stays there. It was meant at once as a deterrent to war and a tool of Western expansion. The supposed solution to propaganda was itself a form of propaganda implemented at the level of form rather than content.
Umberto Eco described Ogden and Richards’s desire to distill each of our words down to a single, socially agreed-on meaning as their project’s “therapeutic fallacy,” as if tying each of our words to a universal essence could serve as some perverse talking cure for social discord. The words we use can be ambiguous, and it’s true that many professions (law, for example) have found ways to fix meaning with precision out of necessity. But it doesn’t follow that a similarly applied science of language is possible or even desirable when it comes to everyday language. Ambiguity provides one of our most interesting channels of meaning. Romantic relationships, storytelling, disagreement: all rely on the play of nuance and allusion, and no form of “linguistic therapy” could ever cure us of these constitutive imprecisions.
Given the seeming weightlessness of information today, it often feels as if we’re simply pinging one another for the fact of connection. Publics assemble not around shared awareness of common concerns but by algorithmic fiat. Language is optimized for the benefit of search engines rather than interpersonal clarity. And articles float by eliciting less curiosity about their actual content than the filters that decided to drop them in our feeds to begin with. All too often, politics is what happens when we skim these fortuities for connections that make us feel a certain way.
We can in fact think of the content of our politics in the terms of a later paradigm of media theory. After World War II, cybernetics researchers began working to define “information” as a mathematical quantity independent of meaning or semantics, truth or value. Information, for the military technologists turned computer scientists Claude Shannon and Norbert Wiener, was simply the measure of a message’s faithful reproduction from one point to the next. But there is a clear danger in building emotional investments on top of these mere echoes. Instead of a universal consensus around what can be considered “factual” or “true,” information is increasingly submitted to a unidirectional skepticism, in which we are only willing to question the factual basis of what we already believe not to be true.
The models provided by our proto-theorists of communication in the 1920s point in quite a different direction. Between the wars, the question of information was about defining the basic units of connection made possible by emerging media, from civic participation all the way down to signification itself. All of this hinged on better understanding when and how certain forms of communication bore value and meaning in order to actively shape the connective tissue of newly networked publics.
Though these proposals for better-informed civic discourse address problems that were endemic to very different kinds of networks, they resonate with contemporary conversations about social media, the citizen, and the public, albeit in oblique ways. With the micro-dosing of political affect we now get from glancing at our feeds, Dewey’s hope that mass media might be re-engineered to facilitate civic engagement faces a far more significant challenge. Nor can we hope that any form of machinic intervention like Ogden and Richards’s Basic English will automatically reverse the deepening of social divisions.
But when Mark Zuckerberg, in a 6,000-word apology disguised as a manifesto, introduces Facebook’s post-election mission to “develop the social infrastructure for our communities,” we know exactly why we should be suspicious. Let’s set aside the irony of Zuckerberg’s suggestion that polarization and sensationalism can be solved with just a few tweaks to the algorithm so that it favors “good in-depth content.” Facebook makes money hand over fist by atomizing our attention and limiting our engagement to trivial forms of user interaction.
We should ask instead what becomes of democracy when the possibility of a public is framed as a “social infrastructure.” For Dewey, democracy is a form of communication between people who implicitly understand themselves to share something that makes communication meaningful. Meaning is created in that space of exchange. Today, it’s almost as if the public is constituted anew every time we refresh our feeds through a series of invisible algorithmic decisions. Further, our individual habits and proclivities end up actively shaping the way that public appears to each of us, for instance through suggested stories or targeted ads. Democracy in the broadcast era was for Dewey “the task before us.” In the wake of the 2016 election, we will have to update the terms of that task so that we might locate the common desires that continue to draw us all to that space of exchange.