March 1, 2019

Execution Risks

In its most recent issue, Logic magazine ran an interview with an investment banker who specializes in algorithmic trading that shed some light on what makes data financially useful. It might seem like accuracy — data that allows for a more complete representation of the world — would be all-important, but data’s novelty or secrecy is often more relevant than its verifiable truth. The trader notes that there are massive amounts of new data available, and new techniques to synthesize that data, but much of it “may not add any information more useful than what is already available to market participants from the vast streams of data on prices, companies, employees, and so on.”

Useful information, in this context, creates knowledge asymmetries that allow one party to take advantage of another in a zero-sum environment. Whether the information is or will remain accurate is irrelevant; what makes it effective is the timing of who knows it and the existence of a market that it can move. The incentive in gathering and recombining increasing amounts of data — including the data that social media and other similar modes of surveillance generate, the information about who we talk to and what motivates us to talk, about what connections sustain us through everyday life — is not to produce a more complete picture of a shared world; it’s to create proprietary advantage, and ignorance and confusion among counterparties, who will learn only too late what no longer matters, as the advantage will have been arbitraged away.

In aggregate, this incentive means that more data leads to a more opaque and insecure world, a greater sense of vulnerability, an increased likelihood that one will be taken advantage of because of something one doesn’t know. No level of paranoia seems unjustified. It also incentivizes the collection and synthesis of precisely the data that creates vulnerability, that intensifies the volatility of our understanding of the world. In markets primed to reward intelligence gathering, only information with an expiration date can be capitalized on. So the only knowledge worth producing is that which you know will expire.

At the same time, it creates a total dependence on “efficient markets” as the only mechanism that can clear up the manufactured confusion. Nothing is “true” until it produces a winner or loser in a trade; its “truth” is its capacity to inflict harm. As the trader points out, this “fallacy” contributed to the 2008 financial crisis: If everyone believes that “you could rely on the market to efficiently incorporate all available information about the bond,” then “all you need to think about is the price that someone else is willing to buy it from you at or sell it to you at. Of course, if all participants believe that, then the price starts to become arbitrary. It starts to become detached from any analysis of what that bond represents.” That is, the market’s processing of information doesn’t produce an accurate picture of the world and its productive capacities but constitutes a hermetic war zone in which information can be tactically manipulated.

New forms of algorithmic trading, the trader warns, could perpetuate this: “If new forms of quantitative trading rely on assumptions of market efficiency — if they assume that the price of an instrument already reflects all of the information and analysis that you could possibly do — then they are vulnerable to that assumption being false.” Feeding more data into algorithms and trusting them to produce “truth” by fighting each other in markets would seem likely only to sow more confusion; every piece of data these algorithms touch will be converted from a potential representation of shared reality to an arbitrary move in a wargame.

In Nervous States, William Davies argues that our contemporary condition can be understood in these terms: The Enlightenment project of using data and statistics to stabilize conflicts by presenting a shared, agreed-upon representation of the world has been rejected in favor of a militaristic approach to information that seeks only “intelligence” that can be used to coordinate forces and defeat “enemies.” This rejection has stemmed in part from the recognition that the experts entrusted with producing statistics were not capable of being as objective as they claimed, and that their methods reinforced status quo distributions of status and power and represented them as natural. As Davies points out, “the privilege of knowledge and that of wealth reinforce each other: highly educated consultants, lawyers, and investment analysts are also the main beneficiaries of capitalism.”

But in Davies’s view, the rejection of expertise as a reasonable limit on knowledge also derives from technocrats’ failure to account for people’s feelings of fear and insecurity or the increasing inequality that made for uneven distribution of economic and emotional pain. This disconnect has made it easier for experts’ political enemies (or simply those who have more to gain from a chaotic world) to characterize them as detached elites, producing information that is no less self-interested and no more valid than any of the alternatives. Those alternatives often seek to convert fear and insecurity into resentment, sharpen social divides and conflicts, and replace common ground with a battleground. These conditions vastly expand the “usefulness” of knowledge, in the sense that it can more readily be used against people. Under these conditions, Davies notes, “intelligence is really a resource to be hoarded, not unlike physical weaponry and equipment, rather than a set of facts to be shared. Above all, it is a source of competitive advantage, which aims for our survival and their destruction.”

This view of the role of information, Davies argues, is intensified by digital communication: “The promise of expertise, first made in the 17th century, is to provide us with a version of reality that we can all agree on. The promise of digital computing, by contrast, is to maximize sensitivity to a changing environment.” That promise in turn depends on an increasingly volatile world to maximize the value of real-time sensors. Causality inverts, with sensors and other forms of surveillance producing the sort of volatility they are built to capitalize on rather than passively recording what is occurring. Since this surveillance extends into the social sphere at all levels of intimacy, it points to a systematic destabilization of social relations through the practice of invasive and intensive monitoring. This destabilization replaces the possibility of a shared reality across social groups with the mobilization of groups to produce competing realities that can be exploited or securitized.

This is bound up with tech companies’ aspiration of providing “social infrastructure” — this, Davies argues, is the ambition to which all their data collection tends: “Silicon Valley is not seeking to create an accurate portrait of society, but to provide the infrastructure on which we all depend, which will then capture our movements and sentiments with the utmost sensitivity.” So the data collected and put to use through these companies’ platforms will be directed toward producing a reality in which we can only conceive of actions or sentiments through their tools. We won’t know how we feel about something we’ve done unless we avail ourselves of their networks. Hence: Pics or it didn’t happen. Hence: Liking as something that requires a digital document and a digital interface. There are countless times when I’ve photographed something and posted it online to see whether it was “likable.” Liking things outside the “social infrastructure” has become cumbersome and uncertain; it doesn’t register as information, it can’t factor into the calculations that matter, that produce advantages.

It was once the case that shared reality was necessary for commerce: “A valid representation of reality, whether provided by merchants, scientists, or natural philosophers, was one that would facilitate agreements between strangers,” Davies writes. But that trust-based system has been superseded by one that relies not on trust but on direct surveillance. This opens the space for a different approach to profit-making. “What emerges in the context of modern warfare and corporate strategy is less a basis for a social consensus than tools for social coordination.” Davies links that to political demagoguery, but one sees the effects of “coordination” across automated, algorithmically driven information systems too. These break down trust, marketing niche epistemologies to different subsets of the population and using feedback loops to further polarize and radicalize their worldviews. This intensifies distrust — in experts by and large as well as among rival groups — and creates more asymmetries and zero-sum competitions, more reliance on secrecy and tactical information, more dependence on markets as arbiters.

Analyzing the influence of von Mises and Hayek on information theory, Davies points out that in their view, “the market is therefore a type of ‘post-truth’ institution that saves us from having to know what is going on overall. It actually works better if we ignore the facts of the system at large, and focus only on the part of it that concerns us.” This view is precisely the same one that produced the financial crisis, and it also characterizes the attitude toward Big Data (who needs to understand causation when you can operationalize correlations?). Not knowing why something works protects the mechanisms that make it work; it preserves the asymmetry.

The trader interviewed in Logic noted “a fairly big split between people who have concluded that explainability is holding back the advancement of [algorithmic finance], and the people who hold on to the rather quaint notion that explainability is important.” But the debate is resolved by default by analyzing data at a scale and scope that defies explainability. The point of explanations, then, is not coming up with an accurate understanding of what is going on but, as the trader concedes, coming up with a plausible story for marketing purposes: “if you can sustain a story for why your technique is superior, you can manage assets for a long time and make a ton of money without having to perform well.”

A similar logic is at work in the advertising business: Algorithmic targeting, which relies on a behaviorist psychology that treats people as passive objects for analysis, is a good story capable of justifying the money being poured into it. The self becomes one of the objects about which asymmetrical knowledge can be created — no true self, only a secret self, and that secret is held by the tech companies and data brokers. Advertising markets make that information operational, valuable.

If we are the commodity being bought and sold on these markets, then the parties to that exchange have reason to generate as much misleading or confusing knowledge about us as possible, seeking advantage in those trades. This is why “surveillance capitalism” works not by actually predicting or controlling our behavior but by allowing different parties to make bets about what we might do. Surveillance doesn’t prescribe behavior; it frames relevant potential wagers, the real-life equivalents of the Super Bowl coin toss, or who will catch the first pass. Will this user click? What site will they go to next?
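To make the wager structure concrete, here is a minimal sketch of how a real-time ad auction can be read as a bet placed on a user. It assumes a simplified expected-value bidder; the names and numbers are illustrative, not any platform’s actual API.

```python
# A minimal sketch of the "wager" in real-time ad bidding, with
# assumed, illustrative numbers (not any ad platform's actual API):
# each bidder stakes money on the probability that this user clicks.

def expected_value_bid(p_click: float, value_per_click: float) -> float:
    """A bidder's stake is the expected payoff of the bet that this
    user clicks: estimated probability times the payout of a click."""
    return p_click * value_per_click

# Two parties holding asymmetric beliefs about the same user:
bid_a = expected_value_bid(p_click=0.031, value_per_click=2.00)
bid_b = expected_value_bid(p_click=0.012, value_per_click=2.00)

# The auction clears on those beliefs; the user's actual behavior,
# as recorded by the tracking systems, settles who bet well.
print(f"bidder A stakes {bid_a:.3f}, bidder B stakes {bid_b:.3f}")
```

Nothing in this exchange requires knowing what the user will do; it requires only that the parties hold different estimates of it, and a market where those estimates can be staked.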

Davies, in discussing financial derivatives, argues that the instruments work not by predicting the future but by offering odds on all the different possibilities. “The economists and mathematicians who develop these instruments offer no claim about how things actually are or what will actually happen, but merely calculate the mathematical chance that they might, in order to profit from that like bookmakers … the stark implication is that there is more money to be made in what cannot be known, namely the future, than in what can.”
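The bookmaker’s arithmetic makes the point: quote odds on every possibility with a margin baked in, and you profit whatever happens, without ever claiming to know the future. A sketch, with made-up numbers:

```python
# A bookmaker's position, sketched with made-up numbers: price every
# outcome so the implied probabilities sum past 1.0 (the "overround"),
# and the book profits regardless of which outcome occurs. No claim
# about the future is needed, only a market in its possibilities.

fair_probs = {"up": 0.5, "down": 0.5}  # assumed true chances
margin = 1.08  # an 8% overround baked into the quoted prices

implied = {k: p * margin for k, p in fair_probs.items()}
decimal_odds = {k: 1 / p for k, p in implied.items()}

# If stakes arrive in proportion to the implied probabilities,
# the payout is fixed below the total take for every outcome.
stakes = implied  # e.g. $0.54 on each side of a $1.08 book
take = sum(stakes.values())
for outcome, odds in decimal_odds.items():
    payout = stakes[outcome] * odds  # always $1.00 here
    print(f"{outcome}: take ${take:.2f}, pay ${payout:.2f}")
```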

Surveillance capitalism works similarly, not by predicting our future behavior exactly but by making markets in what it might be. It makes our future behavior into a matter of probability that outside parties can have a direct vested interest in affecting. Surveillance is important to this not merely to establish the parameters of the game but to determine the outcome: who among the bettors wins, based on our actual eventual behavior, as recorded by the tracking systems. “Digital technology means that virtually any cultural trait can now be quantified,” Davies writes. This quantification becomes the basis of defining that trait within the “social infrastructure” and the means of evaluating what has occurred independent of our input.

But surveillance doesn’t produce the self itself, just the categories, the a prioris within which we work it out as a gambling game. It doesn’t “detect” our emotions but builds the system through which emotions are expressed and can be assessed as probabilities, as knowledge gaps. The self is made up of a bundle of arbitrage opportunities, in which companies can believe they know more than we do about ourselves and can exploit that knowledge to our disadvantage, until we learn what they know. We can buy back a sense of ourselves by consuming predictive products, watching the ads tailored to us, or we can work to adjust what our “self” is taken to be by contributing more data to the algorithmic systems that produce it. But the more data there is about us, the more we don’t know about how it is being leveraged against us. And after all, what we believe about ourselves is the last thing that matters to these markets; it’s the sort of information that can’t move the price on our heads.