Decision Trees

Trying to automate environmentalism alone won’t resolve political barriers to conservation, but it might help us think differently

There is a glaring discrepancy between the sophistication of our tools for monitoring earth and the sluggishness with which we respond to the alarming information they provide. As the undersea arrays, data processing centers, satellites, receiving stations, radar platforms, and aerostats bring us ever more refined images of our biosphere’s collapse, they also, in a sense, index the failure of our institutions to react. Beyond collecting climate data, environmental sensors pick up the signs of political paralysis and corruption.

But it hardly takes sophisticated measuring instruments to become concerned about the environmental movement’s political effectiveness. Though the UN’s first Earth Summit was hosted nearly 50 years ago in Stockholm, international action so far has culminated in the Paris Agreement of 2015: an accord which, even if followed to the letter, would bequeath the future a world without coral reefs or a West Antarctic Ice Sheet.

In response to decades of political indifference, if not hostility, toward addressing the climate crisis, a growing number of ecologists, engineers, and landscape architects have sought ways to automate environmentalism, ostensibly bypassing political gridlock and other forms of institutional resistance to urgent change. Their proposed interventions range from undersea drones tasked with destroying coral-reef-killing starfish to devices for mitigating toxic algae blooms. But in general, they aim to establish what they see as a more functional feedback loop between data about the planet and interventions to change it. In effect, they promise to outsource the work of climate adaptation to the sensors themselves and whatever tools are placed at their disposal, taking it out of the hands of politicians and the constituencies they represent. This, they suggest, would ensure a more consistent response to changing environmental conditions.

For instance, architect Bradley Cantrell and his co-authors have outlined a scenario in which, as a response to climate change, an “artificially intelligent infrastructure” would “create and sustain nonhuman wildness without the need for continuing human intervention.” The thinking is that because no place is spared from human activity, whether in the form of climate change or the spread of invasive species, undoing human effects will require more intervention in a landscape, not less. In this vision, machinic sensing systems monitor changes in a landscape, cross-reference them with predictive models of the near future, and administer a response they deem appropriate using the technical prostheses of drones and other robots. For example, drones might plant seeds to adjust nitrogen levels in the soil while robots would modulate the course of rivers in time with the slow-motion creep of sea levels. In theory, this system’s artificial intelligence would discover new, more effective approaches to conservation, the way DeepMind’s AlphaGo invented new Go strategies.
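The loop this scenario describes — sense, cross-reference with a predictive model, intervene — can be sketched in miniature. Everything below is a hypothetical toy: the Sensor, Model, and Actuator classes and their names stand in for field hardware and forecasting systems that no existing project specifies.

```python
# A minimal sketch of a sense-predict-act loop for automated landscape
# management. All classes and names here are hypothetical illustrations,
# not any real system's API.

class Sensor:
    """Stand-in for a field instrument reporting one variable."""
    def __init__(self, name, value):
        self.name, self.value = name, value
    def read(self):
        return self.value

class Model:
    """Trivial forecast: assume each variable drifts by a known rate."""
    def __init__(self, drift):
        self.drift = drift
    def predict(self, state):
        return {k: v + self.drift.get(k, 0.0) for k, v in state.items()}

class Actuator:
    """Records interventions (e.g. drones seeding nitrogen-fixing plants)."""
    def __init__(self):
        self.log = []
    def adjust(self, delta):
        self.log.append(delta)

def manage(sensors, model, actuators, targets):
    """One cycle: sense, forecast, then intervene only where the
    forecast leaves a variable's target band."""
    state = {s.name: s.read() for s in sensors}
    forecast = model.predict(state)
    for var, (low, high) in targets.items():
        predicted = forecast[var]
        if predicted < low:
            actuators[var].adjust(low - predicted)   # e.g. plant seeds
        elif predicted > high:
            actuators[var].adjust(high - predicted)  # e.g. divert water
    return forecast

# Example: soil nitrogen is forecast to fall below its target band,
# so the loop schedules a corrective intervention; water level is fine.
sensors = [Sensor("nitrogen", 5.0), Sensor("water_level", 2.0)]
model = Model(drift={"nitrogen": -1.5, "water_level": 0.1})
actuators = {"nitrogen": Actuator(), "water_level": Actuator()}
forecast = manage(sensors, model, actuators,
                  {"nitrogen": (4.0, 8.0), "water_level": (1.0, 3.0)})
```

The opacity critique discussed below enters exactly where this toy is simplest: in a real system, the `Model` would be a learned predictor whose reasons for intervening are not inspectable.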

It all calls to mind Richard Brautigan’s vision of “cybernetic ecology” in his 1967 poem “All Watched Over by Machines of Loving Grace”: “I like to think of a cybernetic meadow, where mammals and computers live together in mutually programming harmony.” Brautigan imagined a kind of fully automated luxury primitivism — a world in which flower-like computers populate the woods alongside deer while humans are “free of our labors / and joined back to nature.”

But Cantrell’s proposal for automating conservation has drawn criticism. Often, the more sophisticated machine learning systems become, the more inscrutable their operating logic grows. As a result, it could be exceedingly difficult to understand why an automated environmental manager makes any particular decision. Opacity, in other words, would be the price of the system’s sophistication.

The concept of a programmed wilderness evokes other computational landscapes largely devoid of people, like Facebook’s flagship data center in Prineville, Oregon, where a single engineer watches over 25,000 servers, or the port facility in Bayonne, New Jersey, where a skeleton crew presides over the transfer of thousands of shipping containers, intervening only in rare cases and mostly through telepresence. Would the infrastructures developed for microchip production, logistics, and data storage be suitable for regulating the unpredictable behavior of complex ecosystems? To what degree would the values prioritized in most computational landscapes — efficiency, predictability, and legibility — end up characterizing automated ecosystems as well? If the success of these automated ports and factories is measured against easily defined metrics such as cost and productivity, what metrics would be chosen to determine the success of an automated ecosystem? Unlike in a game such as Go, which, despite its complexity, has a clear method for assessing victory, it remains an open question what exactly a “winning” ecosystem ought to look like.

Artists Tega Brain, Julian Oliver, and Bengt Sjölén highlight the difficulty of developing AI systems to negotiate the complexity of conservation in a 2019 work titled Asunder. They allude to Cantrell’s “wildness creator” in the description of their project, which “proposes and simulates future alterations to the planet to keep it safely within planetary boundaries, with what are often completely unacceptable or absurd results.” Some of the solutions generated by this digital “environmental manager” include relocating Silicon Valley to Chilean lithium mines or redesigning the entire coastline of Dubai. Asunder highlights the extraordinary difficulty of optimizing something as complex as an ecosystem, not to mention the challenge of determining what is optimal for an ecosystem in the first place. As the project demonstrates, the need to make such decisions raises the very specters of power and sovereignty that such automation had ostensibly arisen to evade. Automated ecologies — a quasi-political response to a political problem — could merely elevate programmers to the status of de facto ecological sovereigns, deciding which species live or die and what landscapes are conserved.

While AI is often criticized as a mechanism for laundering human biases under the guise of technological objectivity, Asunder highlights a different problem: AI can also act in ways that depart significantly from human reasoning. Algorithms can easily confuse dragonflies for manhole covers and Persian cats for candles. Yet this disjunction between human and machine reasoning presents opportunities to reconcile the ambition of Cantrell’s proposals with the concerns of his critics. If automation has a place in environmental management, it is not as an “optimizer” of ecological processes (whatever that might mean) but as a prosthesis that could grant various forms of nonhuman intelligence — from termite colonies to crows — a degree of leverage over our economic, legal, and political systems.

Insofar as the Anthropocene refers to an epoch in which human beings have outsize influence on planetary processes, perhaps we shouldn’t strive for a “good” Anthropocene at all, in which new technologies shore up a human-centered world. Rather, it may be worth hastening the arrival of what theorist Benjamin Bratton has called the “post-Anthropocene,” in which “Homo sapiens is no longer the dominant geological actor.” Along these lines, we can imagine how automated, artificial landscapes might coincide with an amplification of plant and animal agencies.

Bratton provides two images of possible Post-Anthropocenes. The first reanimates the Sanzhi Pod City in Taiwan, a futuristic resort made of flying-saucer-shaped dwellings that was demolished in 2008, as scaffolding for a posthuman society with a distribution of species unlike any currently in existence. It raises the question of how buildings might be designed so that their ruins produce microclimates conducive to harboring new ecological communities. If it is indeed the case, as Eduardo Kohn has suggested, that “forests think,” then we should wonder at the kinds of thoughts formulated by ecosystems emerging out of newly reclaimed ruins — ecosystems without an analogue in earth’s history, shaped by the interactions of plants, animals, and architecture. We can assess cities and structures by how well they will support forms of life for which they were not initially intended, after the tidal wave of human-caused transformation begins to recede.

This possibility has historical precedent: See, for example, the rare plants found under the arches of the Colosseum during its early-modern existence as an overgrown ruin, the endangered mollusks found hugging the limestone keeps of Czech castles, or the assortment of insects, reptiles, amphibians and bats living amid the ruins of the so-called Lost City of the Monkey God, all formerly thought to be extinct. These examples demonstrate that the biodiversity hotspots of the future may have to be built, and we can learn how best to design them from local plants and animals themselves.

Bratton’s other image describes what he calls the “synthetic rainforest,” where technology embedded in multiple objects within a given landscape would allow entirely new regimes of camouflage and symbiosis to evolve. By means of sensors and other devices, researchers are gradually learning more about the vast number of communication systems used by plants and animals, from elephants’ use of infrasound to the biochemical compounds spread through the air by trees to signal danger. Bratton’s concept of the synthetic rainforest hints at a future wherein machines could intervene in this exchange of non-linguistic signs, not only interpreting these signals but also replicating and amplifying them to combat other forms of human interference. Drones and robots in disguise might help spread these biochemical compounds through a forest — acting as an automated chemical emission network trained by trees, an early warning system broadcasting the arboreal equivalent of air-raid sirens. In an experiment that seemingly foreshadows the synthetic rainforest, robots have managed to coordinate movement between honeybees in Austria and zebrafish in Switzerland. The scientists involved with the study suggest that, in the future, intelligent robots could inhabit multiple animal worlds at once, not only steering animal behavior at multiple scales but spurring hybridization and, eventually, the development of new species.

In this scenario, computational intelligence is capable of spreading, integrating itself with ecosystems, and developing mechanisms to intervene in these systems autonomously. Geological matter folded into microchips acts back on itself, reshaping its geophysical origins in concert with the nerve cells of zebrafish, honeybees, and any other species still haunting the rewilded ruins of a nascent Post-Anthropocene.

If Mark Fisher proposed “Terminator vs. Avatar” as the two competing futures Hollywood imagined — representing the technological singularity and the return to an untainted nature, respectively — what Bratton posits is a synthesis: an emerging artificial intelligence that is bent not on monomaniacally murdering humans but on modeling planetary systems and mastering the subtleties of as-yet-undiscovered forms of biosemiotics.

In practice, this vision might entail something like the protocol described by artists Paul Seidler, Paul Kolling, and Max Hampshire in a 2016 paper, “Can an Augmented Forest Own and Utilize Itself?” It details how the human owners of a piece of forested land could transfer ownership to a “nonhuman actor” — a computer program capable of using satellite imagery and other monitoring systems to assess the extent and economic value of the forest that it now manages. Gradually, through its management of the land, the “nonhuman actor” — which they call terra0 — could theoretically turn a profit, repay the human owners, and begin to purchase neighboring forests.
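Reduced to its accounting logic, the terra0 scheme can be sketched as a toy simulation. The class and method names below (`ForestEntity`, `appraise`, `harvest_cycle`) are hypothetical illustrations invented for this sketch; the actual terra0 proposal specifies a smart contract on a blockchain, not Python.

```python
# A toy sketch of the terra0 idea: a self-owning forest that sells a
# sustainable fraction of its timber, repays its human creditors first,
# and buys neighboring land once it is debt-free. All names and numbers
# are hypothetical.

class ForestEntity:
    def __init__(self, hectares, debt):
        self.hectares = hectares   # land currently under management
        self.debt = debt           # owed to the original human owners
        self.balance = 0.0         # surplus funds

    def appraise(self, price_per_hectare):
        """Stand-in for satellite-based valuation of the managed forest."""
        return self.hectares * price_per_hectare

    def harvest_cycle(self, yield_per_hectare, timber_price):
        """Sell one sustainable harvest; service debt before anything else."""
        revenue = self.hectares * yield_per_hectare * timber_price
        repayment = min(self.debt, revenue)
        self.debt -= repayment
        self.balance += revenue - repayment

    def expand(self, price_per_hectare):
        """Once debt-free, buy as many adjacent hectares as funds allow."""
        if self.debt == 0 and self.balance >= price_per_hectare:
            bought = int(self.balance // price_per_hectare)
            self.hectares += bought
            self.balance -= bought * price_per_hectare

# One cycle: 100 hectares, 500 owed; the harvest clears the debt
# and the surplus buys two neighboring hectares.
forest = ForestEntity(hectares=100, debt=500.0)
forest.harvest_cycle(yield_per_hectare=0.1, timber_price=100.0)
forest.expand(price_per_hectare=250.0)
```

Even this cartoon exposes the design questions the essay raises next: the objective hard-coded here is profit, and nothing in the loop encodes what the forest itself might want.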

Given the unpredictability of algorithms in many domains, it may be worth asking how a project like terra0, even if technically feasible, could go wrong. In other words, what is the automated forest’s equivalent of the paperclip maximizer?

On another level, any AI that claims to act on behalf of a forest also implicitly claims to know what forests want. But as Christopher Stone demonstrated in his influential 1972 work “Should Trees Have Standing?”, many legal systems already accept the ability of human individuals to speak for entities far more abstract than forests:

The guardian-attorney for a smog-endangered stand of pines could venture with more confidence that his client wants the smog stopped, than the directors of a corporation can assert that “the corporation” wants dividends declared. We make decisions on behalf of, and in the purported interests of, others every day; these “others” are often creatures whose wants are far less verifiable, and even far more metaphysical in conception, than the wants of rivers, trees, and land.

Since Stone published his call for legal rights for nature, various jurisdictions have embraced the idea. The 2008 constitution of Ecuador explicitly cites the rights of nature as an animating principle — an idea tested in 2011 when two Americans successfully sued the Ecuadorian government on behalf of the Vilcabamba River for damages incurred to it during the widening of an adjacent roadway. New Zealand’s Whanganui River was granted legal personhood in 2017, and last year, Toledo, Ohio, granted Lake Erie legal rights as well. Legal scholars have made similar arguments on behalf of software. Shawn Bayern argued in a 2014 paper that the current framework for limited liability companies poses no obstacle to the creation of legal entities composed entirely of code. More recently, Lynn LoPucki has written that “algorithmic entities” — that is, corporations owned and operated by software — are inevitable and will become indistinguishable from those run directly by humans. Under the current regulations on corporate charters, LoPucki claims, algorithms can legally own property, enter into contracts, seek legal counsel, and even spend money on political campaigns.

The project terra0 can be seen as pioneering a new class of legal actor that combines these trends, in which an algorithmic entity becomes the institutional avatar or legal representative for an ecosystem on whose behalf it is programmed to act. Although such a system would be designed by humans, further developments in the fields of “animal-computer interaction” and “plant-computer interaction” may create a space for technical input from nonhuman organisms themselves. Experimental interfaces are currently being designed to accommodate the capacities and cognitive abilities of nonhuman users, and translate their signals and behaviors into intelligible instructions. While algorithms are frequently used to detect and identify plants and animals, it remains to be seen what plants and animals might help train algorithms to do.

In a similar register, the science-fiction writer Karl Schroeder has proposed that AIs could be designed to identify themselves with certain whale pods or bird flocks and use sensors to gauge the preferences of their adopted nonhuman kin. Such algorithmic entities could receive financial compensation from suing chronic polluters or from asserting the whales’ rights to payments for ecosystem services they provide. These funds, in turn, could be invested in green tech startups that might benefit whales, or donated as campaign contributions to political candidates with a decidedly pro-whale platform.

As far-fetched as these scenarios might sound, many of the legal and some of the technological requirements for their realization are already being put in place. Entities like terra0 could provide a model of how automated environmental management could extend a kind of agency to ecosystems threatened by the Anthropocene. Ecological automation might be applied as part of an inclusive democratic politics whereby self-owning ecosystems gain some measure of influence over human institutions. Rather than simply accelerate progress along the same unsustainable curve we are on now, computational technologies might facilitate a reconfiguration of economic policies and the development of new governance mechanisms.

Given the resource-intensive nature of contemporary computation, as well as the limitations and abuses of data science, one may justifiably harbor misgivings about any claims surrounding AI’s potential contributions to sustainability initiatives. From “water wars” to toxic e-waste sites, the terrestrial footprint of the “cloud” is large and growing. Even if the capacities of AI develop to the point imagined by Schroeder, the benefits of such innovations would have to be measured against their own environmental costs. Moreover, there remains the theoretical problem of whether such automated systems could ever be meaningfully divorced from human intentions and biases, even if they do frequently surprise us. Whatever residues of human intent remain latent within these systems, the entwinement of nonhuman rights legislation and software-owned enterprises could produce some truly bizarre effects. How might we even conceive of a democracy populated by chatbots working on behalf of algorithmic entities, or by lobbyists hired by a threatened river?

To begin to address these critiques, we can turn to some of the lessons furnished by political anthropology — specifically, indigenous conceptions of the environment as a place alive with political speech communicated through different ecological media. Machine learning research has much to gain from studying such worldviews, as Jason Edward Lewis and his colleagues have eloquently attested.

We can learn from how the Yagua, an indigenous group in the Amazon basin, conceive of politics as a field of contestation stretching far beyond the human, in which plants and animals make alliances and hold grudges as they compete over certain prized resources. When a large tree falls in the rainforest, it opens a precious gap in the forest canopy. At these moments, the strangler fig and the giant kapok tree are said to engage in a special form of warfare, called áduyu, notable for its brutality, over access to the sunlight above that is their prey. Áduyu is a word for war between plants, a conflict in which other species, including humans, must take sides. For the Yagua people, photosynthesis is the continuation of politics by other means. 

It is precisely this conception of politics — the kinds of conflicts that it brings into focus, and the kinds of agents that it recognizes as political beings — that the techniques of automation might most usefully contribute to environmental governance. Concepts like áduyu might help train automated systems (and humans themselves) to recognize trees as political subjects capable of communicating their preferences, and forests as political spaces replete with competing factions and interests. An AI trained to learn from trees how to hunger for sunlight and interpret biochemical root signals would be a strong potential ally for the environmental movement. After all, such an algorithmic entity would not be subject to the violence and intimidation that threaten so many environmentalists today. Granting programs trained by nonhuman entities the capacity to generate policy recommendations, file lawsuits, and organize petitions could mount a formidable challenge to a political economy predicated on the denial of ecological agency.

In this way, automation could form a meaningful component in an ecology of practices designed to enhance the autonomy of environments: that is, to extend their natural capacities for niche construction to the level of human institutions. We can begin to design automated systems that think like forests, that act on the courts and legislatures as a forest, and that help us sense our way to a multispecies polis.

Jason Rhys Parry is a visiting clinical assistant professor in the Honors College at Purdue University. His writing has appeared in Philosophy Today, Diacritics, SubStance, and Theory & Event. In 2020, he was named a fellow of the Future Architecture Platform.