Screen Time, Sacred Time

Demonizing immersive technology limits our conception of what it’s for

Last year, as part of the release of iOS 12, Apple introduced a feature called Screen Time, which allows users to monitor and set boundaries around their phone use. Marketed as a souped-up timer, an addiction fighter, and a surrogate parent all at once, the feature lets you set time limits on different categories of apps and displays comparative data visualizations of your daily phone usage. It also includes “family” options with which parents can control their children’s screen time remotely.

Around the same time, Facebook and Instagram also released features that provide similar metrics about usage and can give optional reminders to close the app after a user-defined period of time. Google, too, recently launched its Digital Wellbeing suite of similar features, adding the ability to combine notifications from different Google apps to decrease their noisiness and intrusiveness, plus voice-command functionality to save users from having to interact with a screen. The company claims that these features encourage the healthy integration of technology into everyday life: “So that life, not the technology in it, stays front and center.”

Screen Time construes immersion as a kind of vulnerability masquerading as a desirable form of focus

Whereas earlier time-tracking apps were developed by third parties as add-ons, these new initiatives are being developed by the major tech companies themselves and integrated directly into their products. Why, after devoting so much effort to developing products that are seamlessly, easily, and perhaps compulsively usable, are these companies suddenly working to help us mitigate that usage?

In part, they seem to have been inspired by former Google employee Tristan Harris’s “Time Well Spent” movement, since renamed the Center for Humane Technology. The perspective of the movement is that technology companies have proven too effective at developing absorbing products, transforming engagement from functionality into a form of harm and wreaking havoc in daily life. The Center’s aim is to “realign technology with humanity” — by offering users different tools to manage their impulses. It’s not hard to see how the tech companies’ new time-management features aim to address some of the concerns outlined in the Center’s “Ledger of Harms” about attention, compulsive usage, and children’s vulnerability.

As of its latest redesign, the Center has emphasized the phrase “human downgrading” — “While we’ve been upgrading our technology, we’ve been downgrading humanity,” reads a graphic from its website — adopting the language of technical releases to describe the deleterious effects it believes technology has on humanity, and positioning its “world-class team of deeply concerned former tech insiders” as uniquely equipped to solve these problems. Throughout, the Center assumes that humans are simply no match for “technology,” which is designed to prey on our natural, evolutionary instincts for the sake of scrolling, clicking, and other boosts to engagement metrics. Simply by using our phones, we supposedly surrender our agency, dissolving into immersive digital experiences.

This is fundamental to the Center’s approach: the idea that users need to be protected from immersion in the world of the phone so that they may remain in contact with the “real world.” Immersion is construed as a kind of vulnerability masquerading as a desirable form of focus. This is immersion as submersion, as if one might drown in a bottomless sea of content.

This echoes the phenomenon of falling into a “wikihole,” going from one topic to another to another until you’re lost in information. The interconnected, link-saturated structure of Wikipedia becomes a kind of digital quicksand. Platforms like Netflix and YouTube that default to autoplaying content work on a similar principle, allowing users to binge television shows or movies with no regard for how long they’ve spent doing it. And social media platforms are notorious for creating endlessly scrolling feeds of new, possibly important information that users feel compelled to keep refreshing and consuming.
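To make the mechanism concrete, here is a minimal sketch of the infinite-scroll pattern in TypeScript, an illustration of the general technique rather than any platform’s actual code; the #feed container, #sentinel element, and /api/posts endpoint are all hypothetical. Whenever a sentinel element at the bottom of the page scrolls into view, another batch of content is fetched and appended, so the feed never visibly ends.

```typescript
// A sketch of infinite scroll: a sentinel element at the bottom of the feed
// triggers another fetch whenever it becomes visible, so there is no "end."
const feed = document.querySelector<HTMLElement>("#feed")!;
const sentinel = document.querySelector<HTMLElement>("#sentinel")!;
let page = 0;

async function loadMore(): Promise<void> {
  page += 1;
  // "/api/posts" is a hypothetical endpoint standing in for any content source.
  const posts: string[] = await fetch(`/api/posts?page=${page}`).then(r => r.json());
  for (const post of posts) {
    const item = document.createElement("article");
    item.textContent = post;
    feed.appendChild(item);
  }
}

// Whenever the sentinel scrolls into view, append more content below it.
new IntersectionObserver(entries => {
  if (entries[0].isIntersecting) void loadMore();
}).observe(sentinel);
```

The design choice worth noticing is that nothing here asks how long the user has been scrolling; the only event the pattern listens for is “the user is about to run out of content.”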

Of course, immersiveness is not in and of itself automatically dangerous. Media producers, especially those working in virtual or augmented reality, also seek to create “immersive experiences,” and viewers intentionally seek out such content precisely because they want to be absorbed by the experience and removed from reality. They aren’t necessarily tricked into a trap. As someone who designs and makes apps and websites for a living, I have created my fair share of “immersive” experiences. Here’s the secret: if someone wants something on a screen to feel more “immersive,” it should be full-screen or full-width, with total focus on the content and almost everything else eliminated, including the clock and system UI within apps. These approaches are not dangerous in and of themselves; they are often intended to elevate the user’s experience of their chosen content within their digital environment.
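As a rough illustration of how little that “secret” involves, here is a minimal sketch using the web’s standard Fullscreen API (the #player and #watch-button elements are hypothetical): the content fills the display, and the browser chrome, status bar, and clock drop away.

```typescript
// A minimal sketch of "immersive" presentation on the web, using the standard
// Fullscreen API. Going full-screen hides the browser chrome and, on phones,
// the system status bar and clock, leaving only the content.
const player = document.querySelector<HTMLElement>("#player")!;

async function enterImmersiveMode(): Promise<void> {
  if (!document.fullscreenElement) {
    await player.requestFullscreen(); // content fills the display; UI chrome disappears
  }
}

async function exitImmersiveMode(): Promise<void> {
  if (document.fullscreenElement) {
    await document.exitFullscreen(); // restore the clock, status bar, and browser UI
  }
}

// Browsers only honor fullscreen requests made from a user gesture, e.g. a tap.
document.querySelector("#watch-button")?.addEventListener("click", () => {
  void enterImmersiveMode();
});
```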

What drives the concerns of groups like the Center for Humane Technology — and features like Screen Time — is the idea that immersion and compulsion are linked: that app developers deliberately try to create experiences that induce a “lean-back,” passive state, overriding the will of users who can no longer escape from their escapism.

The idea of a hard divide between the screen and reality is not consistent with much of everyday experience. But built into the very name of Screen Time is the idea that time onscreen is somehow discrete and fundamentally different from time offscreen. Time spent in a digital environment is purportedly immersive to the point where one loses all track of time; this positions time away from screens as ordinary and measurable, a common temporal experience shared among people.

The approach of time-management apps is to shock users out of immersive, digital experiences

The tension between “screen time” and “real-world” time that these features help structure echoes historian Mircea Eliade’s framework of “the sacred and the profane.” Eliade argues that historically, members of religion-dominated premodern societies split their experience of life and of time into those two discrete categories. According to Eliade, sacred time is nonlinear. It is repeatable and cyclical; it doesn’t end or change or become exhausted. It is mythological, religious, sacred, shamanistic, transcendent. It is a time before time, an “original time,” where one is situated with the divine and eternal, participating in and understanding the mysteries of the universe, of creation, of existence, of time. By contrast, profane time is innately chronological, human, and finite. It ends. And it is always structured with ends in mind.

For Eliade, the idea is not that you get one or the other — both sacred and profane time exist simultaneously, and they are directly linked. Sacred time makes profane time possible and is the paradigmatic model for profane time. The experience of linear, real-world time is only possible in opposition to sacred, nonlinear time, and diving back into the experience of sacred, nonlinear time is necessary for the continuation of linear time.

To move between these two states requires ritual markers like ceremonies, festivals, meaningful interactions with objects, or other behaviors that signal the entry into sacred time. Through this conscious action, one’s experience of time is transformed from profane to sacred.

The immersiveness of sacred time could be likened to the immersiveness of screen time. When we fall into wikiholes, aren’t we operating outside time, searching for knowledge, for a better understanding of the universe and its mysteries?

But this is not at all how the Center for Humane Technology conceives of it. Instead, the Center vilifies immersion as a technology-induced form of coercion, focusing on the negative effects this has on society.

It’s undoubtedly the case that immersive digital environments can unmoor users from meaningful markers that reflect real-world needs. There are thousands of anecdotes about people forgetting to eat meals or attend to personal hygiene in service of watching just one more video or playing one more level of a video game. But the language around immersion as a danger often frames the problem as one of productivity or optimization: if only we could overcome our “human downgrading,” we could evolve our relationship with technology to become better at work, in relationships, or in life.

From the perspective of Harris and other like-minded tech reformers, the goal is to prioritize human control and choice over experiences. The more immersive an experience is, the more detrimental it is presumed to be to our ability to take agency over our actions and our time. But this view militates against the possibility that we can develop a holistic relationship with technology, bridging immersive and real-world experiences. Instead, the solution seems to be to fragment our experiences even further, insisting we treat on-screen and off-screen as hard alternatives with strong value judgments attached (bad and good, respectively), rather than recognizing them as mutually supporting and interrelated modes. Even when the well-being language nods to ideas of balance (as in “work-life balance”), it still posits a split between digital and “real-world” experience.

The approach of Screen Time and the other time-management apps is to simply shock users out of immersive digital experiences, using notifications and alerts to return them to the “real world” of normal, “profane” time. Once you’ve been brought out of immersion, you supposedly will be able to spend your time “meaningfully.” For instance, the “about” section of Google’s Digital Wellbeing claims it exists to help users “focus on what matters most,” letting you decide what that really is but strongly implying, somewhat hypocritically, that it is not on your device. But so much of what is on our devices is intensely important to us and is part of what we seek out in our lives, and simply eliminating those experiences may risk removing us from what truly matters most.
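Mechanically, these reminders amount to little more than a timer: count usage against a user-defined budget and interrupt with an alert when it runs out. A minimal sketch, with a hypothetical 30-minute limit and message (not any vendor’s actual implementation):

```typescript
// A sketch of a "time's up" reminder in the spirit of the features described
// above: count active minutes against a user-chosen daily limit, then
// interrupt with an alert. The limit and the message are hypothetical.
const DAILY_LIMIT_MINUTES = 30; // user-defined budget

let minutesUsed = 0;
const timer = setInterval(() => {
  minutesUsed += 1;
  if (minutesUsed >= DAILY_LIMIT_MINUTES) {
    clearInterval(timer);
    // The "shock" out of immersion: a modal alert that breaks focus.
    alert(`You've spent ${minutesUsed} minutes here today.`);
  }
}, 60_000); // tick once per minute
```

Note that the interruption is the entire intervention; nothing in the mechanism marks the transition or gives it meaning, which is precisely the limitation discussed below.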

Features like Screen Time purport to make our experience of digital immersion less burdensome, but they actually make the duality between screen and “real-world” time an even more prominent part of our everyday lives. Their very existence underscores the premise that apps are effective at isolating us from reality and opening us to manipulation. And the duality they are premised on works against the idea that screen time can be elective and fulfilling and sustaining — and the idea that screens are often integrated into and welcomed as an essential part of our reality.

There are no ritual markers that can move us in and out of Screen Time with intention

While time-tracking tools may help us manage and compartmentalize our time, merely changing focus through the use of alerts does not facilitate a transition between different experiences of time. There are no ritual markers that can move us in and out of Screen Time with intention, as there are in Eliade’s conception of sacred time.

There are reasons for that: companies’ business models are often structured around the immersive state as a means toward a zero-sum interpretation of engagement, measured in advertising views and other attention metrics that are seen as directly correlated with business value. It is easy to track the value of time spent in an experience, but much harder to track the value of time spent transitioning in and out of one.

In its approach to immersiveness, the Center for Humane Technology broadly applies the same user-centric, product-design perspective that produced the experiences it deems a “problem” — and now seems to respond with a unilaterally imposed engineering solution: the compulsiveness of an app, it follows, is obviously to be corrected by reliance on another digital tool. The Center’s concern with tech companies’ business models is similarly conceived from a technologist’s perspective. It has entirely failed to acknowledge the work of academics in science and technology studies, whose decades of research have articulated how technology use is shaped by broader social forces. Instead, the Center persists in placing responsibility for changing technology in the hands of either the industry practitioners creating technology or the isolated users who must navigate its intrusion into their lives more or less alone.

Instead of continuing to battle the issue using the language and presuppositions that nurtured it, we ought to focus instead on multifaceted and interdisciplinary ways of integrating technology with humanity, which may benefit more than just those users most deeply affected by it — or the companies who stand to gain the most.

Skyler Balbus is the Director of Product Design at Postlight, a digital product studio in New York City, where she practices design as both a maker and a leader. Her work is informed by her multidisciplinary background, and she is always interested in finding new ways to bring ideas and people together.