I did not own a smartphone until last year. Before then, I’d made do with the same flip phone I’d bought in 2008. At first, refusing to upgrade seemed practical. I didn’t have the kind of job that required 24/7 email access, so why pay more for it? But the longer I held out, the more I sensed that I was holding myself back. Experiences and abilities now common for my friends — navigating on the fly, uploading pictures an instant after snapping them, illustrating chats with emojis more expressive than semicolon-parenthesis — were still, for me, the future.
This feeling of displacement was confirmed when, after I finally bought that first smartphone, I accidentally dropped it into a river. By then, I had become the sort of person who responds to emails right away, so I got another phone: the absolute cheapest smartphone I could find. After a week or so, its limitations became clear. I was traveling with a friend, and we both pulled up Google Maps to find our way to our destination. But by the time my phone had loaded my directions, she was already on her way. I might as well not have bothered.
What surprised me about my lack of patience with the phone was not so much my newfound dependence on its technology. What surprised me was how closely my frustration echoed another one with which I’ve long been familiar: Having a slower phone reminded me of having slower eyes.
I have albinism, which means I’m nearsighted to a degree that corrective lenses can’t fix. This does not mean that distant landscapes blur for me the way they might when you try on someone else’s glasses, gradually coming into focus as I approach. Instead, faraway things appear as solid forms that cannot be deciphered — the writing on a road sign is a tangle of uninterpretable lines until I stand right under it and strain for it to snap into meaning. Even with a text inches from my eyes, it takes me longer to recognize and process visual information. Because of this, I can’t drive, and I’m also fairly useless as a backseat driver: I haven’t processed the approach of an upcoming turn until it’s already been missed. Essentially, I live in a world that is loading at a slower pace than everyone else’s — a world that is constantly buffering.
With the slower phone, I felt the same sense of personal failure and frustration as I do when, at my library job, I have to get down on my knees to shelve a book that someone else could replace with a glance and a slight lean. Except with the slower phone, I could fix the problem, for a price I could afford.
That I feel a sense of failure at all shows that I’ve partly internalized what disability activists and scholars like Tobin Siebers refer to as the medical model of disability. This treats individual impairments (poor eyesight, chronic illness, paralysis) as purely medical problems to be treated or cured. As Siebers explains in “Disability Studies and the Future of Identity Politics,” this affects people with disabilities negatively in two ways: “it alienates the individual with a disability as a defective person, duplicating the history of discrimination and shame connected to disability in the social world, and it affects the ability of people with disabilities to organize politically. Since no two people with a disability apparently have the same problem, they have no basis for common complaint or political activism.”
Instead, many disability advocates and theorists promote the social model of disability, a term coined by Mike Oliver in 1983. It contends that “it is not individual limitations, of whatever kind, which are the cause of the problem but society’s failure to provide appropriate services and adequately ensure the needs of disabled people are fully taken into account in its social organization.” Oliver drew on the work of the Union of Physically Impaired Against Segregation, a group of socialist disability activists who came together in Britain in the 1970s. Their “Fundamental Principles of Disability” distinguished between individual impairments (e.g., poor eyesight) and disability, which is created by the way society excludes impaired people. For instance, in a world in which driving wasn’t the primary mode of transportation and life moved at the pace of walking, my impairment would be the same, but I wouldn’t be (as) disabled.
Thinkers like Susan Wendell have complicated the social model, pointing out how it ignores suffering not caused by social exclusion, such as chronic pain. However, the model remains helpful for underscoring how much of our criteria for what a brain or body should or shouldn’t be able to do is externally imposed and ultimately empowers no one.
In fact, disability as a unified category of exclusion developed alongside industrial capitalism. As Roddy Slorach outlines in A Very Capitalist Condition: A History and Politics of Disability, when work was performed in extended family units on the farm or in the home, family members with impairments still typically participated. Factory labor, however, mandated a standardized speed and precision of work that many people with disabilities couldn’t match, forcing them en masse into institutionalized care.
While the nature of work has changed since the industrial era and the institutionalization of people with impairments is thankfully on the wane, the link between productivity and ability has only increased. In Key Words for Disability Studies, Fiona Kumari Campbell explains how the need to integrate large numbers of disabled veterans into the workforce after the world wars meant that “from the 20th century onward, distinctions between abled and disabled bodies have been linked to notions of the productive body within particularized economies.” More and more, ability means the ability to do jobs that the economy requires.
The link between ability and productivity is so strong that, as Robert McRuer points out in Crip Theory: Cultural Signs of Queerness and Disability, the concepts are officially bound in the Oxford English Dictionary, which defines “able-bodiedness” as “soundness of health; ability to work; robustness.” McRuer uses this definition to elucidate what he calls “compulsory able-bodiedness,” the constant pressure to present and perform as able-bodiedly as possible, even as you inevitably fall short. In the context of such pressure, to be disabled is to fail. And capitalism admits no greater sin.
That I should feel similar guilt for my slow phone as I have felt about my slow eyes makes perfect sense, then: In the age of data mining and the gig economy, the internet is becoming the new shop floor. The speed and effectiveness of our devices partly determines how efficiently we work when we are on it, which is always. Upgrading one’s devices diligently thus forms a part of contemporary compulsory able-bodiedness. Having the technology that most effectively enhances the speed and efficiency with which we can access, process, and respond to information is a prerequisite for being a productive worker in the 21st century. To refuse is to fail.
By this logic, I was wrong all those years to think I needed a certain kind of job to justify a smartphone. I should have gotten the smartphone to prove I deserved the job.
As technology becomes a more intimate and integral part of how we process and communicate with the world, the intersections between access to technology and (dis)ability necessarily blur. When Campbell looks to the future of the concept of ability, she predicts that “developments in surgery, pharmacology, and other consumer technologies are rapidly transforming notions of ‘normal ability.’ Today’s ‘normals’ may end up being tomorrow’s abnormals, and what seem like hyperabilities can become standards of future ability.”
It’s easy to see that prediction play out in the language Mark Zuckerberg used in April, when he posted about Facebook’s plans to develop a technology that would allow people to type with their brains. He claimed that the technology was exciting because speech “can only transmit about the same amount of data as a 1980s modem,” whereas the new system “will let you type straight from your brain about five times faster than you can type on your phone today.” Already one can sense the pressure building: Who wants to be stuck talking as if it were the 1980s when your friends are brain-typing five times faster than you can text? How many friends will you have left if you don’t catch up? Would anyone hire you? Interviewing for a job without this technology would be like showing up to the office with a typewriter.
In A Very Capitalist Condition, Slorach points to a 2011 World Health Organization survey that found that 20 percent of the world’s poorest people are disabled. OECD countries, according to the same survey, have an unemployment rate of 44 percent for people with disabilities, despite legislation in many of these countries against discrimination in the workplace. The WHO also notes that people with disabilities are 50 percent more likely to incur medical costs that push them below the poverty line. In less-prosperous countries, people with disabilities have an even harder time accessing accommodations and resources.
Slorach cites these findings to highlight the intersections between economic class and disability, which is entirely logical considering able-bodiedness is literally defined as ability to work. As a result, disability can be self-reinforcing, a spiral in which impairment drives economic disadvantage, which then drives further degrees of disability, exacerbated by the inability to afford what is necessary to maintain employability.
If, as Campbell predicts, technological enhancements like Facebook’s brain-typing turn hyperability into standard ability, it’s hard to imagine the divide between the two not coalescing around existing class lines. Already, even the World Bank warns of a “digital divide” that risks “creating a new underclass” of the 60 percent of people worldwide who do not have access to the internet. Its proposed solution is to make the internet “universal, affordable, open, and safe.” This dovetails with the United Nations’ resolution that internet access be considered a basic human right. As more specific technological enhancements become necessary to compete in the job market, it might be possible to extend the logic of rights to the dispersal of these enhancements — to argue we have a right to brain chips as we should have a right to health care.
Widespread acceptance of this idea seems unlikely, though, at least in the United States. Health care as a right is far from agreed upon. The House’s attempt to repeal the Affordable Care Act would force millions of people with disabilities out of the workforce and into institutions because it would make substantial cuts to Medicaid, which funds services like at-home care. Clearly, then, the idea that the government should provide any service as a matter of course is anathema to many politicians, even if the central argument for extending that service is to allow more people to participate in the labor force. A better sense of how politicians might respond to the idea of technology as a right can be gleaned from Representative Jason Chaffetz (R-UT)’s suggestion that Americans pay for health care by giving up iPhones.
Even if equal access to technological enhancement were to become politically possible, however, it wouldn’t necessarily lead to equal rewards. The World Bank admits that while access to digital technologies has spread, the financial dividends reaped from that dispersion have concentrated with the already wealthy and well-educated.
Beyond that, though, a rights-based framework within a capitalist system does nothing to alleviate compulsory able-bodiedness. The idea that you must upgrade either your tech or your body — or, increasingly, both at once — for the sake of employment remains intact. The rights framework enshrines the ideal of perpetual enhancement, and implies something is wrong with you if you refuse it. This vision is made explicit in a statement by Andy Miah, director of the Creative Futures Institute: “The human enhancement market will reveal the truth about our biological conditions — we are all disabled.” The implication being that we all must strive not to be.
The problem, then, is not that we might alter bodily or mental capabilities with technology; we’ve been doing it for centuries. The problem is in how assistive technology might be used as a pretense to pressure diverse human minds and bodies to alter themselves in identical ways — ways that are more about fitting into an exploitative economic system than expanding the range of unique human experiences.
Siebers suggests replacing the totalizing, Enlightenment imperative against exclusion with an emphasis on accessibility, which would shift the power to name and include the excluded away from the already powerful. In his framework, “all worlds should be accessible to everyone, but it is up to the individual to decide whether they will enter these worlds.”
Framing enhancements this way, not as a right that can’t be refused but as a choice everyone should have access to but no one should be required to take, would be a step toward resisting compulsory hyper-ability. Though of course, true choice will be impossible as long as the current economic system, with its principle that one must sell one’s labor to be fit for survival, remains in place.
On the What is ableism? Tumblr, an activist named Michelle defines ableism as “the false idea that disabled people are by default inferior. When in truth disability is just another way for a mind and/or body to be.” There’s a simple dignity to that statement that resists the pressure to modify yourself for anyone else’s benefit. It’s the freedom to be who you are that is imperiled by the drive to enhance yourself as the market demands.
I don’t think I should be ashamed of my eyes. And I don’t think I should be ashamed if I choose a cheaper phone either.
Zuckerberg’s pitch encapsulates what I’d call an extractive model of technology, in which the human and the machine exist in a mutually exploitative relationship. The brain-reading tech can extract your thoughts faster than your mouth can speak them. You can extract more speed from this new technology than from your current phone.
While this mind-set might drive the development of technologies that blur the boundary between human and machine, it actually relies on preserving the conceptual distance between them. A technology can’t be seen as an enhancement worth purchasing if it isn’t separate from the purchaser. In this way, it fits into the “border war” mentality Donna Haraway says defines the organism-machine relationship under “racist, male-dominant capitalism.”
This isn’t to say there’s anything inherently wrong with brain-typing, or even the more involved tech-body melds advocated by the transhumanists. In fact, it’s entirely possible to imagine Michelle’s statement applied to people with robotic limbs or neural implants. A cyborg is just another way for a mind and/or body to be.
As Jenny Davis wrote in a previous Real Life essay, digital technology has empowered people with disabilities to engage in social interaction on their own terms. She rightly calls out the standard critique of screens as a distraction from face-to-face interaction as ableist, explaining that digital communication makes interaction much easier, for example, for the hearing impaired or for people on the autism spectrum who understand the nuance of word choice better than facial cues.
Slorach also gives a brief history of technological innovations created by people with disabilities in an attempt to accommodate the world to their needs. Herman Hollerith, whose tabulating-machine company became part of IBM, had learning difficulties and invented a machine to process information more quickly; Vint Cerf, who is hearing-impaired, became an early champion of email in part so he could text with his deaf wife.
For my part, I feel a true affection for my light-up magnifying glass. It makes reading, which I love, so much less of a strain for me that I carry it with me everywhere I go and feel incomplete if I leave it at home. Using it, I don’t feel ashamed for needing it, and I don’t feel as if it is forcing me to see at anyone else’s speed. (There is no pleasure like a good book on a long train ride, and when anyone asks me about my magnifying glass, it’s usually to find out where they can buy one.)
When I, as a teenager, stuck my head and my small-print Crime and Punishment under the lampshade to try to finish my course reading on a dark night, I was in an extractive relationship with that incandescent bulb. On the other hand, when I pull out my magnifying glass to read for leisure, I feel what Haraway describes as the “pleasure in the confusion of boundaries.” I am happy to share the glass’s vision and wish it could also enjoy the book it enables me to read with ease. This is a more relational model of technology: A particular gadget isn’t something we use; it’s a partnership we choose to enter into. (As AI capabilities increase, maybe there will be a meaningful way of asking the technology if it chooses this partnership as well.)
This model eschews the idea that we must use technology to make up our own ever-expanding lack. Instead, we can work with it interdependently as we would with any human team. Moreover, it undermines the notion, central to both compulsory able-bodiedness and capitalism itself, that the primary function of any thing or person is productivity for another’s sake. If even the use of tools is transformed into an equal partnership, then no one and nothing can be seen as merely a tool, to be discarded when not up to standard.