Home

Our Bodies, Ourselves

With internet-connected medical devices, life-and-death decisions meet “move fast and break things”

It’s easy to mock Elizabeth Holmes, her failed biotech company Theranos, and the wealthy investors taken in by it. Yet what the recent bevy of postmortems on the company all seem to miss is why Theranos was so successful to begin with. Yes, it promised to revolutionize blood tests and had a charismatic leader, but that can’t fully explain how a company built on a nonexistent technology secured a $10 billion valuation. What Theranos offered was less a new technology than an old story: Knowledge is power. The more frequently we test our blood, the company promised, the better we will know ourselves and the deeper our relationship with our body will grow.

But we must first agree to share that powerful knowledge with ethically dubious tech companies like Theranos. As recent debates around genetic testing have made painfully clear, patients can lose control over their bio-data, which can become the property of the various medical-tech companies they interact with. Historically, medical data has been extracted and kept by hospitals, clinics, and research centers — institutions subject to regulation, however imperfect, and oriented toward patients’ privacy. The tech industry, by contrast, is obsessed with speed, efficiency, and growth.

Tech companies are telling the same “knowledge is power” story again about the internet of medical things — the system of connected medical devices and applications (everything from smart watches and other wearables to implanted devices like pacemakers) that collect data that is then provided to health care IT systems over wireless networks. As with Theranos, the internet of medical things is being touted as a way to use advanced technology to democratize health care, reduce the need for hospital visits, and remotely connect patients to their doctors. Already, there are 3.7 million medical devices in use that are connected to both wireless networks and a patient’s body, and a recent industry report predicted that the market for the internet of medical things would reach $136 billion worldwide by 2021.

Historically, medical data has been kept in institutions subject to regulation. The tech industry, however, is obsessed with speed and growth

According to the narrative put forward by both the popular press and biotech companies, the internet of medical things paves the way to a world in which a combination of nanotechnology, biometric sensors, internet connectivity, and precision computation transforms the human body into a machine that can be monitored nonstop. Every biological dimension — breathing, perspiration, pulse, blood pressure — can be measured, stored, and compared with billions of other data points. This, supposedly, will lead inexorably to improving or saving the lives of millions. For example, in 2018, business consultant Bernard Marr claimed in a Forbes article that the IoMT revolution would bring about “more personalized health care” by bypassing the individual’s decision to report personal information and allowing patients’ compliance with doctors’ recommendations to be monitored: “A connected medical device provides objective reporting of actual activity, whereas without its reporting providers must rely on subjective patient reports to detail how they feel.”

Yet this new kind of “internal surveillance” from afar carries many risks that the optimistic headlines hyping the IoMT tend to overlook — a story I know from personal experience. In 2017, I had a cardiac event and ended up having a Medtronic pacemaker implanted that can connect to apps via wireless technology. Not only can implants like mine expose patients to hacking or surveillance, they also reinforce a dangerous binary between “subjective” patients and “objective reporting of actual activity.” Big data is not inherently “true”: It tends to produce noise and false negatives. More specifically, the biodata collected by wearables and smartwatches has repeatedly proved inaccurate. On the other hand, behaviors that patients “subjectively” observe and that sensors and devices don’t yet track — mood swings, stress levels, and other such factors — are necessary for a holistic view of a patient. As remote data collection supplants in-person consultations with doctors, patients risk being reduced to a set of numbers.

The rise of the IoMT is occurring alongside another supposed “revolution”: the coming of the 5G network. As Shannon Mattern has recently warned, 5G’s “promised gains in speed, which we typically attribute to a faster, technologically superior network, will be due in part to advances in tracking. Optimization and customization are made possible because of more thorough customer surveillance.” To fully deliver on its economic promise, the intricate infrastructure of 5G will aspire to track humans in as many ways as possible. Healthy people — and not just cardiac patients — will be encouraged to embed data sensors in their bodies or their clothes to comply with employers’ and insurance companies’ requests. Some employers have already begun moving in this direction: In 2018, Amazon submitted a patent for an electronic wristband that could monitor employees’ tasks; a year earlier, the American tech company Three Square Market started an optional microchipping program for its employees. Such attempts to transform workers into “data points” build on a much longer history of productivity and efficiency studies dating back to Fordism and the assembly line.

Implants like mine reinforce a dangerous binary between “subjective” patients and “objective reporting of actual activity”

Such tracking has dystopian implications, infusing workplace discipline into every aspect of everyday life and using biosensors to move from in-person care to remote, algorithm-driven care. Describing this shift, Israeli historian Yuval Noah Harari warns that by 2050, “diseases may be diagnosed and treated long before they lead to pain or disability. As a result, you will always find yourself suffering from some ‘medical condition’ and following this or that algorithmic recommendation. If you refuse, perhaps your medical insurance will become invalid, or your boss will fire you — why should they pay the price of your obstinacy?”

The experience of patients already living with connected medical devices can therefore shed light on ethical and philosophical questions that will only become more pressing for everyone. Unlike, say, a 5G-compatible phone, a medical device may be implanted in one’s body in a desperate moment of urgency and dread — never the right moment for a careful weighing of the device’s short- and long-term risks. This only strengthens the need for a public discussion of what kind of relationship we want our bodies to have with wireless technologies and the companies making them.

Pacemakers have been saving lives since long before the digital age. The question is not whether we need them — I, for one, most certainly do — but rather whether medical implants should also turn our bodies into data farms, making us “quantifiable selves” in ways we can’t fully control.


One of the main concerns about the emerging internet of medical things is privacy: Can for-profit companies be trusted with data that they collect and that patients have little or no control over? After writing an essay for the Atlantic about my concerns over my pacemaker, I spoke to Dr. Robert Kowal, chief medical officer of Medtronic’s cardiac rhythm and heart failure division, and other company employees, and they reassured me that the company has never sold and will never sell the medical data it monitors to third parties like insurance or recruitment companies. In Kowal’s words, “we sell devices — not data.”

But even when the data isn’t sold, the fact that it belongs to Medtronic is troubling. Like the clients of genetic-testing services, patients with connected pacemakers don’t control the data their bodies produce. When I tried to obtain my pacemaker data, I was asked by my clinic to sign several forms, after which I was supposed to receive the information by mail. Half a year later, I was still waiting. Only after I raised the delay with senior figures at Medtronic did I receive the information: a thick envelope containing dozens of pages, each stamped “Copyright © 2001-2018 Medtronic, Inc.” I was surprised to discover that the report contained such sensitive data as “Average monthly physical activity” — meaning that my pacemaker snitches to Medtronic about how many hours a day I spend as a couch potato.

At the same time, data ownership isn’t the only privacy concern. Data from pacemakers has already been used as evidence in court: In 2017, an Ohio judge ruled that the information collected from a defendant’s pacemaker could be used by an insurance company to incriminate an arson suspect. Data from Fitbits and Apple Watches has likewise been used as evidence in courts in the U.S. and Canada.

Even if companies are sincere in wanting to protect patients’ privacy, it is not entirely clear they will always be capable of it. Implants may stay in the body for decades. What happens to patient data if a devicemaker goes out of business? While that may seem unlikely, many major medical companies (for example, Teva Pharmaceuticals) have crashed or faced insolvency, and Medtronic itself was forced to relocate its main office to Ireland following the 2008 recession. Companies can also be acquired or merged — scenarios never discussed in the consent forms patients sign before receiving implants.

If the companies themselves are insecure, how secure are the implanted devices? Tara Larson, a Medtronic systems engineer in charge of patient-information security, told me her company’s devices are safe because their capacity to transmit information is limited. “In the traditional Internet of Things, these devices are always on and they are always listening. We can’t do that because our top priority is the battery life of the device,” Larson told me. “Take the Advisa pacemaker, for example: The battery is about half the size of your iPhone’s battery, but it lasts 10 or 15 years.”
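
Larson’s trade-off can be made concrete with some back-of-the-envelope arithmetic. The sketch below is a rough illustration only: the battery capacity, radio current draw, and session schedule are all figures I have assumed for the sake of the example, not Medtronic’s specifications.

```python
# Back-of-the-envelope sketch of why an implant's radio can't "always listen."
# Every figure here is an illustrative assumption, not a manufacturer's spec.

BATTERY_MAH = 500         # assumed capacity, roughly half a phone battery
LISTEN_MA = 5.0           # assumed current draw of a radio in continuous receive
SESSION_MIN = 10          # assumed length of one scheduled telemetry session
SESSIONS_PER_YEAR = 4     # e.g., one remote transmission every three months

# Always-on receive: the radio alone would empty the battery in days.
hours_always_on = BATTERY_MAH / LISTEN_MA
print(f"always listening: ~{hours_always_on / 24:.0f} days of battery")

# Duty-cycled telemetry: a few brief sessions a year barely dent the budget.
mah_per_year = LISTEN_MA * (SESSION_MIN / 60) * SESSIONS_PER_YEAR
print(f"scheduled sessions: ~{mah_per_year:.1f} mAh per year of a {BATTERY_MAH} mAh budget")
```

Under these made-up numbers, continuous listening would drain the battery in about four days, while a handful of brief scheduled sessions consume a few milliamp-hours a year. That is why implant telemetry is duty-cycled rather than always on.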

But in several recent cases, medical devices have experienced security breaches: Last August, the FDA recalled about half a million Abbott Laboratories pacemakers, and three years ago, it was compelled to pull hundreds of thousands of network-connected insulin pumps from the market after a security expert remotely breached them and altered their settings. And as the Guardian reported last year, a pair of security researchers at an information security conference “remotely disabled an implantable insulin pump, preventing it from delivering the lifesaving medication, and then took total control of a pacemaker system, allowing them to deliver malware directly to the computers implanted in a patient’s body.”

These security concerns will become more pressing as more patients have wirelessly connected devices implanted. According to Lior Jankelson, the director of the heart rhythm disorders program at NYU Langone Hospital, “all the medical devices we currently implant in the United States are remotely connected.” In other words, at this point if you need a pacemaker, it will be equipped with IoT functionality whether you want it or not. Consent, as I learned, becomes much trickier when it comes to life-saving technology. According to Jankelson, “Today, every pacemaker is cloud-connected. Even if you were to specifically ask for a non-wireless pacemaker, I doubt the hospital would be able to track one down.”


The issues with the internet of medical things go beyond questions of data security or privacy. The specific types and quantities of data that implants can generate raise concerns even when that data is kept in legitimate hands. Not only do patients effectively have no agency over what kind of device will be implanted in them, they also have no say in what bio-data will be monitored and shared with their doctors and with biotech companies. And more data is not inherently good for patients.

While the IoMT promises to reduce stress and anxiety by giving patients more access to their bio-data, my experience and my conversations with dozens of other cardiac patients suggest the effect of these devices is far more complex. The binary between the “subjective” patient and “objective” data that the IoMT imposes makes it impossible to trust one’s own body. More data also means more noise, as danah boyd and Kate Crawford famously argued. Monitoring sleep via sleep apps, for example, has been shown to increase stress and anxiety among users, undermining sleep quality. Constantly checking for irregularities or an abnormal pulse tends to make patients more rather than less anxious, and recent studies suggest that constant monitoring over time can generate a sense of helplessness rather than empowerment. Even when a patient is asymptomatic, her device may be telling her doctor a different story. Constant monitoring creates a world in which I start to think about my body as a ticking bomb, a machine that might break at any given moment and therefore requires round-the-clock surveillance.
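
The noise problem is partly a matter of base rates, and a toy calculation makes it vivid. In the sketch below every number is hypothetical: a detector that catches 95 percent of real events, applied to readings of which only 1 percent reflect a real event. But the arithmetic shows how constant screening for rare events produces alarms that are mostly false.

```python
# Toy base-rate arithmetic: why constant screening for a rare event yields
# mostly false alarms. All figures are hypothetical, chosen for illustration.

prevalence = 0.01      # assume only 1% of monitored readings reflect a real event
sensitivity = 0.95     # the detector flags 95% of real events
specificity = 0.95     # and correctly clears 95% of normal readings

true_alarms = prevalence * sensitivity
false_alarms = (1 - prevalence) * (1 - specificity)

# Positive predictive value: the chance that a given alarm is a real event.
ppv = true_alarms / (true_alarms + false_alarms)
print(f"of all alarms, only ~{ppv:.0%} signal a real event")  # ~16%
```

And the rarer the real events, the worse this gets: the same detector pointed at a healthier population yields an even smaller fraction of true alarms.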

Whether algorithmic systems will reliably detect cardiac events or instead create unnecessary anxiety with false positives remains to be seen. “Because it is known that there are algorithms that can decode ECG tests more quickly and more efficiently than doctors,” Jankelson told me, “there are programs that automatically analyze the information patients send via their monitors. At this stage, the technology is entering the field of medical analysis, but in the near future it will also give recommendations as to what to do.”
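
To make the idea of automated analysis concrete, here is a deliberately naive sketch of the kind of rule such programs automate: flagging a transmission when beat-to-beat intervals vary too much. It is a toy under my own assumptions (the variability measure and the threshold are arbitrary), not any vendor’s or hospital’s actual algorithm.

```python
# A deliberately naive sketch of automated rhythm screening: flag a
# transmission if beat-to-beat (RR) intervals vary too much. The measure
# and threshold are arbitrary assumptions, not a vendor's real algorithm.
from statistics import mean, stdev

def flag_irregular(rr_intervals_ms, cv_threshold=0.15):
    """Return True if RR-interval variability exceeds the threshold."""
    cv = stdev(rr_intervals_ms) / mean(rr_intervals_ms)  # coefficient of variation
    return cv > cv_threshold

steady = [800, 810, 795, 805, 800, 798]      # ~75 bpm, evenly spaced beats
erratic = [600, 1100, 700, 950, 620, 1040]   # wildly uneven spacing
print(flag_irregular(steady))    # False -> nothing sent to the clinic
print(flag_irregular(erratic))   # True  -> transmission flagged for review
```

Even this toy makes the worry tangible: everything hinges on where the threshold sits, a design choice patients never see.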

Constant monitoring creates a world in which I start to think about my body as a ticking bomb, a machine that might break at any given moment

Because implant data can be sent remotely, patients may end up seeing their doctors less even as the doctors receive more information to process. This may change how doctors assess a patient’s condition: Instead of taking into account a patient’s complexion, speech patterns, or other criteria that can only be evaluated face to face, doctors review a remote transmission sent via a bedside monitor. While I see my cardiologist only once a year, I send a transmission every three months. True, this enables closer monitoring of my heart rhythm, but it also runs the risk of decontextualizing my data.

This is not “personalized health care” but algorithmic care, in which seeing a doctor in person becomes a rare privilege as patients communicate via monitors and mobile apps. Think of the last time you desperately tried to reach a human representative when calling customer service, only to be endlessly redirected through prerecorded menus telling you to “press one for more information.” Now imagine that as the paradigm for a doctor’s visit.


While I’m grateful for the device that saved my life, I still believe that companies and providers must do more to give patients a better understanding of their devices. Patients should receive detailed explanations of the data-security risks of their implants before signing consent forms, and once a device is implanted, the data it produces should be easily accessible to patients who would like to view it or share it with others. Facebook groups such as “Young Pacemaker Patients” or “My Heart, My Data” are already providing the kind of community support patients might need, sharing current research and warning of recalls or privacy concerns. Patient activism — such as the group of cancer survivors who in 2018, with the ACLU’s help, filed a complaint with the Department of Health and Human Services over Myriad Genetics’ withholding of their data — sets an example for future struggles over data ownership and access.

Cardiac patients like me can also draw inspiration from the sleep apnea patients who asked a hacker to create a tool that let them modify their CPAP machines to access their data, as Vice reported. Similar collaborations have been attempted with pacemakers. In Wired, security researcher Marie Moe, who tried to hack into her own pacemaker, wrote, “I encourage more security research of medical implants simply because I do not believe that proprietary ‘security through obscurity’ will make the devices safer for patients.” Activists like Moe push companies like Medtronic to establish more transparent communication with their patients instead of citing “proprietary codes” and using black-box design to control information.

Regulation, patient advocacy, and hacking projects can help us cultivate new relationships between our bodies and ourselves, and our bodies and medical-device companies. As the story of Theranos taught us, “Move fast and break things” is not the best mindset when it comes to people’s lives. For the internet of medical things, “Move slow and proceed with caution” might be the more productive motto.

Neta Alexander is an Assistant Professor in the Film and Media Studies Department at Colgate University, New York. Her work focuses on digital culture, film and media, and science and technology studies. Her first book, Failure, co-written with Arjun Appadurai, will be published by Polity Books this fall.