Draining the Risk Pool

Insurance companies are using new surveillance tech to discipline customers

The rental company U-Haul made headlines recently when it announced that its employees would be prohibited from using nicotine of any kind. The policy, which went into effect on February 1 in 21 states across the U.S., caused a stir, as commentators were quick to point out how invasive it was, undermining employees’ autonomy not merely on the job but in their personal lives. The company framed the decision as an effort to create a “culture of wellness.” Sure, it might seem paternalistic, but really U-Haul is looking out for the best interests of its “team members” by taking a tough-love approach to their bad habits. In reality, this is likely to mean that people who don’t smoke (or vape or chew) will get hired over those who promise to quit. Bonus: no more employees sneaking away for smoke breaks.

U-Haul is not the only company to prohibit nicotine, and it is far from the first to implement policies that authorize the monitoring of employees’ behavior on and off the job. Corporate wellness programs, as Gordon Hull and Frank Pasquale detail in a 2018 paper, are often driven by concerns that go beyond supporting healthy habits or even cutting down on “time theft.” They are in fact the insidious product of the U.S.’s unique entanglement of employers and health insurers, which works to create leverage over employees who have little control over the terms of their coverage. Ultimately, wellness programs are about exercising that leverage, reducing the risk profile of employees and thus cutting the employer’s costs for health insurance plans.

Once initiated, corporate wellness programs pervert the posture of caring to justify the implementation of intrusive technologies. Such programs, for instance, will typically offer incentives if people use certain data-collecting devices, share personal data, or meet regular quantified goals. This might mean wearing fitness trackers, using apps to track your eating habits, or recording your moods in a digital diary — all valuable data that an insurer can check for compliance and use to extrapolate other useful underwriting information.

To paraphrase economist Joan Robinson’s famous line about exploitation, the misery of having health insurance under capitalism is nothing compared with the misery of not being insured at all. But the difficulty many Americans face in simply acquiring health insurance shouldn’t distract us from the inseparable issue of how to avoid being screwed, abused, scrutinized, and controlled by our insurers. While we are absorbed in trying to overcome the misery of getting coverage, major innovations are intensifying the complications of having it.

While health insurers have been at the forefront of experimenting with techniques for learning more about, and thus exerting further control over, their policyholders, other branches of the insurance industry have also discovered the power of what has come to be known as “insurtech,” an industry term (akin to “fintech”) for the companies and products that hope to drive these shifts in what insurers can do, what they know about us, and how they operate.

Back in 2002, legal scholars Tom Baker and Jonathan Simon argued that “within a regime of liberal governance, insurance is one of the greatest sources of regulatory authority over private life.” The industry’s ability to record, analyze, discipline, and punish people may in some instances surpass the power of government agencies. Jump nearly two decades into the future and insurance companies’ powers have only grown. As insurers embrace the whole suite of “smart” systems (which I detail in my recent book) — networked devices, digital platforms, data extraction, algorithmic analysis — they are now able to intensify those practices and implement new techniques, imposing more direct pressure on private lives.

The normalization of surveillance by a range of widely adopted and readily available technologies has opened the way for insurance companies to insert themselves straight into our homes, cars, and bodies, thus gaining further abilities to assess our lifestyles and adjust our behaviors. In essence, they conflate a form of surveillance founded on care with one based on control. Surveillance scholar David Lyon calls this the difference between “watching out for” and “watching over”; insurtech promises the care of the former while saddling us with the control of the latter.

For instance, in 2018, John Hancock, a major U.S. life insurance company, added fitness tracking to all its policies, which now require policyholders to share health data from wearables like Fitbits or smartwatches or else face penalties. As I’ve explained previously in a paper co-authored with Sophia Maalsen, insurers are also using the smart home — with its sensors that track everything from domestic maintenance to daily routines — as an entry point into our most intimate spaces. The goal is for home insurers to receive regular reports about the status of your home, making it a totally knowable system, as if it were a computer we live inside. This would allow insurers to nudge policyholders into making repairs and to predict the likelihood of incidents leading to claims.

Similarly, startups like The Floow collect data from drivers’ phones and assign each driver a “safety score” based on signals like phone usage, accelerometer readings, and GPS location, which they claim can even predict when certain drivers are likely to have an accident in the near future. And many auto insurers, like Progressive in the U.S. and Admiral in the U.K., now use black-box-like devices installed in cars to record how, when, and where people drive. Do you “hard brake” too often? Do you speed, even when nobody is around? Do you drive through “dangerous” neighborhoods? Do you drive at odd hours? Your premiums will be adjusted accordingly. The car is already a networked computer on wheels that streams data to manufacturers; insurers might argue they are just optimizing the value generated by the vehicle-as-platform.
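For a concrete sense of how such scoring can work, here is a deliberately simplified sketch in Python. The fields, weights, and thresholds are invented for illustration; neither The Floow nor Progressive publishes its actual model, and a real one would be far more elaborate.

```python
# Hypothetical telematics scoring -- an illustration of the general approach,
# not any insurer's real formula. All weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class TripData:
    miles: float              # distance driven
    hard_brakes: int          # sudden decelerations flagged by the accelerometer
    night_miles: float        # miles driven late at night
    phone_use_minutes: float  # screen-on time while the car is moving

def safety_score(trips: list[TripData]) -> float:
    """Collapse raw sensor signals into a 0-100 'safety score' (higher = 'safer')."""
    total_miles = sum(t.miles for t in trips) or 1.0  # avoid dividing by zero
    penalty = (
        40 * sum(t.hard_brakes for t in trips) / total_miles
        + 20 * sum(t.night_miles for t in trips) / total_miles
        + 10 * sum(t.phone_use_minutes for t in trips) / total_miles
    )
    return max(0.0, 100.0 - penalty)

def adjusted_premium(base_premium: float, score: float) -> float:
    """Scale the premium around a 'neutral' score of 80: safer drivers pay less."""
    return round(base_premium * (1 + (80 - score) / 100), 2)

week = [TripData(miles=10.0, hard_brakes=2, night_miles=3.0, phone_use_minutes=4.0)]
score = safety_score(week)
print(score, adjusted_premium(1200.0, score))  # 82.0 1176.0
```

The particular numbers are beside the point. The architecture is what matters: whatever the phone can measure becomes an input, and whatever correlates with claims becomes a penalty.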

These kinds of services may sound convenient, until you realize whose interests they are ultimately meant to serve. It’s not hard to imagine that forgetting to change the battery in your smart smoke detector will increase your risk score, or that skipping too many days on your smart exercise bike will raise your premiums. The ability to collect more data about more risk factors opens the way for scores and judgments based on seemingly arbitrary correlations. It doesn’t matter whether insurance companies know why drinking coffee after 5 p.m., having a low credit score, or whatever other random factor correlates with higher risk. What matters is that the pattern has been identified in the data and can be turned into an “actionable insight” that justifies price discrimination. Such practices fit right in with how actuarial calculation already largely works: it is less concerned with knowing why a relationship might exist and more with showing a probabilistic connection. If you ever want to frustrate an actuary, start by asking them to explain the causal validity of the factors used to assess and predict risk.

Judging from the steady stream of reports from consultancies like McKinsey outlining strategies and predictions for insurtech, insurers have only just begun getting “smart,” with much more to come in the near future. “Insurance is emerging as an innovator,” PricewaterhouseCoopers declared in its 2019 trend report. “There’s currently a unique opportunity for companies to be distinctive, as trepidation about disruption turns to optimism.” Indeed, major tech companies like Microsoft and IBM, in partnership with major insurance companies like AmFam, have established accelerator and incubator programs meant to develop (and capitalize on) the next generation of insurtech. And a ton of startups hope to be the voice of this generation. A rising star of the bunch is Lemonade — which has received $480 million in funding according to Crunchbase — a property insurer that describes itself as “a full stack insurance company powered by AI and behavioral economics, and driven by social good.” For Lemonade, along with other insurtech startups like Tractable, this basically boils down to replacing insurance agents with chatbots and handling claims with machine learning.

Insurtech represents a marriage of heedless Silicon Valley disruption and ruthless actuarial exploitation. By partnering with tech companies or producing their own bespoke apps and devices, insurers have an irresistible opportunity to claim continuous, massive streams of valuable data, and with that, unprecedented power over how we “choose” to live.


From the start, insurance has been an industry based on crunching data to come up with ever-better ways of calculating risk and creating policies. Some of the original “big data” sets were mortality tables compiled by actuaries in the late 17th and 18th centuries to help them predict risk based on average life expectancy. Similarly, insurers have also always tried to manage risk, not just assess it. The industry term for this is “loss prevention and control.” If we pierce the veil of this euphemism, its implications are clear: Any risk that insurers must bear is a potential loss, and any claim insurers have to pay is lost profit. Preventing such losses means controlling the source of risks and claims: customers.
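The arithmetic a mortality table enables is simple enough to spell out. The figures below are invented for illustration, not drawn from any actual table or product:

```latex
% Illustrative only: a one-year term life policy paying benefit B if the
% policyholder dies during the year. If the mortality table gives a one-year
% death probability q_x for someone aged x, the actuarially "fair" premium is
% the expected payout, to which insurers add a loading for expenses and profit.
\[
  P \;=\; q_x \cdot B \;+\; \text{loading},
  \qquad\text{e.g.}\quad
  q_{40} = 0.002,\;\; B = \$100{,}000
  \;\;\Longrightarrow\;\; q_{40} \cdot B = \$200 .
\]
```

Insurtech does not abandon this logic so much as swap the coarse table row for a live data feed about you in particular.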

We can think of the imposition of “smart” systems as the insurance companies’ latest process for producing the kind of customers they prefer: people who either embody the virtues insurers reward or pay the price for vice. All the data that insurers can now constantly gather feeds into hyper-personalized profiles, boosting their power to discipline policyholders and shirk claims. Progressive, for instance, bragged in a 2012 report about its Snapshot vehicle-tracking device that it was already on its way to achieving “personalized insurance pricing based on real-time measurement of your driving behavior — the statistics of one.” In other words, the aim is to have so much data about each driver that insurers no longer need to rely entirely on pooling risk at aggregate levels but can instead analyze and assess each person’s individual behavioral profile.
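A toy comparison, with made-up figures, shows what the “statistics of one” actually changes. Nothing here reflects Progressive’s or anyone else’s real pricing; the point is the structure, not the numbers.

```python
# Pooled pricing vs. fully individualized pricing, with invented numbers.
# Not any insurer's actual method -- just the structural contrast.
expected_losses = {            # hypothetical expected annual claim cost per driver
    "cautious_commuter": 300.0,
    "average_driver": 900.0,
    "late_night_speeder": 2400.0,
}

# Classic pooling: everyone pays the pool's average expected loss (ignoring loading).
pooled_premium = sum(expected_losses.values()) / len(expected_losses)

# "Statistics of one": each person's premium is just their own predicted loss.
individual_premiums = {name: loss for name, loss in expected_losses.items()}

print(pooled_premium)       # 1200.0
print(individual_premiums)  # the riskiest driver now carries the full cost alone
```

The total collected is identical in both cases; what disappears is the cross-subsidy that made the arrangement a form of mutual aid.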

Through a combination of three mechanisms — policy conditions, price incentives, and personalized profiling — insurers are taking loss prevention and control to new levels. In short, policy conditions require people to do (and not do) certain things to maintain their plan. Price incentives like discounts coax people into adopting new insurtech and changing their behaviors. Personalized profiles power new ways to assess risk, adjust plans, and administer claims.

To administer these mechanisms, insurance companies must monitor across multiple temporalities. The insurer’s gaze is fixed on the future, informed by the past, and concerned with the present. The actuarial gamble is that, through the right techniques and data, insurers can do more than predict the future; they can also put a price on its various probabilities. The shift happening now is premised on more than just better predictions and better anticipation. With access to granular data from diverse sources about each policyholder, insurers will be able to shift from a reactive mode based on compensating claims to a proactive stance based on reducing risks. The need to model aggregate trends (e.g. average life expectancy for certain demographics), predict likely risks, and hedge against uncertain futures will be supplanted by the ability to tailor individualized policies, control sources of risk, and dynamically adjust premiums. Rather than accounting for the future and mitigating risks, insurers can aspire to prevent some futures from happening at all.

By getting “smarter,” however, insurers are bucking the established logic of their industry. Insurance companies were originally created to pool diverse risks, allowing themselves and their policyholders to hedge against uncertainty. Insurers could rely on the law of large numbers to ensure they ran at a stable profit, while the insured paid for the peace of mind of knowing there was a safety net to catch them in case of disaster.
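That law of large numbers is a standard statistical result, worth stating under the textbook assumption that policyholders’ losses are independent with a common mean and variance:

```latex
% Standard result, included only to unpack "the law of large numbers."
% Let X_1, ..., X_n be n policyholders' annual losses, assumed independent
% with mean \mu and variance \sigma^2. The average loss per policyholder has
\[
  \mathbb{E}\!\left[\frac{1}{n}\sum_{i=1}^{n} X_i\right] = \mu,
  \qquad
  \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{\sigma^2}{n},
\]
% which shrinks toward zero as the pool grows: aggregate payouts become
% predictable even though any individual's loss is not, and a stable premium
% near \mu (plus loading) covers them.
```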

Insurers say their new data-driven techniques don’t change that. In fact, they claim the ways they hope to monitor, manage, modify, and monetize what people do are justified because they actually contribute to creating a fairer system. As a report by the Actuaries Institute on the impact of big data on the future of insurance plainly states, “increased individual risk pricing will make premiums fairer in that they will be more reflective of that risk.” In other words, with detailed behavioral profiles, the companies argue, their prices will be more “accurate” and premiums will truly reflect the choices each person makes and the risks they assume. Those who lead safe, careful, or uneventful lives and demonstrate their willingness to make decisions along the lines the insurance company demands won’t have to bear the load of those who are risky and rash, or simply recalcitrant.

With an expansive network of stuff recording how you behave across virtually every realm of daily life, your insurer aims to track compliance perfectly and handle claims automatically through agents powered by artificial intelligence. Enforcement, from their point of view, will be unbiased and certain; there can be no confusion or disagreement about who owes what to whom when they can just check the data. By this logic, fiduciary duty compels for-profit insurance companies to discipline policyholders and diminish the horizon of possibility based on what the stats have shown is the most profitable life to lead — and, by extension, the kind of person to be. That’s if you’re lucky enough to be deemed worthy of insurance at all.

Insurance has always been a kind of mathematical morality. Embedded in the calculations, models, contracts, and other tools of the actuarial trade are judgments about who is responsible, what things are worth, and how society should be organized. The promise of insurtech is that these judgments can be made objectively and applied universally to every individual. Yet this individualized approach is a redefinition of what is “fair”: rather than spreading risks across a population to hedge against the vagaries of life, the data-driven system promotes the sense that no one should bear any expense or risk for the benefit of the collective. From this perspective, insurers are right, even obligated, to treat different people in different ways based on predictions of their future behaviors. Technology, in turn, should be focused on this discriminatory task rather than being directed toward extending better coverage to broader populations and reducing the collective insecurity that impedes a flourishing society.

The idiosyncratic notion of fairness underlying this worldview readily justifies a range of perverse consequences. If there’s a great divide in the premiums people pay that happens to mirror other structural and systemic inequalities in society, then at least it was implemented “fairly.” If some people are saddled with restrictive policy conditions while others receive special treatment, then at least it’s in the name of “fairness.” If an underclass of the underinsured and uninsurable is created, then at least it’s the result of a “fairer” system.

Wielding actuarial fairness in this way “does not avoid the moral minefield,” explains sociologist Barbara Kiviat in a new article on the use of credit scores in insurance. “It simply, if implicitly, holds that people are always accountable, regardless of whether the data look the way they do because of personal fault, structural disadvantage, simple chance, or some other factor. Algorithmic prediction is imbued with normative viewpoints — they are viewpoints that suit the goals of corporations.” Contrary to the promises of accurate premiums and surprise discounts flowing to customers, the insurance industry is getting smarter on its own terms.


Insurance plays an important role in society as an effective way to pool risks and provide mutual aid to those in need — whether because of personal choice, structural constraint, or random catastrophe. But that is in direct competition with the logic of insurance as a means of profit. It’s not hard to see which impulse wins when the industry is not tightly regulated. The insurance industry is not unique in succumbing to the imperatives of capital, but it is uniquely adept at exploiting people when they are most vulnerable by controlling access to essential services and security.

Ultimately, as it is currently being implemented — individualizing risk, enforcing compliance, disciplining policyholders — insurtech is helping push the industry further away from public utility and toward strictly private benefit. This isn’t new, per se. Insurers have long been extremely skilled at coming up with better ways to discipline those they are meant to serve and to exclude those deemed unworthy. But things are on a path to get worse.

Insurance has a purpose, but that purpose must be harnessed to fit the needs of society, rather than the perverse inversion in which people are forced to conform to the industry’s interests. At this juncture, that means requiring insurers to be dumber, not smarter. It’s time we set conditions on them and enforced their compliance. And if they don’t like it, then I guess they should have been more responsible and made better choices.

Jathan Sadowski is a research fellow in the Emerging Technologies Research Lab in the Faculty of Information Technology at Monash University.