
Insurance Vultures and the Internet of Things

[Image: smart fridge control panel]

Nothing’s really free on the Internet. From search engines and email to social media and online publishing, if you’re not entering your credit card number somewhere, you’re paying in a different way. As the adage goes, if you’re not paying for the product, you are the product. You’re either paying with your eyeballs on advertisements, or with your personal data, which gets sold to advertisers.

If this is true of the Internet, then the same logic applies to the “Internet of Things.” This is the buzzword for the fast-emerging trend of embedding everyday objects with sensors. These sensors, networked over the web, collect, store, and analyze torrents of data (usually by transmitting it to remote “cloud” servers) about how people use the products they’re attached to. Examples include personal devices like wristbands that measure vital signs, domestic appliances like “smart” thermostats, and automobiles that track how, where, and when we drive.

Google has hinted at using ads to monetize these networks of objects (imagine your refrigerator trying to “upsell” you when the milk runs out). But there’s a different corporate sector looking to take hold, one with even wider political implications: insurance companies.

“You know the way that advertising turned out to be the native business model for the Internet? I think that insurance is going to be the native business model for the Internet of Things,” said Tim O’Reilly, founder and CEO of the O’Reilly Media technology publishing empire, at his company’s Solid conference at the end of May.

Whether we’re talking about health, home, or auto insurance, insurers can cast a wide net over our personal lives, and a political economy of an IoT underwritten by insurance companies raises serious concerns. If insurance becomes a major business model for the IoT, or the dominant one, we can expect cheap or free IoT devices and services, much as advertising subsidizes them on the Internet, for the low cost of piping all our data into the hungry maws of insurers. With the ability to mine data from devices that monitor so many facets of our lives, it’s easy to see insurance companies ramping up their capacity to surveil us and change our behavior. We’ll be handing them far more power over our lives than they have now.

Insurance is a game of risks: companies make money when we conform to a set of non-risky behaviors and avoid doing things that could trigger a payout. The riskier insurance companies think your behavior is, the higher your premiums. The IoT would allow them to monitor your behavior more precisely and intimately. When they spot some “risky” business, they can respond with behavior-modification schemes designed to bring us in line with their interests.

This newfound power can be exercised with ease, thanks to a variety of methods. Insurers can enact punishing price discrimination at the individual scale based on what your devices say about you. (Decided to indulge for a week on vacation, or didn’t check your engine when the light came on? The insurers will be happy to hike your premiums to reflect those naughty decisions.) Or they can set up systems of control that, for instance, allow them to repossess or shut off our devices if we don’t shape up. Knowing we’re under the watchful eye of our insurers and their data analysts has chilling disciplinary effects.

This isn’t a conspiracy theory; it’s already happening. Insurance companies are well aware of the abilities they could gain from the IoT. Many insurance trade rags—like Insurance & Technology and Property Casualty 360—have run articles about the opportunities opened by the IoT. And a report on the IoT’s relevance for insurance companies, from the consulting firm Celent’s Americas Property/Casualty Practice, throws the implications into stark relief: This data can not only “provide a much more accurate picture of the exposures, hazards, and risks of what is being insured,” but, as the report recommends, “insurers can create feedback and control processes to command or request things to change their loss-related behavior and performance.”

Unfortunately, we can’t rely on existing legal protections to thwart or dampen insurers’ power. As law professor Scott Peppet explains in a forthcoming article in the Texas Law Review on regulation and the IoT: “antidiscrimination law does not prevent economic sorting based on our personalities, habits, and character traits . . . insurers are free to avoid insuring—or charge more to—those with risk preferences they find too expensive to insure.”

In other words, insurers would be free to, in effect, circumvent antidiscrimination laws by using new data sets, gathered from a plethora of networked devices, that are larger, richer, and more accurate, but aren’t covered by regulations. What’s more, an effect called “sensor fusion,” where two or more sources of data combine to reveal more than either does alone, would easily let them discover, and surreptitiously make judgments based on, protected categories like race, gender, age, and disability.

What hasn’t happened yet, but could if this pattern progresses, is IoT devices becoming a required part of a health insurance policy. For example, the “wellness” provisions of the Affordable Care Act could potentially allow insurance companies to effectively require that clients use things like personal health and fitness monitoring devices. The regulatory barriers preventing similar data-access requirements for home and auto insurance are almost nonexistent.

The prospect of being locked into an arrangement where our everyday devices extend the tendrils of insurance companies’ power is frightening, and frighteningly easy to fall into. It’s hard to resist free or cheap tech; it’s easy to let that tech become an integral part of our lives.

The first taste of a drug is always free. We should be wary of ways that insurers use the sheen of free or cheap IoT devices to pull us into a deal that we’re not prepared to make.