Your social media data could be used to set your insurance rates
AIs use Instagram too
Back in the day, insurance companies used physical exams, medical tests and questionnaires to determine rates and premiums. But more and more, the information we post about ourselves online and on social media is being taken into account, and that could have far-reaching impacts on who gets coverage and how.
Earlier this year, New York State's insurance regulators released new guidelines telling insurance companies that they are now allowed to use people's online and social media data to investigate claims and set premiums.
The practice of insurers digging into our digital lives isn't new: insurance companies have long used social media to investigate claims. Last year, a professor in Montreal was denied disability insurance for his depression, and later learned the insurance company had been tracking his social media posts.
What was new in New York State's guidance letter was that it allowed insurers to use online data not only to investigate claims, but to set rates.
Use of algorithms a cause for concern
To make sense of all this additional information, insurers are deploying AI algorithms to make connections and decide how risky someone is.
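As a rough illustration of what such a risk-scoring algorithm might look like, here is a minimal sketch in Python. The features and weights are hypothetical, invented for this example; they are not any insurer's actual model.

```python
import math

# Hypothetical signals an insurer might extract from someone's online activity.
applicant = {
    "posts_about_extreme_sports": 12,   # posts in the past year
    "gym_checkins_per_month": 6,
    "late_night_posting_ratio": 0.4,    # share of posts made after midnight
}

# Hand-picked illustrative weights; a real system would learn these from claims data.
weights = {
    "posts_about_extreme_sports": 0.08,
    "gym_checkins_per_month": -0.05,
    "late_night_posting_ratio": 0.9,
}
BIAS = -1.0

def risk_score(features):
    """Logistic score between 0 and 1; higher means the model deems the person riskier."""
    z = BIAS + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

print(f"risk score: {risk_score(applicant):.2f}")  # about 0.50 for this applicant
```

A score like this could then feed into a premium. The concern is that in real systems, neither the features nor the learned weights are visible to the people being scored.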
According to Rick Swedloff, a professor of law at Rutgers University, this reliance on algorithms could lead to problems.
"Because these algorithms are so complicated, it's hard or even impossible to figure out why they're making the decisions they're making," Swedloff, told Spark host Nora Young.
"I think, secondly, that these algorithms are going to come up with prices that end up being discriminatory against, for instance, people of color in a way that is already prohibited by law."
The use of 'proxies' could lead to discrimination
A proxy is a piece of information that stands in for a protected characteristic such as race or religion. One example of an obvious proxy is redlining: because neighbourhoods often divide along demographic lines, with some areas home to more people of a protected category, decisions made on the basis of neighbourhood can discriminate against that category indirectly.
The new insurance industry guidelines in New York do specify that the collected data can't be used for discriminatory practices. And while companies aren't allowed to use obvious proxies to make their decisions, the algorithms might discover other indicators that the average person would never connect to a protected group: "non-obvious proxies."
"A non-obvious proxy," Swedloff explained, "might be something like searching for sunset on the internet." While nothing in that search explicitly denotes race or religions, a practicing Jewish person might be more likely to search for the time of sunset to know when the Sabbath begins.
One way to prevent this potential for discrimination, Swedloff said, is to have more information about what insurance companies are doing. "Simply by having more information, we would have better data about what is really happening. And I think simply doing that would jumpstart a series of conversations about when and where we should price certain kinds of insurance," he said.
"Part of the problem, of course, is that these algorithms are doing things that we don't know and we don't know why."
In reporting this story, we contacted several experts on insurance law in Canada, but none were able to comment before our deadline. If we hear more, we will update this post.