Strong Opinions Emerge on Data Use in Counter Fraud
The Coalition Against Insurance Fraud is made up of insurers, regulators, legislators and consumer groups from across the US. It carried out a detailed survey this summer into data ethics and counter fraud. The results were published earlier this month and reveal some strong opinions amongst consumers on how data should be used, and from insurance people on how data is being used.
Their findings can be summed up in two points. The public strongly support the use of data in countering insurance fraud. At the same time, they have strong opinions on how this should be managed. What stood out for me was the survey’s finding that one big ethical issue lies at the heart of people’s concerns. The public’s support relies a lot on ensuring that intentional and unintentional bias is removed from how insurance fraud is tackled.
Before I dive into the findings and the conclusions I drew from them, I want to emphasise how important it is for UK insurers to understand and take on board these findings. Yes, it is a US survey, but the market there is not radically different to the UK market. What’s more, many of the findings overlap with a recent UK Government survey into public attitudes to data and AI (more here).
I would recommend that UK insurers use the survey’s findings…
- to frame their ethical risk assessment for their counter fraud operations;
- to review how their processes and controls handle those ethical risks;
- to challenge themselves on some of the assumptions built into those processes and controls;
- to particularly challenge themselves on how bias is being managed in both data and analytics (see the illustrative sketch after this list);
- to draw the above together in preparation for increasing scrutiny of counter fraud, from inside their firm (e.g. boards) and outside their firm (regulators and consumer groups).
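On the bias bullet in particular, here is a minimal sketch of the kind of check an insurer could run over its counter fraud referrals. It assumes a hypothetical set of case records, each carrying a demographic group label and a flag for whether the case was referred for investigation; the field names, the sample data and the 0.8 threshold are my own illustrative assumptions, not anything taken from the Coalition’s survey. The point is simply to compare referral rates across groups and flag any group whose rate looks disproportionately high.

```python
from collections import defaultdict

# Illustrative only: hypothetical case records, group labels and threshold.
cases = [
    {"group": "A", "referred": True},
    {"group": "A", "referred": False},
    {"group": "A", "referred": False},
    {"group": "B", "referred": True},
    {"group": "B", "referred": True},
    {"group": "B", "referred": False},
]

def referral_rates(cases):
    """Fraud-referral rate per demographic group."""
    totals, referred = defaultdict(int), defaultdict(int)
    for case in cases:
        totals[case["group"]] += 1
        referred[case["group"]] += case["referred"]
    return {group: referred[group] / totals[group] for group in totals}

def groups_needing_review(rates, threshold=0.8):
    """Flag groups whose referral rate is disproportionately high relative
    to the lowest-rate group (a disparate-impact style comparison)."""
    lowest = min(rates.values())
    return [g for g, rate in rates.items() if rate > 0 and lowest / rate < threshold]

rates = referral_rates(cases)
print(rates)                         # roughly {'A': 0.33, 'B': 0.67}
print(groups_needing_review(rates))  # ['B'] - B's referral rate warrants review
```

A real review would go much further than this (proxy variables, intersectional groups, the analytics that produce the referrals in the first place), but even a simple comparison of this sort gives boards and regulators something concrete to scrutinise.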
This won’t be easy for UK insurers. There’s a strong and established culture in insurance counter fraud. And the sector has adopted a pretty opaque approach to tackling insurance fraud. Insurers need to prepare for this to change in some way, otherwise there’s a strong possibility that change will be forced upon them.
How the Survey was Organised
The Coalition used an independent external research firm to conduct the survey. Just over 2,000 people were surveyed. Their profile was 67% consumers and policyholders, 17% insurance professionals and 7% tech providers and business partners. All were US based.
Given that about one in six of the people surveyed were from within the insurance community, the Coalition took these steps…
“To those who may be concerned that the 17% insurance professional respondents may have skewed the study results and validity, the data results were run both with, and without, their responses. Statistically, the responses across the study did not change the results in any way, so all responses are included in this report.”
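For readers wondering what that robustness check looks like in practice, here is a minimal sketch under assumed data: a hypothetical list of respondents tagged with a role and an answer to a single question, with the headline figure recomputed with and without the industry subgroup. The structure and field names are mine, not the Coalition’s.

```python
# Illustrative robustness check: recompute a headline figure with and
# without one respondent subgroup to see whether that subgroup skews it.
# The respondent list and field names are hypothetical.
respondents = [
    {"role": "consumer", "concerned": True},
    {"role": "consumer", "concerned": True},
    {"role": "consumer", "concerned": False},
    {"role": "insurance professional", "concerned": True},
    {"role": "insurance professional", "concerned": True},
]

def share_concerned(group):
    """Proportion of a respondent group answering 'concerned'."""
    return sum(r["concerned"] for r in group) / len(group)

all_respondents = share_concerned(respondents)
consumers_only = share_concerned(
    [r for r in respondents if r["role"] != "insurance professional"]
)

print(f"All respondents: {all_respondents:.0%}")        # 80%
print(f"Excluding professionals: {consumers_only:.0%}") # 67%
# If the two figures diverge materially, the subgroup is skewing the
# headline result and should be reported separately.
```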
Support with Caveats
“When asked about their level of concern with insurance fraud and how their data is used to fight fraud, an amazing 84% of respondents said they are either ‘very concerned’ or ‘concerned’ about these issues.”
This needs careful interpretation, for it feels like two questions in one. Looking back into the detail of the survey, what this appears to mean is that consumers were concerned as in ‘interested, you have my attention’, rather than concerned as in ‘worried about what you are doing’. This is a topic that has respondents’ attention. They want insurance fraud to be tackled and they’re interested in how their personal data is used to do so.
This is just as much about personal data as about insurance fraud. 35% of respondents were very concerned in general about how their personal data was being used. And 52% were somewhat concerned. What this tells us is that the survey was timely. People want to know what is being done with their data.
How insurers used data to tackle fraud produced a more mixed picture. 40% supported insurers using their personal data to fight fraud, but only in ways that were consistent with government laws and regulations. 35% supported insurers using their personal data to fight fraud, so long as there was a reasonable suspicion of fraud or an otherwise legitimate purpose. 25% said insurers should not use their personal data for any reason without their express permission.
There are three ways of looking at this finding. Firstly, consumers clearly don’t want to give insurers a free hand in deciding how to use personal data: roughly two thirds of respondents (the 40% plus the 25%) wanted controls of some sort, while the remaining third (the 35%) wanted the use to be justifiable. Secondly, that 35% figure indicates that a fair number of consumers were relatively relaxed about privacy when it came to counter fraud. And thirdly, 75% (the 40% plus the 35%) were happy for insurers to use their data to fight fraud, with the remaining 25% attaching an opt-in caveat.
The Return Consumers Are Looking For
Few respondents wanted a direct financial reward for giving insurers their data to fight fraud. What they did want was governance and accountability.
“Fifty-five percent of respondents said that adherence to laws and guidelines is a requisite for subjecting their personal data to artificial intelligence as a means of fighting insurance fraud.”
Interestingly, smaller but still significant portions of respondents gave insurers quite opposing messages. 28% gave unconditional support for the use of their personal data in an artificial intelligence context to fight insurance fraud. At the same time, 17% didn’t want insurers to use their personal data in this way at all.
We will see this throughout the survey – small but significant results at the ‘go for it’ and ‘no way’ ends of the spectrum, with a solid middle giving their support, so long as insurers stay within the rules. Does this transpose automatically across into a UK or EU context? To a large degree, I think it does, but I suspect that the outlier group concerned about data use will be slightly bigger in a UK/EU setting.
What Concerns the Public the Most
The Coalition found one ethical issue at the heart of public concern about how their personal data is used in relation to insurance fraud:
“Overwhelmingly, consumers responded that issues of bias and prejudice are of critical importance to them. While responses show some level of support for excluding antifraud efforts from such data bias concerns, it is far less than the number of respondents wanting strong laws and regulations adopted to protect them from both intentional and unintentional bias or prejudice.”
This concern about bias is also present in another form. Respondents don’t like facial recognition or voice pattern analysis being used in counter fraud. Such analytics often feature in cases of discriminatory outcomes being experienced by US consumers, and this was reflected in this survey.
What this tells us is that public concern about bias is stronger than, say, their concern about privacy. Insurers therefore need to make sure they prioritise the right issue in terms of design, testing and controls. If bias isn’t addressed to the public’s satisfaction, then their support for counter fraud will fall, possibly collapse. It comes across as a non-negotiable.
Standards and Transparency
The survey then moves on to the setting of standards for how data is used to identify and prevent insurance fraud. A big majority (65% of respondents) want standards to be set at the state or federal level. In other words, by government in one form or another. Only 18% of respondents were willing to rely on insurers themselves to come up with those standards.
This comes as no surprise. Respondents support the sector in its work against insurance fraud, but they’re not willing to just let the sector set its own rules for how it goes about this.
At the same time, respondents held strong opinions about how open insurers should be about what they’re doing with personal data. Nearly 90% of respondents (including many insurance professionals) said insurers should be required to have a “straightforward and easy-to-read” policy about how they use personal data.
“Nearly 80% believe insurers should be required to disclose their data-usage policies to both policyholders and claimants seeking benefits under a policy. They favor having those data-usage policies disclosed both in the insurance contract and on the insurer’s website so it may be reviewed in advance of consumers purchasing and receiving a policy or submitting a claim.”
Sharing Data
The Coalition was clearly surprised by how little support there was amongst respondents for insurers sharing data about fraud cases. Only 34% of respondents supported this. 18% didn’t want any sharing at all, while 39% wanted sharing to be restricted to the insurer and the regulator, not other insurers or third party anti-fraud organisations. The remainder (9%) weren’t happy with any data use at all.
So why is there such reluctance about data being shared? The Coalition think this is down to the importance of sharing not being communicated properly. I don’t agree. I think it is down to the public being very wary of the whole idea of cross-firm data sharing. As for the communication angle, it is often wheeled out when the sector gets pushback from the public on something it would like to do almost regardless. Instead, insurers need to explore what would make the public more supportive of different levels of sharing.
What Insurance People Think
You’ll recall that earlier mention of 17% of survey respondents being insurance professionals. They were asked for their views on their firm’s handling of personal data in a counter fraud setting. The results were not good.
- 47% felt that their firm had strong guidelines.
- 26% said their firm had guidelines, but that they weren’t adequate and should be updated.
- 15% said their firm was waiting to adopt or update its data guidelines until there was more certainty about what regulations and laws were going to require.
- 12% said their firm had no guidelines about how and when data may be used to identify potential insurance fraud, and didn’t plan to create any unless required to.
There you have a pretty strong justification for the public being very wary about cross-firm sharing of personal data. Just over half of those professionals said their firm’s guidelines for handling such data were inadequate, on hold or non-existent. If the sector wants more support from the public, it needs to do more to earn their trust first, starting with much wider adoption of suitable guidelines.
What This Means for UK Insurers
I think the first thing that UK insurers should note is that a recent UK Government survey into public attitudes to the use of data and AI (more here) found some pretty similar things to those that came out of the Coalition’s survey. The public are supportive but want better independent oversight.
What I believe both US and UK consumers are worried about is the use of personal data (be it for counter fraud or more generally) being played to the firm’s advantage. Address insurance fraud for sure, but do it with fairness and without bias. It is bias where the greatest risk of a collapse in public support exists (as I’ve written before, such as here). And from what I hear from sources about some practices in the sector, that risk is high in terms of both significance and likelihood.
UK insurers have said in the past that consumers should just move on and trust them on counter fraud. I think they are behind the times. The public no longer hold with the idea of ‘just trust us’. They now live in the ‘prove to me’ world. Trust is not going to be given without some form of regulation and accountability, especially on something as emotive and consequential as fraud. The sector has to come to terms with this. It has to be the one to move on.