Insurers have been using big data analytics to explore new underwriting techniques for several years now. Their aim has been to price policies on a more personalised basis, in order to attract or avoid certain types of business. Their ideal is the low risk policyholder who is willing to pay more. Not surprisingly, that sort of value is not experienced in the same way by the customer as by the underwriter. Yet the tension it creates has been hard to address, for personalisation spreads it so widely and so thinly that the public tends to just put up with it.
That may well change though, for developments are pointing to the possibility of direct challenges being made to the sector’s use of analytics in underwriting. To date, the sector has largely been exposed to stories in the media of “shameful” underwriting practices. These generate passing attention, but gain little traction on everyday sector practices.
This week saw a clear signal that those media stories of individual impact are now being seen as part of a wider narrative of systemic detriment. The chief executive of the UK’s Financial Ombudsman Service (FOS) has predicted that the use of customer data by the financial services industry could be responsible for the next big surge of complaints after payment protection insurance (PPI) dies down.
For many in insurance, the FOS prediction will come as a surprise. Yet the signs were all there back in 2012, when I spoke at a conference about privacy concerns becoming the next big problem after PPI. I gave it five years to emerge, and so it has. What FOS has clearly been picking up on are patterns on its planning radar starting to coalesce into this wider narrative of systemic detriment.
FOS isn’t the only one picking up on those patterns. Some charitable organisations have been joining the dots as well. They’re now realising that what connects the problems their particular category of consumers have been complaining to them about are the optimised pricing practices that come out of the typical use of analytics in underwriting.
FOS will respond by passing on its concerns to the FCA and by recruiting specialist staff in readiness for some complex cases. Some charitable organisations hold a more potent card: the super-complaint. A number of consumer bodies have the right to submit a super-complaint to the FCA, who are then duty-bound to respond to it. Could they play that card, and if they did so, what might be its impact?
That possible impact could be gauged by events in the US insurance market. Campaigning by the Consumer Federation of America (CFA) resulted in 20 states banning price optimisation (a principal use of analytics in underwriting) on the basis that it was unfairly discriminatory. So if a US consumer body could have that impact without a super-complaint, the impact in the UK with one could be pretty significant.
Do consumer bodies and charitable organisations have sufficient evidence upon which to build the case for a super-complaint? Their hard evidence is largely of individualised impact. The very nature of big data analytics in underwriting means that evidence of systemic detriment will be hard to come by. End of story? No, just the end of the beginning, for two reasons.
Firstly, consider this from the FCA’s guidance on super-complaints…
“It is not necessary for a super-complaint to demonstrate that the interests of consumers have actually been damaged. Where a complaint does not demonstrate that consumers are actually suffering harm, complainants should provide us with clear information about why they think consumer interests are at risk of being damaged.”
Secondly, consider this from a key working party on the forthcoming General Data Protection Regulation. The Article 29 working party has made a ‘good practice recommendation’ in relation to the GDPR’s ‘right to an explanation’, that while “a complex mathematical explanation about how algorithms or machine-learning work” will generally not be relevant, it “should also be provided if this is necessary to allow experts to further verify how the decision-making process works”.
Put this then alongside the recognition, in the Article 29 working party’s report on automated processing and profiling under Article 22 of the GDPR, that ‘significant effects’ should be seen through not just an individual lens, but a group lens as well.
What do these legislative technicalities add up to? They highlight mechanisms, both existing in financial legislation and developing within privacy legislation, for the black box of analytics in underwriting to be opened up and examined.
Yet examined for what? Misconduct? Yes of course, and some might be found, despite the best efforts of many good insurance executives to ensure that their systems are compliant. What is more likely to be examined are the mechanisms within the firm and its analytics that safeguard against unfair or discriminatory practices. For what has to be remembered about UK financial legislation is that it can be enforced not just for misconduct, but also for inadequate safeguarding against the possibility of misconduct.
What also needs to be addressed is who is capable of carrying out such an examination. Recent output from the FCA points to them being well equipped to tackle this, so I don’t foresee capability being a problem. However, given the range of tasks that the FCA is also having to address, capacity might be more of an issue for them. What a super-complaint could do though is force a rethink of their priorities.
So how should insurers respond to the scenario outlined above? Their response should be on two levels.
Firstly, on a firm by firm basis, they should each make sure that they have an evidence trail for having analysed the ethical risks associated with their use of analytics in underwriting. And that should be supported with plenty of evidence of having carried out fairness and anti-discrimination testing on their systems. Ticking the same boxes for business partners and suppliers should be taken as a given too.
Secondly, at a sector level, insurers should be encouraging dialogue about the implications of the analytical tools they’re putting to use in underwriting. All too often, I hear insurance executives talk about competitive pressures driving their use of analytics in underwriting. Their focus is on what others are doing, rather than on ethical values like integrity and trust. Yet integrity is all about doing what is right, even if it’s not in your best interests. It is this type of sector groupthink that could result in the sector experiencing a tough regulatory jolt. Being more transparent about the significant changes that big data analytics is bringing about in the sector is one way of rebalancing this.
The great danger of course is that the sector adopts a ‘you don’t understand how insurance works’ stance. Let’s hope for an informative, rather than dismissive, response.
Duncan is the founder of the Ethics and Insurance blog and the author of its many posts. He's a Chartered Insurance Practitioner, having worked 18 years in the UK market. As an adviser to many firms on ethics issues, as well as a regular conference speaker, he is one of the leading voices on ethics and insurance.