It looks like the FCA really does not understand big data

A recent speech by the chief executive of the UK regulator for insurance focused on big data. It clarified one thing: the regulator may understand some of the key features of big data, but it doesn’t really understand how those features fit together and interact. This doesn’t bode well for either the market or its customers.

Andrew Bailey’s speech to the annual conference of the Association of British Insurers was always going to be substantive, for so much is happening in both prudential and conduct regulation. So it was interesting that his speech focused in its first half on big data, and in particular on the implications of big data for risk pooling.

What came out of the speech, however, is disjointed and contradictory. Take his comments on price optimisation: “…our view is that we should not” allow differential pricing “…between those who shop around and those who do not”. That’s pretty clear, yet in its September report on big data, the FCA said that price discrimination was “not necessarily problematic” and that it would be doing a ‘piece of work’ on pricing practices in a “limited number of firms” in the retail general insurance market. It’s pretty difficult to square those two views. Has there been a change of opinion, or are there two views within the regulator? I suspect the latter.

Bailey turns out to be a fan of telematics, describing it as “…beneficial all round…”. And while there are indeed many benefits that could flow from it, it seems somewhat myopic to ignore some of its impacts. Take vulnerable people for example: with insurer data on vulnerability lagging well behind that commonly collected through telematics, it seems inevitable that underwriting correlations will reduce access to motor insurance for vulnerable people.

What the FCA needs to understand is that big data doesn’t differentiate between risk characteristics that can be improved through incentivisation, and those that can’t be changed. Its correlations, and its clustering of correlations, look at data, just data, and make decisions according to complex algorithms, hard-wired through machine learning to develop their own judgements. The human touch to motor underwriting is becoming a thing of history.
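The point can be made concrete with a toy sketch. The feature names and figures below are entirely invented for illustration; real pricing models use far richer data. But the mechanism is the same: the fitting step simply measures how each feature co-varies with claims, with no notion of whether a feature is a behaviour a driver could change (annual mileage) or a trait they cannot (age).

```python
# Illustrative only: a toy "risk scoring" step that looks at data, just data.
# It computes each feature's correlation with claims cost; nothing in the
# calculation distinguishes changeable from unchangeable characteristics.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical portfolio rows: (annual mileage in 000s, driver age, claims cost)
drivers = [
    (5, 70, 200), (8, 65, 300), (12, 40, 500),
    (15, 30, 700), (20, 25, 900), (25, 22, 1100),
]

mileage = [d[0] for d in drivers]
age = [d[1] for d in drivers]
claims = [d[2] for d in drivers]

# Both features get a weight by exactly the same arithmetic.
weights = {
    "mileage (changeable)": pearson(mileage, claims),
    "driver age (unchangeable)": pearson(age, claims),
}
for feature, w in weights.items():
    print(f"{feature}: correlation with claims = {w:+.2f}")
```

Run on this invented data, both features emerge as strong predictors; the algorithm never asks whether pricing on one of them could incentivise better behaviour while pricing on the other simply penalises who someone is.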

Bailey then carries over his points about risk sharing to Flood Re, of which he, like me, is a fan. And in doing so, he apologises for labouring his point about pooling “because failing to have a clear understanding of …the issue of risk sharing …will lead to less good outcomes.” Did he really understand what he was saying though?

He presents Flood Re as a solution to the Government’s objective to build more houses. Yet it can never be such a solution, for Flood Re only covers houses built before 2009. Moreover, it is a pooled risk solution to an ‘individualisation of risk’ problem, and of a type that might soon be needed for vulnerable people and motor insurance. Ironic?

There wasn’t a lot of vision in his speech, and even less leadership. Take his reference (quoted above) to ‘less good outcomes’. If you’re a regulator who wants to move from chasing the tails of ethical problems to being alongside them as they arise (and so quashing them before their impact spreads), then you absolutely need to stop using language like ‘less good outcomes’. When it comes to risk, failures of understanding and failures of responsibility do not result in ‘less good outcomes’; they result in bad outcomes, they result in financial hardship, they result in loss of trust and they result in the detachment of many people from formal risk sharing mechanisms like insurance. When it comes to ethics, the language of leadership matters.

About the Author Duncan Minty

Duncan is the founder of the Ethics and Insurance blog and the author of its many posts. He's a Chartered Insurance Practitioner, having worked 18 years in the UK market. As an adviser to many firms on ethics issues, as well as a regular conference speaker, he is one of the leading voices on ethics and insurance.
