Digital practices in insurance are receiving attention from a variety of policymaking bodies around the world. Some of those practices have been raising concerns, putting insurers at risk of further interventions. The Geneva Association’s (GA) report aims to address those concerns, by putting together an overview of the opportunities and challenges being faced.
The GA is “the only global association of insurance companies”, so it is always going to provide the insurers’ perspective on key trends. That said, as in many of its earlier reports, it doesn’t shy away from recognising both opportunities and challenges. A problem, however, often lies in how it weighs up the implications of each. This ‘responsible data’ report is no different.
So what, some of you might think. Well, the problem is that for policy strategists within insurers, the GA report is striving to maintain a status quo within an evolving digital insurance market. That, however, is an unsustainable position, largely because insurers no longer hold the commanding narrative around the implications of how the market is evolving. This report does not give that sufficient recognition (hence my status quo point). Instead, insurers need to look ahead and embrace both the technological changes that they are driving, and the social and ethical narrative that is starting to set expectations of that technology.
Three Things the Report is Saying
So what is the report saying? In short, it is this…
- the opportunities of digital insurance are wonderful and will happen;
- the challenges are being managed with governance and processes;
- things like fairness are too fudgy, but we’ll listen to what people have to say about it.
And sure, I shouldn’t be surprised that an insurance trade body comes up with these conclusions. It’s just that I believe the report is too detached from the impacts that digital insurance is having. It hasn’t done enough to acknowledge those impacts, and the solutions it says will allay them are not sufficiently convincing.
So for example, the EU intends to introduce ‘a right to be forgotten’ for people who survive cancer (more here). Insurers want to be exempt from that legislation, despite insurance being a key driver for it in the first place. Will the GA report swing the EU legislators? I don’t think so, for the EU legislators have been giving more weight than the GA to the outcomes that people have been experiencing.
Let’s look now at three key lines of thinking that underpin the GA report. They are governance, collection/use and fairness.
Governance and Processes
Central to the GA’s case for insurers’ responsible use of data is the strength and maturity of each firm’s governance arrangements and process management. These things are already achieving what policymakers want – the proper management of discrimination and fairness issues. It is all being taken care of, runs the narrative.
I have more than a few doubts about this. Firstly, taking the UK as an example, if everything were already being taken care of, the UK market would not have seen challenges emerge around the loyalty penalty, the poverty premium and the ethnicity penalty. This is not to say that governance and processes are not in place. It is saying that they are not having the impact that they were meant to have.
I’ve noted before that the commonly applied ‘three lines of defence’ is very much weakened by its inherent conflicts of interest (more here). This means that the GA narrative of ‘trust in the process’ is not being counter-balanced by ‘guided by the outcomes’. And here in the UK, what we’re seeing are consumer groups (whom the public say they trust most) proving better than the market and its regulators at gathering and interpreting vast amounts of micro-outcomes from the insurance market. The reality emerging from those micro-outcomes is being recognised by policymakers.
Collection and Use
A core aim of the GA report is to convince policymakers that insurers should be allowed to collect any and all data they want, including on protected characteristics. The GA stress the difference between the collection of data and its use. So for example, insurers will collect all this data to check for bias and to teach their AI to recognise bias in order to avoid it. Those insurers would then not think of using any data other than what they need for insurance purposes.
As a concept, this ticks important boxes. How does AI learn not to be biased without access to data around characteristics like race and gender? How do insurers avoid indirect bias without that same access? In reality, however, the public are well behind insurers in terms of confidence and trust that such data will be controlled within those parameters. And it looks like policymakers are positioned alongside the public on this.
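To make that collection/use split a little more concrete, here is a minimal sketch in Python with pandas (my choice of language and libraries, not anything specified in the GA report) of the idea the insurers are putting forward: a protected characteristic is held only for auditing, the pricing rule never sees it, and it re-enters solely to compare outcomes across groups. The column names, the toy pricing rule and the simple average-premium comparison are all hypothetical, for illustration only.

```python
import pandas as pd

# Hypothetical portfolio data: the 'ethnicity' column is collected solely
# for bias auditing and is never passed to the pricing rule.
df = pd.DataFrame({
    "age": [23, 45, 36, 52, 29, 41],
    "postcode_risk": [0.8, 0.3, 0.5, 0.2, 0.9, 0.4],
    "ethnicity": ["A", "B", "A", "B", "B", "A"],  # protected characteristic (audit only)
})

PRICING_FEATURES = ["age", "postcode_risk"]  # 'use': the only decisioning inputs
AUDIT_FEATURES = ["ethnicity"]               # 'collection': held for auditing alone

def price_quote(row):
    """Toy pricing rule - it only ever sees the permitted decisioning features."""
    return 200 + 5 * row["age"] * row["postcode_risk"]

# The protected characteristic is excluded from the decision step...
df["premium"] = df[PRICING_FEATURES].apply(price_quote, axis=1)

# ...and re-enters only at the audit step, to test for indirect bias,
# here via a simple comparison of average premiums across groups.
audit = df.groupby(AUDIT_FEATURES[0])["premium"].mean()
print(audit)
print(f"Average premium gap between groups: {audit.max() - audit.min():.2f}")
```

Even in this toy form, the fragility of the arrangement is visible: nothing other than discipline and governance stops someone adding the audit-only column to the decisioning inputs, which is exactly the kind of straying into decision systems discussed below.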
From what I hear in the market about some uses of data, the GA seems to be massively over-optimistic about this collection/use balance being deliverable. To be honest, the sector has not shown, through outcome-orientated studies, that it can deliver that balance.
At the heart of this situation is the commonly held belief in insurance circles that all data is risk related, so insurers should have access to it all. It is that belief that causes data on protected characteristics to stray into decision systems, even when the law very specifically forbids it. And so long as that commonly held belief persists, the collection and use differentiation is undeliverable.
Understanding Fairness
The way in which the GA report addresses fairness is perhaps its weakest feature. Across the report, fairness is presented in this way:
“The notion of fairness is highly subjective as it is based on individual, societal and cultural backgrounds and corresponding moral standards. Therefore, perceptions of fairness often differ between individuals and communities.”
These “varying perceptions of fairness” are then contrasted with the “objective benefits of data”. The narrative being constructed here is that fairness is weak and fudgy, while data and its benefits are clear and strong.
Fairness is indeed a complex thing, but equally it is something that the public have an innate understanding of. There are also varying dimensions to fairness, each emphasised in different ways by different groups. For example, insurers emphasise fairness of merit, while consumer groups emphasise fairness of need and access. Giving those dimensions equal weight is what’s called the equality of fairness.
This complexity can seem a challenge for insurance people used to working almost exclusively around the fairness of merit. Disappointingly, the GA focusses on evidence to back up the public’s support for fairness of merit, without balancing it with the public’s lack of support for how insurers use their data. They rightly refer to Professor Barbara Kiviat’s work on the public understanding of insurance and data, but fail to carry it through into their recommendations on fairness.
So what does the GA want on fairness? They want insurers to be allowed to adopt a “flexible and principles based governance process for fairness”. Internal stakeholders would have an involvement in how the process is applied, while consumer panels would provide feedback on the outcomes being generated.
Behind these recommendations is an insurance market wanting to maintain control over how it handles fairness. I think this is now fundamentally unachievable, and to still strive for it will put the market in perpetual cycles of ‘challenge and reaction’. Consumer panels are not a strong enough counterweight to market practices. They do not have a strong enough track record of influencing those practices, largely because they are underpowered and underinformed.
Organising Fairness in Insurance
As UK insurers are finding, the market’s handling of fairness is no longer within their sole purview. The loyalty penalty challenge broke the mould within which regulators and insurers had, to be honest, kept a particular view of fairness going well beyond its ‘use by’ date. Fairness in insurance is now something being shaped by several different stakeholders, some of whom can be as influential and informed as insurers. The emerging digital age of insurance has been at the heart of how this has come about.
The question then is how to organise an equality of fairness in insurance. This is something I did much research and writing on over the summer, and I hope to have the results published before the year end. What I can say ahead of that publication is that two things are vital – information to inform the debate, and an equilibrium of power within the organisation and governance of fairness.
Here in the UK, we are currently seeing early moves to bring this about. To put it one way, cards have been laid on the table (in terms of data and debate) and the next six months will see how the players at the table want to play the game. It could be immensely significant.
Steps for Insurers to Consider
Here are four steps that I believe insurers should be taking to move forward on the ‘responsible use of data’ in digital insurance.
Governance – address the conflicts of interest inherent in a lot of governance structures and evidence that the right mitigation has been achieved.
Engagement – open up your governance and processes around the responsible use of data, both to build confidence in their efficacy and make them a joint venture with external stakeholders.
Fairness – stop thinking in terms of ‘data objective; fairness subjective’. Like it or not, they are both subjective, both of equal importance and both capable of working in sync with each other. Do some disruption around how you handle fairness – think in terms of the equality of fairness.
Access All Data – rethink the case for the collection and use of all data. It is unsustainable in its present form. Instead, think about what you want to achieve, and look at other ways of achieving it. Giving can be as powerful as taking.