The approach you adopt to such questions depends to a large degree on the perspective from which you approach them. If it is a legal one, then that perspective is expressed as ‘if the law doesn’t say we cannot do it, then we can’. If it is an ethical one, then the perspective is expressed as ‘is this the right thing to do?’.
In between these two perspectives lie a lot of flexible interpretations, guided often by two things. Firstly, the proximity of a regulator and the drive with which it interprets and applies its remit. And secondly, the competitive and financial pressures that lead a firm to give weight to one of these perspectives over the other.
Add in a sector working with principles-based regulation and the path forward must at times seem rather hazy. Influenced by insurance’s close relationship with the legal sector, insurers have all too often favoured the legal perspective. It feels the safer of the two.
Yet has that always been the safer choice? The pricing super-complaint shattered sector views on the fairness of inception and renewal premiums. From an ethical perspective, though, the problem was obvious. My point, then, is that the sector’s tendency to fall back on a legal perspective can be a dangerous habit to get into, especially as the debate about AI ethics and data ethics evolves at an increasing rate.
Three Questions
So what can insurers do to make that haziness more manageable? I came across an interesting article by US ethicist Reid Blackman recently, in which he framed three questions around that legal and ethical boundary. I’ve adapted his questions and addressed them to challenges that insurers need to pay attention to. Note that the references to AI encompass data and analytics as well.
The first question I am calling ‘ineffective enforcement’. What is your firm doing with AI that is already illegal but which is not yet being fully or effectively enforced?
The second I am calling ‘delayed interpretation’. What is your firm doing with AI that is not illegal but is likely to be made illegal at some point?
And the third question I am calling ‘uncertain application’. What is your firm doing with AI the legality of which is unclear, and which is likely to receive attention?
Having posed these questions, the important next step is for you to decide how to answer them. Drawing on those above comments about perspective, seeking the answers solely from a legal or regulatory compliance team is not going to be the best way forward. Instead, your firm needs to assemble an inter-disciplinary team, drawing on both internal and external expertise. Add some challenge into that mix and the output will improve by leaps and bounds.
Some Answers to Get Started
How would I answer them then? In very simple terms, this is what I would suggest...
- under ‘ineffective enforcement’, I would suggest discrimination, fairness, privacy, biometrics and emotional AI.
- under ‘delayed interpretation’, I would suggest convictions, genetics and consent.
- under ‘uncertain application’, I would suggest autonomy, predictive analytics and granular fairness.
I’ve written about each of these nine topics in previous articles over the last year or so, so please look in the ‘articles by topic’ page for more detail about them. Alternatively, get in touch for an informal chat.
It’s an Enterprise Risk Thing
For insurers moving their businesses very firmly in a digital direction (and which aren’t!), these questions need to be addressed at the enterprise risk management (ERM) level. The answers should then be reviewed by the board committee responsible for risk.
The first thing you can do then is to look at what the ERM people have so far been reporting through to the risk committee in relation to these three questions. Are they being asked? Are they being weighed up by a suitably inter-disciplinary team? Are significance levels being estimated with the right level of challenge? Do people on the risk committee have sufficient knowledge to review them properly?
I am guilty here of allowing three questions to evolve into several more, so I’ll finish with the obvious point: it is better for your firm to be asking these questions itself than for them to come from outsiders seeking to right what they see as a wrong.