A Lot of Life Insurers Are Navigating AI Blind
The National Association of Insurance Commissioners (NAIC) is a membership organisation for the individual state insurance regulators. Its survey looked at how 163 insurers with medium to large life books were approaching and handling artificial intelligence and machine learning (AI/ML). Three things stood out from its findings…
- a large proportion of these life insurers were not using AI/ML;
- of those who were using AI/ML, a significant proportion had not adopted practices in relation to either the NAIC’s AI Principles or any other standards / guidance;
- most insurers using AI/ML had not adopted practices aimed at making their AI systems secure, safe, and robust.
Let’s explore each of these findings in more detail, and then weigh up their implications. One point to note first is that I use the word ‘use’ to encompass those using AI/ML, planning to use it and researching how best to use it.
Use of AI/ML in Life Books
Of the 163 insurers surveyed, 37% were not using AI/ML. These were the top four reasons (in order) given by those firms…
- no compelling business reason at this time;
- reliance on legacy systems / requires IT, data and technology upgrades;
- lack of resources and expertise;
- risk is not commensurate with current strategy or appetite.
What this tells us is that a good number of insurers see the future of life insurance as not automatically tied to the use of AI and ML. At least not yet. ‘Compelling’ is the key word in that first bullet point.
At the same time, it’s clear that a good proportion feel held back in some way – legacy systems and access to expertise for example. The risks and costs of tackling these are still having an impact.
Accountability Shortfall
Those insurers who were using AI/ML were asked whether they had adopted practices with regard to accountability for data and algorithm compliance, in each of these operational areas…
- Pricing and Underwriting: Yes = 47 / No = 47
- Marketing: Yes = 42 / No = 52
- Risk Management: Yes = 25 / No = 68
What this tells us is that governance is not a priority for the life books of many of these insurers using AI/ML.
This could be down to not knowing what practices to use or how to adopt them, or perhaps to relying on the supplier to deliver a compliant system. Either way, a significant question mark hangs over how the US life market is organising ‘digital innovation’.
For sure, this survey doesn’t tell us about accountability for things like the performance of their AI/ML, but given the investments involved, I doubt the ‘no’ column would be anywhere near as high there.
Not Secure or Safe
A similar picture emerged in the survey around the adoption of practices to ensure that AI/ML systems were secure, safe and robust.
- Pricing and Underwriting: Yes = 49 / No = 46
- Marketing: Yes = 44 / No = 51
- Risk Management: Yes = 26 / No = 68
This was perhaps the survey finding that surprised me most. I had expected that insurers using AI/ML systems for any book of business would have taken steps to ensure that those systems were secure, safe and robust.
After all, why adopt a system without first checking that it wouldn’t wobble and fall over, or leave a door open for hackers? Could it perhaps be down to AI/ML systems being viewed through rose-tinted glasses? Are the suppliers of such systems not being scrutinised critically enough?
Some Thoughts and Implications
What this survey indicates is that a significant chunk of the US life sector has been engaging with AI/ML on fairly simplistic terms. There are a lot of firms willing to take the risk that their system may not be secure or compliant.
Perhaps this is down to firms needing to innovate quickly, so as to ‘fail fast’ and ‘learn quickly’. What they’re doing though is thinking about this the wrong way round. They’re putting the technology before the business. Instead, they should be situating the technology within the business, not outside of it.
To be rather blunt, implementing a system into a business without knowing whether it is secure or compliant is just plain stupid. Secure and compliant should be the entry requirements for a business to adopt a technology, not afterthoughts.
What will come out of surveys like this is a renewed vigour from state regulators to force insurers to prove that they are using AI/ML responsibly. The Colorado laws will be just the start. And few insurers, adopters of AI/ML or otherwise, should be surprised by this.
The outcome will be a new version of ‘fail fast’, in which some life insurers are deemed to have ‘failed’ by state regulators faster than they expected. That’s because regulators will be putting consumers and the law before either the business or the technology.
This is not a ‘competition’ thing, along the lines of ‘we had to do it this way in order to compete’. Too many insurers don’t feel compelled to adopt AI/ML systems into their life book for that to hold. Instead, I believe it is centred on perceptions of technology and the credibility it is felt to bring to individuals and firms. And that is not the best basis upon which to transform a sector.