Let’s be clear: the cases of algorithm destruction so far have been in the United States and have not involved insurers. That said, it would be a ‘brave’ insurer who thought that this couldn’t at some point happen to them.
So what’s behind their emergence? Regulators are taking two approaches. One is to use their own data and analytics to pinpoint misconduct and unlawful behaviour. The other is to treat algorithms and models as just another business asset: if regulators can evidence that the asset was connected with the rules being broken, then the firm can be required to stop using it.
Clearly, the former approach can feed into the latter, forming part of its evidence base. However, I also see the latter approach (algorithm destruction) as standing back from the detail of the data and analytics and adopting a simple mindset of ‘remove the problem’, with the firm expected to fix it offline. The firm isn’t being allowed to ‘tweak out the problem’ while still using the algorithm.
Removing the Benefit
And the reason firms aren’t being given that time to fix things is that regulators want to address both the problem and the benefit the firm has gained from it. In the two cases of algorithm destruction so far, the problem stemmed from a lack of consent for what the firms were doing with the data they had collected. The data had to go. And as both firms were benefiting from that problem data through models trained on it, the algorithms and models had to go too. The firms could not be allowed to benefit from their rule breaking.
This approach is both blunt and effective, sending as it does a powerful message to other firms that their data collection practices must be lawful. And I believe insurers in the US will be listening to that message, especially given a recent announcement by the insurance commissioner in California about how insurers should, and should not, use artificial intelligence and big data (more here).
I predict that within three years, a US insurer will incur an algorithm destruction ruling in relation to its gathering and use of data. And sure, for UK insurers, that will be one insurer in a state on the other side of the world, but the ripples sent out will quickly reach these shores.
The Lifetime Value Model Connection
Rather than wait for those ripples, what UK insurers should do now is draw a line from the Financial Conduct Authority’s ban on lifetime value modelling over to the Federal Trade Commission’s use of algorithm destruction. The two are not that dissimilar. Both involved a ‘stop that model’ ruling. The main difference is that UK insurers were allowed to adjust their pricing models over a year or two.
That ‘time to adjust’ was perhaps not much of a surprise, given it involved a significant readjustment in the thinking on ‘fairness of pricing’ by both insurers and regulator. When it comes to discrimination in underwriting, claims or counter fraud, that ‘time to adjust’ is likely to be much, much shorter. So I’m willing to add to my earlier prediction: that first case of algorithm destruction in insurance will involve issues around discrimination.
There are two dangers for insurers at the moment. Firstly, here in the UK, the ‘ethnicity penalty’ report is in the public domain and awaiting a response from insurers beyond the ‘we would not do that’ given so far. It’s a current issue, backed up by research.
The second danger is that too much of European insurers’ attention at the moment could be focused on having insurance not classified as high risk under the forthcoming EU AI Act (more here). That decision may end up being driven less by lobbying and more by events at the national level.
So, on that first question of whether insurers are at risk from algorithm destruction, I would suggest they treat it as an emerging risk of huge significance: one that needs to be reviewed now, with alerts set up to look out for key developments.
Reviewing Your Exposure
Let’s move on to my second question: how can you start weighing it up? This will involve bringing together various existing elements from across the business and weaving them into your risk management assessment.
I’m sure that you’ve already identified your business-critical models, as well as the other models that depend on those critical ones. You then need to look at your protocols, rules and records relating to the data feeding into those business-critical models.
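To make that dependency question concrete, here is a minimal sketch of mapping which models are exposed when a data feed turns out to be tainted. The feed and model names are purely illustrative assumptions, not a reference architecture.

```python
# A minimal sketch: a dependency graph from data feeds to critical models
# to the downstream models that consume their outputs. All names are
# hypothetical.
from collections import defaultdict

DEPENDS_ON = {
    "pricing_model":       ["quote_feed", "claims_history_feed"],
    "counter_fraud_model": ["claims_history_feed", "external_data_feed"],
    "renewal_model":       ["pricing_model"],  # depends on another model
}

def exposed_by(tainted: str) -> set:
    """Return every model reachable from a tainted feed or model."""
    # Invert the graph: who consumes each feed/model?
    consumers = defaultdict(list)
    for model, inputs in DEPENDS_ON.items():
        for src in inputs:
            consumers[src].append(model)
    # Walk outward from the tainted source.
    hit, stack = set(), [tainted]
    while stack:
        for m in consumers[stack.pop()]:
            if m not in hit:
                hit.add(m)
                stack.append(m)
    return hit

print(exposed_by("claims_history_feed"))
# contains: pricing_model, counter_fraud_model, renewal_model
```

The point of the walk outward is that a tainted feed doesn’t just expose the model it feeds; it exposes every model downstream of that one too.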
You’re looking for two things. Firstly, what levers do you have for controlling those data feeds? There should already be levers for consent and discrimination, but what about other data ethics issues? What information do you use to determine that you’ve got the right set of levers? How are you mitigating the obvious conflicts of interest here?
And secondly, on what basis are those levers being set at a particular control position? How are you grading the risk from those lever positions? And who has a say in this?
Let’s illustrate this. With the pricing ban here in the UK, most underwriters found that they didn’t have a ‘fairness lever’ for their pricing, despite plenty of evidence that one was needed. Of course, they weren’t helped by the regulator not seeing the need for such a lever either.
With the ethnicity penalty report here in the UK, insurers are in effect being told that while they may have ‘discrimination levers’, it looks like they haven’t been set at the right position.
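To make the ‘lever’ idea more concrete, here is a minimal sketch of what explicit, auditable levers on a model’s data feed might look like. The lever names, checks, risk grades and owners are illustrative assumptions, not a standard.

```python
# A rough sketch of named, owned 'levers' applied to records before they
# reach a business-critical model. All names and checks are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Lever:
    name: str                      # e.g. 'consent', 'discrimination'
    check: Callable[[Dict], bool]  # True if the record passes this lever
    risk_grade: str                # grading of the residual risk
    owner: str                     # who has a say in setting this lever

LEVERS: List[Lever] = [
    Lever("consent",
          check=lambda r: r.get("consent_recorded") is True,
          risk_grade="high", owner="Data Protection Officer"),
    Lever("discrimination",
          check=lambda r: "ethnicity" not in r and "ethnicity_proxy" not in r,
          risk_grade="high", owner="Chief Underwriting Officer"),
]

def screen(record: Dict) -> List[str]:
    """Return the names of levers a record fails, for audit logging."""
    return [lever.name for lever in LEVERS if not lever.check(record)]

# e.g. screen({"consent_recorded": False, "postcode": "AB1 2CD"})
# -> ['consent']
```

The design point is that each lever has a named owner and a recorded setting, so that when a regulator (or your audit function) asks why a lever sat where it did, there is an answer on file.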
The next thing to look for is evidence that these levers and their settings have been stress tested. Have the assumptions and controls determining your gross/net risk on each ‘lever issue’ been challenged by, for example, internal functions like audit, and/or external parties like independent experts?
Business Continuity Planning
Now move on to your business continuity planning (BCP). Do the ‘model-related disruptive events’ being monitored include regulatory triggers? Remember that with fairness and pricing, the regulator had set its policy perimeter in completely the wrong place. So, for your business-critical models, one lesson to learn here is: do your BCP triggers include ‘regulator wrong’?
There was an example of this several years ago in South Korea, when the misuse of an insurance counter-fraud database resulted not only in the insurers involved having to stop writing new business until the problem was sorted, but also in the insurance regulator being punished by a super-regulator for not responding to the problem earlier.
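If regulatory events like these are to sit in your BCP at all, they need to exist as named, monitored triggers with agreed responses. A minimal sketch, with entirely hypothetical trigger names and responses:

```python
# A minimal sketch of regulatory events as monitored BCP triggers for
# business-critical models. Trigger names and responses are hypothetical.
BCP_TRIGGERS = {
    "data_feed_found_unlawful": "quarantine the feed; assess exposed models",
    "regulator_stop_model_order": "switch to a pre-agreed fallback model",
    "regulator_wrong": "reassess the policy perimeter; commission external review",
}

def respond(event: str) -> str:
    # An unrecognised event is itself a finding worth escalating.
    return BCP_TRIGGERS.get(event, "escalate: unmonitored disruptive event")
```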
I’ve written before about an intermediate step that regulators might take before considering model destruction. It’s called ‘machine unlearning’ (more here) and involves reverse engineering out the influence of ‘tainted data’ on how a critical model has been trained. Should such a capability be considered as part of your BCP? What other continuity solutions have been prepared?
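For a flavour of what such a capability involves, here is a minimal sketch of one well-known approach to ‘exact’ unlearning, sharded training (often called SISA): split the training data into shards, train one model per shard, and when tainted records must be forgotten, retrain only the shards that contained them. The model choice and shard count are illustrative assumptions.

```python
# A minimal sketch of exact machine unlearning via sharded training:
# each shard gets its own model, so removing tainted records only
# requires retraining the affected shard, not the whole ensemble.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedModel:
    def __init__(self, n_shards: int = 4):
        self.n_shards = n_shards
        self.shards = []   # per-shard (X, y) training data
        self.models = []   # per-shard fitted models

    def fit(self, X: np.ndarray, y: np.ndarray) -> None:
        # Assign records round-robin to shards, train each independently.
        idx = np.arange(len(X)) % self.n_shards
        self.shards = [(X[idx == s], y[idx == s]) for s in range(self.n_shards)]
        self.models = [LogisticRegression().fit(Xs, ys) for Xs, ys in self.shards]

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Aggregate shard predictions by majority vote.
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)

    def unlearn(self, shard_id: int, keep_mask: np.ndarray) -> None:
        # Drop tainted records from one shard and retrain only that
        # shard's model; the other shards are untouched.
        Xs, ys = self.shards[shard_id]
        self.shards[shard_id] = (Xs[keep_mask], ys[keep_mask])
        self.models[shard_id] = LogisticRegression().fit(*self.shards[shard_id])
```

The trade-off is cost: the more shards you use, the cheaper each unlearning operation becomes, but the weaker each individual model gets. The continuity question for your BCP is whether anyone has worked that trade-off through before a regulator forces the issue.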
Finally, look at your business interruption cover (the one you buy, not the one you sell). What cover is triggered by what events? What costs are covered in relation to both critical models and dependent models? How does model destruction sit within all that? What you’re looking to avoid here are the surprises and conflict that swamped the BI market when the pandemic hit.
Think Models First
I have of course focussed here on models rather than the data that they’re trained upon and fed with. That’s because insurers’ data governance functions have tended to think chiefly in terms of tainted data. What I’ve been seeing is that, because of the symbiotic relationship between data and algorithms, the much greater threat to digital strategies now comes from the algorithms and models themselves. Insurers need to think through how they intend to handle that threat. It will only be a matter of time before investors start asking questions about their preparations.