Apr 4, 2017

Accountability in an era of algorithm-driven insurance – rock or hard place?

There is a question mark forming over the capacity of board members of insurance firms to perform their oversight duties effectively. The reason is the various types of algorithm at the heart of all the big data that is transforming underwriting, claims and marketing practices.

Take, for example, understanding where responsibilities lie. A natural starting point would be to look at how algorithmic big data differs from the long line of IT projects where responsibility for any malfunction or misfunction could be traced to a particular programmer. With a hand-written algorithm, there will be little difference: more complex of course, but there will still be some linearity of control along which to trace a line of responsibility.

Yet how long will hand-written algorithms be the norm? Algorithms with some degree of learning capacity are starting to be introduced. With such algorithms, no one person has enough control over the machine’s learning to be able to assume full responsibility for it. The complex and fluid nature of countless decision-making rules inhibits oversight.

In a recent survey by advisory firm KPMG, 91 percent of insurance CEOs admitted to being worried about the challenge of integrating automation.

And their modular design means that no one person or group will be able to fully grasp the manner in which one element responds to another. Thus emerges a significant gap between the algorithm’s behaviour and a designer’s control.

We’re now entering a period of insurance in which algorithmic and human decision-making happens in tandem. I’ve seen cases of hugely unfair policy changes being put down to ‘the data’, as the human element to a decision shifts responsibility for a particular unfair outcome onto the algorithmic element. Yet can an artificial agent bear moral responsibility? That’s very contentious. And is the alternative any more feasible – would the human element accept responsibility for the unethical behaviour of their increasingly autonomous creations?

Think of these other situations. In an environment where underwriting decisions are derived from a mix of algorithmic and human involvement, should the firm’s code of ethics apply only to the latter and not to the former? Do your firm’s ethical values apply only to the human element, and not to the algorithmic element, of ‘how things get done round here’? Do your metrics for gauging the fairness with which your firm is treating its customers apply to both human and algorithmic decisions, or just the human ones?

A recent survey by advisory firm PwC found that “72% of insurance CEOs think it will be harder to sustain trust in a digitised market”.

The problem widens when you bring in other obligations that board members will be under, such as compliance with the forthcoming General Data Protection Regulation. Data controllers will be required to evaluate the potential consequences of their algorithmic decision-making and report to executives via a data protection impact assessment. They will also be required to communicate the profiling methods, and the significance and consequences of the associated risks from that assessment, to data subjects (otherwise known as policyholders), in clear and plain language. More than a little challenging?

You could just about imagine this being feasible for a hand-written algorithm. With machine learning, it seems to go out of the window. Yet the accountability stays firmly with the directors in that boardroom. And in the UK, for instance, the regulatory trend is for that accountability to be increasingly fixed onto named individuals.

Insurance executives are being told, however, that their greatest challenge with machine learning “is to be comfortable with failure.” Yet this is expressed in commercial terms: you should expect to lose money on several projects in the expectation of winning huge amounts on the project that actually does deliver. Such narratives are of course permeated with a large dose of ‘self-serving bias’, yet putting aside that somewhat human characteristic (especially amongst consultants), it hardly seems likely that regulators will be quite so accommodating of a casino-like approach by senior executives. They will want to see evidence of the firm’s board understanding the risks being taken and monitoring the measures being used to control them. And they will particularly want to see evidence of how the board is monitoring any impact from those ‘comfortable failures’ on the fair treatment of customers.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.