Just as insurers are evolving in all sorts of innovative ways, so their accountability mindset needs to evolve too. As insurers do new things with data and analytics, the public wants to be reassured that the right questions are being asked in relation to key ethical concerns like fairness, bias and autonomy.
The right accountability mindset is much more important than most insurers think. In this article, I’ll be doing two things. Firstly, I’ll explain why the right accountability mindset is so important. And secondly, I’ll look at seven accountability challenges identified by a key advisor to the FCA on data ethics. Together, these can be used by insurers in 2023 to evolve their governance arrangements in ways that build confidence in a crucial audience watching from the wings: policymakers.
A Mixed Picture
How well are insurers doing then, in terms of effective governance arrangements for their digital strategies? The evidence for UK insurers is thin on the ground, which, when you’re talking about accountability, is a problem in itself. In the US, a recent survey by the Coalition Against Insurance Fraud (CAIF) found that many insurance professionals were not that confident about their firms’ governance of data and analytics in counter fraud (more here).
Ok, so that is counter fraud and the US, but I think it is indicative of a wider ‘mixed picture’ in terms of data governance in insurance. The picture is mixed because of three things...
- the competitive pressures to use all that expensive data and analytics as much and as quickly as possible;
- the complexity of the digital transformation of insurance, which makes it difficult to get an effective grip on all that is happening;
- a sense of ‘we are good people, so just trust us’ that pervades the sector, but which the public does not share (more here).
A Here and Now Thing
What insurers need to recognise is that policymakers are now picking up on public concerns about how data and analytics are being used. An obvious example is the proposed EU regulation to ban insurers from accessing secondary health data (more here). And in the UK, regulators are currently consulting on how suitable existing accountability regulations are for a financial services sector fully engaged with AI.
While some insurers have put data governance arrangements in place, these tend to be, to put it one way, too near the data. Sure, there’s likely to be model risk management in place as well, but all too often, the questions being asked centre on data and model quality. Not enough attention is being paid to upstream questions, like ‘why are we using this data?’ and ‘are model outcomes being used fairly?’ (not the same as model outcomes being fair).
Strong and appropriate governance is therefore needed to manage those competitive pressures, that complexity and those cultural norms mentioned above. At the moment, that governance is often insufficient, usually opaque, and invariably under pressure. There’s an accountability gap that needs bridging, and sooner rather than later, given the questions being raised by regulators.
Bridging that gap will be challenging. I’m going to turn now to some thoughts of Professor Luciano Floridi from the University of Oxford. He’s a leading expert on data ethics and, from what I’ve heard, a senior advisor to the FCA as well. His work is clearly influencing the current attention being given to AI accountability by financial regulators here in the UK.
Seven Challenges
In his work on the ethics of information, Prof. Floridi identified seven accountability challenges that firms (in general) face when designing and deploying digital decision systems.
Accountability Blindness. It’s likely that artificial intelligence will produce consumer detriment that is not picked up by the firm. It will therefore continue unrecognised and unaddressed, yet remain significant to those experiencing it. Such accountability blindness could stem from just not bothering to look for it, or from just not seeing it (more on this later).
The Accountability Gap. The complexity of AI can open up a veritable gulf between the decisions of individual people and the effects that those decisions produce. Even when the detriment is evident, no one in the insurance firm sees it as their responsibility.
Diluted Accountability. That complexity of AI can also result in individuals feeling that their input, their decision, is so marginal as to obviate any responsibility for the consequences that result. The view would be that ‘I couldn’t have done anything wrong because my input has been so marginal’.
Siloed Accountability. That complexity also results in greater compartmentalisation of actions and decisions on AI projects. It’s all too easy for individuals, unable to see how an outcome could have resulted from their particular silo, to decide to ignore it altogether.
Blinkered Accountability. Some people think of ethical issues only in relation to the behaviours of real, physical people. As a result, they fail to see, or even understand, the implications of the decisions they make in relation to ‘artificial’ intelligence.
The Accountability Dynamic. The ever-evolving nature of AI can make it difficult to tie particular impacts to particular decisions. It can be all too easy to assume that a decision system in constant flux cannot be held directly accountable for anything other than fleeting, insignificant micro incidents of detriment.
The Accountability Imbalance. The reach and depth of AI makes it an empowering technology for those utilising it. This can cause firms to see their decisions as perfectly rational, while seeing the decisions of consumers as much less so. The danger is that such empowerment could cause firms to just not see ethical issues associated with their use of AI.
Informational Objects
Let’s look in a little more detail at the first and seventh of the above challenges (about blindness and imbalance). These can be exacerbated by a tendency in big digital projects to think of the consumer more as an informational object and less as a person. The identity given to them by the system is treated as the ‘true’ one, and deviations from it as errors or as suspicious. There’s a greater emphasis on inputs and outputs like models and clusterings, rather than outcomes like volatility and accessibility.
This will feed directly into how an insurer designs and delivers its governance arrangements. If there’s more trust in the system than in the customer, then those governance arrangements will reflect this. And they will be very different from those of a firm that has built its digital strategy around its relationship with customers.
Fundamental Questions
So this brings us to the fundamental question of ‘what are we doing digital for?’ That will, quite naturally, shape the scope and depth of the question ‘what are we accountable for?’ And that will in turn influence the question ‘what are we able to be accountable for?’
Firms may well answer these questions for themselves, but in reality, they are accountable for what policymakers and consumers want them to be accountable for. What this means is that firms cannot settle questions like ‘what are we accountable for?’ on their own. They may prefer to, but in the long run, these questions need to be answered in line with consumer expectations.
An example of this, par excellence, is algorithmic bias. At the moment, many firms, in considering the question ‘what are we able to be accountable for?’, are answering it along the lines of ‘not a lot, as everyone knows bias is very difficult to nail down and remove’. Yet at the same time, consumers are saying that it is their number one concern.
This is causing policymakers to question why firms are using systems that they cannot (or struggle to) control for bias. And that in turn leads to developments like the proposed EU ban on the use of secondary health data by insurers.
So in summary, you can see the feedback loops that these accountability challenges can produce. That’s why the design of your governance arrangements matters, in terms of scope and depth, upstream and downstream, and insurer and consumer perspectives.