It sounds a bit scary, but could big data soon make it impossible for the board members of an insurance firm to provide proper oversight? Is insurance changing so much that board members will soon feel unable to hold the management of the firm to account? There’s an argument emerging to this effect, and I’ll cover it in this and the next post.
Insurance is entering what looks to be a transformative period, largely as a result of the influence that big data is having on underwriting, claims and marketing practices. These changes will bring many benefits to insurance firms, and that’s great. But at the same time, no one can be sure that all of these changes will deliver everything that is being promised. After all, how many project directors working in insurance would put up their hand when asked to confirm that they had never experienced problems with the delivery of a key project?
So what do board members (and non-executives in particular) have to be on the lookout for when weighing up a project steeped in big data?
Think of three types of error in big data projects: errors in design, failures to operate as intended, and unintended consequences. So, if that’s what they’re looking for, will they actually be able to spot them?
Here’s one challenge: will the information upon which they can judge a big data project be accessible and understandable? That hardly seems likely: the complex decision-making processes inherent in big data algorithms make them particularly opaque. And if there is an element of machine learning within the algorithm, then understanding it will be nigh on impossible.
Perhaps there should be an element of trust involved. Yet how do you know whether the risk you’re being asked to ‘just trust people with’ is big or small? And how does a collective of trustors (aka all those board members) decide what level of risk to ‘just trust people with’ is acceptable? That’s the type of question likely to raise some pretty serious differences of opinion between executive and non-executive directors.
So what expectations are board directors under in relation to their oversight of big data projects? As one regulator said in 2015, “big data is not a game played by different rules”. That would point to those expectations being just the same as those for more ordinary, everyday projects.
Let’s take fairness as an example. How will the board satisfy itself that the algorithms driving an increasing proportion of their underwriting are able to deliver fair outcomes for customers? Or to put it more prosaically: how will they be sure that their algorithms aren’t discriminating against certain categories of consumer? Are they seeing evidence of a discrimination prevention strategy? Is there any fairness-aware data mining being done? And just when you thought that was a bit of a challenge, remember that the answer would have to cover both hand-coded and machine-learned algorithms.
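To make “fairness-aware data mining” a little more concrete, here is a minimal sketch of one check a board might ask to see in a fairness report: a disparate-impact ratio comparing favourable-outcome rates across consumer groups. The group labels, the sample decisions and the 0.8 threshold (the so-called “four-fifths” rule of thumb) are all illustrative assumptions on my part, not anything drawn from a particular insurer or regulator.

```python
# Hypothetical sketch of a disparate-impact check on underwriting decisions.
# Groups, data and the 0.8 threshold are illustrative assumptions only.

def disparate_impact_ratio(outcomes_by_group):
    """outcomes_by_group maps a group label to a list of 0/1 decisions
    (1 = favourable outcome, e.g. cover offered at a standard premium).
    Returns (min rate / max rate, per-group favourable-outcome rates)."""
    rates = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact_ratio({
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% favourable
    "group_b": [1, 0, 1, 0, 1, 0, 1, 1],  # 62.5% favourable
})
print(f"ratio={ratio:.3f}")  # 0.625 / 0.875 ≈ 0.714, below the 0.8 rule of thumb
```

A check like this is deliberately crude: it says nothing about why the rates differ, and a board would want to see it alongside an explanation of the drivers. But it is the sort of simple, inspectable evidence that non-executives could reasonably ask for, whether the algorithm behind the decisions was hand-coded or machine-learned.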
In next week’s post, I’ll look at the responsibility and accountability angles to this.