Why EIOPA is going about its data ethics principles in the wrong way
Data ethics is an important consideration for a data-hungry sector like insurance. And working to a set of data ethics principles is seen as important by an increasing number of insurance organisations. Yet the process for determining such principles is far from neutral. The process shapes the outcome, for better and, probably in EIOPA’s case, for worse.
The European Insurance and Occupational Pensions Authority (EIOPA) is an EU body, created in 2011, whose core responsibility is to support the stability of the overall European financial system. It wants to create “a set of principles of digital responsibility in insurance” and is putting together a Consultative Expert Group to advise it on this. All fine, except for three pretty fundamental flaws.
Partial Principles?
Consider clause 7 of their ‘call for participation’ document: “While the principles of digital responsibility may cover the different areas of the insurance value chain, specific focus will be given to pricing and underwriting…”
Of course pricing and underwriting are an important part of digital responsibility in insurance. Yet to develop a set of principles that have ‘a specific focus’ on them, and which only ‘may’ cover other insurance functions, is to unbalance the venture from the start. If the focus does end up being on pricing, then the EIOPA principles should be labelled according to what is inside the tin. In other words, call them pricing principles.
Connectivity Matters
A second problem with that pricing and underwriting focus is that it turns a blind eye to the interconnectedness of digital insurance. Underwriting now starts at the marketing stage and continues through into the claims phase (more here). Claims predictions influence underwriting decisions. Neatly boxed functions are rapidly becoming a thing of the past, and to view a digitising market through the lens of such boxes is to apply a historical template onto a fluid future.
As with any consultative group, the real efficacy of its role will be set out in its terms of reference. These will make clear the extent to which the consultation is open or closed. In other words, is this group being asked for original input, or to sign off on something already half formed?
This matters, for consultation is a political and social process that can produce a huge variety of outcomes depending on how it is designed and controlled. This means that confidence and trust in these EIOPA principles will be determined by the extent to which their overall development is made transparent. Terms of reference for both individual members and the group overall should be published, along with agendas and meeting notes.
No Level Playing Field
Consultation is also an economic process, and therein lies another problem with the Consultative Expert Group. It is to be run on a shoestring, funded largely by the time and resources of those participating in it. The call for participation states that no one will be remunerated. Only the expenses of consumer representatives will be refunded; industry people and independent experts must fund their own participation.
The result will be that only those with the spare time and financial resources actually take part, and EIOPA readily admits that the workload will be heavy. That points to an over-representation of industry people and their advisers from the big consultancy firms. Consumer groups are often overstretched as it is, and independent experts will hesitate to fund the travel and related expenses entailed.
The upshot of all this will be a set of data ethics principles that leans heavily towards the interests of those industry people and their big four advisers. Yet if you want your principles to be trustworthy, they need to be developed from a wider range of opinions in order to meet everyone’s expectations.
Missing the Back Door Risks
EIOPA can do better than this. Others I’ve talked this over with have been surprised at the approach being adopted. EIOPA may have thought that their approach would reduce the risk of conflicts of interest. Unfortunately, while watching risks coming in the front door, EIOPA seems to have missed the risks coming in the back door.
Finally, it is worth considering whether more data ethics principles are in fact needed. After all, the EU has only just published its ‘Ethics Guidelines for Trustworthy AI’. Rather than individual pan-European vertical regulatory bodies developing their own principles, wouldn’t it be better for them to focus on developing responsible digital practices instead? Why add to the growing pile of data ethics principles, when what people actually want to see are the right practices?
EIOPA certainly needs to be involved in the whole question of digital ethics in insurance. Yet its contribution needs to build consensus, gain traction and influence decisions. In my opinion, it will struggle to achieve any of those three with its present approach.