Why TCF is the wrong path for implementing data ethics

  • 14 October 2019

The UK regulator has chosen data ethics as one of its ‘cross sector priorities’ for the current business year. One approach it is considering is whether its existing ‘Treating Customers Fairly’ framework would be the best vehicle for implementing its eventual findings. That would be a mistake, and it points to some upside-down thinking.

Treating Customers Fairly (TCF) was a flagship initiative introduced by the regulator in 2006. It was designed to put the fair treatment of customers at the heart of insurers’ business models. It came with a set of six principles and support material to help firms implement them.

And you can imagine something similar being prepared by the regulator for data ethics: a set of principles and supporting material. Given that it’s the customer who experiences the outcomes that insurers’ use of data and algorithms generates, you might wonder why TCF would not then be a suitable framework. It would certainly be convenient, but it would also be ineffective, for these five reasons.

TCF Comes with Baggage

Firstly, think of traction and response. TCF is not a framework that has won the hearts and minds of insurance executives. It’s often referred to as a grand ‘tick box exercise’ that has failed to deliver on what it set out to do. And there’s evidence for this around the market: seminars and conferences still regularly urge the sector to really put customers at the heart of what they do, in language that treats fair treatment as something insurers have yet to get to grips with.

So, 13 years on, it still looks like a struggle. Of course fair treatment of customers is more like a journey than a destination. However, too much evidence points to it having been a difficult and chequered one. Bolting data ethics onto TCF would risk condemning it to a similar path. 

Secondly, think of scope. Data ethics raises a much wider range of ethical issues than fairness alone: autonomy, equality, privacy and transparency are just four of them. Yet if insurers are told to adapt TCF to include data ethics, there’s a danger that some firms will adopt that narrower ethical view and simply not see, or not bother with, those wider issues.

Thirdly, think of completeness. Data ethics, despite its name, covers data, algorithms and practices. While TCF covers data and practices as well, it doesn’t present a natural home for understanding the role and impact of algorithms. The danger is that this lack of fit will result in algorithms missing out on the attention they deserve.

Round Peg, Square Hole

Fourthly, think of rights. Data ethics encompasses a range of rights embodied in legislation on equality, data protection and human rights. TCF is less about rights and more about how you treat people. There’s a danger that those rights would be given insufficient weight within a TCF framework not used to handling them.

Fifthly and finally, the TCF programmes I’ve encountered tend to see customers as individuals and have processes for treating them that way. Data ethics certainly sees customers as individuals, but it also has to see them as members of broader groups. The danger is that this group dimension would be lost within a TCF framework. One of the clearest signs of this is how rarely TCF frameworks have mentioned (at least until recently) the fairness of pricing.

Properly scoped and implemented, TCF frameworks deliver a lot of benefit for customers and insurers alike. Yet they come with structural and legacy issues that are likely to undermine any new push on data ethics by the regulator.