Are Insurer Websites Influencing Application Fraud Trends?
Insurance is complex. Websites are complex. Bringing them together can make delivering a good user experience pretty challenging.
Now think about this from a counter fraud angle. The ‘pre-sales space’ starts from the moment the consumer goes onto a website and ends when they either leave or make a purchase, whichever comes first. During that time, various forms of counter fraud analytics are being used to gauge the potential fraud risk that the particular consumer is presenting.
This can cover everything from keystroke patterns and how they move around a webpage, to how data is entered and corrected, and at what point they leave the website.
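To make the idea concrete, here is a minimal sketch of how pre-sales behavioural signals might be combined into a single risk score. All of the signal names and weights below are invented for illustration; real counter fraud analytics use far richer models than a weighted sum.

```python
# Hypothetical sketch: combining pre-sales behavioural signals into a
# single fraud-risk score. Signal names and weights are invented for
# illustration only.

def risk_score(signals: dict) -> float:
    """Weighted sum of behavioural signals, each clamped to [0, 1]."""
    weights = {
        "rapid_field_edits": 0.3,     # repeated correction of key answers
        "copy_paste_identity": 0.25,  # identity fields pasted, not typed
        "erratic_navigation": 0.2,    # jumping back and forth in the journey
        "session_abandonments": 0.25, # prior abandoned quotes, same device
    }
    score = sum(weights[k] * min(max(signals.get(k, 0.0), 0.0), 1.0)
                for k in weights)
    return round(score, 3)

print(risk_score({"rapid_field_edits": 1.0, "erratic_navigation": 0.5}))
```

Note that a signal like "rapid field edits" is exactly where a badly designed quote journey leaks into the score: a confused honest consumer and a fraudster probing for a cheaper quote can look identical to the model.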
Clearly, if on the one hand an insurer’s user experience is poor because of how its website is designed and the way in which the quote journey is organised, but on the other hand its counter fraud analytics are sophisticated and pervasive, then that insurer’s systems are going to be creating an awful lot of ‘false positives’.
In short, the insurer would be feeding its own counter fraud success, at the expense of business lost for no ‘real world’ reason. The saying ‘shooting your business in the foot’ seems apt.
Quote Journeys
One hears anecdotally of quote journeys that are ludicrously complex. Questions with double negatives, sequences of answers that flip between ‘yes’ being the favourable answer and ‘no’ being the favourable answer, or that flip between generic and specific, are just a few examples. Simple webpage design elements such as font and layout just add to the confusion.
Then there’s the growing tendency for insurers to pre-fill some questions for the consumer from their own data sources. Is the consumer changing that data in order to correct it, or to fraudulently tease out a lower quote? Here the question of data quality feeds directly into user experience and the risk that more false positive situations occur.
Much the same happens when data entered by the consumer onto a price comparison website is piped directly through to the insurer with whom cover is sought. Differences in the data fields, their meanings and significance levels create incompatibilities that then lead to a poorer user experience and more false positive signals in counter fraud systems.
It matters also how counter fraud people are performance managed. I know that some insurers still performance manage by number of cases of counter fraud identified, despite the obvious conflicts of interest. In such situations, what matters to those being managed are simply the positives, not whether they’re true or false.
It's a People Thing
What we’re looking at here are not technological issues, but people issues. How you specify a system’s design, how you specify a performance management framework: both involve decisions about people, made by people.
So when a few years ago a UK insurer took on board new software to check for fraud in its mobile insurance scheme, it was people who decided not to configure the software for their portfolio and instead rely on its default settings (more here). The result was a flood of false positives that triggered a flood of complaints and a subsequent large fine from the regulator.
Is too big a deal being made about false positives in counter fraud? I don’t think so, and I don’t think insurers should see it that way either. The insurer has an obvious interest in having decision systems that work properly. And it has an interest in rewarding its people based upon real results, not false ones.
And there is of course the wider social issue, of people experiencing higher premiums, less cover and reduced availability because of poor counter fraud systems and poor user experiences. This is then seen as justified within the sector because counter fraud trends show ever increasing numbers. The sad reality is that some of those ‘ever increasing numbers’ are being generated by insurers, not consumers.
Try This
It would not involve any rocket science for an insurer to have their application user experience evaluated by an independent expert. The insurer could then tie the levers and dials within its counter fraud systems to the quality of that user experience. Turn them up for a good experience; turn them down for a poor experience.
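The suggestion above can be sketched in a few lines. This is a simplistic illustration, not an implementation: the function name, the 0-to-1 UX score and the threshold numbers are all assumptions invented for the example.

```python
# Hypothetical sketch of tying counter fraud sensitivity to an
# independently assessed user-experience score. All numbers are
# invented for illustration.

def fraud_flag_threshold(ux_score: float,
                         base_threshold: float = 0.5) -> float:
    """Return the risk-score threshold above which a quote is flagged.

    ux_score runs from 0.0 (terrible quote journey) to 1.0 (excellent),
    as rated by an independent UX evaluation. A poor journey raises the
    threshold (fewer flags), because confusing questions inflate
    behavioural 'risk' signals with false positives.
    """
    if not 0.0 <= ux_score <= 1.0:
        raise ValueError("ux_score must be between 0 and 1")
    # Poor UX drifts the threshold up towards 0.8; a good UX keeps it
    # at the sensitive base setting.
    return base_threshold + (1.0 - ux_score) * 0.3

print(fraud_flag_threshold(1.0))  # good journey: sensitive setting
print(fraud_flag_threshold(0.2))  # poor journey: more forgiving
```

The design point is simply that the threshold becomes a function of UX quality rather than a fixed constant, so the counter fraud team can no longer treat a confusing quote journey as someone else’s problem.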
Simplistic? Perhaps, but in the absence of anything else?
Challenging? Well, yes, for there's not enough of it in counter fraud.