Jun 28, 2017 · 5 min read

Automating the emotions that drive marketing and sales

Financial advice has been transformed by regulations to eliminate conflicts of interest and raise standards. Commissions have been replaced by fees, but those fees have been set at levels that mean a great many people who would benefit from professional advice can’t afford to access it. Into this gap has come automated financial planning, which uses artificial intelligence to guide consumers towards investment decisions. A perfect marriage of regulation and technology then? Not quite, for beneath the surface lies a uniquely disruptive ethical risk.

Automated financial planning is on its way to becoming the market norm for delivering financial products to that huge middle market of consumers – estimated in the United Kingdom at around 15 million people. It uses algorithms to analyse information about you, to recognise your needs and respond with information about financial products. It can offer you advice on what to invest in, or leave that for you to decide, for it covers both advised and non-advised transactions.

Uncertainty and Emotion

So what’s the problem? It all seems so above board, the type of service that senior executives sign off with the reassuring thought that they are saving many millions of consumers from the perils of DIY financial planning.

Automated advice works because the algorithms it relies on have been trained to understand financial products and trained to understand what makes people buy them. Now, this seems pretty straightforward, but everyone in financial planning knows there’s much, much more to success in that market than a simple marrying up of supply with demand.

Financial planning is conducted against the backdrop of considerable uncertainties around what the financial future will look like, and what the personal future holds for that particular person. And because of those uncertainties, the marketing and sale of financial products is replete with all sorts of emotions.

Everyone selling financial products knows that there’s an emotional dimension to every sale. And every head of marketing and sales will want to keep hold of that emotional dimension in how their firm positions its use of automated advice.

Learning Emotional Signals

So the algorithms within automated advice will be learning the lessons of what makes consumers buy financial products, both from the historical sales data they will have been trained on, and from advances in the tracking and interpretation of emotional signals within the structured and unstructured data that automated advice feeds upon.

This ‘artificial emotional intelligence’ operates on two levels. Firstly and primarily, it follows what we say and how we say it. It learns how we listen and how we reply. And it remembers what we do and don’t do in response. Secondly, where possible, it also captures and interprets information about us that is inaccessible to our immediate senses. So, for example, if you combine automated advice with access to data from a health device like a Fitbit, the extent to which the provider can tap into our emotional signals expands considerably.
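To make those two levels concrete, here is a minimal sketch in Python of how such signal-reading might work: a crude scan for hesitant wording in what a customer types (level one), fused with an arousal estimate from wearable heart-rate data (level two). Everything here, from the word list to the thresholds, is an invented assumption for illustration, not a description of any actual provider’s system.

```python
# Hypothetical sketch only: illustrative word lists and thresholds,
# not any vendor's actual emotion-detection API.
from dataclasses import dataclass

HESITANCY_MARKERS = {"maybe", "not sure", "i suppose", "perhaps"}

@dataclass
class EmotionalRead:
    hesitancy: float  # 0.0 (confident) to 1.0 (very hesitant)
    arousal: float    # 0.0 (calm) to 1.0 (agitated)

def text_signal(message: str) -> float:
    """Level one: hesitant wording in the customer's reply."""
    text = message.lower()
    hits = sum(1 for marker in HESITANCY_MARKERS if marker in text)
    return min(1.0, hits / 2)

def sensor_signal(resting_hr: float, current_hr: float) -> float:
    """Level two: heart rate elevated above the wearer's baseline."""
    if resting_hr <= 0:
        return 0.0
    return max(0.0, min(1.0, (current_hr - resting_hr) / resting_hr))

def fuse(message: str, resting_hr: float, current_hr: float) -> EmotionalRead:
    """Combine both levels into one emotional read of the customer."""
    return EmotionalRead(
        hesitancy=text_signal(message),
        arousal=sensor_signal(resting_hr, current_hr),
    )

print(fuse("I'm not sure, maybe later", resting_hr=60, current_hr=78))
# EmotionalRead(hesitancy=1.0, arousal=0.3)
```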

The power of artificial emotional intelligence for automated advice firms lies in recognising the emotional levers in individual customers and then pulling and pushing those levers in order to achieve the best possible sale. With chatbots, for example, signs of hesitancy or nervousness will be detected and automated responses returned to build assurance. Signs of reluctance will be leaned against with concerns about the uncertainties of life and the need for protection. Signs of risk aversion will be automatically but subtly encouraged in order to steer the consumer towards buying that bit more protection. It may look like risk sensitivity, but as Spock would say, ‘not as we know it, Jim’.
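A sketch of that lever-pulling pattern might look something like the following, assuming a set of detected signals like those above and some scripted nudges. The signal names and the wording are invented for illustration.

```python
# Hypothetical sketch: map a detected emotional signal to a scripted
# response chosen to move the sale along. Signals and wording invented.
RESPONSES = {
    "hesitancy": ("Many customers in your situation start small - "
                  "you can always adjust the cover later."),
    "reluctance": ("Accidents rarely give notice. Cover now means "
                   "one less thing to worry about."),
    "risk_aversion": ("Given how cautious you are, the fuller "
                      "protection option may suit you better."),
}

def pick_response(signal: str) -> str:
    """Return the scripted nudge for a detected emotional signal."""
    return RESPONSES.get(signal, "Tell me more about what you're looking for.")

print(pick_response("hesitancy"))
```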

What’s more, the context of such emotions will be monitored, and information about the time of day, the place, or the social context will be stored, so that the best combination of circumstances is remembered, making the sale that little bit more certain next time round.
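In code, that context tracking could be as simple as the following sketch: log the circumstances of each interaction, then surface the combination with the best observed conversion rate for the next approach. The field names and contexts are invented assumptions.

```python
# Hypothetical sketch: remember which circumstances convert best.
from collections import defaultdict

# (time_of_day, channel) -> [contacts, sales]
context_stats = defaultdict(lambda: [0, 0])

def log_interaction(time_of_day: str, channel: str, bought: bool) -> None:
    """Record one contact and whether it ended in a sale."""
    stats = context_stats[(time_of_day, channel)]
    stats[0] += 1
    stats[1] += int(bought)

def best_context() -> tuple:
    """The combination of circumstances with the highest conversion rate."""
    return max(context_stats,
               key=lambda k: context_stats[k][1] / context_stats[k][0])

log_interaction("evening", "mobile app", bought=True)
log_interaction("evening", "mobile app", bought=True)
log_interaction("morning", "desktop", bought=False)
print(best_context())  # ('evening', 'mobile app')
```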

The differences between automated and human emotional intelligence

Yet you might ask how this differs from the door-to-door representative who sits opposite the consumer, watching for those same signs, seeking that same optimum sale. Does artificial emotional intelligence really differ from human emotional intelligence? It does so on several levels.

Firstly, it is much more opaque. With the human representative, you’re picking up their responses just as much as they’re picking up yours. With automated advice, that flow is only ever one way.

Secondly, the depth of knowledge that automated advice can build up about us, both individually and collectively as consumers of financial products, is far greater than even the best sales representative could ever hope to achieve.

Thirdly, the public is likely to perceive automated advice as more objective, more neutral in its interests. And firms using it will undoubtedly position their automated offerings to encourage those perceptions.

And fourthly, the scope for emotional manipulation in how our responses are read and responded to is far greater with artificial emotional intelligence than with its human equivalent. This opens the door to something I’ve referred to before as manufactured vulnerability.

And finally, the scale of this as an ethical issue is much greater, because the personalisation and persuasion are being systematised to a vast, automated extent. Remember too that with non-advised automated sales, the interests and incentives of buyer and seller are not aligned. And the history of advised sales is such that a question hangs over the extent of that same alignment there as well. It is that combination of automated persuasion and misaligned interests that makes this an ethical issue financial firms need to have on their radar.

What this adds up to

What it adds up to is this. Firms investing in automated offerings will face choices in how they respond to the possibilities that artificial intelligence, and artificial emotional intelligence in particular, present. A sense of innovation, of a market undergoing genuine change, will undoubtedly introduce much excitement about the potential that automation presents. Yet it also requires something very human, more empathic, from those weighing up such decisions: it requires leadership on ethics. For some of those possibilities presented by the use of artificial emotional intelligence could seriously erode trust in the market.

So along what lines should that leadership on ethics be directed? I would suggest that it be directed at the projects delivering the firm’s automated ambitions, and that it should be addressed at two levels: firstly, that ethical issues like those surrounding the use of artificial emotional intelligence feature in each project’s responsibility matrix; and secondly, that those same issues form part of those projects’ risk assessments.

For let’s be honest – the market for financial products has had its ups and downs over the years. If it mishandles the ethical side of automated advice, then the storms of public mistrust could descend upon the market again. And I wonder whether the public, and its representatives in Parliament, will shrug their shoulders in frustration, or instead do something more dramatic this time.

On a regulator’s radar

One regulator does seem to have these issues on its radar. During research on ‘emotion and AI’ carried out by Dr Andrew McStay of Bangor University, the Information Commissioner’s Office specifically asked about the use of artificial emotional intelligence in insurance. Something would seem to be troubling them.

Andrew’s book on emotion and artificial intelligence is worth reading, for it explains the subject in much more detail.

Duncan Minty
Duncan has been researching and writing about ethics in insurance for over 20 years. As a Chartered Insurance Practitioner, he combines market knowledge with a strong and independent radar on ethics.