Andrew Boyle, chief executive of corporate finance and investment company LGB & Co, analyses whether artificial intelligence will ever be able to replace quality advice.
Since the term artificial intelligence (AI) was first coined by American academic John McCarthy in 1956, Western society has been obsessed with machines that might one day think like humans.
Soon after the term was coined, the British and US governments spent millions supporting research into AI, only to halt funding in the early 1970s when scientists admitted they had made little, if any, progress.
More recent advances in technology have led people to believe we are at last on the cusp of a breakthrough in AI amid claims that we are living at the start of the fourth industrial revolution. So much so that a recent survey from Temenos and Forbes Insights revealed almost half of high net worth individuals believe AI will ‘mostly replace’ humans in portfolio management over the next five years.
In a survey of 310 global wealth managers and high net worth individuals, 35% thought AI would replace humans in investment advice and 29% believed client communication would also be handled by AI.
But as recent news that Nutmeg has introduced human financial advisers to its digital-only investment app demonstrates, we are still a long way from achieving that AI dream.
Nutmeg is far from alone. Across the financial spectrum applications are being developed that claim to be powered by AI. Many of them do little more than collect information from prospective clients before a human intervention.
Such applications are useful for advisers in that they help reduce account administration, but they do little else.
Beyond this, robo-advisers require investors to fit into one of a number of predetermined categories, which must be kept simple enough to suit their algorithms.
This obviously restricts investment choice and investment decisions, as investors are limited to varying degrees of adventurous, balanced or cautious investment portfolios. This simply won’t work for sophisticated investors, nor for first-time investors, for whom there is no basis on which to determine a categorisation. Furthermore, investors may move between categories as their circumstances change.
The criteria set for a fixed investment period may also cease to be relevant.
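To make the point concrete, here is a purely hypothetical sketch of the kind of rigid questionnaire-to-category mapping described above. The function name, score scale and thresholds are invented for illustration and are not taken from any real robo-adviser.

```python
# Hypothetical illustration: a robo-adviser reduces every client to one of
# three fixed buckets via a simple questionnaire score. The thresholds and
# labels below are invented, not drawn from any actual product.

def categorise(risk_score: int) -> str:
    """Map a risk questionnaire score (0-10) onto one of three fixed portfolios."""
    if risk_score <= 3:
        return "cautious"
    if risk_score <= 7:
        return "balanced"
    return "adventurous"

# Every client, however individual their circumstances, lands in one of
# only three buckets:
print(categorise(2))   # cautious
print(categorise(5))   # balanced
print(categorise(9))   # adventurous
```

The limitation the article identifies is visible in the structure itself: a change in a client’s circumstances, or criteria that cease to be relevant mid-way through a fixed investment period, can only be expressed as a jump between the same three coarse buckets.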
The concept of AI replacing humans in portfolio management seems far-fetched. We have seen at times of sudden market volatility that trading algorithms work until they don’t. Similarly, AI is based on the recognition of patterns in data. When patterns are suddenly disrupted by new factors, the validity of AI-based decisions must be questioned.
Other AI claims have also been called into question, including medical app Babylon AI’s assertion that it had matched the diagnostic skill of a human doctor, after it emerged that data in its trials had been entered by doctors rather than real-life patients, thereby skewing the results considerably.
Concerns about driverless cars earlier this year led researchers at MIT to ask two million people globally how AI should assess the value of human life – for instance, whether a car facing an imminent crash should prioritise hitting an elderly person over a younger person or a child. Surprisingly, this elicited a huge variety of responses on which life held the most value, presenting a significant problem for computer programmers.
What these examples show is that AI is still very much in its infancy. What these applications cannot do still far outweighs the very little they can, which stems from the fact that they are still based on a 1960s concept of trial-and-error machine learning.
Even those involved in the field of AI are critical of it. Ali Rahimi, a senior researcher in AI at Google in San Francisco, claimed machine learning algorithms, in which computers learn through trial and error, had become a form of ‘alchemy’. Researchers, he said, do not know why some algorithms work and others don’t. Nor do they have rigorous criteria for choosing one AI architecture over another.
Herein lies the continuing problem. AI can only learn through trial and error. What it remains incapable of is genuine inspiration, empathy or imagination, the three foundation stones of human intelligence.
Those three distinct qualities also underpin the wealth management industry. Providing the right advice to people, whether they are buying a house, saving for the future or planning their pension, and gauging the level of risk they feel comfortable with, requires an emotional intelligence that no simple set of algorithms can supply.
It is precisely the ability of the wealth manager to identify with their clients and their creativity in finding solutions to potential problems that mark out the best from the average in the sector.
AI may be able to help humans do their jobs more efficiently, but for now it cannot do any job that requires it to understand the hopes and fears that often drive clients’ investment decisions.
Ultimately that is the point behind the story of the supercomputer Deep Thought in Douglas Adams’ Hitchhiker’s Guide to the Galaxy, published more than 40 years ago. The fact that Deep Thought’s response to the ultimate question of life, the universe and everything – after 7.5 million years of calculation – is 42 should tell us everything we need to know about AI.