Rise of the machines

Advisely Team


This article has been taken from the NMP education library, which has now moved to Advisely.

It’s passed law exams at US universities, Victorian schools have just banned access to it, and you can use it to shoot the breeze with a cyborg Benjamin Franklin: GPT-3.5, the latest version of the artificial intelligence language model from OpenAI, has attracted global attention. 

ChatGPT, the most popular application built on GPT-3.5, is designed to respond to users’ queries in a natural, conversational way. Given the right prompts, you could use it to brainstorm potential fixes for a blocked air conditioner drain hose – or write an essay on the tactical advantages of semi-automatic rifles over their bolt-action counterparts in World War II. (That latter example might give some indication as to why schools are wary about students having access to it.) 

The breadth of data the AI has to draw on means it can generate text on a vast array of topics in a seemingly endless variety of styles. And while it still has obvious limitations – some are baked in, like avoiding obscene and illegal content, while others, such as the repetition of certain phrases, should be ironed out over time – it’s already provoked debates over authenticity, plagiarism, labour automation and the underlying value of human input.

While we will be exploring those ideas in more detail over the coming weeks, it’s worth considering what role an AI like this could play in financial advice. Could it be a useful tool for improving efficiency in document generation and fielding simple client queries? And what are the risks involved – both to advisers and their clients? 

Perhaps we should just ask ChatGPT about it: 

An AI model like ChatGPT can play a role in providing financial advice in several ways, such as:

  • Chatbots: ChatGPT can be integrated into a chatbot interface to provide quick and accessible financial advice to customers.
  • Personalized financial advice: AI models can analyse large amounts of financial data to provide personalized recommendations based on an individual’s unique financial situation and goals.
  • Portfolio optimization: AI models can help financial advisors analyse investment options and make portfolio recommendations based on market trends and financial data.

However, there are also risks involved for both financial advisors and clients when using AI for financial advice, including:

  • Bias and fairness: AI models can perpetuate societal biases if trained on biased data, leading to unfair or discriminatory advice.
  • Lack of human empathy: AI models may not be able to understand or respond to emotional or personal factors that can impact financial decisions.
  • Regulatory compliance: AI models may not always comply with regulatory requirements, leading to potential legal issues.
  • Model accuracy: AI models can make mistakes or give incorrect advice, especially if the data used to train them is outdated or incorrect.

Thus, financial advisors should exercise caution and carefully evaluate the potential benefits and risks before incorporating AI models into their advisory practices.

It’s a fairly broad-strokes summary of the benefits and pitfalls of using AI in advice, but it’s reasonable. In fairness, my prompt was quite broad and I used the first response it provided – no fine-tuning involved. Oddly, there’s no mention of how AI could be used to assist with documentation and the perennial “bricks of paper” problem faced by advice businesses, although that might just be because the language model has yet to absorb the dizzying complexity of Australian financial services regulation.
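
For anyone who wants to replicate the experiment, it takes only a few lines of code. Below is a minimal sketch using OpenAI’s Python SDK (v1 or later); the prompt wording is illustrative rather than the exact prompt I used, and you’ll need your own OpenAI API key.

    # Minimal sketch: putting the article's question to GPT-3.5 via OpenAI's
    # Python SDK. The prompt wording is illustrative, not the exact prompt
    # used above. Requires an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the GPT-3.5 model behind ChatGPT
        messages=[
            {
                "role": "user",
                "content": (
                    "What role could an AI model like ChatGPT play in "
                    "providing financial advice, and what are the risks "
                    "for advisers and clients?"
                ),
            },
        ],
    )

    # As above: take the first response as-is, no fine-tuning or retries.
    print(response.choices[0].message.content)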

The idea that ChatGPT (or something like it) could be used to provide “personalised recommendations” for clients might be concerning for some advisers, though, since this encroaches on one of the key functions of the profession. While the response acknowledges that these recommendations would be bereft of “human empathy” and might “perpetuate societal biases”, it’s clear that advisers who use AI will need to be very up-front about where automated analysis ends and “human advice” begins. (This is to say nothing of the compliance issues that might arise.)

After all, according to Griffith University research from late last year, Australians are still quite wary of AI and its utility in providing investment advice. The research, co-authored by Dr Gavin Northey and Vanessa Hunter, found that while customers were comfortable using AI for simpler, lower-stakes tasks like account creation and deposits of up to $1,000, they were far more likely to request human involvement for amounts above that threshold.

Northey explained that while AI can reduce the “chance of human error” and judgement bias, “consumers tend to consider the performance of AI as inferior to that of a human expert provider.”

I decided to ask ChatGPT why that might be. According to the AI:

Consumers may be wary of trusting AI-based investment advice for several reasons, including:

  • Lack of transparency: AI models can be difficult for consumers to understand and interpret, leading to a lack of transparency in how investment decisions are made.
  • No emotional connection: Consumers may prefer to work with a human advisor who can understand their emotional and personal motivations for investing, as well as their individual financial goals.
  • Concerns about bias: AI models can be trained on biased data, which can lead to unfair or discriminatory investment advice. Consumers may be wary of relying on advice from a machine that may not fully understand their unique financial situation and goals.
  • Lack of accountability: AI models can make mistakes or give incorrect advice, but it can be difficult to hold them accountable. Consumers may prefer to work with a human advisor who can be held accountable for the advice they provide.
  • Fear of technology: Some consumers may simply be wary of new technology or may not fully understand how AI works, leading to a general mistrust of AI-based investment advice.

Therefore, it’s important for financial advisors to address these concerns and provide clear explanations of how AI models are used to support investment decisions. It may also be beneficial for advisors to offer a combination of AI-based advice and human expertise to ensure that clients receive comprehensive and trustworthy investment guidance.
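
If advisers did pursue that combination, the first cut of the triage logic could be remarkably simple. Below is a purely hypothetical sketch in Python, using the $1,000 comfort threshold from the Griffith research as the dividing line; the task structure and channel labels are invented for illustration, not drawn from any real advice platform.

    # Hypothetical sketch of a hybrid AI-human triage rule. The $1,000
    # threshold echoes the Griffith research above; everything else (the
    # ClientTask shape, the channel labels) is invented for illustration.
    from dataclasses import dataclass

    HUMAN_THRESHOLD_AUD = 1_000  # above this, clients tend to want a human

    @dataclass
    class ClientTask:
        description: str
        amount_aud: float

    def route(task: ClientTask) -> str:
        """Decide which channel should field a client task."""
        if task.amount_aud > HUMAN_THRESHOLD_AUD:
            return "human adviser"  # higher stakes: escalate to a person
        return "AI assistant"       # lower stakes: account creation, small deposits

    print(route(ClientTask("Open a new account", 500.0)))             # AI assistant
    print(route(ClientTask("Rebalance super portfolio", 250_000.0)))  # human adviser

In practice, of course, the routing rule would be the easy part; the hard part is the compliance and disclosure scaffolding built around it.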

Could a hybrid AI-human advice model work? What are the most obvious – and most beneficial – areas in which to introduce AI into the advice process? And how would tasks fielded by the AI factor into an adviser’s ongoing compliance obligations, especially in light of the proposed changes in the Quality of Advice Review?
