
The modern Prometheus

Alex Burke, Advisely Team

But how does it work? 

You might have found yourself asking the same thing back when ChatGPT went from a curiosity to a global sensation virtually overnight. And if you were paying attention to the headlines at the time, you would have seen some wildly divergent opinions on the matter. 

To some, ChatGPT portended an imminent AI singularity that would lead to a Skynet-style superintelligence and global catastrophe. To others, it was little more than a digital support service for the improv actor desperately in need of a troupe – endlessly “yes and”-ing your prompts by predicting the most likely next word, one word at a time, based on patterns learned from its vast training data. 
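For the curious, the “next most likely word” idea is easy to sketch. The toy Python below is emphatically not how ChatGPT actually works (real models are large neural networks operating on tokens, and the one-sentence corpus here is invented purely for illustration), but it captures the basic loop the improv-troupe camp is describing: count which words tend to follow which in training text, then generate by repeatedly picking the most likely next word.

```python
from collections import Counter, defaultdict

# Toy "training data" - invented for illustration. A real model is
# trained on a vast corpus, not a single sentence.
corpus = (
    "the adviser reviewed the portfolio and the adviser "
    "updated the statement of advice for the client"
).split()

# Count which word follows which (a bigram table - a crude stand-in
# for the statistical patterns a large language model learns).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word, length=8):
    """Repeatedly append the most likely next word: the "yes and" loop."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break  # no continuation ever observed for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the adviser reviewed the adviser reviewed ..."
```

Run greedily like this, even the toy version quickly gets stuck in a loop; real models instead sample from a probability distribution over tens of thousands of tokens, which is part of why their output is both fluent and genuinely hard to explain.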

One year after ChatGPT’s launch, there appears to be some kind of consensus emerging – at least based on comments made during an FAAA Congress presentation in Adelaide this week. 

Coming soon

The three panellists at the event (Financial Planning Standards Board CEO Dante De Gori, Advice Intelligence CEO Jacqui Henderson and AI consultant Laurel Papworth) were asked whether they could describe AI’s role in financial advice in one word. De Gori chose “opportunity”, Henderson picked “evolving” and Papworth, veering slightly from the terms of the assignment, said “in utero”. 

While each respondent was coming at the topic from a different angle – De Gori, in particular, deferred to Henderson and Papworth on account of his relative lack of AI expertise – their answers shared a common theme: AI, for all its promise, is not quite there yet. Or, to put a more optimistic spin on it, AI has numerous potential applications in financial advice that have yet to be realised. 

This isn’t to suggest, of course, that AI isn’t already being used in some corners of the advice industry. In fact, Henderson pointed out that her firm is currently working with super funds to help them fulfil their Retirement Income Covenant obligations using AI as part of their hybrid digital advice models. She added that AI could be used in current advice businesses to “automate things like executing off the back of an SOA, the advice production process and helping you serve the mass market.” 

By automating some of the more time-consuming parts of running an advice business, she continued, “AI plays an important role in helping you see more clients. If you have digital supporting you, [it can help you] provide advice at scale through a pure digital experience driven by machine learning.” 

If the idea of a purely digital advice experience set off any alarm bells in the audience, De Gori offered some reassurance. In his previous role as FPA CEO, he said, he “got a knock on the door from the robo-advisers saying they’d arrived in Australia and we were probably not going to have any members in 12 months’ time.

“Later, they came back and asked, ‘How can we work with financial planners?’ For me, that is very much the future message as well for AI.”

I’ll be your wingman anytime

The presentation regularly returned to the idea of AI functioning as a “co-pilot” for the modern financial adviser. Instead of replacing advisers, as the robo-advisers threatened to do in De Gori’s telling, AI could work alongside them to streamline specific tasks, handle simple queries and provide suggestions where appropriate. Sticking with the aviation metaphors, though, for many people AI is more black box than co-pilot – largely due to the problem outlined at the beginning of this piece. 

As Papworth noted in her speech, that problem is compounded by the kinds of stories that have recently emerged from the AI sector’s heaviest hitters. To name a few: OpenAI CEO Sam Altman was fired (and subsequently reinstated, with the board reshuffled, hours before this piece was written), Microsoft laid off its entire AI ethics team and Meta reportedly did the same. 

“If these companies aren’t going to take responsibility,” Papworth argued, “then we have to.” 

But what does taking responsibility actually look like? For one, Papworth said that customers (including advisers) should press vendors on the “how does it work” question. 

“Don’t let vendors tell you it’s a black box,” she said. “They have the datasets. Ethically speaking, we have to know more about those datasets underpinning AI. OpenAI recently took theirs down. And Microsoft said they’d cover you if you get sued for copyright infringement while using [AI assistant] Copilot. Shouldn’t that be concerning?” 

What’s in the box

There were other issues mentioned, including the current framework for regulating AI – or lack thereof. Regulators were described as slow to adapt to the new technology available, creating ongoing concerns about intellectual property infringement, the quality (and veracity) of data generated by AI and the biases that can emerge in AI models trained on specific datasets. 

In an industry already beset by regulatory uncertainty, this kind of grey area is basically a minefield. Getting advisers on board with AI – and therefore potentially giving AI systems access to incredibly sensitive client information – is going to be a difficult task unless the designers and distributors of these models commit to much greater transparency. 

You’ve probably noticed by now that, nearly 20 paragraphs in, I have yet to provide any kind of concrete answer to the question posed at the outset of this piece, and that is because – if it isn’t obvious already – I can’t. But if AI is going to be your co-pilot, someone will have to. 
