Blog Post

Future Fit Advice
4 MIN READ

Frontier justice

alex.burke's avatar
alex.burke
Icon for Advisely Team rankAdvisely Team
10 months ago

If recent headlines are any indication, 2024 will be to AI what 1890 was to the Wild West – all open territory declared settled (or seized), fences and railroad tracks built and constituents overseen by a central governing and regulatory apparatus. 

That might seem like a tenuous (even specious) comparison, and not just because our understanding of the American frontier is often rooted more in myth than fact. But virtually everywhere you look, you’ll see it crop up in commentary about AI; one would be forgiven for drawing some kind of murky parallel between Sam Altman and Wild Bill Hickok. 

In a recent speech, ASIC chair Joe Longo attempted to pour cold water on these comparisons, arguing that AI is not currently the Wild West – nor was it in the bygone era of 2023, either.

Nothing new

“Nothing could be further from the truth,” he said. “As [the Government's interim response to the 'Safe and Responsible AI in Australia' consultation] noted, ‘businesses and individuals who develop and use AI are already subject to various Australian laws. These include laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy.’”

In other words, he explained, “the responsibility towards good governance is not changed just because the technology is new.” 

To illustrate this, he pointed to ASIC's 2022 case against RI Advice. RI Advice was found to have breached its license obligations by failing to ensure adequate management of cybersecurity risks; Longo argued that it wouldn't be a stretch to "apply this thinking to the use and operation of AI by financial services licensees."

"In fact," he noted, "ASIC is already pursuing an action in which AI-related issues arise, where we believe the use of a demand model was part of an insurance pricing process that led to the full benefit of advertised loyalty discounts not being appropriately applied."

All that being said, Longo didn’t entirely disagree with the idea that 2024 would be a momentous year for AI regulation. 

“Just because existing regulation can apply to AI,” he added, “that doesn’t mean there’s nothing more to do. Much has already been made of 2024 as ‘the year AI grows up’. Phrases like ‘leaps forward’, ‘rapid progress’ and others abound, suggesting an endless stream of benefits to consumers and businesses in the wake of AI’s growth … The open question here is how regulation can adapt to such rapidity.”

A moving target

Part of the challenge here, as we discussed back in November, is that so much of that “rapid progress” appears to occur inside a black box. And while AI consultant Laurel Papworth suggested in that piece that vendors have more information about their AI models than they might be letting on – “They have the data sets,” after all – this is still an unavoidably complex problem for regulators to solve.

Generative AI models can be poisoned with bad data, output "hallucinations" based on incorrect assumptions and biases and carry the same privacy and security concerns as any other type of software – except, in this case, the risks and vulnerabilities are arguably much harder to parse for the average user. Even inbuilt "constitutions'' governing AI behaviour, Longo noted, have been subverted by researchers "simply by adding random characters on the end of their requests." 

This inscrutability – mystique, even – has no doubt contributed to the surge in popularity of generative AI tools like ChatGPT and Midjourney. But for a corporate regulator, especially one operating within a legislative framework that is famously complex and prescriptive, it makes for a very difficult game of catch up. 

The advice challenge

And what of the businesses overseen by that regulator? Much hay has been made about the opportunities AI presents in financial advice, whether it’s automating client communications, execution based on the contents of the SOA or facilitating simple advice for the mass market. 

At the FAAA Congress in November, AI was described as a “co-pilot” for a financial adviser, working alongside their flesh-and-blood counterparts to streamline specific tasks, handle simple queries and provide suggestions where appropriate.

What happens if that co-pilot veers off course? Who’s culpable if a malicious actor finds some hitherto unknown (or newly-generated) vulnerability in the model and makes off with valuable client data? Longo’s RI Advice example suggests there are clear-cut answers to these questions, but given his own concerns about AI it’s hard to say how financial advice businesses could make use of this technology without taking on serious risk. 

Perhaps as the year goes on and AI continues to proliferate throughout financial services, we’ll end up with a much clearer picture of both the technology and the regulatory guardrails around it. 

In the interim, though, maybe the Wild West comparisons aren’t entirely unfounded.

Updated 9 months ago
Version 6.0
No CommentsBe the first to comment