Earlier this month, Amazon announced it was scaling back its experimental "Just Walk Out" technology, removing it from the majority of Amazon Fresh grocery stores in the US.
The original promise of Just Walk Out was that you could waltz into any Amazon Fresh store, pick up what you wanted from the shelf and then leave – no checkout required. In the background, hundreds of surveillance cameras would track your every move, determine what you'd taken with you and – using what Amazon marketed as "the most advanced machine learning, computer vision, and AI" – charge you via your phone's Amazon app on the way out.
The experience was meant to be seamless, frictionless and underpinned by the most sophisticated AI technology available.
It didn't work.
The Mechanical Turk
According to technology publication The Information, more than 1,000 workers in India were tasked with manually reviewing Just Walk Out transactions and labelling footage to train the AI model behind the technology. Approximately 70% of Just Walk Out sales required human review – substantially higher than Amazon's internal projection of 2-5%.
Perhaps unsurprisingly, this report prompted numerous comparisons to the Mechanical Turk: both the fraudulent 18th-century chess-playing automaton built to impress Empress Maria Theresa and Amazon's crowdsourcing service of the same name.
Amazon subsequently pushed back on these "erroneous reports" in a blog post, explaining that most AI systems (including the one behind Just Walk Out) are "continuously improved by annotating synthetic (AI generated) and real shopping data." The post added that employees/contractors "don’t watch live video of shoppers to generate receipts — that’s taken care of automatically by the computer vision algorithms."
So which is it? Was Just Walk Out fundamentally a Mechanical Turk, with its most important functions carried out by hundreds of flesh-and-blood humans, or was it a genuine AI-powered shopping experience?
The advice question
For brevity's sake, let's take the coward's way out, say the truth is somewhere in the middle and focus on the far more important question: what does any of this have to do with financial advice?
It probably hasn't escaped your notice that AI's taking on an increasingly prominent role in advice – at least in a discursive sense. Over the past couple of months on Advisely, we've discussed AI's risks and opportunities, ASIC chair Joe Longo's comments on AI regulation, the merits of adapting to AI (rather than ignoring it) and the question of what a human adviser can do that an AI can't.
Back in November, I wrote about a keynote presentation at the FAAA Congress that posited AI as a "co-pilot" for today's financial adviser; this was followed up with a panel discussion in which AI consultant Laurel Papworth talked about the challenges of treating AI as an inscrutable "black box".
If there is a common thread here, it's probably uncertainty. AI's rapid development over the past decade suggests limitless upside – think of the time saved, the burdens eased! – while portending tremendous upheavals that range from privacy and intellectual property violations to the total annihilation of humanity.
For advisers, though, the questions are usually more practical than existential: how could AI help me in my business? What tasks could it automate? How much would it cost? And how could I use it while ensuring the safety of my clients' data?
Two implications
I think the Just Walk Out story is instructive here for two reasons. First, regardless of the actual division of labour between man and machine, it would appear that AI systems aren't quite an "out-of-the-box" proposition at this stage.
If you're planning on rolling out AI tools to automate parts of the advice process, you'll need to factor in the time and resources – by which I mean actual human beings doing work with their actual human bodies – required for training these models so they work the way you want.
Second, if we accept that, in some cases, at least part of an AI system's work is going to be carried out by actual people, it would be prudent for any advice business to determine who those people are and what kind of information they're handling.
Returning to Papworth's "black box" comments, she argued it wasn't acceptable for AI developers to present their systems as impenetrable to their customers. "They have the data-sets," she explained, adding that "if [they] aren’t going to take responsibility, we have to."
This argument can (and should) be extended to the people working behind the scenes, regardless of whether they're manually carrying out an AI's functions or reviewing its performance over time. Given the amount of sensitive data advice businesses handle on a daily basis, it's critical they understand the AI systems they're working with – as well as the people who keep those systems running.
If the AI vendor can't (or won't) provide that information, just walk out.