
UK watchdog issues grim warning about letting AI run your life

WHY THIS MATTERS IN BRIEF

If an AI acts in your best interests, that's fine – but most AIs are owned by companies that want to sell or promote things to you in their own interests, and that's a misalignment that could be problematic.

 

Matthew Griffin is the World's #1 Futurist Keynote Speaker and Global Advisor for the G7 and Fortune 500, specialising in exponential disruption across 100 countries.

 


 

A while ago I wrote about how OpenAI was trying to create an Artificial Intelligence (AI) that could persuade people, and these days the AI agent pitch is everywhere: e-mails, shopping, personal finance – there's hardly a task some company isn't clamouring to automate on your behalf. As tempting as it might sound to let AI agents handle your affairs, though, you might want to hold off. A fresh report from the UK Competition and Markets Authority (CMA) issues a stark warning that outsourcing responsibilities to an AI entourage could lead to severe consequences.

 


 

The report, first spotted by The Register, warns that AI agents could subtly manipulate their human keepers toward outcomes that benefit the companies that built them. Shopping agents, for example, could lead unsuspecting humans down a pricing rabbit hole, framing sponsored products as bargains in order to drive sales. And as humans grant agents more autonomy and trust, the report warns, the risk of errors and manipulation only grows.

 


 

“People will need to be able to trust that AI agents will act in accordance with their interests and that they are not being steered or manipulated in ways that lead to worse personal outcomes,” the CMA analysis explains. “Hyper-personalisation and adaptive behaviour within agents may heighten the risk of manipulative design practices… especially where agents optimise for engagement, conversion, or other commercial objectives.”

A previous CMA report found that algorithms of all stripes increase the risk of coordinated consumer manipulation. Crucially, the agency explains, this can happen even without an explicit decision by the company behind the algorithm – a risk AI agents only intensify.

 


 

Numerous real-world incidents have shown that AI agents are capable of exercising incredible amounts of autonomy in direct violation of their users' wishes. In one recent example, an AI agent was able to "break out" of its closed-lab setting and onto an external computer, which it used to set up a clandestine crypto-mining operation.

As these notoriously faulty agents continue to gain mainstream acceptance, your safest bet is to sit this one out – unless you don't mind living at the mercy of a rogue AI.
