WHY THIS MATTERS IN BRIEF
It turns out that the old Turing Test to determine if an AI was intelligent or not isn’t quite fit for purpose, so now a new one is being proposed.
With Artificial Intelligence (AI) hype leaving venture capital firms foaming at the mouth, ready to buy into any new company that sticks an “A” and an “I” in its name, it turns out we need a new way of defining what constitutes an AI. The Turing Test has recently failed to define real intelligence in today’s world of Large Language Models (LLMs) like those from OpenAI.
Now, stepping up to the plate to try to define a new Turing test, one leading figure in the AI field has a better idea that could only come from the wealth-obsessed realm of big tech: the best way to tell if AI is smart is if it can get rich quick.
Mustafa Suleyman, a major AI developer who previously co-founded DeepMind – now owned by Google – has an upcoming book where he reportedly discusses judging an AI’s smarts based on its ability to stack cash. According to a Bloomberg report, in The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, Suleyman argues that AI research needs to focus on short-term developments, rather than pipe dreams like a supposed Artificial General Intelligence, or AGI. What’s a good example? His “modern Turing Test” would give an AI $100,000, and researchers would then wait to see whether it can turn that initial investment into $1 million.
The AI developer calls this measure of artificial smarts “Artificial Capable Intelligence,” and, like any good capitalist, he argues AI should be judged on its financial accomplishments rather than its capacity for human-level interaction. So how would the AI make bank? Essentially, it would have to complete your average business degree term project: research e-commerce business opportunities, come up with a product idea, then figure out the process for both manufacturing and selling it.
The proposal sounds more like an AI-based get-rich-quick scheme you would find on the seedier ends of YouTube and TikTok financial influencing. AI is certainly more capable of designing products than of true imagination, but modern systems are still far from giving Suleyman a flush bank account – although one AI bot recently built a software company, so they might not be as far off as you think.
There are several AI agents floating around that had their time in the AI spotlight earlier this year. They essentially act as multiple versions of a chatbot working in concert to complete specific tasks. Users have even made agents that could perform tasks like ordering a pizza over the phone.
A program like AutoGPT could, in theory, craft a business plan for a new product, though today’s LLMs certainly don’t have the power to dream up a product that people actually want and follow through on it with any accuracy. The AI is iterative by nature, and it does not actually comprehend its generated content in any real-life context. Its output is grounded in its training data, so any kind of “novel” innovation it creates is going to be born of existing examples that best fulfill the original prompt.
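To make the agent idea above concrete, here is a minimal sketch of the plan-act loop that programs like AutoGPT run: the model is asked for its next action, the result is appended to a history, and the loop repeats until the model declares the goal done. Everything here is hypothetical for illustration – `llm()` is a stub that replays canned responses rather than calling a real language model.

```python
from collections import deque

# Canned "model responses" so the sketch runs without a real LLM (hypothetical).
CANNED = deque([
    "PLAN: research e-commerce niches",
    "PLAN: draft a product idea and pricing",
    "DONE: submit the business plan",
])

def llm(prompt: str) -> str:
    """Stand-in for a chat-model call; a real agent would query an API here."""
    return CANNED.popleft()

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Repeatedly ask the model for its next action until it reports DONE."""
    history: list[str] = []
    for _ in range(max_steps):
        step = llm(f"Goal: {goal}\nHistory so far: {history}\nNext action?")
        history.append(step)
        if step.startswith("DONE"):
            break
    return history

steps = run_agent("start an e-commerce business")
```

The `max_steps` cap matters in practice: because the model never truly comprehends the goal, real agents of this kind can loop indefinitely without a hard limit.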
AI evangelists often squawk that all human creations are inherently iterative, but unlike AI, mankind’s creations come from the comprehension of basic human needs, or at least the simple sense of empathy. Sure, an AI can create a product, and perhaps that product would even sell. But would it be a good product? Would it actually help people? If the end goal is “make me a million dollars,” does that even matter?
Suleyman used to lead AI development at DeepMind, though he is now the CEO and co-founder of the startup Inflection AI. The company’s leading chatbot, Pi, is a very capable AI that tries to come off as empathetic. It’s also far more restrictive than other chatbots like ChatGPT: it cannot research information on the internet, it can’t generate code, and it constantly emphasizes that it’s no replacement for a real-life therapist or even a human companion.