
Microsoft says latest AI shows signs of human reasoning

WHY THIS MATTERS IN BRIEF

Reasoning is seen as a key human skill, and AIs are getting better at it …

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

When computer scientists at Microsoft started to experiment with a new Artificial Intelligence (AI) system last year, they asked it to solve a puzzle that should have required an intuitive understanding of the physical world.

 


 

“Here we have a book, nine eggs, a laptop, a bottle and a nail,” they asked. “Please tell me how to stack them onto each other in a stable manner.”

The researchers were startled by the ingenuity of the AI system’s answer. Put the eggs on the book, it said. Arrange the eggs in three rows with space between them. Make sure you don’t crack them.

“Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up,” it wrote. “The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”

 


 

The clever suggestion made the researchers wonder whether they were witnessing a new kind of intelligence. In March, they published a 155-page research paper arguing that the system was a step toward Artificial General Intelligence, or AGI, which is shorthand for a machine that can do anything the human brain can do. The paper was published on an internet research repository.

Microsoft, the first major tech company to release a paper making such a bold claim, stirred one of the tech world’s testiest debates: Is the industry building something akin to human intelligence? Or are some of the industry’s brightest minds letting their imaginations get the best of them?

“I started off being very sceptical — and that evolved into a sense of frustration, annoyance, maybe even fear,” Peter Lee, who leads research at Microsoft, said. “You think: Where the heck is this coming from?”

 


 

Microsoft’s research paper, provocatively called “Sparks of Artificial General Intelligence,” goes to the heart of what technologists have been working toward — and fearing — for decades. If they build a machine that works like the human brain or even better, it could change the world. But it could also be dangerous.

And it could also be nonsense. Making AGI claims can be a reputation killer for computer scientists. What one researcher believes is a sign of intelligence can easily be explained away by another, and the debate often sounds more appropriate to a philosophy club than a computer lab. Last year, Google fired a researcher who claimed that a similar AI system was sentient, a step beyond what Microsoft has claimed. A sentient system would not just be intelligent. It would be able to sense or feel what is happening in the world around it.

But some believe the industry has in the past year or so inched toward something that can’t be explained away: A new AI system that is coming up with humanlike answers and ideas that weren’t programmed into it.

 


 

Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring the idea. One will be run by Sébastien Bubeck, who was the lead author on the Microsoft AGI paper.

About five years ago, companies like Google, Microsoft and OpenAI began building Large Language Models, or LLMs. Those systems often spend months analysing vast amounts of digital text, including books, Wikipedia articles and chat logs. By pinpointing patterns in that text, they learned to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.
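The training idea the article describes — learning to generate text by pinpointing statistical patterns in existing text — can be illustrated with a toy next-token predictor. The sketch below uses a simple bigram model (counting which word follows which); real LLMs like GPT-4 use transformer networks with billions of parameters, but the core objective of predicting the next token from observed patterns is the same idea.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word in the text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed continuation, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Toy "corpus" — an LLM would instead train on terabytes of text.
model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

Scaled up by many orders of magnitude, and with the counting table replaced by a learned neural network, this is the pattern-completion machinery that underlies the behaviour the researchers were probing.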

The technology the Microsoft researchers were working with, OpenAI’s GPT-4, is considered the most powerful of those systems. Microsoft is a close partner of OpenAI and has invested billions of dollars into the San Francisco company.

 


 

The researchers included Dr. Bubeck, a 38-year-old French expatriate and former Princeton University professor. One of the first things he and his colleagues did was ask GPT-4 to write a mathematical proof showing that there are infinitely many prime numbers — and to do it in a way that rhymed.

The technology’s poetic proof was so impressive — both mathematically and linguistically — that he found it hard to understand what he was chatting with.
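GPT-4's rhyming version appears in the paper itself; the mathematics underneath it is Euclid's classical argument, which runs roughly as follows (a standard textbook sketch, not the model's output):

```latex
\begin{proof}
Suppose only finitely many primes exist: $p_1, p_2, \ldots, p_n$.
Let $N = p_1 p_2 \cdots p_n + 1$. No $p_i$ divides $N$, since dividing
$N$ by any $p_i$ leaves remainder $1$. But every integer greater than
$1$ has some prime divisor, so $N$ has a prime divisor outside the
list --- a contradiction. Hence there are infinitely many primes.
\end{proof}
```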

“At that point, I was like: What is going on?” he said in March during a seminar at MIT.

For several months, he and his colleagues documented complex behaviour exhibited by the system and believed it demonstrated a “deep and flexible understanding” of human concepts and skills.

 


 

When people use GPT-4, they are “amazed at its ability to generate text,” Dr. Lee said. “But it turns out to be way better at analysing and synthesising and evaluating and judging text than generating it.”

When they asked the system to draw a unicorn using a programming language called TikZ, it instantly generated a program that could draw a unicorn. When they removed the stretch of code that drew the unicorn’s horn and asked the system to modify the program so that it once again drew a unicorn, it did exactly that. And if that’s what it can do today, then you have to wonder what it’s going to be capable of doing tomorrow …
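For readers unfamiliar with TikZ: it is a drawing language embedded in LaTeX, where pictures are built from geometric drawing commands — so "drawing a unicorn" means writing code. The fragment below is a trivial illustrative sketch of that syntax (a body, a head, and the horn the researchers deleted), not the model's actual unicorn program:

```latex
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  % body: an ellipse
  \draw (0,0) ellipse (1.5 and 0.8);
  % head: a small circle offset from the body
  \draw (1.8,0.9) circle (0.45);
  % horn: a thin triangle on top of the head
  \draw (1.7,1.3) -- (1.8,2.1) -- (1.95,1.3);
\end{tikzpicture}
\end{document}
```

Removing the three-line "horn" command and asking the model to restore the unicorn is the kind of edit the researchers describe.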
