New Quantum ML algorithm could revolutionise Quantum AI before it even begins

WHY THIS MATTERS IN BRIEF

Artificial Intelligence, whether it’s Artificial General Intelligence or even Artificial Super Intelligence, could be completely eclipsed by the power of new Quantum AI models.

One of the ways that intelligent computers and Artificial Intelligence (AI) platforms “think” is by analysing the relationships between and within large sets of data. Now, using a new type of Quantum Machine Learning (QML) algorithm, an international team have demonstrated that quantum computers can analyse a far wider array of data types than was previously expected.

The details of the team’s new “Quantum Linear System Algorithm,” or QLSA, were published on arXiv, and in the future it could help crunch numbers on problems as varied as commodities pricing, social networks and chemical structures, and usher in a new era of Quantum AI.

“Previous quantum algorithms only worked on very specific types of problem. We needed an upgrade if we want to achieve a quantum speed up for other data,” said Zhikuan Zhao, who co-authored the paper, and that’s exactly what he and his colleagues, Anupam Prakash at the Centre for Quantum Technologies in Singapore and Leonard Wossnig from ETH Zurich and the University of Oxford, have done.

QLSAs were first proposed in 2009 by a different group of researchers, and since then the idea has helped kick-start research into exotic new forms of AI such as Quantum Artificial Intelligence (QAI), which I’m gradually seeing more and more research papers reference.

A linear system algorithm works on a large matrix of data. For example, a quantitative hedge fund trader, or quant for short, might be trying to predict the future price of goods. The matrix may capture historical data about price movements over time, as well as data about features that could be influencing these prices, such as currency exchange rates. The algorithm calculates how strongly each feature is correlated with the others by “inverting” the matrix, and this information can then be used to extrapolate trends into the future, which in turn becomes the basis for deciding whether or not to invest in those goods.
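To picture what that looks like in practice, here’s a minimal classical sketch in Python; the data, feature names and weights are entirely made up for illustration, and it solves the system directly rather than using anything quantum.

```python
# A classical, toy version of the quant's problem: recover how strongly
# each feature influences the price by solving a linear system, then use
# the recovered weights to predict a future price. All numbers are
# invented purely for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Rows are historical observations; columns are features that might be
# influencing the price, e.g. exchange rates or past price movements.
n_samples, n_features = 100, 3
A = rng.normal(size=(n_samples, n_features))

# Hypothetical "true" influence of each feature on the price.
true_weights = np.array([1.5, -0.7, 0.3])
prices = A @ true_weights + rng.normal(scale=0.05, size=n_samples)

# The "inversion" step: solve the normal equations (A^T A) x = A^T b
# for the weights instead of forming an explicit inverse.
weights = np.linalg.solve(A.T @ A, A.T @ prices)
print(weights)  # close to [1.5, -0.7, 0.3]

# Extrapolate: predict the price implied by a new set of feature values.
new_features = np.array([0.2, -1.0, 0.5])
print(new_features @ weights)
```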

“There’s a lot of computation involved in analysing the matrix. When it gets beyond say 10,000 by 10,000 entries, it becomes hard for classical computers,” explains Zhao. This is because the number of computational steps grows roughly with the cube of the matrix’s dimension – every doubling of the matrix size increases the length of the calculation eightfold.
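To put that growth in concrete terms, here’s my own quick bit of arithmetic, assuming the standard cubic cost of matrix inversion rather than any figures from the paper.

```python
# The cubic scaling in rough numbers: inverting an n-by-n matrix takes
# on the order of n**3 steps, so doubling n multiplies the work by 8.
for n in (10_000, 20_000, 40_000):
    print(f"n = {n:>6,}: ~{n**3:.1e} steps")
# n = 10,000: ~1.0e+12 steps
# n = 20,000: ~8.0e+12 steps
# n = 40,000: ~6.4e+13 steps
```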

The 2009 proposal suggested that these QLSAs could cope better with bigger matrices, but only if the data in them is what’s known as “sparse,” meaning there are only limited relationships among the elements. That’s often not true of real world data, and it’s exactly the limitation the team’s new algorithm has cracked.
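For a feel of what “sparse” means here, the short sketch below builds a matrix where only 1% of the entries are non-zero; that density figure is an arbitrary choice for demonstration.

```python
# "Sparse" means most entries are zero, i.e. only a few relationships
# among the elements. The 1% density below is an arbitrary choice.
import scipy.sparse as sp

n = 10_000
sparse_matrix = sp.random(n, n, density=0.01, format="csr", random_state=0)
print(f"non-zero entries: {sparse_matrix.nnz:,} of {n * n:,}")
# non-zero entries: 1,000,000 of 100,000,000
#
# A real-world matrix, like a dense table of correlated market features,
# can have meaningful values almost everywhere, which is exactly the
# case the 2009 algorithm handled poorly.
```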

Zhao, Prakash and Wossnig say their new algorithm is faster than both the classical and the previous quantum versions, and, unlike previous versions, it isn’t restricted in the kind of data it works on.

As a rough guide, for a 10,000 by 10,000 matrix the classical algorithm would take on the order of a trillion computational steps, the first quantum algorithm tens of thousands of steps, and the new quantum algorithm just hundreds of steps. The speed up comes from the new algorithm’s reliance on a technique known as “Quantum Singular Value Estimation.”
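As a sanity check, those figures can be reproduced with assumed scalings; to be clear, the exponents below are my own illustrative guesses picked to match the quoted numbers, not complexities taken from the paper.

```python
# Reproducing the article's rough guide for a 10,000-by-10,000 matrix,
# under assumed scalings chosen purely to match the quoted figures
# (the real complexities depend on factors not given here, such as
# condition number and precision).
import math

n = 10_000
estimates = {
    "classical (~n^3)":          n ** 3,             # "a trillion steps"
    "2009 quantum (~n, dense)":  n,                  # "10,000s of steps"
    "new quantum (~(log2 n)^2)": math.log2(n) ** 2,  # "100s of steps"
}
for name, steps in estimates.items():
    print(f"{name:<28} ~{steps:,.0f} steps")
```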

The team have so far conducted a few proof-of-principle demonstrations of their new algorithm using small scale quantum computers, and now Zhao and his colleagues hope to work with an experimental group to run a demonstration on a scaled-up version. They also want to do a full analysis of the effort required to implement the algorithm, to see what the overhead costs might look like.

To show a real quantum advantage over classical algorithms the team will need access to bigger quantum computers, and Zhao estimates that “we might be looking at three to five years in the future when we can actually use the hardware built by the experimentalists to do meaningful quantum computation with applications in AI.”

When that happens, not only will it give quants, and other industries, a huge boost, but these new algorithms will likely also revolutionise AI, helping to create massively performant and powerful AIs that could make even the very best of today’s AIs look like the evolutionary equivalent of the bacteria that first swam through the Earth’s primordial ooze billions of years ago.

AI is about to get an upgrade, and it’s a hell of an upgrade, and I haven’t even discussed what happens when we begin to see the emergence of the first Neuromorphic computers, self-learning computers modelled on the human brain that will not only revolutionise AI, again, but also pack the power of today’s supercomputers into a package no larger than your fingernail.
