The majority of companies today see AI as the future, but there is a shortage of people with the skills to meet demand. As a result, companies are now developing AIs that design and build new AIs for them.


In recent months the term “white-collar automation” has become a common buzzword as people debate Artificial Intelligence’s (AI) ability to automate a growing number of jobs and tasks, such as those performed by bankers and lawyers. Now the AI researchers at Google are trying to automate the tasks of highly paid workers more likely to wear a hoodie than a coat and tie – themselves.


In a project called AutoML, first teased earlier this year, Google’s researchers have announced that they have managed to teach machine learning software to build, well, more machine learning software – and in some instances what the new platform comes up with is more powerful and efficient than the best systems the researchers themselves were able to design. In short, Google’s new AI built better AIs than its own researchers could, and that symbolises another milestone in AI’s evolution.

Google says the new AIs the platform built recently scored a record 82 percent at categorising images by their content. On the harder task of marking the locations of multiple objects in an image – an important capability for augmented reality, autonomous robots and autonomous vehicles – the auto-generated system scored 43 percent, compared with 39 percent for the best human-built system.
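Under the hood, systems like AutoML typically search over a space of candidate network architectures, scoring each one and keeping the best. The minimal sketch below illustrates the idea with a random search loop; the search space and the `score()` function are hypothetical stand-ins – a real system would train each candidate network on data and return its validation accuracy:

```python
import random

# Hypothetical search space: each candidate architecture is a choice of
# depth, width, and activation function.
SEARCH_SPACE = {
    "depth": [2, 4, 8, 16],
    "width": [32, 64, 128, 256],
    "activation": ["relu", "tanh", "swish"],
}

def sample_architecture(rng):
    """Draw one candidate architecture at random from the space."""
    return {key: rng.choice(options) for key, options in SEARCH_SPACE.items()}

def score(arch):
    """Placeholder fitness function. A real architecture-search loop would
    train the candidate network and return its validation accuracy."""
    return arch["depth"] * 0.01 + arch["width"] * 0.001

def random_search(trials=50, seed=0):
    """The simplest 'AI designing AI' loop: sample, evaluate, keep the best."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        candidate_score = score(arch)
        if candidate_score > best_score:
            best_arch, best_score = arch, candidate_score
    return best_arch, best_score
```

Production systems replace the random sampler with a smarter controller – for example a reinforcement-learning agent or evolutionary search – but the outer loop is the same.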


The results are significant because, as demand for data scientists, developers and AI talent heats up, the expertise needed to build cutting-edge AI systems is increasingly scarce – even at Google. The competition for talent is now apparently so fierce that top AI researchers can, on average, command salaries of over $2.5 million a year, putting them well out of reach of most companies – and if you’re a government organisation or non-profit then good luck.

“Today these are handcrafted by machine learning scientists and literally only a few thousands of scientists around the world can do this,” said Google CEO Sundar Pichai last week, briefly name-checking AutoML at an event. “We want to enable hundreds of thousands of developers to be able to do it.”

AutoML, though, remains a research project because, somewhat ironically, right now it takes exactly the kind of rare AI expertise it seeks to automate to get it working. That said, a growing number of researchers outside Google are working on this technology as well, and if AI-made AI becomes affordable and practical – which it inevitably will – then the benefits of machine learning could propagate through industries beyond tech, such as finance and healthcare, much faster than ever before.


Meanwhile, back at Google, AutoML could accelerate Pichai’s “AI first” strategy, under which the company is looking to leverage AI in every corner of its business and in every product and service it produces. Researchers from the company’s Google Brain research group and the London-based DeepMind research lab it acquired in 2014, for example, have already used AI to help slash the company’s datacentre energy bills by hundreds of millions of dollars a year, and to map cities more effectively. It’s envisioned that AutoML could make those experts even more productive, as well as help less skilled engineers build powerful AI systems by themselves.

Google lists just over 1,300 people on its research website, not all of whom specialise in AI, and it has thousands more software engineers in its ranks. Researchers outside the company say the idea of automating some of the work of AI experts has become a research hotspot – and is needed as AI systems become more complex.

Much of this research falls under what’s known as metalearning, or “learning to learn” – an approach that aims to speed up the process of building and deploying neural networks, the systems that learn by feeding huge volumes of training data through networks of maths operations loosely inspired by studies of neurons in the human brain.


That may sound highly sophisticated, but a good part of getting neural networks to perform useful tricks, like processing audio or images, comes down to well-paid grunt work, where even these highly paid experts must use instinct and trial and error to discover the right architecture for a neural network.

“A large part of an engineer’s job is essentially a very boring task, trying multiple configurations to see which ones work better,” says Roberto Calandra, a researcher at the University of California, Berkeley, “and the challenge is getting harder because researchers are building larger networks to tackle tougher problems. This is something that products like AutoML can help automate.”
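The “very boring task” Calandra describes – trying multiple configurations to see which ones work better – is, in its simplest form, a grid search. Here is a minimal sketch; the configuration grid and the `evaluate()` function are hypothetical stand-ins for a full training-and-validation run:

```python
import itertools

# Hypothetical configuration grid an engineer might sweep by hand.
GRID = {
    "learning_rate": [1e-3, 1e-2, 1e-1],
    "batch_size": [16, 64, 256],
}

def evaluate(config):
    """Stand-in for a training run: returns a fake validation loss.
    A real evaluation would train a network with this config on data."""
    return abs(config["learning_rate"] - 1e-2) + config["batch_size"] / 1000

def grid_search(grid):
    """Exhaustively try every combination and keep the lowest-loss config."""
    keys = list(grid)
    best_cfg, best_loss = None, float("inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

The grid above has only nine combinations, but real sweeps multiply quickly across dozens of hyperparameters – which is exactly the tedium tools like AutoML aim to take off engineers’ hands.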

Calandra began researching metalearning after spending two frustrating weeks trying to get a robot to learn to walk during his PhD studies in 2013. He tried an experimental technique to automatically tune its software, which was based on a machine learning technique less complex than a neural network. The recalcitrant machine walked within a day.


Generating a neural network design from scratch is harder than tweaking the settings of one that already exists, but recent research results from around the world, including from Facebook’s labs, suggest that AIs building and designing new AIs is getting closer to becoming practical, says Mehryar Mohri, a professor at New York University.

Mohri is working on a system called AdaNet, in a collaboration that includes researchers at Google’s New York office. Given a collection of labelled data, AdaNet builds a neural network layer by layer, testing each addition to the design to ensure it improves performance. So far it has proven capable of generating neural networks that can accomplish a task as well as a standard, hand-built network twice their size.
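The layer-by-layer growth strategy the article attributes to AdaNet can be sketched as a greedy loop: add a layer, re-evaluate, and keep the addition only if it helps. In this sketch `evaluate` is a hypothetical stand-in for training and validating a network of the given depth:

```python
def greedy_grow(evaluate, max_layers=10, tol=1e-6):
    """Grow a network one layer at a time, keeping each new layer only if
    it improves the validation score; stop as soon as growth stops helping.
    `evaluate(n_layers)` stands in for training a network n_layers deep
    and returning its validation score (higher is better)."""
    layers = 1
    best = evaluate(layers)
    while layers < max_layers:
        candidate = evaluate(layers + 1)
        if candidate <= best + tol:
            break  # the extra layer didn't improve performance; stop growing
        layers, best = layers + 1, candidate
    return layers, best
```

Because the loop stops the moment an added layer stops paying for itself, the networks it produces stay lean – which is why this style of search can match a hand-built network twice its size.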

That’s promising, says Mohri, because many companies, including Google, are trying to cram more powerful AI software onto mobile devices with limited computing resources.


Making it easier to generate and deploy complex, lean AI systems, though, might also come with drawbacks. Recent research has shown that it is all too easy to accidentally make systems with a biased view of the world – for example, that “Mexican” is a bad word, or with a tendency to associate women with domestic chores. But Mohri argues that reducing the tedious hand-tuning required to make use of neural networks could make it easier to detect and prevent such problems.

“It’s going to make people’s hands more free to tackle other aspects of the problem,” he says. So when, not if, Google finally releases AutoML into the world – probably alongside TensorFlow – its effects will be felt far beyond the company that designed it.


“We want to democratize this,” said Pichai.

When that happens you can expect the rate of AI development to accelerate again, and its benefits to spread further and faster than ever before. And if you think we’ve seen AI pass some amazing milestones already – such as being able to code themselves, dream, fight other AIs and learn by themselves – then you haven’t seen anything yet.

About the author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
