Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers, several governments including the G7, and organisations such as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
In short, if you can talk or text, you can now use AI to generate everything from content to products, as creativity gets democratised.
As Generative Artificial Intelligence (AI) continues to produce everything from books and blogs to code, drugs, imagery, and videos, Google researchers have now unveiled an AI that can generate minutes-long musical pieces from text prompts, and can even transform a whistled or hummed melody into other instruments, much as systems like DALL-E generate images from written prompts. The model is called MusicLM, and it builds on Google's earlier research, which I talked about many years ago, called Magenta. While you can't play around with it for yourself, the company has uploaded a bunch of samples that it produced using the model.
The examples are impressive. There are 30-second snippets of what sound like actual songs created from paragraph-long descriptions that prescribe a genre, vibe, and even specific instruments, as well as five-minute-long pieces generated from one or two words like “melodic techno.” Perhaps my favorite is a demo of “story mode,” where the model is basically given a script to morph between prompts. For example, this prompt:
Electronic song played in a videogame (0:00-0:15)
Meditation song played next to a river (0:15-0:30)
Resulted in the audio you can listen to here.
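Story mode is effectively a prompt schedule: each text description conditions the generation over a time window, and the model morphs between them at the boundaries. A minimal sketch of that idea in Python, using the two timed prompts above (the data structure and helper are illustrative only, not MusicLM's actual interface):

```python
# Represent a "story mode" script as (start_sec, end_sec, prompt) segments,
# mirroring the videogame -> meditation example above.
schedule = [
    (0, 15, "electronic song played in a videogame"),
    (15, 30, "meditation song played next to a river"),
]

def prompt_at(schedule, t):
    """Return the text prompt that conditions generation at time t (seconds)."""
    for start, end, prompt in schedule:
        if start <= t < end:
            return prompt
    return None  # outside the scripted window

# A generator driven by this schedule would switch conditioning text as
# playback time crosses each segment boundary, producing the morphing effect.
```

The morphing itself happens inside the model; the schedule only determines which description is active at each moment.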
It may not be for everyone, but I could totally see this being composed by a human; I also listened to it on loop dozens of times while writing this article. Also featured on the demo site are examples of what the model produces when asked to generate 10-second clips of instruments like the cello or maracas, eight-second clips of a certain genre, music that would fit a prison escape, and even what a beginner piano player would sound like versus an advanced one. It also includes interpretations of phrases like "futuristic club" and "accordion death metal."
MusicLM can even simulate human vocals, and while it seems to get the tone and overall sound of voices right, there’s a quality to them that’s definitely off. The best way I can describe it is that they sound grainy or staticky. That quality isn’t as clear in the example above, but I think this one illustrates it pretty well.
That, by the way, is the result of asking it to make music that would play at a gym. You may also have noticed that the lyrics are nonsense, but in a way that you may not necessarily catch if you’re not paying attention — kind of like if you were listening to someone singing in Simlish or that one song that’s meant to sound like English but isn’t.
Google released a research paper explaining MusicLM in detail. AI-generated music has a long history dating back decades: systems have been credited with composing pop songs, copying Bach in the '90s better than a human could, and accompanying live performances. One recent approach uses the AI image-generation engine Stable Diffusion to turn text prompts into spectrograms that are then converted into music. The paper says that MusicLM can outperform other systems in terms of its "quality and adherence to the caption," as well as the fact that it can take in audio and copy the melody.
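The spectrogram route works because a spectrogram, phase included, is just another representation of the audio, so it can be inverted back into a waveform. A toy, standard-library-only sketch of that roundtrip using a naive DFT (real systems use windowed STFTs, and image-style spectrograms discard phase, which then has to be re-estimated, for example with Griffin-Lim):

```python
import cmath
import math

def dft(frame):
    """Naive discrete Fourier transform of one frame (O(N^2), fine for a demo)."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spectrum):
    """Inverse DFT; returns the real part of the reconstructed samples."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def spectrogram(signal, frame_len):
    """Complex spectrogram: DFT of consecutive non-overlapping frames."""
    return [dft(signal[i:i + frame_len]) for i in range(0, len(signal), frame_len)]

def invert(spec):
    """Turn a complex spectrogram back into a waveform, frame by frame."""
    return [sample for frame in spec for sample in idft(frame)]

# A short test tone: 32 samples of a sine wave, split into 8-sample frames.
tone = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
rebuilt = invert(spectrogram(tone, 8))
```

Because this toy keeps the complex phase, the reconstruction is essentially exact; a magnitude-only spectrogram image loses that phase, which is why spectrogram-based music generators need a phase-reconstruction or vocoder stage.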
That last part is perhaps one of the coolest demos the researchers put out. The site lets you play the input audio, where someone hums or whistles a tune, then lets you hear how the model reproduces it as an electronic synth lead, string quartet, guitar solo, etc. From the examples I listened to, it manages the task very well.
As with other forays into this type of AI, Google is being significantly more cautious with MusicLM than some of its peers may be with similar tech.
“We have no plans to release models at this point,” concludes the paper, citing risks of “potential misappropriation of creative content” and potential cultural appropriation or misrepresentation.
It’s always possible the tech could show up in one of Google’s fun musical experiments at some point, but for now, the only people who will be able to make use of the research are other people building musical AI systems. Google says it’s publicly releasing a dataset with around 5,500 music-text pairs, which could help when training and evaluating other musical AIs.