Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank working between the dates of 2020 and 2070, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past five years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, Europe's largest utility company. His recent work includes mentoring XPrize teams, building the first generation of biocomputers, re-inventing global education, and helping the world's largest manufacturers envision, design and build the next 20 years of devices, smartphones and intelligent machines. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, the USAF and many others.
It’s a long way to Carnegie Hall, but we bet that Google researchers are already thinking of the day when their Artificial Intelligence protégé “Magenta” can cut an album that beats Taylor Swift to the top of the US Billboard charts.
Magenta, a new project from the Google Brain team, is built around two simple questions: Can machines make art? And can machines make music? And, dare we say it, there’s also an unstated third question: Can machines make either art or music that’s any good?
We’ll let you judge the last one. Here’s the first piece of music from Google’s machine learning system. It’s only 90 seconds long, but it’s at least an early demonstration of Magenta’s capabilities.
“To start, Magenta is being developed by a small team of researchers from the Google Brain team. If you’re a researcher or a coder, you can check out our alpha-version code. Once we have a stable set of tools and models, we’ll invite external contributors to check in code to our GitHub. If you’re a musician or an artist (or aspire to be one – it’s easier than you might think!), we hope you’ll try using these tools to make some noise or images or videos… or whatever you like,” reads a blog post from Google.
“Our goal is to build a community where the right people are there to help out. If the Magenta tools don’t work for you, let us know. We encourage you to join our discussion list and shape how Magenta evolves. We’d love to know what you think of our work – as an artist, musician, researcher, coder, or just an aficionado. You can follow our progress and check out some of the music and art Magenta helps create right here on this blog. As we begin accepting code from community contributors, the blog will also be open to posts from these contributors, not just Google Brain team members.”
The Magenta project runs on top of Google’s open source AI engine, TensorFlow, and while it might sound a little odd at first that Google is opening this not-so-simple source code for anyone to use, it’s part of the company’s bet that open sourcing its AI engine will let the technology grow faster, and spread further, than if Google kept it under wraps.
“Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products. Google engineers really do use TensorFlow in user-facing products and services, and our research group intends to share TensorFlow implementations alongside many of our research publications,” Google writes.
As Billboard reports, Google’s Magenta built its first tune from just a four-note prompt. Drum tracks were added afterwards to give the song a little more zest. And this, as the researchers note, is the trickiest part of Magenta: not making a song, but making a song that makes people want to listen to it.
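Magenta’s real system uses a neural network trained on melodies, but the basic idea of seeding generation with a short prompt is easy to illustrate. The sketch below is purely hypothetical (the function names, the C-major pitch set, and the “prefer small melodic steps” rule are our own illustration, not Magenta’s method): it takes a four-note seed and extends it note by note.

```python
import random

def continue_melody(seed, scale, length=16, rng=None):
    """Extend a short seed of MIDI pitch numbers into a longer melody.

    A toy stand-in for a learned model: each new note is drawn from the
    scale, biased toward pitches close to the previous note (small steps).
    """
    rng = rng or random.Random(0)  # fixed seed so the sketch is repeatable
    melody = list(seed)
    while len(melody) < length:
        prev = melody[-1]
        # Candidate notes within a major third of the previous pitch.
        candidates = [p for p in scale if abs(p - prev) <= 4] or scale
        melody.append(rng.choice(candidates))
    return melody

# Hypothetical four-note prompt in C major: C4, D4, E4, G4 as MIDI numbers.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]
tune = continue_melody([60, 62, 64, 67], C_MAJOR, length=16)
```

The prompt survives intact at the start of the output, and everything after it stays inside the chosen scale, which is roughly the contract Magenta’s primer melody gives its model, however different the machinery.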
“The design of models that learn to construct long narrative arcs is important not only for music and art generation, but also areas like language modelling, where it remains a challenge to carry meaning even across a long paragraph, much less whole stories. Attention models like the Show, Attend and Tell model point to one promising direction, but this remains a very challenging task,” reads Google’s blog post.
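The attention mechanism Google alludes to lets a model “look back” at earlier context when producing the next step, weighting each past item by how well it matches the current query. Show, Attend and Tell learns those weights over image features; as a minimal illustration of just the core weighting step (not Google’s implementation, and with made-up toy vectors), dot-product attention can be sketched as:

```python
import math

def attention(query, keys, values):
    """Toy dot-product attention: score each key against the query,
    softmax the scores into weights, and return the weighted average
    of the values plus the weights themselves."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return output, weights

# The query aligns with the first key, so the output leans toward the
# first value rather than the second.
out, w = attention([1.0, 0.0],
                   [[1.0, 0.0], [0.0, 1.0]],
                   [[10.0, 0.0], [0.0, 10.0]])
```

Carrying a narrative arc across a whole song or story is then a question of whether such weights can usefully reach far back into the past, which is exactly the open challenge the researchers describe.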