Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank working between the dates of 2020 to 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
There is a great tussle going on between people who believe AI can never become “creative” and those who believe it can, but depending on how you define “imagination,” Microsoft’s latest AI could prove the naysayers wrong.
Ever since computer scientist Alan Turing first proposed his famous test of machine intelligence in 1950, the question of what it means for a computer to “think” has revolved around one basic question – can it imitate a human’s own thinking so closely that nobody can tell the difference?
Now one Artificial Intelligence (AI) from Microsoft is trying to go one step further than thinking: it’s trying to imagine. And if it’s combined with Microsoft’s other pet AI project, DeepCoder, an AI that can program itself, then the future could become very interesting indeed. If you look closely at the picture of the bird in the photo below you might notice little signs of blurriness in the feathers and the surrounding branches, tell-tale signs that the bird isn’t real, and you’d be right. This bird, even though for some it would be close enough to the real thing to fool them, was “imagined” by Microsoft’s newest AI, which was given only the simplest text-based directive from the researchers: “Create a bird that is red and white with a very short beak.”
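To make the idea concrete, here is a toy sketch of how text-conditioned image generation works in broad strokes. This is not Microsoft's actual architecture, and the vocabulary, dimensions, and weights below are all made up for illustration: the directive is embedded as a vector, combined with random noise, and mapped to pixels by a generator network (here a single untrained random matrix standing in for a learned model).

```python
import numpy as np

# Toy sketch of text-conditioned image generation, loosely in the spirit
# of GAN-based text-to-image models. NOT Microsoft's actual system --
# the vocabulary, sizes, and weights are illustrative placeholders.

rng = np.random.default_rng(0)

VOCAB = {"create": 0, "a": 1, "bird": 2, "that": 3, "is": 4,
         "red": 5, "and": 6, "white": 7, "with": 8, "very": 9,
         "short": 10, "beak": 11}
EMBED_DIM, NOISE_DIM, IMG_SIZE = 16, 8, 4  # tiny, for illustration only

embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))

def embed_text(directive: str) -> np.ndarray:
    """Average the word embeddings of the directive (bag-of-words)."""
    tokens = [VOCAB[w] for w in directive.lower().split() if w in VOCAB]
    return embeddings[tokens].mean(axis=0)

# Untrained generator weights; a real model learns these adversarially.
W = rng.normal(size=(EMBED_DIM + NOISE_DIM, IMG_SIZE * IMG_SIZE * 3))

def generate(directive: str) -> np.ndarray:
    """Map (text embedding, random noise) -> a tiny RGB image in (0, 1)."""
    z = rng.normal(size=NOISE_DIM)        # the random "imagination" seed
    cond = np.concatenate([embed_text(directive), z])
    pixels = 1 / (1 + np.exp(-(cond @ W)))  # sigmoid squashes to (0, 1)
    return pixels.reshape(IMG_SIZE, IMG_SIZE, 3)

img = generate("create a bird that is red and white with a very short beak")
print(img.shape)  # (4, 4, 3)
```

The noise vector is the key design point: the same directive can yield many different plausible images, which is what lets such systems fill in details, like a branch, that the prompt never mentioned.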
This in itself is a difficult task for an AI. After all, like a human, it’s being asked to draw on its knowledge of birds to create a new type of bird. But it’s worth noting all the things it wasn’t told to do and did anyway.
For example, there was no specific directive on where the bird should be placed in the scene, and a particularly blockheaded AI might just have put a stationary bird against a blank background. This AI, however, “decided” to place the bird on a branch despite not being told to, demonstrating what some researchers are now calling “artificial imagination,” the computer equivalent of what we think of as a uniquely human trait.
“You need your machine learning algorithms running your artificial intelligence to imagine some missing parts of the images,” said Microsoft researcher Pengchuan Zhang, “from the data, the machine learning algorithm learns this common sense where the bird should naturally belong.”
The ability to move beyond basic instructions at all is a clear sign of what some are calling “artificial imagination,” and it’s clear that the new AI, even though it wasn’t specifically told to do so, still likes to generate images that it “thinks” are realistic. The decision to place the bird on a branch, for instance, is actually a by-product of the fact that many of the images in the data set show birds in that position rather than flying or, say, driving a car. Well, I assume there aren’t any photos of birds driving cars, but who knows, maybe one day…
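The point above can be sketched in a few lines. If most training images show birds perched on branches, a model that simply samples from the learned (here, empirical) distribution will usually “decide” to put its bird on a branch, with no explicit instruction needed. The dataset counts below are invented for illustration and don't come from Microsoft's actual training data.

```python
import collections
import random

# Toy illustration: a generative model's "decisions" mirror the
# statistics of its training data. Counts below are made up.
random.seed(42)

training_placements = (["on a branch"] * 70
                       + ["in flight"] * 25
                       + ["on the ground"] * 5)

# Sample placements the way a naive generative model might.
samples = [random.choice(training_placements) for _ in range(1000)]
counts = collections.Counter(samples)
most_common = counts.most_common(1)[0][0]
print(most_common)  # "on a branch" dominates, mirroring the data
```

Real models learn far richer distributions than this frequency table, but the principle is the same: the branch appears because branches dominate the data, not because the model was told to add one.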
As a consequence, and as you’d expect, the new AI is considerably more literal-minded than humans can be, and it lacks our flair for the absurd. For example, as a test to see how far the team could push its burgeoning imagination, they asked it to create an image of a double-decker bus floating on a lake. The best it could do was a fuzzy image of a decidedly boat-looking bus on top of a lake, because at the moment it doesn’t have the capability to “imagine” that scene.
The AI also doesn’t always get all the visual details right with the researchers noting mis-coloured beaks and mutant-like bananas as examples of problems with its creations, but as far as the team are concerned their new prodigy’s ability to imagine details beyond the original directives speaks volumes, and in some circles it’s now prompting people to ask the question could AI’s one day think and imagine for themselves in the same way humans do?
As for my own view, and as the rise of new creative and innovative machines is already showing us, whether we call it artificial imagination or something else, more and more machines are assuming traits that give them the appearance, at least, of being “creative.”