Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Google can now see a time when coding is automated and human programmers “solve harder challenges.”
As I’ve been talking about for years now, many of us have read about Artificial Intelligence (AI) that can code, at some level anyway, or translate one programming language into another. But, as expected, those AIs have been getting better and better over time, and now they’re reaching the point where they are “interestingly good.”
When it comes to tracking the incremental advances in AI’s capabilities, humans have an odd tendency to think in terms of board games and computer games where AI beats human experts, but those tests only go so far in illustrating the technology’s effectiveness at solving real-world problems.
A potentially far better “challenge” would be to put an AI side by side with humans in a programming competition. Google-owned DeepMind did just that with its AlphaCode model. The results? Well, AlphaCode performed well, but not exceptionally.
The model’s overall performance, according to a paper published in Science and shared with reporters, corresponds to that of a “novice programmer” with a few months to a year of training. Part of those findings were made public by DeepMind earlier this year.
Despite this, in the test AlphaCode was able to achieve “approximately human-level performance” and solve previously unseen, natural-language problems in a competition by predicting segments of code and creating millions of potential solutions. After generating that plethora of solutions, it then filtered them down to a maximum of 10 submissions, all of which the researchers say were auto-generated “without any built-in knowledge about the structure of computer code.”
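To make the generate-then-filter idea concrete, here is a minimal sketch of that kind of pipeline. Note that AlphaCode itself is not public: the candidate "programs" below are hypothetical stand-ins for model samples, and the filtering-plus-clustering logic is a simplified illustration of the strategy the paper describes, not DeepMind's implementation.

```python
from collections import defaultdict

def filter_and_cluster(candidates, example_tests, extra_inputs, max_submissions=10):
    """Keep only candidate programs that pass the problem's example tests,
    group the survivors by their behaviour on extra inputs, and return at
    most `max_submissions` representatives, one per behaviour cluster."""
    # Step 1: discard candidates that fail the worked examples in the problem.
    survivors = [
        program for program in candidates
        if all(program(inp) == expected for inp, expected in example_tests)
    ]

    # Step 2: cluster survivors that behave identically on held-out inputs,
    # so the limited submissions are spent on genuinely different solutions.
    clusters = defaultdict(list)
    for program in survivors:
        signature = tuple(program(inp) for inp in extra_inputs)
        clusters[signature].append(program)

    # Step 3: submit one representative per cluster, largest clusters first.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ranked[:max_submissions]]

# Hypothetical sampled candidates for a toy problem: "double a number".
candidates = [
    lambda x: x * 2,   # correct
    lambda x: x + x,   # also correct, behaves identically -> same cluster
    lambda x: x ** 2,  # wrong: passes (2 -> 4) but fails (3 -> 6)
]
example_tests = [(2, 4), (3, 6)]
picked = filter_and_cluster(candidates, example_tests, extra_inputs=[5, 7])
print(len(picked))  # 1: both correct programs collapse into one cluster
```

At real scale the same shape applies, just with millions of sampled programs and generated test inputs in place of the three lambdas and two held-out values here.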
AlphaCode received an average ranking in the top 54.3% in simulated evaluations of recent coding competitions on the Codeforces competitive coding platform when limited to generating 10 solutions per problem. 66% of the problems it solved, however, were solved with its first submission.
That might not sound all that impressive, particularly when compared to seemingly stronger model performances against humans in complex games, but the researchers note that succeeding at coding competitions is uniquely difficult. To succeed, AlphaCode had to first understand complex coding problems posed in natural language and then “reason” about unforeseen problems rather than simply memorizing code snippets. AlphaCode was able to solve problems it hadn’t seen before, and the researchers claim they found no evidence that their model simply copied core logic from the training data. Combined, the researchers say, those factors make AlphaCode’s performance a “big step forward.”
“Ultimately, AlphaCode performs remarkably well on previously unseen coding challenges, regardless of the degree to which it ‘truly’ understands the task,” J. Zico Kolter, a professor at Carnegie Mellon University and the Bosch Center for AI, wrote in a recent Perspective article commenting on the study.
AlphaCode isn’t the only AI model being developed with coding in mind. Most notably, OpenAI has adapted its GPT-3 natural language model to create an autocomplete function that can predict lines of code. GitHub also has its own popular AI programming tool called Copilot. Neither of those programs, however, has shown as much prowess competing against humans in solving complex competitive problems.
Though we’re still in the relatively early days of AI-assisted code generation, the DeepMind researchers are confident AlphaCode’s recent successes will lead to useful applications for human programmers down the line. In addition to increasing general productivity, the researchers say AlphaCode could also “make programming more accessible to a new generation of developers.”
At the highest level, the researchers say AlphaCode could one day lead to a cultural shift in programming in which human coders step back from writing code themselves and mainly exist to formulate problems which AIs are then tasked to solve.
At the same time, some detractors in the AI space have called into question the efficacy of the core training models underpinning many advanced AI systems. Just last month, a programmer named Matthew Butterick filed a first-of-its-kind lawsuit against Microsoft-owned GitHub, arguing its Copilot AI assistant tool blatantly ignores or removes licenses presented by software engineers during its learning and testing phases. That liberal use of other programmers’ code, Butterick argues, amounts to “software piracy on an unprecedented scale.” The outcome of that lawsuit could play an important role in determining how easily AI developers, particularly those training their models on code written by humans, can improve and advance their models.