Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Many of today’s technology companies are creating proprietary AIs that can’t interoperate with each other, but a new framework from Facebook and Microsoft is setting out to change that.
As Artificial Intelligence (AI), and in this case more specifically Machine Learning, becomes more pervasive across our society, it’s inevitable that more companies have jumped onto the bandwagon to create their own versions, and in several cases companies such as AMD, Fujitsu, Google, and Nvidia have teamed up to ensure their products are compatible. That said, the software that runs most of today’s Deep Learning (DL) and AI-specific hardware is still proprietary, so Facebook and Microsoft have now teamed up to develop a new common framework for building DL models that can interoperate and talk with each other.
The Open Neural Network Exchange (ONNX), which is available on GitHub, is described as a standard that will allow developers to move their neural networks from one framework to another, provided both adhere to the ONNX standard.
According to the joint press release from the two companies, this isn’t currently the case. Companies must choose the framework they’re going to use for their model before they start developing it, but the framework that offers the best options for testing and tweaking a neural network isn’t necessarily the one with the features you want when you bring a product to market.
In their press release the companies state that Caffe2, PyTorch, and Microsoft’s Cognitive Toolkit will all support the ONNX standard when it’s released later this month, and that models trained with one framework will be able to move to another for inference.
Facebook’s side of the post has a bit more detail on how this benefits developers and what kind of code compatibility was required to support it. It describes PyTorch as having been built to “push the limits of research frameworks, to unlock researchers from the constraints of a platform and allow them to express their ideas easier than before.”
Caffe2, in contrast, was built with “products, mobile, and extreme performance in mind. The internals of Caffe2 are flexible and highly optimized, so we can ship bigger and better models into underpowered hardware using every trick in the book.”
By creating a standard that allows models to move from one framework to another, AI developers can now take advantage of the strengths of both, but there are still some limitations. ONNX isn’t currently compatible with dynamic flow control in PyTorch, and Facebook states there are other incompatibilities with “advanced programs” in PyTorch that it doesn’t detail.
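The dynamic flow control limitation follows from how tracing-based export works: the model is run once on an example input and only the operations actually executed are recorded, so a data-dependent branch is “baked in” as whichever path that one input happened to take. The toy tracer below (pure Python, not real PyTorch machinery) illustrates the problem.

```python
class Tracer:
    """Toy stand-in for a tracing exporter: records each op it observes."""
    def __init__(self):
        self.graph = []

def traced_model(x, tracer):
    # Dynamic control flow: the branch taken depends on the input value,
    # so a single trace can only ever capture one of the two paths.
    if x > 0:
        tracer.graph.append("mul_by_2")
        return x * 2
    tracer.graph.append("sub_1")
    return x - 1

t1, t2 = Tracer(), Tracer()
traced_model(3, t1)    # positive example input
traced_model(-3, t2)   # negative example input
print(t1.graph, t2.graph)  # the two traces disagree: the branch was lost
```

A trace built from the positive input would silently give wrong answers for negative ones, which is why exporters that rely on tracing cannot faithfully capture this kind of model.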
Still, despite the teething issues, it has to be said that this early effort to create common ground, and a common DL framework, is a positive step. After all, most of the ubiquitous ecosystems we take for granted, such as USB compatibility, 4G LTE networks, and Wi-Fi, to name just a few, are fundamentally enabled by standards, and over time those standards have helped propel their growth and adoption.
A siloed, go-it-alone solution is fine for a company that wants to develop something it’s only going to use in-house, but if you want to offer a platform that others can use to build content, then standardising that model is how you encourage others to use it.
The major difference between Microsoft and the other companies developing AI and DL products is the difficulty Microsoft faces in baking them into its consumer-facing products. With Windows 10 Mobile effectively dead, Microsoft has to rely on its Windows market to drive people towards Cortana, and that’s an intrinsically weaker position than that of Apple or Google, both of which have huge mobile platforms, or Facebook, which has over a billion users.
While it’s hoped that the new ONNX framework will benefit everyone in the space, it may therefore benefit Microsoft more than many of the other players. But at least we can all start to rest assured that our models will play nicely with each other. That is, of course, unless they develop their own secret language and lock us out of our systems… ahem.