WHY THIS MATTERS IN BRIEF
- Many of today’s technology companies are creating proprietary AIs that can’t interoperate with one another, but a new framework from Facebook and Microsoft is setting out to change that
As Artificial Intelligence (AI), and in this case more specifically Machine Learning, becomes more pervasive across our society, it’s inevitable that more companies jump on the bandwagon to create their own versions, and in several cases companies like AMD, Fujitsu, Google and Nvidia have teamed up to ensure their products are compatible. That said, the software that runs most of today’s Deep Learning (DL) and AI specific hardware is still proprietary, so Facebook and Microsoft have now teamed up to develop a new common framework for building DL models that can interoperate and talk with each other.
The Open Neural Network Exchange (ONNX), which is available on GitHub, is described as a standard that will allow developers to move their neural networks from one framework to another, provided both adhere to the ONNX standard.
According to the joint press release from the two companies, this isn’t currently the case. Companies must choose the framework they’re going to use for their model before they start developing it, but the frameworks that offer the best options for testing and tweaking a neural network aren’t necessarily the ones with the features you want when you bring a product to market.
In their press release the companies state that Caffe2, PyTorch, and Microsoft’s Cognitive Toolkit will all support the ONNX standard when it’s released later this month, and that models trained in one framework will be able to move to another for inference.
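The idea behind that kind of interchange can be sketched in miniature: a model’s computation graph is serialised into a neutral format that any conforming framework can load back for inference. The snippet below is a purely illustrative sketch, not the real ONNX API — the graph layout, the version field, and both helper functions are hypothetical, standing in for a PyTorch-style exporter and a Caffe2-style importer.

```python
import json

# Hypothetical "framework A" model: a tiny two-layer network described
# as a graph of operator nodes (a stand-in for a trained PyTorch model).
model_a = {
    "ir_version": 1,  # version of the (hypothetical) exchange format
    "graph": [
        {"op": "MatMul", "inputs": ["x", "W1"],     "outputs": ["h"]},
        {"op": "Relu",   "inputs": ["h"],           "outputs": ["h_act"]},
        {"op": "MatMul", "inputs": ["h_act", "W2"], "outputs": ["y"]},
    ],
}

def export_model(model: dict) -> str:
    """Serialise the graph to a neutral, framework-agnostic string."""
    return json.dumps(model, sort_keys=True)

def import_model(blob: str) -> dict:
    """A second framework ("framework B") loads the same graph for inference."""
    model = json.loads(blob)
    if model["ir_version"] != 1:
        raise ValueError("unsupported exchange-format version")
    return model

blob = export_model(model_a)       # framework A writes the model out
model_b = import_model(blob)       # framework B reads it back in
assert model_b["graph"] == model_a["graph"]  # same ops survive the round trip
```

The point of the sketch is only that both sides agree on the serialised shape of the graph; in the real standard the shared vocabulary is a set of defined operators and tensor types rather than a JSON layout.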
Facebook’s side of the post has a bit more detail on how this benefits developers and what kind of code compatibility was required to support it. It describes PyTorch as having been built to “push the limits of research frameworks, to unlock researchers from the constraints of a platform and allow them to express their ideas easier than before.”
Caffe2, in contrast, was built with “products, mobile, and extreme performance in mind. The internals of Caffe2 are flexible and highly optimized, so we can ship bigger and better models into underpowered hardware using every trick in the book.”
By creating a standard that allows models to move from one framework to another, AI developers can now take advantage of the strengths of both, but there are still some limitations. ONNX isn’t currently compatible with dynamic flow control in PyTorch, and Facebook notes other incompatibilities with “advanced programs” in PyTorch that it doesn’t detail.
Still, despite the teething issues, this early effort to create common ground, and a common DL framework, is a positive step. After all, most of the ubiquitous ecosystems we take for granted, such as USB compatibility, 4G LTE networks, and Wi-Fi, to name a few, are fundamentally enabled by standards, and over time those standards have helped propel their growth and adoption.
A siloed, go-it-alone solution is fine for a company that wants to develop something it’s only going to use in house, but if you want to offer a platform that others can build on, then standardising that platform is how you encourage others to use it.
The major difference between Microsoft and the other companies developing AI and DL products is the difficulty Microsoft faces in baking them into its consumer-facing products. With Windows 10 Mobile effectively dead, Microsoft has to rely on its Windows market to drive people towards Cortana, and that’s an intrinsically weaker position than that of Apple or Google, both of which have huge mobile platforms, or Facebook, which has over a billion users.
While it’s hoped that the new ONNX framework will benefit everyone in the space, it may well benefit Microsoft more than many of the other players. But now at least we can all start to rest assured that our models will play nicely with each other. That is, of course, unless they develop their own secret language and lock us out of our systems… ahem.
Matthew Griffin Global Futurist, Tech Evangelist, X Prize Mentor ● Int'l Keynote Speaker ● Disruption, Futures and Innovation expert
Matthew Griffin, Futurist and Founder of the 311 Institute, a global futures think tank, is described as “The Adviser behind the Advisers.” Recognised in 2013, 2015 and 2016 as one of Europe’s foremost futurists, innovation and strategy experts Matthew is helping governments and multi-nationals re-invent everything from countries and cities to energy and smartphones. An award winning author, entrepreneur and international speaker Matthew also mentors XPrize teams and is regularly featured on the BBC, Discovery, Kurzweil, Newsweek, TechCrunch and VentureBeat. Working hand in hand with accelerators, investors, governments, multi-nationals and regulators around the world Matthew helps them transform old industries, and create new ones, and shines a light on how new, powerful and democratised technologies are helping fuel disruption and accelerate cultural, industrial and societal change. Matthew’s clients include Accenture, Bain & Co, Bank of America, Booz Allen Hamilton, Boston Consulting Group, Dell EMC, Deloitte, Deutsche Bank, E&Y, Fidelity, Goldman Sachs, Huawei, JP Morgan Chase, KPMG, McKinsey & Co, PWC, Qualcomm, SAP, Schroeder’s, Sequoia Capital, UBS, the UK’s HM Treasury, the USAF and many others.