
WHY THIS MATTERS IN BRIEF

Today the vast majority of AI workloads run in centralised hyperscale datacenters; tomorrow they’ll run at the edge of the network on our smart devices, and revolutionise industries in new ways.

 


The US military’s bleeding-edge research arm DARPA, whose recent projects have included everything from using gamers’ brainwaves to train swarms of killer robots in battle to turning plants and animals into living sensor networks, among many other fabulous and weird things, including the development of the world’s first conscious robots, this week announced that it is looking to fund research into what it calls Shallow Neural Network Architectures (SNNAs). In other words, it wants to develop lean AIs that can run on low-powered systems at the network’s edge rather than having to rely on large centralised hyperscale datacenters to do all the heavy lifting for them.

 


 

The project, codenamed Hyper-Dimensional Data Enabled Neural Networks (HyDDENN), aims to deliver results comparable to existing state-of-the-art Deep Neural Networks, or DNNs, running in hyperscale data centers, but without the latency and massive computational requirements.

Conventional DNNs, like OpenAI’s revolutionary GPT-3 natural language model with its 175 billion parameters, are “growing wider and deeper, with the complexity growing from hundreds of millions to billions of parameters in the last few years,” a DARPA presolicitation document says. “The basic computational primitive to execute training and inference functions in DNN’s is the ‘multiply and accumulate (MAC)’ operation. As DNN parameter count increases, SOA networks require tens of billions of MAC operations to carry out one inference.”

This means that the accuracy of DNN “is fundamentally limited by available MAC resources,” DARPA says. “Consequently, SOA high accuracy DNNs are hosted in the cloud centers with clusters of energy hungry processors to speed up processing. This compute paradigm will not satisfy many DoD applications which demand extremely low latency, high accuracy Artificial Intelligence (AI) with severe size, weight, and power constraints.”
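To make the MAC arithmetic concrete, here is an illustrative sketch of how a dense network’s per-inference MAC count scales with width and depth. The layer sizes are invented for the example and are not taken from any real model:

```python
# Illustrative only: counting multiply-accumulate (MAC) operations for one
# inference through a stack of fully connected layers.

def mac_count(layer_sizes):
    """Each dense layer of shape (n_in, n_out) performs n_in * n_out MACs."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A toy "deep" network: 1024 inputs, three 4096-wide hidden layers, 10 outputs.
deep = [1024, 4096, 4096, 4096, 10]
# A toy "shallow" network over the same inputs and outputs.
shallow = [1024, 512, 10]

print(mac_count(deep))     # 37789696 MACs per inference
print(mac_count(shallow))  # 529408 MACs per inference
```

Because each dense layer costs the product of its input and output widths, widening or deepening a network multiplies the MAC budget, which is why billion-parameter models end up needing tens of billions of MAC operations for a single inference.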

 


 

With HyDDENN, the agency says it hopes to break free from its reliance on large MAC-based DNNs.

“HyDDENN will explore and develop innovative data representations with shallow NN architectures based on efficient, non-MAC, digital compute primitives to enable highly accurate and energy efficient AI for DoD Edge systems.”

The aim is to reduce parameter counts by at least ten-fold while maintaining accuracy compared with a similar MAC-based DNN solution. “With efficient digital compute hardware, these innovations will lead to at least 100 fold reduction in combined compute power and throughput, while retaining high-accuracy output when compared to the SOA DNN approach.”
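The project’s name points at hyperdimensional (HD) data representations, and the toy classifier below is one hedged illustration of why HD computing counts as a non-MAC compute primitive: inference uses only XOR and bit-counting, with no multiplications. The dimensionality, noise level, and two-class setup are invented for the example and are not taken from the solicitation:

```python
import random

D = 10_000  # hypervector dimensionality (illustrative choice)

def rand_hv():
    """A random binary hypervector."""
    return [random.randint(0, 1) for _ in range(D)]

def bundle(hvs):
    """Combine hypervectors by bitwise majority vote."""
    n = len(hvs)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*hvs)]

def hamming(a, b):
    """Distance is XOR plus a bit count - no multiplies anywhere."""
    return sum(x ^ y for x, y in zip(a, b))

def noisy(hv, flips=500):
    """A copy of hv with `flips` random bits inverted (5% noise here)."""
    out = hv[:]
    for i in random.sample(range(D), flips):
        out[i] ^= 1
    return out

random.seed(0)  # deterministic for the example
proto_a, proto_b = rand_hv(), rand_hv()

# Build each class prototype by bundling five noisy training examples.
class_a = bundle([noisy(proto_a) for _ in range(5)])
class_b = bundle([noisy(proto_b) for _ in range(5)])

# Classify a noisy query by nearest Hamming distance.
query = noisy(proto_a)
pred = "A" if hamming(query, class_a) < hamming(query, class_b) else "B"
print(pred)  # "A": the query is a noisy copy of prototype A
```

The design point is that every step, binding, bundling, and distance comparison, reduces to XOR, counting, and thresholding, operations that are cheap to implement in low-power edge hardware, in contrast with the multiplier arrays a MAC-based DNN requires.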

 


 

Although DARPA’s focus is on military applications, where the increasingly connected battlefield will require significant tactical edge deployments such as AIs that can build military networks on the fly, the agency believes the technology could find uses elsewhere.

“It is expected that HyDDENN will have significant impact in the areas of Edge/IoT communications and contextual Edge sensing and classification,” the document states. DoD-relevant applications mentioned by DARPA include contextual communications, speech recognition, gesture recognition, and medical diagnostics.

For a project with such lofty aims, HyDDENN has limited funding available. The award value of the Phase 1 Feasibility Study (6 months) tops out at $300,000, while the Phase 2 Proof of Concept (12 months) should not exceed $700,000.

 


 

By the end of the second phase, the project’s researchers are expected to have developed an ASIC architecture and high level logic designs at register-transfer level “as well as a project plan to implement a future fully programmable integrated chip-scale digital IC with the proposed HD data representation, logic primitives, and shallow HD NN to attain the HyDDENN performance goals and metrics for the targeted application.”

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
