US Military puts the call out for autonomous AIs that can defend its networks

WHY THIS MATTERS IN BRIEF

After the Pentagon warned that it can no longer keep up with the volume or variety of cyber attacks, the US DOD wants to develop autonomous AIs that can.

 

After a damning admission from the Pentagon that the US military can no longer keep up with the increase in cyber attacks, both in volume and in type, the US Army has announced that it is seeking information about “Autonomous Cyber” capabilities, or robo-hackers to you and me, that use “adaptable Artificial Intelligence (AI) and machine learning technologies” to autonomously defend its networks and protect its own intelligent cybersecurity systems against sophisticated cyberattacks. In other words, the Army wants to pit adaptable, autonomous AIs against its ever-present cyber threats, and eventually launch them into cyberspace, something the Pentagon did last year when it embedded one of the first-generation autonomous robo-hackers, called “Mayhem,” into its critical systems to defend them from attacks.

A branch of the Army’s research and development enterprise known as the Space and Terrestrial Communications Directorate, or S&TCD, is seeking cybersecurity tools able to make “automated network decisions and defend against adaptive autonomous cyber attackers at machine speed,” according to a recent solicitation.

The RFI reflects the Pentagon’s growing interest in algorithmic cybersecurity tools. Public and private sector organizations around the world are investing, to varying degrees, in intelligence and automation to compensate for the global shortage of trained cybersecurity analysts.

Another reason organizations are turning to AI-powered cyber defences is to counter the threat posed by intelligent cyber weapons. In February 2018, a group of more than two dozen researchers representing the Washington-based Center for a New American Security, Cambridge University, Oxford University, and nonprofit organizations including the Electronic Frontier Foundation and OpenAI issued a groundbreaking report warning that AI technologies could amplify the destructive power available to nation-states and individual criminal enterprises.

The report outlines dozens of ways attackers could use AI to their advantage, from generating automated spear-phishing attacks capable of reliably fooling their human targets, to triggering ransomware attacks using voice or facial recognition, to designing new forms of malware that mimic normal user behaviour to evade detection, like those we first saw last year.

Although there haven’t yet been confirmed cases of AI-enabled cyberattacks, the researchers conclude that “the pace of progress in AI suggests the likelihood of cyber attacks leveraging machine learning capabilities in the wild soon, if they have not done so already.” And Pentagon officials appear to be taking the threat seriously.

In testimony before the House Subcommittee on Emerging Threats last December, Pentagon chief information officer Dana Deasy said that researchers at the new Joint Artificial Intelligence Center (JAIC) are already developing technologies to “detect and deter advanced adversarial cyber actors.”

The JAIC will play a key role alongside the Pentagon’s Research & Engineering (R&E) enterprise to “deliver new AI-enabled capabilities to end users as well as to help incrementally develop the common foundation that is essential for scaling AI’s impact across DoD,” Deasy said.

One of the JAIC’s first tasks is to organize the massive stockpiles of data that Pentagon agencies will use to train their machine learning algorithms and make the data more resilient to tampering, Pentagon Undersecretary for R&E Lisa Porter told lawmakers. Porter highlighted the risks posed by “adversarial AI” – attacks designed to trick intelligent systems into making mistakes – that could undermine trust in these systems.
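
To see what Porter means by “adversarial AI,” consider the simplest case: an evasion attack that nudges an input just enough to flip a model’s verdict. Below is a minimal, hypothetical sketch in the spirit of the fast gradient sign method against a toy linear classifier; the model, the ten traffic features, and every number in it are invented for illustration, not drawn from any DoD system.

```python
# Hypothetical evasion-attack sketch against a toy linear detector.
import numpy as np

rng = np.random.default_rng(0)

# Toy "defender": logistic regression over 10 invented traffic features.
weights = rng.normal(size=10)
bias = 0.1

def p_malicious(x: np.ndarray) -> float:
    """The model's probability that feature vector x is malicious."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# A sample the model confidently flags (built to correlate with the weights).
x = weights + rng.normal(scale=0.1, size=10)
print(f"before: p(malicious) = {p_malicious(x):.3f}")

# Fast-gradient-sign-style step: for a linear model the gradient of the
# score w.r.t. x is just `weights`, so stepping against sign(weights)
# reliably drives the malicious score down.
epsilon = 2.0
x_adv = x - epsilon * np.sign(weights)
print(f"after:  p(malicious) = {p_malicious(x_adv):.3f}")
```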

“Adversaries have the ability to manipulate AI data and algorithms to the point where the AI system is defeated,” wrote Celeste Fralick, chief data scientist and senior principal engineer at the cybersecurity giant McAfee, in a recent op-ed. For example, attackers could target the Pentagon’s own intelligent cyber defenses with what’s known as a “black box” attack – reverse-engineering the defender’s algorithms in search of blind spots that malware can slip through.
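
Fralick’s “black box” scenario can be sketched just as simply: the attacker sees only flag/no-flag verdicts and keeps mutating a sample until a variant slips through. The detector and its eight features below are hypothetical stand-ins, and a real attack would also have to preserve the malware’s functionality, which this toy random walk ignores.

```python
# Hedged sketch of a black-box probe: the attacker has query access only,
# observing verdicts but never the model's weights or scores.
import numpy as np

rng = np.random.default_rng(1)
_secret_w = rng.normal(size=8)  # the defender's model; hidden from the attacker

def defender_flags(x: np.ndarray) -> bool:
    """All the attacker ever observes: flagged (True) or passed (False)."""
    return float(_secret_w @ x) > 0.0

variant = _secret_w.copy()  # start from a sample the defense reliably catches
queries = 0
while defender_flags(variant) and queries < 10_000:
    variant = variant + rng.normal(scale=0.3, size=8)  # blind random mutation
    queries += 1

print(f"evaded after {queries} queries:", not defender_flags(variant))
```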

Similarly, said Fralick, machine learning systems are vulnerable to “poisoning” attacks, where a hacker injects false data into a training data set to bias the results of the algorithm. To use predictive maintenance as an example, the Air Force is investing heavily in machine learning tools to analyze flight records, maintenance logs, and sensor data in an effort to predict when parts will wear out or which aircraft will require maintenance on a given day. If adversaries are able to “poison” the predictive maintenance algorithm, the Air Force could, in theory, be forced to spend millions on redundant repairs. Worse, it might send unsafe aircraft out on duty.
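
As a toy illustration of the poisoning mechanic Fralick describes, the sketch below fits a one-feature failure predictor, then refits it on the same data salted with falsely labelled records. The dataset, the hours-flown feature, and the tiny gradient-descent trainer are all invented; the point is only that a modest injection visibly biases the model toward flagging healthy aircraft.

```python
# Hypothetical data-poisoning sketch, echoing the predictive-maintenance
# example above. Everything here is invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

# Clean training data: hours flown since overhaul -> fails soon? (0/1).
hours = rng.uniform(0, 1000, size=500)
fails = (hours > 600).astype(float)  # ground truth: wear appears late

def fit(x_hours, y, lr=2.0, steps=5000):
    """Fit p(fail) = sigmoid(a*x + b) by plain gradient descent."""
    x = x_hours / 1000.0  # scale the feature for stable steps
    a = b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * x + b)))
        a -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return a, b

def p_fail(a, b, h):
    return 1.0 / (1.0 + np.exp(-(a * h / 1000.0 + b)))

a0, b0 = fit(hours, fails)
print(f"clean model:    p(fail | 300h) = {p_fail(a0, b0, 300):.2f}")  # low

# Poison: inject young airframes falsely labelled as failing, dragging the
# learned curve toward "repair everything early".
poison_hours = rng.uniform(0, 200, size=150)
a1, b1 = fit(np.concatenate([hours, poison_hours]),
             np.concatenate([fails, np.ones(150)]))
print(f"poisoned model: p(fail | 300h) = {p_fail(a1, b1, 300):.2f}")  # higher
```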

Maintaining trust in these systems is central to the mission of the JAIC and the larger R&E enterprise, said Porter, noting that countering adversarial AI is a top priority for AI Next, the five-year, $2 billion initiative run by DARPA, the US military’s bleeding edge research arm.

The Army appears to be working toward the same objective in parallel. Its Autonomous Cyber solicitation calls for multiple tools and methodologies it can use to “red team” – or pressure test – its current AI-based cyber defenses and improve their resistance to manipulation.
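
One simple shape such a red-team harness could take is sketched below: it measures how often bounded random perturbations let known-bad samples slip past a detector as the perturbation budget grows, giving a crude robustness curve. The detector, the sixteen features, and the budgets are hypothetical placeholders, not the Army’s actual tooling.

```python
# Hypothetical red-team harness: probe a detector's resistance to
# manipulation by sweeping the attacker's perturbation budget.
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=16)  # stand-in for a trained detection model

def detector(x: np.ndarray) -> bool:
    """True means the sample is flagged as malicious."""
    return float(w @ x) > 0.0

def evasion_rate(samples, budget, tries=50) -> float:
    """Fraction of flagged samples a bounded random perturbation can free."""
    evaded = 0
    for x in samples:
        for _ in range(tries):
            delta = rng.uniform(-budget, budget, size=x.shape)
            if not detector(x + delta):
                evaded += 1
                break
    return evaded / len(samples)

# A test set of known-malicious samples the detector currently catches.
malicious = [w + rng.normal(scale=0.1, size=16) for _ in range(100)]

for budget in (0.5, 2.0, 8.0):
    print(f"budget {budget}: evasion rate {evasion_rate(malicious, budget):.0%}")
```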
