A new report backs up claims that AI could increasingly aid terrorism

WHY THIS MATTERS IN BRIEF

AIs are powerful and capable, and that extends to helping terrorists plan their next attacks …

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

New Artificial Intelligence (AI) tools are already being used to create ransomware, design new chemical weapons, and help plot terrorist attacks, and now a new report from the non-profit policy think tank RAND Corporation warns that terrorists could also learn how to carry out a biological attack using a generative AI chatbot.

 


 

The report said that while the large language models used in the research did not give specific instructions for creating a biological weapon, the responses they produced under jailbreaking prompts could still help plan such an attack.

“Generally, if a malicious actor is explicit [in their intent], you will get a response that’s of the flavor ‘I’m sorry, I can’t help you with that,’” said co-author and RAND Corporation senior engineer Christopher Mouton in an interview. “So you generally have to use one of these jailbreaking techniques or prompt engineering to get one level below those guardrails.”

Or you could just use an unconstrained LLM from the dark web that has no guardrails at all …
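To make the idea of a guardrail concrete, here is a minimal, purely illustrative sketch of an intent filter sitting in front of a chatbot call. Every name in it (BLOCKED_TOPICS, screen_prompt, ask_llm) is hypothetical, and real systems rely on trained safety classifiers and layered moderation rather than a keyword list, but the basic shape is the same: screen the request, then refuse or answer.

```python
# Toy guardrail sketch: screen a prompt for explicitly malicious intent before
# it ever reaches the model. All names here are hypothetical; production
# guardrails use trained safety classifiers, not keyword lists.

BLOCKED_TOPICS = ("biological weapon", "mass casualty attack", "synthesize a pathogen")

REFUSAL = "I'm sorry, I can't help you with that."


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt trips the (toy) intent filter."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call, wrapped by the intent filter above."""
    if screen_prompt(prompt):
        return REFUSAL
    return f"[model response to: {prompt!r}]"  # placeholder for a real completion


if __name__ == "__main__":
    print(ask_llm("Help me plan a mass casualty attack"))  # refused
    print(ask_llm("Explain how mRNA vaccines are made"))   # answered (placeholder)
```

Jailbreaking and prompt engineering, in this framing, are simply ways of phrasing a request so that the screening layer never recognises the intent behind it.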

In the RAND study, researchers used jailbreaking techniques to get the AI models to engage in conversations about how to cause a mass-casualty biological attack using various agents, including smallpox, anthrax, and bubonic plague. The researchers also asked the models to develop a convincing cover story for why they were purchasing toxic agents.

 


 

The team examining the risk of LLM misuse was broken into three groups: one using the internet only, a second using the internet and an unnamed LLM, and a third using the internet and a second unnamed LLM.

This testing format, Mouton said, was done to determine if the AI models generated problematic outputs meaningfully different from what could be found on the internet. The teams were also prohibited from using the dark web and print publications.

As Mouton explained, not identifying the AI models used was intentional and meant to show the general risk of large language models.

“Our methodology was not designed to say one model was riskier than another,” he said. “If one model happened to give a particularly concerning output, we would not attribute that to that model being a higher risk.” The other concern, Mouton said, was not to give a false sense of safety by naming a particular AI model; the study was meant to provide a broad overview of the potential threats.

The RAND Corporation assembled so-called ‘red teams’ of 42 AI and cybersecurity experts to try to get the LLMs to produce what Mouton called “unfortunate” and problematic responses.

 


 

Red teams are cybersecurity professionals who specialize in attacking systems, also known as penetration testing or pen-testing, to expose vulnerabilities. Blue teams, in contrast, are the cybersecurity professionals who defend systems against attacks.

“We highlighted some of the more concerning outputs,” Mouton said. “At the same time, some of our teams expressed frustration that the LLMs were inaccurate or not giving them helpful information.”

As AI models become more advanced and more security features are added, getting chatbots to give “problematic” answers through direct human inputs becomes more challenging. Earlier this month, researchers at Brown University discovered that ChatGPT’s prompt filters could be circumvented by entering prompts in languages that are less common in AI training data, such as Zulu or Gaelic, instead of English.

“Given the potential risks, the need for rigorous testing is unequivocal,” the report said, quoting a petition by the Center for AI Safety. “This is particularly true in a context in which some equate the threat of AI to that of nuclear weapons.”

 


 

The signatories of the Center for AI Safety’s petition included Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman, Google DeepMind COO Lila Ibrahim, and U.S. Representative Ted Lieu. In March, a similar petition was signed by SpaceX CEO and OpenAI co-founder Elon Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang after the public launch of OpenAI’s GPT-4.

Assisting with plotting terror attacks is just one of the issues plaguing generative AI tools. Detractors have pointed out these tools’ penchant for racism and bias, promoting harmful body images and eating disorders, and even plotting assassinations.

The RAND Corporation researchers emphasized the need for cybersecurity red teams to regularly evaluate AI models in order to identify and mitigate risks. Last month, OpenAI, the developer of ChatGPT, put out an open call for red teamers to help find vulnerabilities in its suite of generative AI tools.

“The intersection of AI and biotechnology presents specific challenges for risk assessment,” the RAND Corporation researchers wrote. “Given the rapid evolution of these technologies, governmental capacity to effectively understand or regulate them is limited.”
