
Predicting cyber attacks is the new cyber defense as Forta predicts $200M Euler crypto hack


WHY THIS MATTERS IN BRIEF

It’s possible to watch data on the dark web and from other sources to predict who’s going to be the next cyber victim, and the tech’s getting better …

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

As we head into a future where almost anything and everything that’s connected will be, or could be, hacked, it seems like cyber defenders will have their work cut out for them in what many think is a zero sum game. But rather than trying to stop hackers from doing more damage once they’ve already broken into something, what if you could predict their targets and moves before they made them?

 

RELATED
New AI computer vision system detects malware with staggering accuracy

 

A long time ago I talked about how BAE and the US government were both developing technology to predict cyber attacks before they happened, using a combination of AI and dark web data. And now, years on, another company has made a significant “breakthrough” which has largely gone unnoticed – they predicted the theft of over $200 Million in crypto. Which then, ironically, happened.

Advancements in Artificial Intelligence (AI) are a double-edged sword for cybersecurity companies that work in decentralized finance.

 

The Future of Cyber In-Security, by keynote speaker Matthew Griffin

 

Forta Network monitors more than $40B in on-chain assets for clients such as Polygon, Compound, Lido, ZenGo, and crypto lending platform Euler Finance – which lost $200M last month in a cyber attack that Forta saw coming.

“Many of our machine learning models in the Euler attack detected [it] even before the funds were stolen, giving the Euler team essentially a few minutes heads up that, ‘Hey, your protocol is about to be attacked, you should take some action,’” Christian Seifert, Forta Network researcher, told reporters.

 

RELATED
OpenAI's latest updates introduce a huge new security risk

 

“Blockchain lends itself very well to these machine learning approaches because the data is public,” Seifert explained. “We’re able to see every single transaction, every single account, we’re able to see how much money that is actually lost – and that is a great precursor to train some of these models.”
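Seifert’s point about public data is what makes on-chain anomaly detection tractable: every transaction is visible, so a model can learn what “normal” looks like for a protocol. As a toy illustration only – not Forta’s actual models, which use far richer features than raw transfer values – here’s a minimal z-score sketch that flags transactions far outside a protocol’s historical distribution:

```python
from statistics import mean, pstdev

def anomaly_scores(history, new_txs):
    """Score new transaction values against the historical distribution.

    A value's score is its distance from the historical mean, measured in
    standard deviations (a simple z-score) -- a stand-in for the far
    richer features a production detection model would use.
    """
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # avoid division by zero on flat history
    return [abs(v - mu) / sigma for v in new_txs]

def flag_suspicious(history, new_txs, threshold=3.0):
    """Return the new transaction values whose z-score exceeds threshold."""
    scores = anomaly_scores(history, new_txs)
    return [v for v, s in zip(new_txs, scores) if s > threshold]

# Typical transfers cluster around small values; a sudden huge outflow stands out.
history = [10, 12, 9, 11, 10, 13, 8, 12, 11, 10]
suspicious = flag_suspicious(history, [11, 9, 5000])  # flags only 5000
```

In practice a detection bot would score each transaction as it lands in the mempool or a new block, which is what makes a “few minutes heads up” possible at all.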

Despite the fact that the Forta system recognized the malicious activity on Euler’s blockchain protocol and sent alerts to Euler, the company was not able to act quickly enough to shut its network down before funds were stolen, which then brings us to another enterprise problem – the ability to act on such information fast enough to make a difference.
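That response-speed gap is, in principle, automatable: if a critical alert can trigger a pause directly, no human needs to be awake. The sketch below is hypothetical – the alert fields and the `pause_protocol` hook are invented for illustration, and a real protocol would gate such an action behind careful authorization – but it shows the shape of the idea:

```python
def handle_alert(alert: dict, pause_protocol) -> bool:
    """Pause the protocol automatically on critical alerts.

    `alert` is a hypothetical monitoring payload; `pause_protocol` is
    whatever callable actually halts the contract (e.g. a transaction
    sent by a privileged guardian account). Returns True if paused.
    """
    if alert.get("severity") == "CRITICAL":
        pause_protocol()
        return True
    return False

# Record pauses in a list so the behavior is observable in this sketch.
paused = []
handle_alert({"severity": "CRITICAL", "name": "Suspicious flash loan"},
             pause_protocol=lambda: paused.append(True))
```

The trade-off is obvious: automated pausing converts false positives into downtime, which is why many teams still keep a multisig in the loop – exactly the delay that cost Euler here.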

After negotiations with the hacker, however, customers were made whole.

“All of the recoverable funds taken from the Euler protocol on March 13 have now been successfully returned by the exploiter,” reads the post shared by Euler’s official Twitter account.

 

RELATED
Blockchain experts are putting a stop to governments putting backdoors in software

 

“Before exploitation, three critical Forta alerts were raised,” Forta said in a blog post. “Sadly in this case, the [Euler] attack still happened too fast for the standard manual response of a multisig to pausing the contract.”

Seifert joined Forta in April 2022 following 15 years at Microsoft, where he was a Principal Group Manager overseeing the tech giant’s cyber security and threat detection team. Forta launched in 2021 with $23 million raised from Andreessen Horowitz, Coinbase Ventures, Blockchain Capital and others.

While Forta can leverage its own machine learning to identify malicious activity on blockchain, Seifert sees the downside of AI in potential manipulation of ChatGPT – the chatbot developed by OpenAI that’s received $10B in investment from his former employer.

“There [are] two sides of the coin,” Seifert says. “I think a lot of AI technology can be used to create more customized and compelling social engineering attacks.

“I can probably feed your LinkedIn profile to ChatGPT and ask it to author an email that entices you to click on that link, and it’s going to be highly customized,” he explained. “So I think the click-through rates will increase with the malicious usage of some of these models.”

“On the good side, machine learning is an integral part to threat detection,” Seifert noted.

 

RELATED
Researchers use brain scanning tech to stream movies from people's brains

 

A report earlier this month from Immunefi found hacks in the crypto industry increased 192% year-over-year, from 25 to 73, this past quarter. In another significant attack, $10 million in Ethereum has been stolen since December.

Scott Gralnick is the director of channel partnerships at Halborn, a blockchain cybersecurity firm that’s raised $90M in funding and whose clients include Solana and Dapper Labs.

“New technology will always create a double-edged sword,” Gralnick said. “So as people will be trying to harness AI to try new attack vectors, so will our white-hat hackers ethically trying to protect the ecosystems at large by utilizing this technology to strengthen our armory of tools to protect these companies and ecosystems.”

Microsoft recently launched Security Copilot, a chat service that lets cybersecurity personnel ask questions about security incidents and receive AI-generated, step-by-step instructions on how to mitigate risks. Seifert expects cybersecurity employees to use AI language models to their advantage, essentially by dumbing protocols down.

 

RELATED
New Deloitte study says 861,000 UK public sector jobs can be automated

 

“What is new now is these large language models that are able to understand context quite well, they’re able to understand code quite well,” Seifert says. “I think that will open the door primarily for incident responders.

“If you think about an incident responder that is faced with an alert and transaction in the Web 3.0 space, they might not know what to look at, and so can a large language model be used to transform this very technical data into natural language, such that it is more accessible to a broader audience?” he asked. “Can that person then ask natural language questions to guide the investigation?”
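Seifert’s idea can be sketched without committing to any particular model or vendor: the responder-facing piece is largely prompt construction around raw on-chain data. The field names, addresses, and prompt wording below are illustrative assumptions, and the actual LLM call is deliberately left out:

```python
import json

def build_responder_prompt(tx: dict) -> str:
    """Format a raw on-chain transaction as a prompt asking an LLM for a
    plain-English incident summary.

    The prompt wording and transaction fields here are illustrative,
    not any particular monitoring product's schema or vendor's API.
    """
    return (
        "You are assisting a security incident responder.\n"
        "Explain in plain English what this transaction does, "
        "and list follow-up questions worth investigating:\n\n"
        + json.dumps(tx, indent=2)
    )

# A hypothetical alert payload a responder might be staring at.
alert_tx = {
    "hash": "0xabc...",
    "from": "0xattacker...",
    "to": "0xlending-pool...",
    "value_eth": 0,
    "method": "liquidate(address,address,uint256)",
}
prompt = build_responder_prompt(alert_tx)
# `prompt` would then be sent to whichever LLM the responder's tooling uses.
```

The value is in the translation step: a responder who can’t read calldata can still ask follow-up questions in natural language once the model has described what the transaction does.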

A recent Pew Research study of 11,004 US adults found 32% of Americans believe that over the next 20 years, artificial intelligence will have a mostly negative impact on workers, while just 13% said AI will help more than harm the workforce.

Count Seifert in the minority.

“One thing that folks always talk about is, ‘Oh, is AI going to replace humans?’ I don’t think that is the case,” he says. “I think AI is a tool that can augment and support humans, but you always will need a human in the loop for some of these decisions being made.”
