
The Next Gen of Cyber Threats: Adversarial AI Challenges and Opportunities

7 min
5/15/2024

With the rise of adversarial AI, attacks on existing systems will likely grow considerably as bad actors increasingly leverage AI against platforms, supply chains, and infrastructure. 

There are multiple categories of adversarial AI across AI-based systems and AI-based attacks, which we covered in our previous blog post, AI Model Security in the Age of Generative AI. In this post, we’ll cover some of the most significant challenges and opportunities related to AI-based cyberattacks on existing systems. 

While AI attacks on model systems are still in their early days, AI-powered attacks on existing systems have grown exponentially over the last couple of years. Mirroring how AI has revolutionized email writing for sales teams and code generation for developers, attackers are leveraging AI and LLMs to increase the volume and sophistication of their attacks, fueling a rise in phishing, ransomware, and breaches. While legacy cyber systems can combat some of these threats, we believe the following vectors are particularly challenging for incumbent cyber tech.

Below, we highlight a few angles that are particularly vulnerable to adversarial AI use and some startups we believe are well positioned to combat these threats. We’re continuing to build out this map and think through additional areas of opportunity.

AI’s Amplification of Existing Threat Vectors

AI-Enabled Phishing

Historically, an effective spear-phishing campaign would involve manual research by an attacker to identify the ideal target within an organization and craft a compelling message to mislead that individual. LLMs have expedited this process. Attackers can run thousands of spear-phishing attacks simultaneously, which will only become more challenging to detect as models improve. 

Phishing attacks have exploded since the launch of ChatGPT as attackers leverage AI to increase their efficiency. According to SlashNext's 2023 State of Phishing report, email phishing attacks have risen by 1,265% since ChatGPT's launch. And email is just one channel in a broader phishing landscape that also spans SMS; attackers are now leveraging AI to streamline the creation of fake brand websites, social media accounts, and apps as well.
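As a rough illustration of one slice of this landscape, defenders often screen newly registered domains for brand lookalikes using edit distance. The sketch below is a minimal, hypothetical example; the function names, brand list, and threshold are illustrative assumptions, not drawn from any vendor's actual product.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def flag_lookalike(domain: str, brands, max_dist: int = 2):
    """Return the brand a registered domain appears to impersonate,
    or None if it is not within edit distance of any known brand."""
    name = domain.split(".")[0]
    for brand in brands:
        if name != brand and edit_distance(name, brand) <= max_dist:
            return brand
    return None

print(flag_lookalike("paypa1.com", ["paypal", "google"]))  # paypal
```

Production systems layer many more signals on top (registration age, TLS certificate history, page content), but distance-based lookalike detection remains a common first filter.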

Deepfakes and Misinformation Campaigns

AI adoption has enabled a new wave of misinformation and deepfake attacks. Sensity reports that the number of deepfake videos online increased by over 900% in 2020, with many used to propagate misinformation. These misinformation campaigns, where false or misleading information is deliberately spread, have become a top concern for governments and businesses. Attackers exploit AI to generate fake social media posts and websites that appear legitimate, potentially damaging brand reputations or influencing public behavior. Attackers also generate deepfake impersonations to enable account takeovers and fraud across segments (e.g., banking). Moreover, with election season approaching, AI-generated content and misinformation threaten to sway voter behavior and erode public trust. 

Lateral Attacks 

Lateral movement is leveraged in 70% of successful breaches, representing a significant cybersecurity threat today. Lateral movement is a technique a cyber attacker uses after obtaining compromised credentials to progressively explore additional applications and devices within an internal network. Traditionally, this was a manual process in which attackers would slowly navigate across compromised infrastructure to avoid detection. In recent years, however, attackers have begun to use AI to rapidly scan networks, identify vulnerable systems, and mimic human movement. AI agents can be used to evade detection, adapt to security controls, and scale attacks. According to Darktrace, there was a 35% increase in AI-driven lateral movement in 2022, and the adoption of large language models (LLMs) is anticipated to further accelerate this trend.
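On the defensive side, one common lateral-movement signal is a single host suddenly authenticating to many new destinations. The sketch below is a simplified, hypothetical illustration; the event format, threshold, and function name are our own assumptions, not any vendor's detection logic.

```python
from collections import defaultdict

def flag_lateral_movement(auth_events, baseline_fanout=3):
    """Flag source hosts that authenticate to an unusually large number
    of distinct destinations -- a classic lateral-movement signal.

    auth_events: list of (source_host, dest_host) login tuples.
    baseline_fanout: max distinct destinations considered normal.
    """
    fanout = defaultdict(set)
    for src, dst in auth_events:
        fanout[src].add(dst)
    return sorted(src for src, dsts in fanout.items()
                  if len(dsts) > baseline_fanout)

events = [
    ("laptop-7", "fileserver"), ("laptop-7", "fileserver"),
    # a compromised host fanning out across the network
    ("web-01", "db-01"), ("web-01", "db-02"),
    ("web-01", "ci-runner"), ("web-01", "vault"),
]
print(flag_lateral_movement(events))  # ['web-01']
```

Real products baseline per-host behavior over time rather than using a fixed fan-out threshold, which is exactly where AI-driven attackers that "mimic human movement" make static rules insufficient.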

Areas of Opportunity

AI-enabled attacks will bring a new cycle of modernization across several legacy attack vectors. Some of the categories we’re most excited about are:

Email Security and Training

Incumbent solutions for email security and security awareness training are often cited by customers as lacking adequate coverage for this new threat landscape; they will need an upgrade to match AI-powered phishing attacks. Next-gen vendors like Sublime and Abnormal are gaining momentum across email security. Additionally, Jericho has created a new wave of training, particularly with their AI-powered phishing simulations.

Narrative Risk Mitigation & Deepfake Detection

The escalating threat of deepfake campaigns has highlighted the need for a new category of narrative risk and intelligence solutions. Platforms like Blackbird and Alethea combat misinformation that can lead to costly reputational harm. These systems leverage AI to detect, analyze, and ultimately mitigate false narratives and digitally altered content across various media channels. Furthermore, these campaigns are frequently filled with video and voice deepfakes, which companies like Reality Defender and Clarity aim to detect across media, banking, and customer service use cases to protect against brand risk and account takeovers. 

Non-Human Identity Management

Non-human identity (NHI) management is already top of mind for CISOs, and we anticipate that the use of AI within lateral attacks, and eventually AI agents, will drive expedited adoption across this market. The explosion of microservices, containers, serverless functions, and other cloud-native technologies has created an attack surface of roughly 45 NHIs per human identity. Startups like Oasis, Corsha, and Oso have emerged to authenticate machine-to-machine communication and track the flow of data through cloud systems.
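To make machine-to-machine authentication concrete, here is a hedged sketch of short-lived, HMAC-signed workload tokens. The shared secret, TTL, and function names are illustrative assumptions; real deployments would use per-workload keys issued from a secrets manager, or certificate-based schemes like mTLS/SPIFFE, rather than a single static secret.

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret"  # illustrative; use per-workload keys in practice

def mint_token(service_id, ttl=300, now=None):
    """Issue a short-lived credential bound to a workload identity."""
    expiry = int(now if now is not None else time.time()) + ttl
    payload = f"{service_id}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token, now=None):
    """Return the service identity if the token is valid and unexpired."""
    service_id, expiry, sig = token.rsplit("|", 2)
    payload = f"{service_id}|{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    current = now if now is not None else time.time()
    if hmac.compare_digest(sig, expected) and current < int(expiry):
        return service_id
    return None
```

Short expiry windows matter here: a stolen NHI credential that dies in minutes sharply limits the lateral movement described above.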

Runtime Security

The same tailwinds creating challenges with NHI also impact the broader runtime application security landscape. A common runtime security attack technique is the exploitation of misconfigured access controls within Kubernetes components. We expect adversarial AI will exacerbate this issue by automating the identification and exploitation of these vulnerabilities at a scale and speed not previously possible. A new wave of runtime security vendors, like Upwind, Operant, and Aikido, has emerged with unique approaches to combat evolving threats from lateral movement and vulnerability exploitation.
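As a simplified illustration of this misconfiguration class, the sketch below audits a Kubernetes Role/ClusterRole manifest (as a plain dict) for wildcard grants and broad secrets access. The specific checks and names are our own illustrative assumptions, not any vendor's actual method.

```python
def audit_rbac_rules(role):
    """Flag over-broad rules in a (Cluster)Role manifest dict.

    Wildcard verbs/resources and read access to secrets are frequent
    footholds for lateral movement inside a cluster. Returns a list of
    (role_name, rule_index) findings.
    """
    findings = []
    for i, rule in enumerate(role.get("rules", [])):
        verbs = rule.get("verbs", [])
        resources = rule.get("resources", [])
        if "*" in verbs or "*" in resources:
            findings.append((role["metadata"]["name"], i))
        elif "secrets" in resources and {"get", "list"} & set(verbs):
            findings.append((role["metadata"]["name"], i))
    return findings

risky_role = {
    "metadata": {"name": "debug-role"},
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get"]},
        {"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]},
    ],
}
print(audit_rbac_rules(risky_role))  # [('debug-role', 1)]
```

Static checks like this catch the low-hanging fruit; runtime vendors pair them with live detection, since an AI-driven attacker can probe for exploitable grants far faster than periodic audits run.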

A Proactive Approach to a Futureproof Defense

As AI capabilities continue to improve, we expect AI to be leveraged in all types of cyber attacks. Cybersecurity teams will have to combat attackers' use of AI with AI-powered solutions of their own. If you’re building technology to combat the next generation of cyber threats, reach out to us at ryan@ansa.co and allan.jean-baptiste@ansa.co.
