What is the adversarial impact of generative AI?

Generative AI is transforming nearly every area of human output. As with all prior technological revolutions, it is no surprise that doomsayers have again taken center stage. Although it is our firm belief that these technologies will be an overwhelming net force for good (a theme covered in other articles in this series), it is naive to think that they won’t also be used effectively by adversaries who are out to harm us.

This article explores the impact of generative AI technologies like Large Language Models (LLMs) on current and future methods of cyber threat actors. We argue that the best way to manage and defeat these new threats is with a combination of good sense and better defensive use of those same generative AI techniques.

The arc of cyber attack history

The primary general use case for generative AI is to compose new high-quality content at scale, be it prose, code or imagery.

When we look at the arc of threat actor history, we see four distinct phases. First, in the early-web era, came the “human to machine” phase, where attacks primarily took the form of one attacker hacking one machine. This early form of attack is still how most threat actors are depicted in stock imagery and in many people’s imaginations.

Quickly though, attackers leveraged scripting and automation in the “machine to machine” era, which began in the early 2000s. It was enabled by the near-infinite scalability of another new technology of the time, cloud computing, which, besides powering most modern web services, also allowed threat actors to target and compromise millions of endpoints at minimal cost. The botnet was born, and suddenly cyber became an asymmetric war theater for the first time.

Then, in the mid-2010s, when endpoint security really turned a corner, the battlefield expanded from computer engineering to social engineering: we had entered the “human to human” cyber era. The attacker shifted focus to the human access layer, since it was now by far the weakest exposure. Cyber criminals began crafting targeted, sometimes highly researched attacks designed to fool the operators of endpoints rather than the endpoints themselves: phishing became the major cyber challenge of our time.

The main weakness of phishing as an attack vector, from the threat actor’s perspective, is its embedded scalability dilemma: there is a direct tradeoff between the time invested in an attack and its quality. The end of that tradeoff is the first major adversarial impact. Generative AI has broken the linear relationship between the time and effort spent preparing a phish and the attack’s quality, or how closely it is tailored to the exact target. Since LLMs can produce repeatable, high-quality content such as email copy, social media profiles, and even the voices and faces of colleagues, the art of phishing is moving from the artisanal to mass production without much sacrifice in quality. The “machine to human” cyber era has well and truly started.
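To make the scaling point concrete, here is a minimal sketch of the pattern: one prompt template plus a list of targets replaces per-message research, so the marginal cost of each tailored message is a single model call. Security teams already use the same loop, with consent, in phishing-simulation exercises. The template wording and the generate function are illustrative assumptions, not a real API.

```python
# Minimal sketch: the per-target effort collapses to one templated model call.
# `generate` is a placeholder for whichever LLM API is in use.

TEMPLATE = (
    "Write a short, plausible internal email to {name}, who works in {team} "
    "at {company}, asking them to review the attached quarterly report."
)

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; assumed here, not a real API."""
    raise NotImplementedError

def draft_messages(targets: list[dict]) -> list[str]:
    # The loop is the whole point: quality no longer scales with time spent,
    # only with the quality of the template and the model.
    return [generate(TEMPLATE.format(**target)) for target in targets]
```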

Convincing deep fakes at scale

In addition to the humble phishing email, these generative techniques will be used to spin up other related content, all with the direct or indirect goal of manipulating human targets into doing something they would not normally willingly do: fake branded websites, fake e-commerce shopfronts, fake social media profiles, and fake news articles designed to create an environment of fear, uncertainty, and panic. These techniques are not new in themselves, so we will not explore them in much detail in this article. What is new is that they can now be produced en masse, in any language, and at higher quality by a single attacker with minimal research and creative input. Indeed, most of the context required to craft the outputs already forms part of the LLM’s learnt corpus.

These models are not just useful for creative output; they are also highly effective at producing executable code. Most modern software houses are already employing generative AI to do development grunt work, and it is no great stretch of the imagination for threat actors to do the same. Mostly this takes an assistive form, but the quality of these models is rapidly evolving, and ChatGPT has already been used to write malware. Anyone, good or bad, who writes software today is able to produce code faster and more accurately thanks to generative AI tooling.

LLMs could themselves be exploited

Models are trained on vast corpora of text and code. It is highly likely that the immense bodies of knowledge these models are built on contain credentials (known as ‘secrets’ in software development) such as activation keys, username and password pairs, and API keys, some of which could still be valid today. While there are no doubt safeguards in place to either scrub these before training and/or gag the model’s output, it would not be the first time that a model did something it shouldn’t.
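To illustrate the first of those safeguards, a minimal sketch of pre-training secret scrubbing might look like the following. The patterns and the scrub_secrets helper are illustrative assumptions; real secret scanners use much larger rule sets plus entropy checks.

```python
import re

# Illustrative patterns only; production secret scanners cover far more formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "password_assignment": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{6,}['\"]"),
}

def scrub_secrets(document: str) -> str:
    """Replace anything that looks like a credential with a placeholder
    before the document is added to a training corpus."""
    for name, pattern in SECRET_PATTERNS.items():
        document = pattern.sub(f"<REDACTED:{name}>", document)
    return document

print(scrub_secrets('aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2hunter2"'))
```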

In effect, we can imagine a whole new class of exploit whereby the attacker tricks an LLM into providing information or doing something it was explicitly trained not to do, just as a human is tricked by phishing. It is also conceivable, even likely, that one LLM will be pitted against another ‘target LLM’ to automate such exploits.

This approach becomes more likely as the capability gap narrows between open-source, “uncensored” models that can run on any machine and their larger closed-source, commercial, cloud-run incumbents.
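The second safeguard mentioned above, gating what the model is allowed to return, is exactly what these jailbreak-style exploits try to defeat. A minimal sketch of such an output gate follows; the blocklist and the generate parameter are illustrative assumptions, and real guardrails are considerably more sophisticated.

```python
import re

# Things the model should never relay, whatever the prompt. Illustrative only.
BLOCKLIST = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS-style access keys
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),   # private key material
    re.compile(r"(?i)password\s*[:=]\s*\S+"),              # credential assignments
]

def guarded_reply(prompt: str, generate) -> str:
    """Call the underlying model (passed in as `generate`), then refuse to
    relay anything that looks like a leaked credential."""
    reply = generate(prompt)
    if any(pattern.search(reply) for pattern in BLOCKLIST):
        return "I can't share that."
    return reply
```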

Brute force attacks

Generative AI has the ability to make brute force attacks less brutish and more forceful.

Password brute forcing is probably the most familiar type of exploit: it simply involves trying every combination until one opens the lock. Since passwords are a form of language, LLMs could be trained to produce letter and number combinations that are much more likely to be the target’s password, based on contextual input such as the username, the target’s language, location, age, or other identifiable details. This context, combined with the statistical methods of LLMs, would produce “smart forcing” and drastically reduce the time to exploit (enforce your 2FA, folks!).
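The defensive counterpart is to reject passwords that a context-aware guesser would reach quickly. Here is a minimal sketch under assumed user fields; libraries such as zxcvbn do this far more thoroughly via their user_inputs parameter.

```python
from datetime import date

def too_guessable(password: str, username: str, company: str, birth_year: int) -> bool:
    """Flag passwords built around facts an attacker can look up, however
    many digits or symbols are bolted on. Checks are illustrative only."""
    candidate = password.lower()
    context_tokens = [
        username.lower(),
        company.lower(),
        str(birth_year),
        str(date.today().year),
    ]
    return any(token and token in candidate for token in context_tokens)

print(too_guessable("Acme2024!", "jsmith", "Acme", 1990))                      # True
print(too_guessable("correct-horse-battery-staple", "jsmith", "Acme", 1990))   # False
```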

Elsewhere, adversaries will no doubt use generative AI to interpret attack surface configurations accurately and at great scale. Rather than attempt common exploits on every encountered host, attackers will focus on hosts that contain the right cocktail of patchable and non-patchable vulnerabilities, hitting defenders in soft spots they might not even know existed. This will drastically increase the importance of “security by default” and of Continuous Threat Exposure Management (CTEM) methods for defenders.
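The same prioritization logic works in reverse for defenders: score each exposed asset by the combination of findings it presents, not by any single vulnerability. The host data and weights below are invented purely for illustration.

```python
# Minimal sketch of exposure prioritization over a made-up external estate.
HOSTS = {
    "mail.example.com": {"expired_tls_cert", "open_admin_panel"},
    "shop.example.com": {"outdated_cms", "default_credentials", "open_admin_panel"},
    "vpn.example.com":  {"outdated_cms"},
}

# Combinations matter: an outdated CMS behind default credentials is far
# worse than either finding alone. Weights are arbitrary.
COMBO_WEIGHTS = {
    frozenset({"outdated_cms", "default_credentials"}): 10,
    frozenset({"expired_tls_cert", "open_admin_panel"}): 6,
    frozenset({"outdated_cms"}): 2,
}

def exposure_score(findings: set[str]) -> int:
    return sum(weight for combo, weight in COMBO_WEIGHTS.items() if combo <= findings)

# Fix the softest spots first.
for host, findings in sorted(HOSTS.items(), key=lambda kv: -exposure_score(kv[1])):
    print(f"{host}: {exposure_score(findings)}")
```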

The genie is out of the bottle

Whatever your position on the AI scale of doom, it is undeniable that the generative AI genie is out of the bottle. These techniques, built on decades of layered research, are now easily accessible via Udemy or YouTube, and the cost of the compute needed to train models is moving in only one direction: down.

While regulation has a role to play, particularly in the antitrust arena, it is unlikely to curb the bad guys. Indeed, all of the exploits described above are already illegal, and that has not stopped threat actors from doing what they do.

In cyber, where the gap between early and late adopters has always been wide, threat actors have embraced new technology at scale faster than defenders. We must ensure that regulation does not widen this gap further, and instead provides a framework for rapid and safe adoption.

The cyber defenses of the future will need to distinguish the true from the false at massive scale, and do so in line with human decision-makers and their processes. It is a daunting challenge for defenders facing adversaries that are often backed by states that do not share our principles on liberty or privacy. We must ensure the odds are stacked in our favor.

Check out other blogs from our ‘Generative AI at work’ series (Part 1 and Part 2) and keep an eye out for Part 4!

PUBLISHED BY

Nadim Lahoud

19 Jul. 2023
