In late January 2024, I was shocked to learn about the abusive use of generative AI as deepfake videos and images of Taylor Swift circulated online. One such photo had nearly 47 million views before it was taken down.
While the public was outraged, behind closed doors, CISOs and tech founders weighed how much damage generative AI could do to a company's data, reputation, and operations. Their verdict? It's the same old game. Bad actors will use generative AI to cause damage, while cybersecurity teams will use it to defend against attacks.
The best part of this is that the benefits of generative AI in cybersecurity outweigh the risks. I’ll share these benefits with you below, including some examples and insights from cybersecurity experts.
Table of Contents
How Can Generative AI Be Used in Cybersecurity?
- Copilot for security teams.
- Analyze vulnerabilities in code.
- Adaptive threat detection.
- Simulating zero days.
- Enhanced biometrics.
- Phishing detection and prevention.
1. Copilot for security teams.
I dug into the numbers and found that a cybersecurity team spends about 15 hours a week manually sorting over 22,000 alerts. Investigating and getting approvals to contain an attack can push this to 25-30 hours, leaving little time to address the root cause or patch vulnerabilities.
Cybersecurity tools help. However, security teams must recheck alerts to avoid false positives and negatives. I learned that this is where generative AI in cybersecurity shines.
When a cyber threat occurs, generative AI acts as a copilot, quickly scanning systems to identify how the attack happened and what data was compromised. It can even generate scripts to contain the attack when the team feels overwhelmed.
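To make that concrete, here's a minimal sketch of the copilot pattern using the OpenAI Python SDK (any capable LLM provider works the same way). The alert fields, model name, and prompt are my own illustration, not any vendor's API:

```python
# Minimal sketch of an LLM "copilot" triaging a security alert.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY env var;
# the alert structure and prompt are illustrative, not a vendor product.
import json
from openai import OpenAI

client = OpenAI()

alert = {
    "source": "EDR",
    "severity": "high",
    "host": "web-03",
    "detail": "powershell.exe spawned by winword.exe, outbound TLS to unknown IP",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model works here
    messages=[
        {"role": "system", "content": (
            "You are a SOC copilot. Given an alert, summarize the likely "
            "attack technique, list the data at risk, and propose containment steps."
        )},
        {"role": "user", "content": json.dumps(alert)},
    ],
)

print(response.choices[0].message.content)
```

A junior analyst can use output like this as a second opinion before escalating, which is exactly the approvals bottleneck Verschaeren describes below.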
In the Cyber Risk Leaders podcast, Ben Verschaeren, a cybersecurity specialist at Sophos, explained that junior analysts once relied heavily on senior staff for approvals and guidance. But now, generative AI offers coding support and technical guidance, which boosts their skills and proficiency.
Brian Murphy, CEO of ReliaQuest, shared a similar view on the DataCamp podcast. Before generative AI, security professionals often mastered a single tool or technology, usually the most popular one, which made it difficult to handle attacks launched through less common technologies. Now, AI-trained security tools help teams swiftly respond to threats, no matter the technology used.
2. Analyze vulnerabilities in code.
Code vulnerabilities that bad actors can exploit are inevitable due to human error, and generative AI has worsened the issue.
Verschaeren talked about developers using generative AI to write code. While it's fine for short, verifiable code, it’s risky when unchecked code gets integrated into a company’s systems.
Since these generative AI tools are backed by large language models (LLMs) trained on data from both good and bad actors, malicious users can seed them with false information and vulnerable code, hoping developers will use it without verifying it first. Most developers aren't security experts, after all.
In an episode of the Domain Tools podcast, Carl Froggett, the CIO of Deep Instinct, suggested we fight generative AI with generative AI. I agree! He recommended using generative AI to scan through code, help developers follow cybersecurity best practices, and catch vulnerabilities.
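Here's a minimal sketch of what that looks like in practice, again using an LLM API. The model name and prompt are illustrative, and the output should feed into human review and a static analyzer rather than replace them:

```python
# Sketch: asking an LLM to flag vulnerabilities in a code snippet before it
# is merged. Model and prompt are assumptions; treat the output as a hint,
# not a verdict.
from openai import OpenAI

client = OpenAI()

snippet = '''
def get_user(conn, user_id):
    return conn.execute(f"SELECT * FROM users WHERE id = {user_id}")
'''

review = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": (
            "You are a secure-code reviewer. Identify vulnerabilities, "
            "cite the CWE where possible, and suggest a safer rewrite."
        )},
        {"role": "user", "content": snippet},
    ],
)
print(review.choices[0].message.content)  # should flag the SQL injection
```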
For legacy companies, generative AI can dig into software libraries, policies, and APIs to identify vulnerabilities and suggest patches. I found NVIDIA research supporting this use case: researchers built a generative AI pipeline that analyzes legacy software containers and identifies vulnerabilities with high accuracy, up to 4x faster than human analysts.
3. Adaptive threat detection.
When malware hits a system, security teams create rules to block it based on known signatures. But hackers just tweak the malware to create a new variant that slips through the security tools, prompting security teams to create new signatures, and the cycle repeats.
With generative AI, security teams can generate multiple variants of the malware themselves and load them into their security tools. This helps the tools detect all versions of the malware and keep systems protected.
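As a harmless stand-in for real malware, here's a sketch of that variant-generation loop: mutate a benign test payload and check which mutations slip past a naive exact-match signature. Real teams would do this with actual samples inside an isolated lab:

```python
# Harmless illustration of "variant generation": mutate a benign test payload
# and count how many mutations evade an exact-match signature. The payload
# string is a made-up stand-in, not real malware.
import random
import re

BASE_PAYLOAD = "INVOKE-EVIL -EncodedCommand QQBCAEMA"   # stand-in sample
EXACT_SIGNATURE = re.compile(re.escape(BASE_PAYLOAD))    # naive known signature

def mutate(payload: str) -> str:
    """Apply a random trivial transformation an attacker might use."""
    choice = random.choice(["case", "pad", "escape"])
    if choice == "case":
        return "".join(c.swapcase() if random.random() < 0.3 else c for c in payload)
    if choice == "pad":
        return payload.replace(" ", "  ")   # extra whitespace
    return payload.replace("-", "`-")       # PowerShell-style escape character

variants = [mutate(BASE_PAYLOAD) for _ in range(10)]
missed = [v for v in variants if not EXACT_SIGNATURE.search(v)]
print(f"{len(missed)}/{len(variants)} variants evade the exact-match signature")
# Feeding the misses back into the detection rule is the adaptive loop this
# section describes; generative models simply automate the mutation step.
```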
In the Domain Tools podcast, Mark Wojtasiak, VP of Product at Vectra AI, mentioned that security experts can also use generative AI to simulate network traffic and generate malicious payloads that mimic sophisticated attacks. By pushing these payloads through their systems, security teams can stress-test their intrusion detection systems (IDS) and see how well they handle advanced threats.
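For the stress-testing side, here's a sketch that crafts synthetic traffic with Scapy and writes it to a pcap file an IDS like Suricata or Snort can replay offline. The payloads are crude placeholders; a generative model would produce far more varied ones:

```python
# Sketch: build a mixed pcap of benign and suspicious flows for offline IDS
# replay. IP addresses and payloads are illustrative placeholders.
import random
from scapy.all import IP, TCP, Raw, wrpcap

def benign_flow(i):
    return IP(src=f"10.0.0.{i}", dst="10.0.1.5") / TCP(dport=443) / Raw(b"GET / HTTP/1.1")

def suspicious_flow(i):
    # Mimics a SQL-injection probe; vary it to test signature robustness.
    payload = f"GET /login?user=admin'--{random.randint(0, 999)} HTTP/1.1".encode()
    return IP(src=f"10.0.0.{i}", dst="10.0.1.5") / TCP(dport=80) / Raw(payload)

packets = [benign_flow(i) for i in range(1, 50)]
packets += [suspicious_flow(i) for i in range(50, 60)]
random.shuffle(packets)
wrpcap("ids_stress_test.pcap", packets)  # replay with: suricata -r ids_stress_test.pcap
```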
4. Simulating zero days.
According to Google's Threat Analysis Group and Mandiant, zero-day attacks increased by 50% between 2022 and 2023. In 2023, ransomware attacks cost victims over $1 billion, with zero-day exploits serving as a common point of entry.
Zero-day attacks have always been a gold mine for hackers because security tools rely on known attack signatures to defend systems. But that has changed. Companies can now use generative AI to create synthetic data that simulates realistic zero-day scenarios, which allows them to improve their defenses.
A study by the National Crime Information Bureau (NCIB) suggests using generative adversarial networks (GANs), a subset of generative AI, to generate synthetic data mimicking zero-day attacks. This helps companies train their intrusion detection tools without relying on actual, potentially sensitive data.
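To illustrate the GAN approach, here's a minimal PyTorch skeleton that learns to generate synthetic attack-like feature vectors. The "real" data here is a random placeholder; in practice, you'd swap in labeled flow features (packet sizes, durations, port entropy) from your own telemetry or a public IDS dataset:

```python
# Minimal GAN skeleton for synthetic attack features, in the spirit of the
# GAN approach the study describes. Placeholder data; not a tuned model.
import torch
import torch.nn as nn

FEATURES, NOISE = 8, 16

generator = nn.Sequential(
    nn.Linear(NOISE, 32), nn.ReLU(),
    nn.Linear(32, FEATURES),
)
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 32), nn.LeakyReLU(0.2),
    nn.Linear(32, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_data = torch.randn(512, FEATURES)  # placeholder for real attack features

for step in range(1000):
    # Train discriminator: real samples -> 1, generated samples -> 0.
    fake = generator(torch.randn(64, NOISE)).detach()
    real = real_data[torch.randint(0, 512, (64,))]
    d_opt.zero_grad()
    d_loss = (criterion(discriminator(real), torch.ones(64, 1))
              + criterion(discriminator(fake), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Train generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = criterion(discriminator(generator(torch.randn(64, NOISE))),
                       torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

synthetic_attacks = generator(torch.randn(100, NOISE)).detach()
print(synthetic_attacks.shape)  # 100 synthetic samples to train an IDS on
```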
Another interesting method to combat zero days using generative AI is by creating smart honeypots. I spoke with Jaxon Parrott, the CEO of Presspool.ai, who explained that generative AI can clone digital systems to mimic real ones and lure hackers. By monitoring how attackers exploit vulnerabilities in these fake systems, security teams can strengthen their defenses and reduce the risk of real attacks.
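Here's a toy version of that honeypot idea: a fake service that presents a deliberately vulnerable-looking banner and logs everything attackers send. Production honeypots like Cowrie are far more elaborate, and the port and banner here are made up:

```python
# Toy honeypot: serve a fake, "vulnerable-looking" banner and log every probe.
# Run only in an isolated lab network; port and banner text are illustrative.
import datetime
import socket

BANNER = b"220 corp-ftp-01 FTP server (vsFTPd 2.3.4) ready.\r\n"

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2121))
    srv.listen(5)
    print("Honeypot listening on :2121")
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(BANNER)
            data = conn.recv(4096)
            # Every probe is intelligence about attacker tooling and tactics.
            with open("honeypot.log", "a") as log:
                log.write(f"{datetime.datetime.now()} {addr[0]} {data!r}\n")
```

Generative AI raises the realism ceiling here: instead of one static banner, it can clone entire systems convincingly enough to keep attackers engaged longer.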
5. Enhanced biometrics.
Between 2021 and 2023, 6,869 companies in the U.S. were compromised, with at least 1,800 breaches each year. 83% of those breaches were due to password compromises. In my opinion, this shows that eliminating passwords could significantly reduce data breaches.
Accenture was ahead of the curve, kicking off a passwordless move in 2019 in partnership with Microsoft, built on passwordless technologies like Windows Hello and FIDO2 tokens.
Merim Becirovic, Accenture’s Managing Director of Global IT, explained how they used generative AI to simulate realistic biometric scenarios, such as facial expressions and environmental conditions. This helped train and improve their systems to detect deepfakes and manipulation attempts.
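As a rough illustration of simulating those conditions, here's a sketch that uses classical image augmentation (torchvision) to mimic poor lighting, blur, and occlusion when training a biometric or anti-spoofing model. This stands in for the generative simulation Accenture describes, and the file path is illustrative:

```python
# Sketch: augment face images to simulate hard biometric conditions, so an
# anti-spoofing model sees difficult cases during training. The input path
# is a placeholder; a generative pipeline would synthesize whole scenarios.
from PIL import Image
from torchvision import transforms

simulate_conditions = transforms.Compose([
    transforms.ColorJitter(brightness=0.6, contrast=0.5),  # poor lighting
    transforms.GaussianBlur(kernel_size=5),                # low-quality camera
    transforms.RandomPerspective(distortion_scale=0.3),    # off-angle capture
    transforms.RandomErasing(p=0.5),                       # partial occlusion
])

face = transforms.ToTensor()(Image.open("enrollment_photo.jpg"))  # placeholder path
hard_sample = simulate_conditions(face)  # feed into detector training
```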
Five years later, Accenture has eliminated passwords from over 90% of its systems and reduced attacks by 60%.
6. Phishing detection and prevention.
Generative AI is often seen as an enhancer of phishing emails, and I get why. Before generative AI became popular, poor grammar was a telltale sign of phishing attempts. Now, with generative AI improving email quality, it's harder to spot them based on grammar alone.
But it’s not as bad as it’s painted to be. In February 2024, IBM security researchers ran an experiment comparing human-written phishing emails to those generated by AI. Surprisingly, AI-generated emails were still easy to detect despite their correct grammar.
Moreover, cybersecurity teams can use AI-generated phishing emails to teach employees what to watch for. Plus, by simulating enough phishing attempts, AI could identify patterns in sender behavior that can be flagged as indicators of compromise. This would allow security tools to flag such emails before employees fall for a phishing attempt.
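Here's a sketch of both halves of that idea: generating a clearly labeled training phishing email with an LLM, and scoring crude sender-behavior signals of the kind a tool could learn to flag. The prompt, model, header fields, and thresholds are all my own illustration:

```python
# Sketch: LLM-generated awareness-training email plus a crude
# indicators-of-compromise score for sender behavior. All values illustrative.
from openai import OpenAI

client = OpenAI()

training_email = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": (
        "Write a short, realistic *simulated* phishing email for an internal "
        "security-awareness exercise. Mark it clearly as a simulation."
    )}],
).choices[0].message.content

def sender_risk_score(headers: dict) -> int:
    """Each suspicious signal adds one point to the sender's risk score."""
    score = 0
    score += headers.get("from_domain") != headers.get("reply_to_domain")  # mismatch
    score += headers.get("first_seen_days", 999) < 7   # brand-new domain
    score += headers.get("spf_pass") is False          # failed SPF check
    return score

print(training_email)
print(sender_risk_score({"from_domain": "corp.com", "reply_to_domain": "c0rp.com",
                         "first_seen_days": 2, "spf_pass": False}))  # -> 3
```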
Also, security tools can generate automatic responses to suspected phishing emails to verify their authenticity before employees act on them. NVIDIA's generative AI spear phishing tool is an example. It uses generative AI to improve phishing email detection by 21% compared to traditional security tools.
Examples of Generative AI in Cybersecurity
Here are three cybersecurity tools that I find impressive — they use generative AI to help companies defend their digital infrastructure.
Vendict
No one likes responding to security questionnaires because they are repetitive and boring. Worse still, automating this process might produce generic responses that miss important nuances specific to your company's security situation.
Vendict fixes these challenges by using generative AI to automate the collection of unique insights for your security situation. It pulls from a company's knowledge base to fill out security questionnaires up to 50 times faster. This provides accurate and bias-free responses and saves teams from spending countless hours on repetitive tasks.
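Conceptually, that's a retrieval pattern. Here's a minimal sketch (not Vendict's actual implementation) that matches a questionnaire item to the closest approved answer in a knowledge base using TF-IDF similarity; an LLM could then adapt the retrieved answer to the question's exact wording:

```python
# Concept sketch of knowledge-base-driven questionnaire answering: retrieve
# the closest approved answer by text similarity. KB entries are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = {
    "Do you encrypt data at rest?": "Yes, AES-256 via our cloud KMS.",
    "Do you run annual penetration tests?": "Yes, by an external firm each Q4.",
    "Is MFA enforced for all employees?": "Yes, FIDO2 keys are mandatory.",
}

question = "Describe your encryption controls for stored data."
vectorizer = TfidfVectorizer().fit(list(knowledge_base) + [question])
kb_vecs = vectorizer.transform(list(knowledge_base))
q_vec = vectorizer.transform([question])

best = cosine_similarity(q_vec, kb_vecs).argmax()
print(list(knowledge_base.values())[best])  # closest approved answer to reuse
```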
What I like most about Vendict is that it syncs security and GRC (Governance, Risk, and Compliance) teams, allowing them to contribute or correct inaccurate responses without juggling the various communication channels that can distract the team.
For companies with multiple products/services, Vendict helps you develop a unique compliance library for each product. That way, it won't hallucinate while answering questionnaires and can tailor its responses to each product's features.
NVIDIA Morpheus
NVIDIA Morpheus puts the idea of fighting generative AI with generative AI into practice, tackling spear phishing head-on.
Morpheus uses NVIDIA NeMo's NLP models to analyze and classify emails quickly. Amazingly, it is trained on synthetic emails, yet it improves detection of AI-generated phishing emails by over 20% within 24 hours of integration.
I love how it adds an extra layer of protection by profiling user interactions to detect unusual or suspicious actions that may indicate a security threat. In my view, it’s a practical solution for quickly identifying and preventing sophisticated cyber threats.
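To make the synthetic-training idea concrete, here's a tiny concept sketch (not the Morpheus API): a lightweight scikit-learn classifier bootstrapped from generated emails. The four-line corpus is only there to show the shape of the approach:

```python
# Concept sketch: train a detector on synthetic phishing/benign emails.
# Real systems use thousands of generated samples and richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

synthetic_emails = [
    "Urgent: verify your payroll account within 24 hours",  # phishing-like
    "Your mailbox quota is full, click to restore access",  # phishing-like
    "Minutes from Tuesday's architecture review attached",  # benign
    "Reminder: team offsite RSVP closes Friday",            # benign
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(synthetic_emails, labels)
print(detector.predict(["Confirm your credentials now to avoid suspension"]))
```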
VirusTotal Code Insight
VirusTotal Code Insight uses generative AI to analyze code snippets and produce natural language summaries. Think of it as a ChatGPT dedicated to security code, which helps it hallucinate less than general-purpose generative AI models.
Code Insight is powered by the Sec-PaLM model from Google Cloud. It helps cybersecurity professionals identify potentially malicious files, especially where traditional antivirus engines may miss hidden threats.
According to founder Bernardo Quintero, what sets VirusTotal apart is its ability to “generate natural language summaries of codes from the point of view of an AI collaborator specialized in cybersecurity and malware.” In simple terms, it acts as a security consultant that explains to security teams what a piece of code is up to.
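Here's what that consultant behavior looks like in spirit, sketched with a generic LLM API rather than VirusTotal's Sec-PaLM-backed service. The snippet, model, and prompt are illustrative:

```python
# Sketch of a Code Insight-style summary: ask an LLM to explain what a
# suspicious snippet does in plain language. Mimics the concept only.
from openai import OpenAI

client = OpenAI()

suspicious = '''
import base64, os
exec(base64.b64decode(os.environ.get("PAYLOAD", "")))
'''

summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": (
            "Explain in plain language what this code does and whether it "
            "looks malicious, as a cybersecurity analyst would."
        )},
        {"role": "user", "content": suspicious},
    ],
)
print(summary.choices[0].message.content)
```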
Implementing Generative AI Company-Wide
While generative AI offers great advantages to the cybersecurity industry, it also has disadvantages that can lead to costly data breaches. To avoid this, I advise that companies be cautious when adopting generative AI.
One effective approach is implementing an AI policy, which sets rules and guidelines for its use. Your policy should specify who can use generative AI — whether all employees, only third-party partners, or a specific subset like the marketing or security teams. It should also outline the allowed generative AI tools, the AI security training employees receive during onboarding, and the types of queries users are allowed to pass to the designated generative AI tool.
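To show how a written policy can become an enforced gate, here's a minimal sketch that checks who may use which tool and which query topics are off-limits. The roles, tools, and blocked topics are examples to adapt to your own policy document:

```python
# Minimal sketch of AI-policy enforcement: allow-list tools per role and
# block sensitive query topics. All roles, tools, and topics are examples.
ALLOWED_TOOLS = {"security": {"copilot-sec"}, "marketing": {"chatgpt"}}
BLOCKED_TOPICS = ("customer pii", "source code", "credentials")

def policy_check(role: str, tool: str, query: str) -> bool:
    """Return True only if this role/tool/query combination is permitted."""
    if tool not in ALLOWED_TOOLS.get(role, set()):
        return False
    return not any(topic in query.lower() for topic in BLOCKED_TOPICS)

print(policy_check("marketing", "chatgpt", "Draft a blog intro"))           # True
print(policy_check("marketing", "chatgpt", "Summarize customer PII list"))  # False
```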