Generative AI has entered cybersecurity with full force, and like every powerful technology, it comes with its pros and cons.

On one side, attackers are already experimenting with AI to generate malware, craft phishing campaigns, and create deepfakes that erode trust. On the other hand, defenders are beginning to use AI to scale penetration testing, accelerate application security, and reduce the pain of compliance.

The stakes are high. A recent ForeScout Vedere Labs 2025 report showed zero-day exploits have risen 46% year over year — a clear signal that attackers are accelerating. At the same time, Gartner predicts that by 2028, 70% of enterprises will adopt AI for security operations.

The reality sits in between: AI is already changing penetration testing, application security, and compliance — but it’s not a replacement for human expertise. Instead, it’s a force multiplier, reshaping how quickly and effectively security teams can discover weaknesses, meet regulatory obligations, and prepare for adversaries that are also harnessing AI.

The Dual-Use Nature of Generative AI

Generative AI in cybersecurity is best understood as a dual-use technology — it amplifies both attack and defense capabilities.

GenAI for Attackers

AI lowers barriers by generating sophisticated phishing emails, fake personas, malicious code, and even automated exploit chains. Tools like CAI (Cognitive Autonomous Intelligence) demonstrate how autonomous agents can be tasked with scanning, exploiting, and pivoting through systems — blurring the line between proof-of-concept research and adversary capability. BlackMamba (an AI-generated polymorphic keylogger) and WormGPT (marketed on underground forums as “ChatGPT for cybercrime”) have already shown what’s possible.

GenAI for Defenders

AI provides scale, speed, and intelligence. Beyond SOC copilots, AI is being embedded directly into the software development lifecycle (SDLC) via AI security code reviewers and AI-powered vulnerability scanners. GitHub Copilot (with secure coding checks), CodiumAI, and Snyk Code AI catch issues earlier, reducing downstream remediation costs. Microsoft’s Security Copilot helps analysts triage alerts and accelerate investigations.

This duality is why many experts warn of an “AI arms race” between security teams and cybercriminals — where speed, automation, and adaptability may decide outcomes.

Offensive Security & Penetration Testing

Penetration testing has traditionally been time-intensive, relying on skilled specialists to probe for vulnerabilities in networks, applications, and infrastructure. AI is shifting the tempo.

Large language models and autonomous agents can now:

A striking proof point is XBOW, the autonomous AI pentester that recently climbed to #1 on HackerOne’s U.S. leaderboard. In controlled benchmarks, XBOW solved 88 out of 104 challenges in just 28 minutes — a task that took a seasoned human tester over 40 hours. In live programs, it has already submitted over a thousand vulnerability reports, including a zero-day in Palo Alto’s GlobalProtect VPN.

Other examples include:

Yet despite its speed and precision, tools like XBOW still require human oversight. Automated results must be validated, prioritized, and — critically — mapped to regulatory and business risk. Without that layer, organizations risk drowning in noise or overlooking vulnerabilities that matter most for compliance and trust.

This is the shape of penetration testing to come: faster, AI-augmented discovery coupled with expert judgment to make results meaningful for businesses under pressure from regulators and partners.

How Can Generative AI Be Used in Application Security

Application security (AppSec) is another area seeing rapid AI adoption. The software supply chain has grown too vast and complex for purely manual testing, and generative AI is stepping in as a copilot.

Key applications include:

The promise is efficiency — but the challenge is trust. An AI-generated patch may fix one issue while creating another. That’s why AI is best deployed as an accelerator in AppSec, with humans validating its findings and ensuring fixes align with compliance frameworks like ISO 27001, HIPAA, or FDA MDR/IVDR for medical software.

How Can Generative AI Be Used in Compliance & Governance

Beyond pentesting and AppSec, AI is finding a role in the often overlooked world of compliance. For companies in healthtech, biotech, or fintech, compliance can make or break growth — and AI is beginning to reduce the heavy lift.

Emerging applications include:

This is particularly powerful in genomics or diagnostics, where startups face heavy regulatory burden and need to show both security and compliance maturity to win partnerships or funding.

Industry Examples

The use of AI in cybersecurity isn’t hypothetical — it’s playing out across industries today:

Emerging Risks of Generative AI in Cybersecurity

With opportunity comes risk. AI introduces new attack vectors and amplifies existing ones:

Best Practice Strategy for Secure AI Adoption

To adopt AI in pentesting, AppSec, or compliance responsibly, organizations should:

Conclusion & Outlook

So, how can generative AI be used in cybersecurity? It won’t replace penetration testers, application security engineers, or compliance leads. But it will accelerate their work, expand their coverage, and reshape how vulnerabilities are found and reported.

The winners won’t be those who adopt AI blindly, nor those who ignore it. They’ll be the organizations that harness AI as a trusted copilot — combining speed with human judgment, technical depth with regulatory alignment, and automation with accountability.

By 2030, AI-driven pentesting and compliance automation may become table stakes. The deciding factor will not be whether companies use AI, but how responsibly, strategically, and securely they use it — especially in regulated sectors where compliance and trust are non-negotiable.

Further Reading & References

  1. ForeScout Vedere Labs H1 2025 Threat Review

  2. Gartner – The Future of AI in Cybersecurity

  3. CAI – Cognitive Autonomous Intelligence

  4. BlackMamba AI Keylogger

  5. WormGPT Underground Tool

  6. GitHub Copilot

  7. CodiumAI

  8. Snyk Code AI

  9. Microsoft Security Copilot

  10. XBOW Autonomous Pentester

  11. Palo Alto GlobalProtect VPN Vulnerability

  12. AutoSploit

  13. AI in Bug Bounties – PortSwigger

  14. MITRE ATLAS

  15. OWASP Top 10 for LLM Apps

  16. ISO 27001 Standard

  17. HIPAA Security Rule

  18. FDA Medical Device Regulation

  19. FDA SPDF Guidance

  20. Vendict

  21. Scrut

  22. Thoropass

  23. IBM Security AI

  24. NVIDIA AI for Security

  25. Accenture Security

  26. DARPA AIxCC

  27. North Korean APT Attacks – Mandiant

  28. WSJ – Deepfake CEO Fraud Case

  29. FT – Deepfake Audio Scams

  30. GDPR Text

  31. Samsung ChatGPT Data Leak – The Register

  32. Microsoft – AI Red Teaming

  33. NIST AI Risk Management Framework

  34. ENISA AI Security Guidelines