The most striking thing about Anthropic's Project Glasswing announcement this week was not that Claude Mythos can find zero-day vulnerabilities at scale. It was why Anthropic launched the project in the first place.

Anthropic did not build Claude Mythos to hack software. It just turned out to be shockingly good at it during testing. The company's response was to build a defensive cybersecurity program specifically to deploy the model's capabilities against the same class of vulnerabilities it could exploit - before others could use it offensively. That logic captures the central tension in AI and cybersecurity in 2026: the same capability that creates the threat is also the most powerful tool to defend against it.

The Scale of What Has Changed

For decades, cybersecurity operated on a relatively stable model. Humans write code, humans make mistakes, attackers find mistakes, defenders patch them. The game was slow enough for human-speed responses.

Generative AI has changed the tempo. "The level of opening that this provides attackers is dramatic," Charles Harry, a research professor at the University of Maryland who leads the Center for Governance of Technology and Systems, told Yahoo Finance. AI can automate reconnaissance, accelerate vulnerability discovery, and generate code to exploit findings - compressing timelines that used to take weeks into hours.

The offensive use is already documented. In January, a Russian-speaking attacker used multiple AI tools, including Claude, to compromise more than 600 devices across 55 countries. Earlier, a Chinese state-sponsored group used Claude Code's agentic capabilities to infiltrate roughly 30 organizations, including tech companies, financial institutions, and government agencies. And a security researcher this month discovered malware that had been "vibe coded" - created by an attacker using AI coding tools - and embedded in an open-source project connected to an AI training company.

AI Also Narrows the Skills Gap on Defense

The same capabilities that accelerate attacks also accelerate defense. CrowdStrike, Palo Alto Networks, and Google's Mandiant unit are deploying AI to detect attacks, triage alerts, and automate incident response at machine speed. Analysts predict that by the end of 2026, 30% or more of Security Operations Center workflows at large enterprises will be executed by AI agents rather than human analysts.

"Somebody who's a junior today has capabilities like a very experienced cyber defender," CrowdStrike's George Sentonas told Yahoo Finance. AI gives less experienced security staff access to the same pattern recognition and response playbooks that previously required years of practice.

CrowdStrike CEO George Kurtz has been more colorful about the risks than the benefits. "The challenge that you have with some of these AI agents," he said, "it's like giving full access to a drunken intern. Who knows what they're going to do?" Organizations deploying AI agents without proper governance are creating new attack surfaces faster than they are securing them.

Who Is Winning

The honest answer, according to most experts, is that attackers are currently ahead. They iterate faster, face fewer constraints, and have financial incentives to find one way in rather than defend every surface. As Andrew Lohn, a researcher at Georgetown's Center for Security and Emerging Technology, put it: "As more code gets written by AI, and more of the code gets inspected or certified by AI, it raises more questions about where the vulnerabilities are." More AI-generated code means more potential vulnerabilities, even if AI also improves detection.

For business leaders, the practical implication is that AI cybersecurity is not a product to buy but a posture to maintain. Organizations that deployed AI tools aggressively in 2024 and 2025 without governance frameworks are now sitting on attack surfaces their security teams cannot fully see. The defenders who close that gap deliberately, starting now, will be better positioned for what is already a structural shift in how attacks are executed.
