
Microsoft published a threat intelligence report March 6 warning that nation-state hackers and cybercriminals are deploying agentic AI to automate entire cyberattack workflows, fundamentally changing the economics and speed of offensive operations.
The report, titled "AI as Tradecraft: How Threat Actors Operationalize AI," documents how attackers use AI agents—autonomous systems that plan, execute, and iterate without constant human prompting—to compress attack timelines from weeks to hours while lowering the skill floor required to conduct sophisticated campaigns.
North Korean Groups Lead Agentic AI Adoption
Microsoft Threat Intelligence identified North Korean state-sponsored groups Coral Sleet and Jasper Sleet as the most advanced adopters of agentic AI workflows. Coral Sleet demonstrated "rapid capability growth driven by AI-assisted iterative development," using AI coding tools to generate, refine, and redeploy malware components in continuous cycles.
The group deployed fully AI-enabled workflows spanning end-to-end attack operations: creating fake company websites, provisioning remote infrastructure, and testing payloads—all orchestrated through natural language commands to AI agents. Microsoft researchers found code characteristics consistent with AI-assisted creation, including emojis as visual markers within code paths and conversational inline comments describing execution states.
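The presence of such markers suggests one practical triage heuristic. The sketch below is a rough illustration rather than anything from Microsoft's tooling: it scans a source tree for emoji characters inside code and for conversational inline comments. The regex ranges, keyword list, and scoring threshold are all assumptions chosen for the example.

```python
import re
import sys
from pathlib import Path

# Emoji blocks commonly used as visual markers in AI-generated code
# (coverage is approximate; ranges chosen for illustration).
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

# Conversational phrases in comments describing execution state.
# The keyword list is an illustrative assumption, not Microsoft's logic.
CHATTY_COMMENT_RE = re.compile(
    r"#\s*(now |let's |here we |okay|great|success)", re.IGNORECASE
)

def score_file(path: Path) -> int:
    """Return a crude suspicion score for AI-assisted authorship markers."""
    score = 0
    for line in path.read_text(encoding="utf-8", errors="ignore").splitlines():
        if EMOJI_RE.search(line):
            score += 2   # emojis embedded in code paths weigh more
        if CHATTY_COMMENT_RE.search(line):
            score += 1   # conversational inline comments
    return score

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for p in root.rglob("*.py"):
        if (s := score_file(p)) >= 3:   # arbitrary review threshold
            print(f"{p}: score {s}, review for AI-assisted markers")
```

A heuristic like this produces false positives on ordinary human-written code, so it is a triage signal for review queues, not a verdict.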
Coral Sleet also leveraged jailbreaking techniques to bypass large language model safeguards, generating malicious code that commercial AI platforms are designed to refuse. This let the group accelerate its operational timelines, producing exploit code without relying on in-house programming expertise.
What Agentic AI Enables for Attackers
Unlike generative AI, which responds to single prompts, agentic AI pursues objectives over time through multi-step planning, tool invocation, and adaptation; a minimal sketch of that loop follows the list below. For threat actors, this automation fundamentally changes four critical phases of cyberattacks:
Reconnaissance becomes continuous and automatic rather than a discrete pre-attack phase. AI agents scan network blocks, identify vulnerabilities, and catalog targets at scale without tying up human operators.
Infrastructure provisioning that previously required skilled operators configuring command-and-control servers, reverse proxies, and tunneling setups can now be generated, tested, and managed through development platforms with built-in AI troubleshooting.
Social engineering workflows use AI to fabricate credible resumes, professional profiles, and supporting documentation, enabling threat actors to pass as legitimate remote workers and maintain long-term access to corporate environments.
Malware development cycles compress dramatically as attackers use LLMs to draft, reimplement, and debug malicious components. Microsoft observed early experimentation with runtime-adaptive malware that calls AI libraries during execution to modify behavior dynamically.
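To make the distinction concrete, here is a minimal, deliberately toy sketch of the plan-execute-adapt loop that separates agentic systems from single-prompt ones. The `llm_plan` and `run_tool` functions are hypothetical stand-ins (a scripted plan and a tool that fails once), not any real platform's API; the point is that a blocked step is fed back into planning and retried rather than ending the run.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    objective: str
    history: list = field(default_factory=list)  # (action, ok, output) triples

def llm_plan(state: AgentState) -> str:
    """Stand-in for a language-model planning call. A real agent would send
    the objective and history to an LLM; this toy walks a fixed plan and
    re-issues any step that has not yet succeeded."""
    completed = {action for action, ok, _ in state.history if ok}
    for step in ("recon", "provision", "deploy"):
        if step not in completed:
            return step
    return "DONE"

_attempts: dict[str, int] = {}

def run_tool(action: str) -> tuple[bool, str]:
    """Stand-in tool dispatcher; 'provision' fails on its first attempt to
    show autonomous retry when a step is blocked."""
    _attempts[action] = _attempts.get(action, 0) + 1
    if action == "provision" and _attempts[action] == 1:
        return False, "blocked by policy, retry with new settings"
    return True, f"{action} completed"

def agent_loop(objective: str, max_steps: int = 10) -> AgentState:
    state = AgentState(objective)
    for _ in range(max_steps):
        action = llm_plan(state)        # multi-step planning
        if action == "DONE":
            break
        ok, output = run_tool(action)   # tool invocation
        state.history.append((action, ok, output))
        # Adaptation: failures land in history, so the next planning call
        # retries or reroutes instead of halting the whole operation.
    return state

if __name__ == "__main__":
    for record in agent_loop("stand up test infrastructure").history:
        print(record)
```

In a single-prompt generative workflow, the failed "provision" step would end the exchange; here it is retried automatically, which is the property Microsoft flags as compressing attack timelines.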
100x Speed Increase Changes Defender Calculus
The Microsoft report emphasizes that agentic AI operates at speeds human defenders cannot match through traditional detection approaches. Attacks that previously required days of manual work now execute in hours with autonomous retry and adaptation when blocked.
"The industrialization of these workflows changes the economics of attack," the report states. "Tasks that were formerly time-intensive and error-prone become repeatable and reliable." This means defenders must shift from detection-based strategies to enforcement-based architectures that prevent entire categories of attack by design rather than attempting to catch them after the fact.
Microsoft noted that while large-scale agentic AI use by threat actors remains limited due to reliability constraints, the trajectory is clear. The company said it published the findings to push the industry toward infrastructure-grade controls: identity verification, least privilege access, guarded tool invocations, and full-stack observability for AI systems.
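What an enforcement-based control means in practice can be sketched in a few lines. The guard below sits between an agent and its tools, checking identity verification and a least-privilege allowlist before every invocation and logging each decision. The class, field, and policy names are illustrative assumptions, not a Microsoft product API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class AgentIdentity:
    name: str
    verified: bool                 # identity-verification result
    allowed_tools: frozenset[str]  # least-privilege allowlist

class ToolGuard:
    """Enforcement point between an agent and its tools: every invocation is
    checked and logged, so disallowed actions are blocked by design rather
    than detected after the fact."""

    def __init__(self) -> None:
        self.audit_log: list[tuple[str, str, str]] = []  # observability hook

    def invoke(self, identity: AgentIdentity, tool: str,
               run: Callable[[], str]) -> str:
        if not identity.verified:
            decision = "deny:unverified-identity"
        elif tool not in identity.allowed_tools:
            decision = "deny:not-allowlisted"
        else:
            decision = "allow"
        self.audit_log.append((identity.name, tool, decision))
        if decision != "allow":
            raise PermissionError(f"{tool}: {decision}")
        return run()

if __name__ == "__main__":
    guard = ToolGuard()
    agent = AgentIdentity("build-agent", verified=True,
                          allowed_tools=frozenset({"read_file"}))
    print(guard.invoke(agent, "read_file", lambda: "file contents"))
    try:
        guard.invoke(agent, "provision_server", lambda: "vm-1234")
    except PermissionError as err:
        print("blocked:", err)
    print(guard.audit_log)
```

The design point is that denial happens before the tool runs and every decision is auditable, which is the shift from detection to enforcement the report calls for.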
The company's March 6 briefing and follow-up interview commentary on March 8 conclude that organizations treating agentic AI as a chatbot feature rather than critical infrastructure fundamentally misunderstand the threat landscape reshaping cybersecurity in 2026.




