In September 2025, Anthropic disclosed that a state-sponsored threat actor used an AI coding agent to execute an autonomous cyber espionage campaign against 30 global targets.
The AI handled 80-90% of tactical operations on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed.
This incident is worrying, but there's a scenario that should concern security teams even more: an attacker who doesn't need to run through the kill chain at all, because they've compromised an AI agent that already lives inside your environment.
One that already has the access, the permissions, and a legitimate reason to move across your systems every day.
The traditional cyber kill chain assumes attackers have to earn every inch of access.
It's a model developed by Lockheed Martin in 2011 to describe how adversaries move from initial compromise to their ultimate objective, and it's shaped how security teams think about detection ever since.
The logic is simple: attackers need to complete a sequence of steps, and defenders can interrupt the chain at any point.