Threat actors are using AI to add speed and scale to their hacking toolkits, setting records for attack speeds that increasingly outpace security teams, according to a report released Tuesday by CrowdStrike.
The average e-crime breakout time fell to 29 minutes in 2025, making adversaries 65% faster than the prior year, according to the report. The fastest observed breakout in 2025 took just 27 seconds, down from 51 seconds the prior year.
Researchers define breakout time as the period between initial intrusion and the moment an adversary is able to move laterally to another system. In one case, hackers exfiltrated data within four minutes of gaining initial access.
CrowdStrike researchers see the shrinking breakout times as placing additional pressure on security teams to detect and respond to attacks quickly. Adam Meyers, head of counter adversary operations at CrowdStrike, compared the role of network defenders to security guards in a building lobby.
“If that threat actor gets past the guard and they get into the elevator, now they have to go floor to floor and door to door to figure out every place that adversary went,” Meyers said during a conference call. “What did they touch? What did they get into?”
Threat groups are also abusing legitimate AI tools as part of their attacks. About 90 organizations were affected by hackers dropping malicious prompts into these tools to steal credentials or cryptocurrency.
Nation-state and criminal groups increased their use of AI by about 90%, according to the report. For example, state-linked threat group Fancy Bear used AI-enabled malware called LameHug to automate document collection and reconnaissance activity.
A cybercrime actor tracked as Punk Spider used AI-generated scripts to erase forensic evidence and accelerate credential dumping. Famous Chollima, a North Korea-linked threat actor, used AI-generated personas for insider attacks.
The report confirms growing fears about threat groups adopting AI tools to scale up attacks. In November, Anthropic reported that a China-linked adversary had abused its AI-based coding tool in a global espionage campaign that hit 30 organizations.