Dive Brief:
- Government-backed hackers are increasingly using artificial intelligence to make their attacks faster and more effective, CrowdStrike said in a report published on Monday.
- AI is helping cyber threat actors conduct reconnaissance, assess which vulnerabilities are worth exploiting and produce phishing messages, the security firm said in its annual threat hunting report.
- Cybercriminals are also using AI to “automate tasks and improve their tools,” according to the report.
Dive Insight:
As businesses race to incorporate AI into their workflows, hackers have also found the technology useful for understanding their targets and bypassing the social and technical barriers that stymied their past attacks.
The Iran-linked hacking team Charming Kitten, for example, “likely” used AI to generate messages as part of a 2024 phishing campaign against U.S. and European organizations, CrowdStrike said. Another group, which CrowdStrike calls “Reconnaissance Spider,” almost certainly used AI to translate one of its phishing lures into Ukrainian when it recycled old messages for a new campaign. The attackers forgot to remove the AI model’s boilerplate prompt-response sentence from the text they copied.
AI is also helping the North Korea-linked hacker team “Famous Chollima” (also tracked as UNC5267) sustain “an exceptionally high operational tempo” of more than 320 intrusions in a year, the report said. The group is known for masterminding North Korea’s remote IT-worker fraud schemes, which funnel stolen money to Pyongyang and sometimes lead to the theft of victim businesses’ confidential data.
The hackers have been able to “sustain this pace by interweaving GenAI-powered tools that automate and optimize workflows at every stage of the hiring and employment process,” CrowdStrike said. AI has helped the hackers draft résumés, manage job applications and conceal their identities during video interviews, researchers found.
AI systems themselves are also a top target for hackers, as companies scrambling to adopt the technology often neglect to secure it. “Threat actors are using organizations’ AI tools as initial access vectors to execute diverse post-exploitation operations,” CrowdStrike said, citing the April exploitation of a vulnerability in Langflow’s AI workflow development tool, which let hackers burrow into networks, commandeer user accounts and deploy malware.
“As organizations continue adopting AI tools,” CrowdStrike said, “the attack surface will continue expanding, and trusted AI tools will emerge as the next insider threat.”