The FBI is warning about a threat campaign in which malicious actors are impersonating senior U.S. officials via text messages and AI-generated voice messages.
The messages have been sent to current and former federal and state officials and others who may be contacts of those individuals, the bureau said in an alert released Thursday.
The messages are designed to establish a rapport with individuals who might then turn over access to a personal account, according to the alert. These social engineering techniques could be used to reach additional contacts and gain access to additional information or funds.
The newly announced operation is the latest social engineering campaign to use techniques known as smishing or vishing to gain access to victims’ accounts. Smishing is the malicious use of Short Message Service text messages, while vishing is the malicious use of voice messages or calls, sometimes created with AI-powered voice cloning technology.
The tactics are similar to spear-phishing, in which hackers send emails customized to their targets in order to trick them into clicking malicious links.
State-linked threat groups and financially motivated hackers have increasingly used smishing and vishing to trick people into responding to texts or answering phone calls. Victims are often misdirected to hacker-controlled sites that collect passwords and other account credentials or are duped into thinking they’re talking to trusted contacts or public figures delivering vital information.
The use of AI-based voice cloning surged by 442% between the first half of 2024 and the second half of the year, according to a report by CrowdStrike.
The weaponization of voice cloning has been ongoing for at least five years, and the capabilities existed before then, according to Leah Siskind, director of impact and an AI research fellow at the Foundation for Defense of Democracies.
“Criminals can use it to social engineer a situation, usually for financial gain,” Siskind told Cybersecurity Dive via email. “An example: malicious actors clone a boss’s voice and use [it] to request that the CFO pay off an unexpected invoice.”
Siskind said she is aware of such cases taking place as early as 2018.
It is unclear whether the federal officials being impersonated in the new campaign have been compromised on their personal or government devices. But Aaron Rose, a security architect at Check Point Software Technologies, said publicly available audio, like a speech, can be used to create convincing voice clones.
“Threat actors are now using AI voice cloning tools to create realistic impersonations of public figures,” Rose said via email. “These tools can replicate someone’s voice with surprising accuracy after analyzing as little as a few minutes of audio.”
Microsoft in 2024 warned of a threat actor tracked as Storm-1811 using Microsoft Teams to impersonate IT help desk workers. The hackers convinced users to grant access to their devices through Quick Assist.
Scattered Spider and AlphV used similar vishing techniques during the 2023 attacks against MGM Resorts.
Mandiant in 2023 conducted red-team exercises in which its operators used AI-based voice spoofing to gain access to a client’s internal network. The red team trained an AI model on a natural voice sample and posed as a member of the client’s security team.
Using the spoofed voice, the red team reached an employee who reported to the security worker being impersonated. Mandiant was eventually able to deploy a malicious payload onto one of the client’s computers, bypassing Microsoft Edge and Windows Defender SmartScreen.