The vast majority of U.S. companies, 96%, were targeted with at least one payment fraud attempt in the past 12 months, according to automated fraud prevention services provider Trustpair, which surveyed more than 260 senior finance and treasury leaders. That figure marks a 71% increase from the prior year as criminals stepped up their tactics.
To dupe organizations, half of respondents said fraudsters primarily used text messages or fake websites, according to a report on the findings, released Tuesday. CEO and CFO impersonations were seen in 44% of attempts, while hacking and business email compromise were each seen in 31% of attempts.
“Our research shows fraudsters are becoming increasingly more sophisticated in their tactics and their reach is expanding,” Trustpair CEO Baptiste Collot said in a press release.
Total potential losses from cyberattacks and cyber fraud rose 48% in 2022 to $10.2 billion from $6.9 billion in 2021, according to the FBI. The FBI’s Internet Crime Complaint Center received 21,832 complaints involving business email compromise scams alone, with adjusted losses totaling more than $2.7 billion.
Such attacks have accelerated as generative AI tools like ChatGPT have made it much easier for scammers to create “close-to-perfect” texts, emails, phishing websites, and deep-fake voices at scale, according to Trustpair, which is based in Paris with a U.S. headquarters in New York.
“ChatGPT-generated text messages, hacked websites, and deep-fake phone calls are now the norm as fraudsters use cutting-edge technology and AI to move faster and better than ever before,” the report said.
With a small sample of audio, cybercriminals can clone the voice of nearly anyone and send bogus messages by voicemail or voice messaging texts, according to a 2023 report from cybersecurity firm McAfee.
“The aim, most often, is to trick people out of hundreds, if not thousands, of dollars,” the report said. Of 7,000 people surveyed by McAfee, 1 in 4 said they had experienced an AI voice-cloning scam or knew someone who had.
More than two-thirds of respondents said they weren’t confident they could tell the difference between a cloned voice and the real thing.
AI can also be used by criminals to review large volumes of data for the purpose of identifying potential targets and tailoring scam content, according to a report released last December by PwC. “There is no hard evidence that this is currently happening, but there was a belief amongst some of those that we spoke to that this risk will increase in prevalence over time,” the report said.
Of the companies that were targeted by fraud attempts in the past year, most (90%) were hit with at least one successful attack, according to the Trustpair study. For 1 in 4 companies, the average financial loss from successful fraud attacks exceeded $5 million.
Financial loss isn’t the only potential risk of such attacks, the report said. The possibility of reputational damage with customers or investors was a concern for half of finance and treasury leaders.
Although payment fraud is an escalating threat, many companies aren’t adequately prepared to face it, even though more than half of respondents reported an increase in anti-fraud technology spending in the last six to 12 months, according to Trustpair.
“For budget and prioritization reasons, as well as a lack of awareness about market solutions, companies aren’t shifting to automation quickly enough and are still lagging behind fraudsters,” the report said.