Dive Brief:
- Nearly two-thirds of senior IT executives have clicked on phishing links, and 17% of them didn’t report doing so, the security firm Arctic Wolf said in a report published on Wednesday.
- A fear of punishment, or even termination, could be driving that reluctance to report, Arctic Wolf said.
- Nearly 10% of IT leaders responding to Arctic Wolf’s survey said they’d clicked on more than one phishing link and hadn’t reported them.
Dive Insight:
Arctic Wolf’s findings about IT leaders’ encounters with phishing messages — and their occasional reluctance to report those encounters — are particularly concerning given the report’s other findings about phishing attacks.
Nearly 70% of IT leaders have been targeted in cyberattacks, the report found, with 39% reporting phishing, 35% reporting malware and 31% reporting social engineering. And even as many IT leaders clicked phishing links, more than three-quarters said they were confident that “their organization wouldn’t fall for a phishing attack.”
The report from Arctic Wolf, which sells endpoint security and managed detection and response software, also contains information about the prevalence of data breaches worldwide. Australia and New Zealand saw the biggest increase in breaches between 2024 and 2025, with 78% of organizations there reporting intrusions this year compared with 56% last year. The share of U.S. organizations reporting breaches stayed flat, while breaches declined somewhat in Nordic countries and increased slightly in Canada.
The survey of 1,700 IT leaders and lower-level workers also asked respondents about their organizations’ AI use and policies. Sixty percent of IT leaders said they had shared confidential information with an AI system such as ChatGPT, an even higher proportion than the 41% of lower-level employees who reported having done so. And while 57% of lower-level workers said their organizations had a generative AI use policy, 43% said they weren’t sure or didn’t think their organizations did.
“This gap shows a lack of communication, as well as awareness training, around the risks of AI tool use,” Arctic Wolf researchers wrote. “Organizations need to ensure their policies are communicated clearly and enforced and offer training to help users understand the risks AI technology can pose to their data and network at large.”
Nearly 60% of organizations said they were worried about AI tools leaking sensitive data, while roughly half reported concerns about the abuse of those tools.