Dive Brief:
- Half of all organizations have been “negatively impacted” by security vulnerabilities in their AI systems, according to recent data from EY.
- Only 14% of CEOs believe their AI systems adequately protect sensitive data.
- AI’s new risks are compounding the difficulty of defending networks that already rely on a patchwork of cybersecurity defenses; organizations use an average of 47 security tools, EY found.
Dive Insight:
EY’s new report pulls together a variety of insights about AI, from its role in the attack landscape to its integration into corporate environments. The consulting firm echoed other experts in warning that AI-powered automation is making it easier for hackers to conduct potentially costly intrusions.
“AI lowers the bar required for cybercriminals to carry out sophisticated attacks,” Rick Hemsley, cybersecurity leader for EY in the U.K. and Ireland, said in the report. “Cyberattacking skills that used to take time and experience to develop are now more easily accessible, for free, for a greater number of cybercriminals than ever before.”
Social engineering has benefited immensely from AI. EY noted recent CrowdStrike data showing that voice phishing, or vishing, attacks skyrocketed 442% in the second half of 2024. Cybercriminals’ breakout time — the measure of how long it takes intruders to begin moving laterally after gaining initial access — dropped from roughly an hour in 2023 to 48 minutes in 2024, according to CrowdStrike data. The security firm ReliaQuest recently found that it had dropped to just 18 minutes in the middle of 2025.
These figures should alarm defenders, EY said: “Accelerating breakout times are dangerous. When attackers become established in a network, they can gain deeper control and are harder to extract.”
With AI models introducing new risks into companies’ networks, organizations should focus on training their employees to avoid costly mistakes, EY said. The company recently found that 68% of organizations let employees develop or deploy AI agents without high-level approval, and only 60% of organizations issue guidance for that work.
Companies should also take steps to protect the integrity of their data, EY said, given the importance of that data to both traditional business functions and AI model training. The report noted multiple AI-related data risks, including models leaking sensitive information and companies accidentally letting models train on personally identifiable information.
Other recommendations in EY’s report include maintaining the integrity of the AI tool supply chain, embedding security considerations into every stage of the AI development process and redesigning threat-detection programs to spot and block potential abuses of AI tools more quickly.
CISOs should focus their security investments on “clear value-driving areas,” EY said.