Dive Brief:
- Cybersecurity is one of the leading risks influencing corporate executives’ decisions about AI adoption, the consulting firm KPMG said in a quarterly AI pulse survey released on Tuesday.
- Three-quarters of senior leaders at large corporations told KPMG that they were worried about the cybersecurity and privacy risks associated with AI tools, according to the report.
- The survey also asked questions about governance approaches and agentic AI, offering a window into how businesses around the world are wrestling with new security challenges.
Dive Insight:
The widespread apprehension about AI’s cybersecurity implications captured in the KPMG survey stands at odds with the business community’s rapid adoption of the technology. But the survey also found that organizations became more comfortable with risk management as their AI programs matured.
“Among organizations still experimenting with AI, just 20 percent feel confident managing AI-related risks,” KPMG said in a press release about its report. “That confidence rises sharply to 49 percent among AI leaders, indicating that governance frameworks strengthen as AI becomes embedded into real-world operations.”
Even organizations with mature programs face challenges, however. Some 44% of survey respondents identified cybersecurity and employee misuse as their most serious problems. That figure represented a notable increase from the fourth quarter of 2025, when one-third of respondents ranked those problems as their biggest.
Cybersecurity risks also represented a financial challenge, with 58% of respondents saying those risks made it difficult for them to demonstrate the return on their AI investments.
On the agentic AI front, KPMG found significant interest but also persistent wariness. More than half of organizations are formally deploying AI agents, while another 30% are testing them, according to the report. And 43% of businesses are embedding security controls into agents, “along with clear procedures for monitoring and evaluation.” Similarly, 43% of respondents said they had demarcated certain “high-risk use cases” in which they would not let agents act autonomously.
Nearly 60% of businesses said they planned to take a human-in-the-loop approach to managing their AI agents, in which human employees validate each of the agents’ outputs.
“AI agents deliver the most value when people remain firmly in the lead, setting intent, exercising judgment and retaining accountability,” KPMG said in its report.
As organizations plan their AI strategies, cybersecurity is at the forefront of their considerations, KPMG found. According to the survey, 91% of business leaders said “data security, privacy, and risk concerns” collectively were the top factor influencing their AI strategies over the next six months.
“AI’s potential value is no longer in question,” KPMG said in the report. “However, realizing that value depends on how effectively and securely organizations can reengineer work at enterprise scale.”