Dive Brief:
- Companies using AI to write code are creating serious security risks that not all organizations feel prepared to handle, according to a report released Wednesday by the security testing firm ProjectDiscovery.
- Security personnel want audit trails and access limitations before they integrate AI into their processes, ProjectDiscovery found. “They are not opposed to the technology, but they need it to earn its place,” the report said.
- The report highlights one of the most fraught aspects of the AI revolution in the corporate world: the tension between AI-assisted coders and the people responsible for protecting their work.
Dive Insight:
“A deluge of AI-generated code is hitting security teams, and the wave is building faster than most organizations can absorb it,” ProjectDiscovery said in its report. “Engineering teams are shipping at an unprecedented speed, and security teams are standing in the path of that rising tide.”
Only 38% of cybersecurity practitioners said they are keeping up well with the increasing volume of code they have to review because of AI, and nearly 60% said the task is getting harder, the report found. Security personnel at mid-sized companies felt this pressure more than their counterparts at large firms, perhaps reflecting the greater resources larger companies can devote to the work.
The report is based on a survey of 200 cybersecurity professionals at mid-size to large enterprises in North America and Western Europe. Nearly half of respondents work in security architecture, ProjectDiscovery said, and more than half play a role in “selecting or approving security products.”
Defenders are concerned about several risks stemming from the use of AI to write code, including the exposure of corporate secrets (78% of respondents cited this as a top concern), supply-chain risks from unreliable dependencies (73%) and “business logic vulnerabilities,” application design flaws that could let a hacker abuse legitimate functions (72%).
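To illustrate what a “business logic vulnerability” looks like in practice, here is a hypothetical sketch (not drawn from the report): AI-generated code can be syntactically correct and pass ordinary scanning, yet omit the business rules that keep a legitimate function from being abused.

```python
# Hypothetical business logic vulnerability: the code "works" and
# contains no unsafe API calls, but never validates its input.

def charge_vulnerable(unit_price: float, quantity: int) -> float:
    """Return the amount to bill.
    Flaw: a negative quantity yields a negative charge,
    i.e. a credit paid out to the attacker."""
    return unit_price * quantity


def charge_fixed(unit_price: float, quantity: int) -> float:
    """Same calculation with the missing business rule restored."""
    if quantity < 1:
        raise ValueError("quantity must be a positive integer")
    return unit_price * quantity
```

A vulnerability scanner looking for injection flaws or dangerous functions would flag neither version; only a review against the intended business rules reveals the defect, which helps explain why respondents rank this class of flaw so highly.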
In its discussion of secrets leakage, ProjectDiscovery cited a 2025 National Cybersecurity Alliance report that found that 43% of employees admitted to entering sensitive company data into AI tools.
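A common mitigation for the secrets-exposure risk described above is scanning code, including AI-generated code, for hardcoded credentials before it ships. Below is a minimal sketch using two illustrative patterns (the well-known `AKIA` prefix of AWS access key IDs and a generic `api_key = "..."` assignment); production scanners ship far larger rule sets.

```python
import re

# Minimal hardcoded-secret scan with two example rules.
# Real secret scanners use hundreds of patterns plus entropy checks;
# these rules are illustrative only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}


def find_secrets(source: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs found in source code."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group(0)))
    return hits
```

Running such a check in a pre-commit hook or CI pipeline gives security teams the kind of audit point respondents said they want before AI-written code reaches production.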
European respondents were more likely than their American counterparts to cite secrets leakage as a major concern (87% versus 72%), perhaps reflecting the strict privacy requirements of the European Union’s General Data Protection Regulation (GDPR).