A recently fixed critical vulnerability in Microsoft’s Copilot AI tool could have let a remote attacker steal sensitive data from an organization simply by sending an email, researchers say.
The vulnerability, dubbed EchoLeak and assigned the identifier CVE-2025-32711, could have allowed hackers to mount an attack without the target user having to do anything. EchoLeak represents the first known zero-click attack on an AI agent, according to researchers at Aim Security, which released the findings in a Wednesday blog post.
“This vulnerability represents a significant breakthrough in AI security research because it demonstrates how attackers can automatically exfiltrate the most sensitive information from Microsoft 365 Copilot’s context without requiring any user interaction whatsoever,” Adir Gruss, co-founder and CTO at Aim Security, told Cybersecurity Dive via email.
An EchoLeak attack could have exploited what researchers call an “LLM scope violation,” in which untrusted input from outside an organization can commandeer an AI model to access and steal privileged data.
Vulnerable data could potentially include everything to which Copilot has access, including chat histories, OneDrive documents, Sharepoint content, Teams conversations and preloaded data from an organization.
Gruss said Microsoft Copilot’s default configuration left most organizations at risk of attack until recently, although he cautioned that there was no evidence any customers were actually targeted.
Microsoft, which has been coordinating with researchers about the vulnerability for months, released an advisory on Wednesday that said the issue was fully addressed and no further action was necessary by customers.
“We appreciate Aim Labs for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted,” a spokesperson for Microsoft said via email.
Microsoft said it has updated products to mitigate the issue. The company is also implementing defense-in-depth measures to make further enhancements to its security posture.
Jeff Pollard, vice president and principal analyst at Forrester, said the vulnerability is in line with prior concerns raised about the potential security risks from AI agents.
“Once you’ve empowered something to operate on your behalf to scan your email, schedule meetings, send responses and more attackers will find a way to exploit it given the treasure trove of information that resides in both work and personal email accounts,” Pollard told Cybersecurity Dive via email.
(Adds comments from Microsoft and Forrester.)