Vercel, a cloud development platform, said some of its internal systems were accessed after Context.ai, a third-party tool used by one of its employees, was compromised, according to a blog post released Sunday.
Vercel is widely known as the creator of Next.js, an open-source React framework.
The attacker was able to take over the employee’s Vercel Google Workspace account and access certain company “environments and environment variables” that were not designated as “sensitive.”
Vercel said a limited number of customers had credentials compromised during the attack; those customers have been notified and urged to rotate their credentials immediately.
The company said it believes the attacker is highly sophisticated, based on an assessment of their “operational velocity and detailed understanding of Vercel’s systems.”
Vercel is working with Mandiant, the incident response unit of Google, as well as other outside companies and law enforcement.
In a blog post Sunday, Context said a hacker gained access to the company’s Amazon Web Services environment in a March attack.
The hacker appears to have compromised OAuth tokens for some of Context’s consumer users. At least one employee at Vercel signed up for AI Office Suite, a Context product that allows consumers to work with AI agents to build presentations and other documents.
Context said that Vercel is not one of its enterprise customers, but at least one Vercel employee used a corporate email to sign up for the AI Office Suite product. The employee granted “allow all” permissions, giving the app wide access to Vercel’s Google Workspace environment.
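The “allow all” grant described above is a case of OAuth scope over-provisioning. As a minimal sketch of how an organization might audit such grants (the scope URIs below are real Google OAuth scopes, but the allowlist policy and function names are illustrative assumptions, not Vercel’s actual controls):

```python
# Sketch: flag OAuth grants whose scopes exceed a least-privilege allowlist.
# The scope URIs are real Google OAuth scopes; the specific allowlist is an
# assumption for illustration only.

# Broad, high-risk scopes a presentation-building tool should rarely need.
BROAD_SCOPES = {
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/drive",                 # full Drive access
    "https://www.googleapis.com/auth/admin.directory.user",  # admin directory
}

# Least-privilege scopes sufficient for building slides and documents.
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/presentations",
    "https://www.googleapis.com/auth/drive.file",  # only files the app creates
}

def audit_grant(granted: set[str]) -> list[str]:
    """Return granted scopes that are broad or fall outside the allowlist."""
    return sorted(s for s in granted if s in BROAD_SCOPES or s not in ALLOWED_SCOPES)

# An "allow all"-style grant trips the audit; both broad scopes are flagged.
risky = audit_grant({
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/presentations",
})
```

A review like this, run when an employee connects a third-party app to a corporate Workspace account, is one way to catch over-broad grants before an attacker can exploit them.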
Context has been working with those who were impacted and is coordinating with CrowdStrike to validate its containment efforts.
Context, which said the consumer product runs separately from its enterprise product, has shut down the affected AWS environment.
Jeff Pollard, vice president and principal analyst at Forrester, said the attack underscores concerns about third-party risk management and the permissions granted to AI tools.
“This definitely highlights that as AI-related tools spread through an environment, OAuth will remain one of the key elements of the attack surface,” Pollard told Cybersecurity Dive. “That isn’t about the inherent security flaws of AI applications, it’s more about AI tools requiring permissions to be as valuable as possible.”