The U.S. government and key Western allies on Wednesday published guidance to help critical infrastructure operators safely use artificial intelligence.
The guidance document describes four key principles for integrating AI into operational technology, detailing the issues that infrastructure operators should consider as they adopt AI. The advice covers general risk awareness, need and risk assessment, AI model governance, and operational fail-safes.
CISA, the FBI and the NSA produced the guidance in partnership with cybersecurity agencies from Australia, Canada, Germany, the Netherlands, New Zealand and the U.K.
The document urges companies to understand AI’s unique risks, educate their employees about using automated systems, develop clear justifications for using AI, establish strong security expectations with their vendors and carefully evaluate the challenges of integrating AI into existing operational technology. In addition, the document says, companies should develop clear AI use and accountability procedures, thoroughly test their AI systems before implementing them and continuously validate the AI’s compliance with regulatory and safety requirements.
Companies should also maintain oversight of their AI systems, according to the document, including through human-in-the-loop protocols that prevent an AI model from taking potentially dangerous actions without human approval. AI systems should have “failsafe mechanisms that enable AI systems to fail gracefully without disrupting critical operations,” the document says, and companies should update their cyber incident response plans to account for their new uses of AI.
In addition, the guidance warns, “critical infrastructure owners and operators should review how they are integrating the AI system into their existing procedures and create new safe use and implementation procedures that focus on the AI system integration into the OT environment.”
Emphasizing caution
Since the beginning of the AI frenzy, the U.S. government has sought to temper critical infrastructure operators’ enthusiasm about the technology with warnings about its risks.
In November 2024, the Department of Homeland Security published a suggested breakdown of the AI-related roles of different entities in the critical infrastructure space, from developers to cloud providers to infrastructure operators themselves. And in July, the White House’s AI Action Plan directed DHS to expand the sharing of AI-related security warnings with infrastructure providers. That plan mostly promoted AI’s benefits, but it also acknowledged that “the use of AI in cyber and critical infrastructure exposes those AI systems to adversarial threats.”
Critical infrastructure systems are already rife with security vulnerabilities, and government officials worry that infrastructure operators may be creating new weaknesses by implementing AI in novel ways without sufficient safeguards. Many infrastructure providers, especially in widely dispersed sectors like water, have threadbare security budgets and no dedicated security personnel, making it less likely that anyone in those organizations will push back as executives race to adopt the latest exciting technology.