- The Cybersecurity and Infrastructure Security Agency and the U.K. National Cyber Security Centre released joint guidance Sunday on how to ensure the rapidly growing business of artificial intelligence incorporates secure development practices.
- The agencies are emphasizing secure-by-design practices, which build security into AI applications and other new technologies as a core element from the outset. The guidelines also stress the importance of security in operational practices and maintenance.
- The guidelines were developed alongside 21 other ministries and cybersecurity agencies around the world, including all members of the Group of Seven industrial economies.
The AI guidance is part of a larger effort to create security guardrails around the rapid evolution of AI technology.
The Biden administration has taken a number of steps to make sure cybersecurity is a priority for key stakeholders amid the rapid evolution of AI-based technologies.
President Joe Biden issued an executive order in October designed to create safeguards around the use of AI. CISA earlier this month unveiled a Roadmap for Artificial Intelligence, part of a larger plan to prevent the malicious use of AI and to ensure the technology is used to enhance cybersecurity.
“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time,” Department of Homeland Security Secretary Alejandro Mayorkas said in a statement. “Cybersecurity is key to building AI systems that are safe, secure and trustworthy.”
The guidelines are broken down into four key categories:
- Secure design, which covers understanding risks and threat modeling.
- Secure development, including supply chain security and asset and technical debt management.
- Secure deployment, including the protection of infrastructure and developing incident management processes.
- Secure operation and maintenance, which includes logging and monitoring as well as information sharing.
The release of the guidelines follows an AI Safety Summit hosted by U.K. officials earlier this month.
“The guidelines for secure AI system development, jointly developed by CISA and NCSC, is a step towards framework harmonization and makes good on the executive order's commitment to engage with international allies and partners in developing a globally aligned framework for AI,” Alla Valente, senior analyst at Forrester, said via email.