Secure AI Development: Unpacking and Applying Key Guidelines
Understanding the latest AI security guidance from NCSC and CISA
Leading global cybersecurity standards bodies, headed by the UK’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), have recently published their recommendations for secure AI development and deployment. The guidelines provide key considerations, as well as mitigations, for organizations looking to reduce security risks across the AI development lifecycle.
This article provides an overview of the recommendations found in the publication, along with some suggestions on how security teams can incorporate them into their broader AI security strategy.
What are the guidelines?
The guidance breaks the AI development lifecycle down into four key areas (five, if you count operation and maintenance separately) – design, development, deployment, and operation & maintenance – and introduces recommendations for secure practices at each stage.
Secure design
Today, many enterprise AI and data science teams do not consider security as part of the design stage. While more mature development teams may assess other risk areas, such as model failure, cybersecurity risk is typically not included in these assessments. This is largely due to a lack of awareness of the threats unique to AI systems, chiefly adversarial machine learning (AML).
Given this reality, the guidance appropriately recommends raising organizational awareness of the security threats and risks to AI systems, as well as how to mitigate them.
Additionally, the recommendations include building a threat model for the AI system, which is critical for understanding where security threats can arise throughout the lifecycle and what the impact on the system would be if they did.
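To make this concrete, the sketch below shows one way a team might capture threat-model entries as structured data. The field names and example threats are illustrative assumptions, not a schema prescribed by the guidance.

```python
from dataclasses import dataclass

@dataclass
class AIThreat:
    """A single entry in an AI system threat model (illustrative fields)."""
    name: str               # e.g. "training data poisoning"
    lifecycle_stage: str    # design, development, deployment, or operation
    attack_surface: str     # where the threat enters the system
    impact: str             # consequence if the threat is realized
    mitigations: list[str]  # planned or existing controls

# Hypothetical entries for illustration only.
threat_model = [
    AIThreat(
        name="training data poisoning",
        lifecycle_stage="development",
        attack_surface="third-party and scraped training data",
        impact="degraded or attacker-influenced model behavior",
        mitigations=["data provenance checks", "outlier filtering"],
    ),
    AIThreat(
        name="model inversion",
        lifecycle_stage="deployment",
        attack_surface="public inference API",
        impact="leakage of sensitive training data",
        mitigations=["rate limiting", "output perturbation"],
    ),
]
```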
Finally, the guidance attempts to define best practices for striking a balance between performance and security – a tale as old as time, and one made particularly complicated by the vast number of potential use cases.
Secure development
In the development stage of the AI lifecycle, the attack surface of AI systems is arguably at its widest, necessitating a robust approach to security. The standards organizations do an excellent job of covering the breadth of security requirements in this section.
The recommendations begin with AI supply chain security, which has rapidly become a top priority for organizations that rely on suppliers for AI components, including models, data, and external APIs. Critically, the guidance suggests obtaining documentation of the security attributes of externally procured AI systems and components.
Additionally, the guidance stresses the importance of asset identification, tracking, and monitoring – a challenge for many security organizations whose AI development environments are spread across multiple clouds.
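A minimal sketch of what such tracking might capture is below. The asset types, fields, and example values are assumptions for illustration, not something the guidance prescribes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAsset:
    """One tracked AI asset: a model, dataset, pipeline, or external API."""
    asset_id: str
    asset_type: str   # "model", "dataset", "pipeline", "external_api"
    owner: str        # accountable team or individual
    environment: str  # e.g. "aws-prod", "azure-dev", "on-prem"
    source: str       # "internal" or the supplier name
    last_reviewed: date

# A simple in-memory registry; a real inventory would live in a shared system.
inventory: dict[str, AIAsset] = {}

def register_asset(asset: AIAsset) -> None:
    """Add or update an asset so the inventory reflects the current estate."""
    inventory[asset.asset_id] = asset

register_asset(AIAsset(
    asset_id="churn-model-v3",
    asset_type="model",
    owner="customer-analytics",
    environment="aws-prod",
    source="internal",
    last_reviewed=date(2024, 1, 15),
))
```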
The standards bodies also specifically call out developing model cards, data cards, and software bills of materials for AI systems. This will be a necessity for all organizations leveraging AI in their critical processes moving forward and has been a major driver in the development of Cranium’s AI Card capability.
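As a rough sketch of the kind of information a model card or AI bill of materials might carry – the guidance does not mandate a schema, and these fields and values are assumptions – a record like the following could be generated alongside each release:

```python
# Minimal, illustrative model card; real model card and AI/software
# bill-of-materials schemas are richer and organization-specific.
model_card = {
    "model_name": "fraud-detector",
    "version": "2.1.0",
    "intended_use": "flag suspicious card transactions for human review",
    "training_data": {
        "datasets": ["transactions-2023-q1", "labelled-fraud-cases"],
        "known_limitations": "underrepresents card-not-present fraud",
    },
    "dependencies": [  # SBOM-style component list
        {"name": "scikit-learn", "version": "1.3.2", "source": "PyPI"},
        {"name": "base-embedding-model", "version": "0.9", "source": "vendor X"},
    ],
    "security_notes": {
        "adversarial_testing": "evasion tests run before each release",
        "data_provenance": "training data tracked in the asset inventory",
    },
}
```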
Secure deployment
As with all other technology solutions, the security of the underlying infrastructure is critical for AI system security. The guidance recommends standard cybersecurity best practices like appropriate access controls and segregation of sensitive environments.
The guidelines introduce security considerations for AI systems at inference time within the deployment stage of the AI lifecycle, which overlaps somewhat with the operation & maintenance stage.
The guidance also highlights continuous monitoring of AI systems for adversarial attacks, with a specific focus on model inversion and data poisoning. The recommendations include controls at the input layer as well as general cyber best practices.
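The guidance stops short of prescribing specific controls, but an input-layer check might look something like the hedged sketch below: a wrapper that rejects out-of-range inputs and flags unusually high query volumes before requests reach the model. The thresholds, feature names, and helper are assumptions for illustration.

```python
import logging
from collections import defaultdict

logger = logging.getLogger("ai_inference_monitor")

# Illustrative thresholds; real values would come from baselining normal traffic.
MAX_REQUESTS_PER_WINDOW = 100
FEATURE_RANGES = {"amount": (0.0, 50_000.0), "age": (0.0, 120.0)}

# Simplified counter; a real implementation would use a sliding time window.
request_counts: dict[str, int] = defaultdict(int)

def check_input(client_id: str, features: dict[str, float]) -> bool:
    """Return True if the request may proceed to the model, False otherwise."""
    # Rate limiting: high-volume querying can indicate model inversion or
    # extraction attempts against the inference endpoint.
    request_counts[client_id] += 1
    if request_counts[client_id] > MAX_REQUESTS_PER_WINDOW:
        logger.warning("rate limit exceeded for client %s", client_id)
        return False

    # Range checks: inputs far outside expected bounds are rejected and logged
    # so they can be reviewed as potential adversarial probing.
    for name, (low, high) in FEATURE_RANGES.items():
        value = features.get(name)
        if value is None or not (low <= value <= high):
            logger.warning("suspicious value for %s from client %s: %r",
                           name, client_id, value)
            return False
    return True
```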
Additionally, the recommendations include developing incident response processes and procedures, but they do not go into much detail on detection mechanisms or strategies for addressing AI security incidents.
Finally, the guidance includes suggestions on releasing AI systems responsibly, which may be somewhat confusing given the nebulous nature of what ‘responsible’ means in the context of AI. Regardless, this section critically covers the need for security evaluation and AI red teaming, but it does not detail specific approaches for conducting these tests beyond linking to open-source security testing libraries.
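The guidance points to open-source testing libraries without prescribing a workflow. As one hedged illustration, the sketch below assumes the open-source Adversarial Robustness Toolbox (ART) and uses a simple scikit-learn classifier on synthetic data as a stand-in for the system under test, measuring how far accuracy drops under a basic evasion attack. The data, model, and epsilon value are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder data and model standing in for the system under test.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so ART's attacks can query it.
classifier = SklearnClassifier(model=model)

# Generate adversarial examples with a simple evasion attack and compare
# accuracy on clean versus perturbed inputs.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

clean_acc = model.score(X, y)
adv_acc = model.score(X_adv, y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```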
Secure operation and maintenance
In this stage, the guidelines cover elements of security during AI system operation and maintenance. The authors take a more granular look at some of the continuous monitoring activities, recommending that organizations track changes in system performance and use those metrics to identify potential security issues.
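A minimal version of this kind of check, assuming a labelled feedback stream and an accuracy metric (both assumptions for illustration), might compare recent performance against an established baseline and raise an alert when the drop exceeds a tolerance:

```python
from collections import deque

class PerformanceMonitor:
    """Flag sustained drops in model accuracy that may indicate trouble,
    e.g. data drift or an ongoing poisoning/evasion campaign."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window_size: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.recent = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect

    def record(self, was_correct: bool) -> None:
        self.recent.append(1 if was_correct else 0)

    def check(self) -> bool:
        """Return True if recent accuracy has fallen below the alert threshold."""
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough labelled outcomes yet
        current = sum(self.recent) / len(self.recent)
        return current < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.92)
```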
In addressing secure update processes, the guidance does acknowledge that changes to model assets will impact overall system behavior, but it does not suggest any particular methodology for ensuring the security of the updates.
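One simple control in that direction, sketched below under the assumption that model artifacts are distributed as files with published checksums (the path and checksum here are hypothetical), is to verify an artifact's hash before promoting an update:

```python
import hashlib
from pathlib import Path

def verify_model_artifact(path: Path, expected_sha256: str) -> bool:
    """Check a model artifact against its published SHA-256 before deployment."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256

# Hypothetical artifact path and checksum for illustration only.
artifact = Path("models/fraud-detector-2.1.0.onnx")
expected = "a3f1c2..."  # full digest published alongside the release
if not verify_model_artifact(artifact, expected):
    raise RuntimeError("model artifact failed integrity check; aborting update")
```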
As with most domains in cybersecurity, the guidance recommends collecting the lessons learned along the way and distributing them by participating in information-sharing communities. Given the nascency of this space, sharing useful information and findings is broadly beneficial and pushes the industry forward.
How can you leverage the guidance?
The guidance published by NCSC, CISA, and other supporting organizations ranges from very high-level to deeply technical. Here are some quick ways your organization can take advantage of the recommendations:
- Start with the foundational items. The first step in enabling a secure AI lifecycle involves cataloguing all your organization’s AI systems and associated assets. Ensure that this inventory is dynamically updated to reflect new use cases and decommissioned systems. Look to establish a mechanism to capture and track the AI systems used by your third-party vendors, as this will extend your visibility and control over both internal and external AI assets.
- Reference the materials when defining your organization’s AI security strategy. The excellent recommendations provided by the NCSC, CISA, and other supporting organizations should be combined with other available guidance, such as the NIST AI Risk Management Framework and MITRE ATLAS.
- Use the recommendations as a guidepost when collaborating with your data science teams. There will likely be some initial pushback, as most innovators do not exactly welcome the security team with open arms. However, leveraging this guidance will lend additional credence to calls for improved security in the AI development lifecycle, which only comes to fruition with the buy-in of the data science teams.
- Identify a platform to support AI security monitoring. Many of the recommendations included in the guidance focus on the continuous monitoring of AI systems. Cranium enables our clients to monitor their AI systems and detect instances of adversarial attack.
Overall, the recommendations provided by NCSC, CISA, and others are an extremely effective mechanism for shifting security further to the left in the AI lifecycle, and organizations will be well served to leverage them accordingly. We hope to see governing bodies and standards organizations continue to develop further guidance. For more of the latest on AI security policy and procedure guidance, connect with me and follow Cranium on LinkedIn!