Guidelines and Companion Guide on Securing AI Systems

Published on 15 Oct 2024

Artificial Intelligence (AI) offers significant benefits for the economy and society. It will drive efficiency and innovation across various sectors, including cybersecurity. To harness these benefits, it is crucial that AI systems behave as intended and that their outcomes are safe, secure, and responsible. However, AI systems are vulnerable to adversarial attacks and other cybersecurity risks, which can lead to data breaches and other harmful outcomes.

AI should be secure by design and secure by default, as with all digital systems. This proactive approach allows system owners to manage security risks from the outset. The Cyber Security Agency of Singapore (CSA) has developed Guidelines on Securing AI Systems to help system owners secure AI systems throughout their lifecycle. These guidelines will help to protect AI systems against classical cybersecurity risks such as supply chain attacks and novel risks such as Adversarial Machine Learning.

To support system owners, CSA has collaborated with AI and cybersecurity practitioners to develop a Companion Guide on Securing AI Systems. This is a community-driven resource that complements the Guidelines on Securing AI Systems. Rather than being prescriptive, it curates practical measures, security controls, and best practices from industry and academia. It also references resources such as the MITRE ATLAS knowledge base and the OWASP Top 10 for Machine Learning and Generative AI. We hope this will be a useful reference for system owners in navigating this developing space.

As the field of AI security continues to evolve, the Guidelines and Companion Guide will be maintained as living resources, and we will update them to account for new developments. Please write to Aisecurity@csa.gov.sg if you have any views or suggestions.