Keynote Address by SMS Janil Puthucheary at the Singapore International Cyber Week (SICW) High-Level Panel on AI on 16 Oct 2024

Published on 16 Oct 2024

Can AI Be Secure?

Your Excellencies,

Distinguished guests, ladies and gentlemen,

Good morning.

 

1.     It is my pleasure to be here with you at this High-Level Panel on Artificial Intelligence.

AI Security is an International Concern

2.     “Can AI be Secure?”

a.     This is a question many of us have grappled with, all the more since ChatGPT exploded into our consciousness in 2022. It has been relevant to our work in government, industry, and academia; it is relevant to us as users; and it is relevant to our trust in the adoption of AI.

b.     We are concerned with whether AI can be made safe, whether it can be made secure, and whether it can be a force for good.

3.     Many of our partners and friends have hosted international discussions on this topic in the past two years. Singapore has been an active participant in this space, building on our existing work in AI governance.

a.     In 2023, we joined our counterparts at the AI Safety Summit hosted by the UK. This was an important milestone in cross-border dialogues on AI safety and security.

b.     In November of last year, Singapore was also invited to co-seal the “Guidelines for Secure AI System Development”, developed by the UK’s National Cyber Security Centre, or the NCSC, and the US’s Cybersecurity and Infrastructure Security Agency. This document outlines principles that system owners should use to guide their decision-making about AI, and their frameworks for AI safety.

c.     This year, on the AI safety front, Singapore launched the Model AI Governance Framework for Generative AI, following an international consultation process. This is the first comprehensive framework for the governance of Generative AI. It covers nine dimensions to ensure that these models are seen and addressed in totality.

 

Trust in AI will Enable Adoption

4.     In addressing emerging technologies, these are the sorts of critical conversations we need to have. Technologies like cloud computing, Artificial Intelligence, and quantum computing promise significant benefits to our industries and economies. However, we must be clear-eyed about the risks and how we will manage them, rather than learn the lessons the hard way, playing catch-up afterwards and patching the vulnerabilities we discover only through crisis, after something has gone wrong.

5.     This is why we have made trust a core principle, a core part of Singapore's plan for Smart Nation 2.0. All users – large organisations and individuals alike – must be able to trust that technology, including emerging technology, will be secure and reliable, and that their safety and well-being are assured.

6.     This provides the confidence to try new use cases and deploy new technologies in the next bound of growth, and this feeds into some of our larger aims in developing this Smart Nation 2.0 plan—what we can do for our communities, and what we can do to grow our society and our opportunities.

7.     So this is not just about growth and productivity; those are necessary outcomes, but it is also about implementing AI well. We know that we must adopt a higher standard for the safety and security of AI in certain domains, such as healthcare, to address key risks.

a.     We must protect our AI systems against malicious cyberattacks. We know threat actors can insert backdoors into open-source AI components, and that final models can be manipulated or disrupted. Threat actors could also mount classical attacks on the software supporting AI. The old risks haven't gone away; they've just been added to, so these systems all need to be updated and patched.

b.     We must also protect AI models against attempts to extract data. All these are necessary efforts to strengthen long-term trust in AI-based solutions.

8.     This requires close partnership between service providers, industry players, and public sector technology partners, such as what we have in Singapore with Synapxe, our national healthtech agency, and the Government Technology Agency of Singapore.

9.     AI has also grown in many sectors, and across our entire ecosystem. Therefore, we face not just sector-specific risks but also systemic risks. These have come to the fore in our thinking, and this is truly an international challenge that we have to tackle together.

a.     If there are disruptions to key parts of our critical AI infrastructure, many companies can lose access to their models, tools, and services.

b.     And users will find it difficult to continue with their activities if they have built their business and process models around AI and the AI solutions have been corrupted. Efforts to repair and restore these services could take some time, depending on the security and resilience measures that are in place.

c.     And as this happens, it affects many people, regardless of where they reside and where the AI models have been deployed.

10.     This is why it must be a priority for us to make AI secure and trustworthy. If we had to worry constantly about such risks, it would be difficult for any of us to adopt AI.

11.     These are the necessary conditions for AI adoption, and we need to take practical steps to provide a base of trust.

 

Our Progress in AI Security

12.     Having painted the “glass half-empty” picture, let me turn to the “glass half-full”: we are in fact seeing quite a bit of progress. AI is being adopted across the globe at an increasing pace, and in some countries we are seeing AI adoption in critical infrastructure.

13.     Now, we know that this adoption increases many classical cybersecurity risks, which can affect the confidentiality, integrity, and availability of AI systems. There are also new risks that are unique to AI models and systems.

14.     But we are not starting from zero. AI security is relatively nascent compared to classical cybersecurity, but there are many existing efforts to help developers put the right guardrails in place and to secure their models and systems. The many conversations that I’ve highlighted are part of this process.

a.     The examples continue: the US National Institute of Standards and Technology has released an AI Risk Management Framework. This will help users to manage potential AI risks.

b.     The Ministry of Science and ICT in South Korea has also announced its plans to make AI more safe, secure, and trustworthy, in its “Strategy to Realise Trustworthy AI”.

15.     Yesterday, at the SICW Opening Ceremony, Senior Minister Teo Chee Hean announced that the Cyber Security Agency of Singapore, or CSA, is publishing the first edition of our Guidelines and Companion Guide for Securing AI.

a.     These guidelines set out key, evergreen principles that system owners should use to guide their approach to security of AI, including how they implement security controls and best practices.

b.     The companion guide is a community effort to provide system owners with practical measures and controls. It is not meant to be prescriptive, but is a resource to support system owners in navigating this nascent space.

c.     I would like to thank our international partners, industry players, and professionals for their comments. We received positive feedback on our initial draft, along with suggestions on how to improve it.

d.     We have worked to address the feedback, and have put the documents out as a community-led resource. We hope to continue working together to make AI more secure in practice.

16.     I am also pleased to announce that CSA has worked with Resaro, a Singaporean company in the AI assurance space, to co-author a paper on AI security risks. This paper explores what security of AI means, and discusses the role that all stakeholders should play in this space. The Guidelines, the Companion Guide, and this discussion paper are all linked on the card that you may have seen on your seats. The paper and the companion guide are available online.

17.     We are also developing our local community of cybersecurity professionals to discover new techniques for securing AI.

a.     For example, from July to September, Singapore hosted a Global Challenge for Safe and Secure Large Language Models, or LLMs. I am proud to share that this challenge drew more than 300 international participants, who developed robust security measures and innovative approaches to mitigate jailbreaking attacks on LLMs and to make LLMs more secure.

b.    We had more than 100 teams in this challenge, including groups from China, Germany, Japan, Malaysia, Singapore, and the United States. This is a reflection of the global effort to tackle the challenges of AI.

c.    Our top teams are in the audience with us today. Please join me in congratulating them and thanking them for their effort to make AI more secure.

d.    We will have the panel discussion after this, and after the panel discussion, the winners will receive their prizes. Please stay on to congratulate and support them, and to encourage them to keep up the great work that they have done.

 

Facilitating a Public-Private Conversation on AI

18.     The panel today is an important opportunity for us as government officials to have an open dialogue with industry professionals and researchers, and to explore how AI can be made more secure. I look forward to our panellists’ comments on the key measures we should prioritise, and on what has been most effective so far.

19.     More importantly, I look forward to a discussion of how stakeholders should continue to work together and improve their relationships – across government agencies, vendors, system owners, industry players, and users – to safeguard the development, deployment, and use of AI.

a.     Everyone has a stake in building trust in AI. We are in this together, and we are at a critical period for the development, deployment, and adoption of AI. We will work together to address these issues early – across sectors and across jurisdictions.

20.     Thank you for inviting me to be with you today. I wish you all a fruitful session, and a wonderful Cyber Week.

+++
