MDDI's Response to PQ on AI-enabled Cybersecurity Risks
5 May 2026
Parliament Sitting on 5 May 2026
Question for Oral Answer
Mr Saktiandi Supaat asked the Minister for Digital Development and Information (a) what is the Government’s assessment of the risks from AI models claiming to be sufficiently advanced to steal data or disrupt critical infrastructure, to Singapore’s financial system and critical infrastructure; (b) whether such AI-enabled cyber risks could constitute a new class of systemic financial risk; and (c) what early-warning indicators or triggers, if any, are being developed to detect such threats.
Mr Edward Chia Bing Hui asked the Minister for Digital Development and Information in light of recent reports on frontier AI models such as Anthropic’s Mythos with advanced capabilities to autonomously identify and exploit software vulnerabilities (a) what is the Ministry’s assessment of the potential cybersecurity threats posed by such models; and (b) whether the Government is reviewing Singapore’s current cybersecurity frameworks and safeguards to address these emerging risks.
Answer
Mr Speaker, my response will cover the questions raised by Mr Saktiandi Supaat and Mr Edward Chia in today's Order Paper, together with the related questions from Mr Yip Hon Weng and Mr Louis Chua filed for tomorrow's sitting. If the Members are satisfied with the response, they may wish to withdraw their questions after this exchange.
We share the Members' concerns and have been tracking these developments closely for some time.
Access to Mythos
Let me first address Mr Louis Chua's question on access. The Government does not have access to Mythos. Anthropic has released it only to a limited set of partners under a controlled preview, and we are not aware of any local bank that has been granted access. More broadly, we do not assume that we will always have early access to every frontier model. Instead, we maintain close working relationships with various partners, including major AI labs and cybersecurity firms, to track capability developments and to assess safety and security implications when new capabilities emerge. We are working with partners who have access to Mythos to better understand its capabilities and implications.
Capability – Progression, Not Paradigm Shift
We should understand the advances in capability enabled by Mythos as part of a continuum rather than a step change. Models like OpenAI's GPT-5.5 already show comparable cybersecurity capabilities, and are more widely available. Open-source AI models are also rapidly improving and are likely to reach similar capabilities within months.
With AI, vulnerabilities that once took expert teams weeks to detect manually can now be identified autonomously in hours, sometimes minutes. Attackers can exploit these vulnerabilities much faster than our traditional patching cycles can address.
AI is also changing how attacks are carried out. For example:
Google reported in 2025 that threat actors had used AI to develop a new class of malware. Unlike traditional malware that is hard-coded at the point of creation, the PROMPTFLUX malware was designed to consult a live AI model during attacks. The AI would rewrite portions of the malware code in real time to evade detection.
Another example is high-fidelity deepfake fraud. In a 2024 case, criminals used an AI-generated deepfake video call to impersonate a multinational firm's CFO and trick an employee into transferring $25.6 million to fraudulent accounts. Similar attempts have been made against business executives internationally, including in Singapore. Today, voice cloning requires only seconds of audio, and impersonation tools are readily available.
These attacks are faster, more scalable, and significantly more sophisticated. What we have not yet seen is fully autonomous AI agents running end-to-end campaigns. But this is a matter of time given the trajectory of technological developments.
So the issue is not any single model like Mythos. The underlying shift is broader and the risks are real. We are treating them with the seriousness they deserve.
Systemic Risk and the Financial Sector
To Mr Saktiandi Supaat’s query, we view AI-enabled cyber risk as an amplification of an existing systemic risk, rather than a wholly new category. The fundamentals of strengthening an organisation’s cybersecurity matter more than ever. Therefore, MAS has convened the CEOs of major financial institutions to discuss the threat landscape and drive collective action on technology and cyber resilience. Financial institutions are treating this with the seriousness it deserves and have been strengthening their posture.
The same urgency extends across all sectors. The Cyber Security Agency of Singapore will issue a letter to the Boards and senior leadership of all Critical Information Infrastructure owners today. This letter sets out clear expectations, including a review of cyber risk posture in light of AI-enabled threats. Our government agencies are similarly on alert.
What Organisations Must Do – Get the Basics Right, with Urgency
This is not an issue that should be delegated to IT teams alone. It demands leadership attention at the highest levels, including Board members and Chief Executives. This applies whether an organisation runs information technology (IT), operational technology (OT), or both types of systems. The priority is to get the fundamentals right – and to do so quickly. Five areas matter:
First, revisit your cybersecurity risk assessments. Update these for IT and OT systems to account for the AI-enabled changes in the threat environment – in particular, the narrowing window between the discovery of a vulnerability and its exploitation by attackers.
Second, know what you have. Most breaches begin at an unmanaged asset – a forgotten internet-facing system, a third-party dependency, a shadow cloud account. You cannot defend what you cannot see. Ensure you have visibility over your current inventory.
Third, patch faster, monitor continuously. The time window between vulnerability disclosure and exploitation is collapsing. Periodic audits are not enough. Organisations need to move towards continuous monitoring, automated detection, and tested incident response.
Fourth, govern your own use of AI. AI tools introduce new vulnerabilities, particularly when connected to sensitive data, code, or critical systems. CSA's Addendum on Securing Agentic AI, launched in October last year, sets out practical guidance on mapping workflows and applying controls across the entire lifecycle.
Fifth, use AI in defence. The same capabilities adversaries are deploying can be turned to detection, triaging, and response. Mr Yip Hon Weng asked whether the Government is investing in AI-powered tools for active vulnerability and patch testing. The answer is yes. The Government has for some time been fast-tracking the development of capabilities to use AI for cybersecurity, working with industry to access and adapt the best tools available globally. At the same time, we are developing capabilities in-house, so that we are not dependent on any single external party. These are being piloted within Government and will be extended to more agencies and CII owners when ready.
Government Action – Assessment, Early Warning, and Patching
To Mr Louis Chua's question on assessment capabilities: CSA leads this effort, working closely with relevant government agencies and industry experts to exchange insights on the threats and mitigation measures. CSA is also reviewing standards and obligations for CII owners to account for the faster attack timelines. Under the Cybersecurity Act, CSA has the authority to direct and enforce action where necessary.
On Mythos specifically: without direct access, we cannot test the model ourselves. But we assess the risk based on published evaluations, threat intelligence, and our ongoing engagement with the major AI labs. Where credible evidence emerges of a material risk to systems of national consequence, we work with and advise CII owners to patch and harden their systems. This is the approach we have used to date, and we will continue to do so.
Mr Saktiandi Supaat asked about early-warning indicators and triggers. We have an established approach to do this. First, we closely engage technology partners for early visibility and insights into new capabilities as they emerge. Second, CSA monitors active exploitation patterns and shares threat intelligence and advisories through established channels. Third, we conduct attack-surface monitoring and increasingly we are leveraging AI to do so.
Mr Yip Hon Weng asked about patching protocols and timelines. This is not a new problem; it is a long-standing one. There are established practices for patching that can manage disruption to services, including staged rollouts and pre-tested rollback procedures.
Ecosystem Effort
These initiatives are part of a broader national effort to raise cybersecurity standards across all sectors. There are no silver bullets, and no one-time fixes. We must adapt and adjust to new risks. This requires all stakeholders to play their part actively and responsibly.
Many SMEs do not have a CISO, or even a dedicated IT team. To help our SMEs, CSA's SG Cyber Safe programme provides accessible cyber-hygiene guidance. This includes the Chief Information Security Officer as a Service (CISOaaS) and the Cyber Essentials and Cyber Trust Marks (CEM/CTM), which support organisations to assess and improve their security posture.
Individuals have a role to play as well. Three things matter most, as outlined in CSA’s Stop and Check campaign. First, use two-factor authentication and strong passphrases. Second, update software promptly to ensure that cyber criminals cannot find and use vulnerabilities to infect devices with malware, steal data or take control of devices. Third, use ScamShield and anti-virus to safeguard devices and accounts. Basic cyber hygiene matters.
To conclude, the Government will continue to raise awareness, set standards, and support organisations in building robust cyber-defences. But resilience depends on everyone doing their part. We must act early and decisively, and stay ahead of the threat.
Reposted from https://www.mddi.gov.sg/newsroom/mddi-s-response-to-pq-on-ai-enabled-cybersecurity-risks/
