Governing AI: Singapore's Dynamic Approach
ETHOS Issue 27, January 2025

The latest evolution of AI has attracted much attention, presenting many opportunities to make our lives better. However, it also carries inherent risks that can lead to harm if left unchecked. Many authorities around the world have moved quickly to introduce appropriate guardrails for its development and use.
Singapore’s early investments in AI have paid dividends, and we continue to strive for the technology’s benefits to be enjoyed safely by all. Ultimately, we want all Singaporeans to be able to interact with AI systems with confidence. Such a trusted environment does not come about by chance; it must be actively shaped and sustained by each of us in the Public Service.
Singapore’s Approach to Governing AI Well
Trust is key in developing the digital economy. Most countries want their people, businesses, and institutions to enjoy the economic and social benefits that AI brings, while also staying safe and feeling confident when using AI. What differs is the approach taken to achieve this balance. Some societies perceive a tension between AI innovation and safety. However, Singapore does not think that we need to choose one over the other. Instead, if we develop appropriate and well-calibrated mechanisms to manage AI risks, this can promote a clear environment where responsible innovation can take place. This calls for cooperation across the whole of government, as well as partnership with the wider national ecosystem and the global community beyond.
Broadly, Singapore’s approach is to identify and understand the issues AI may bring about, and then review our governance toolkits to address them. As a baseline, we are introducing new frameworks and principles in close consultation with industry and academia.
These frameworks make clear to all stakeholders in the AI ecosystem what is expected of them to achieve responsible AI. They help stakeholders understand the principles we care about, offer practical advice on how to achieve them, and provide tools to help measure their performance. All these are being introduced alongside an evolving body of legislation that aims to address the harms of AI in a coherent, adequate and dynamic way.
While some countries are introducing comprehensive AI legislation, Singapore is currently taking a more nuanced position. At this point, we cannot adopt a one-size-fits-all approach to regulate the swiftly moving AI space, nor can we anticipate every risk out there. Passing laws prematurely may call their effectiveness into question and create unintended drawbacks, and it is crucial that any regulations we impose are both practical and enforceable. Instead of relying on overarching legislation, Singapore has sought to develop guardrails and build foundations as we go along.
Some adjectives which have been used to describe Singapore’s approach to AI governance include balanced, accretive, targeted, and multi-pronged. We believe such approaches are optimal to help us facilitate innovation and safeguard consumer interests. This is not to say, however, that we have reached—or will ever reach—a final landing zone. In a spirit of humility, we regularly review our regulations and governance frameworks to ensure that they remain fit for purpose, and will continue to do so going forward.
Responsible AI Use: Guidelines, Regulations & Tools
Principles & Frameworks
In January 2019, Singapore’s Personal Data Protection Commission (PDPC) released its Model AI Governance Framework. Among the first of its kind in the world, the Model Framework offers organisations detailed and practical guidance on addressing key ethical and governance issues when deploying AI solutions. As a live document, the Model Framework evolves with the times: an updated second edition was released in 2020, just a year after its launch.1
The Model Framework features two key guiding principles that apply across all stages of the AI value chain and life cycle. First, decisions made by AI should be explainable, transparent, and fair. Second, AI systems should be human-centric: they should be technically safe to use or equipped with appropriate human intervention to ensure their safety. To help translate the Model Framework into effective action, PDPC has also released three accompanying guides: an Implementation and Self-Assessment Guide for Organisations (ISAGO); a Compendium of Use Cases; and a Guide to Job Redesign in the Age of AI.2
The Model Framework has served Singapore’s needs well to date, and continues to remain relevant in guiding Traditional AI systems. However, the advent of Generative AI has intensified existing risks and introduced new ones. Building on past efforts, a new Model AI Governance Framework for Generative AI (MGF-GenAI)3 has been released, setting forth a systematic and balanced approach to address generative AI concerns while facilitating innovation. MGF-GenAI comprises nine dimensions to be looked at in totality: accountability; data; trusted development and deployment; incident reporting; testing and assurance; security; content provenance; safety and alignment R&D; and AI for Public Good. These guidelines are intended to be a useful reference for all stakeholders deploying GenAI: whether general-purpose or within individual sectors.
Laws
Effective laws and regulations are an important component of our toolkits for governance, and aim to serve the public interest by helping society meet AI governance objectives.
Singapore is updating and sharpening existing legislative levers to improve governance of the AI space. For instance, PDPC has published Advisory Guidelines on the use of Personal Data in AI Recommendation and Decision Systems, providing businesses as well as consumers with greater clarity on how personal data can be used to train or develop machine-learning AI models. This clarity will make it easier for companies to build up quality datasets for AI model training.
Singapore is also developing new legislation to better safeguard the security and resilience of our digital infrastructure, help victims of online harms seek redress from their perpetrators, and address the problem of deepfakes. However, we have not introduced a horizontal AI law: we will continue to monitor developments to assess the need for one.
For now, our view is that some of the harms associated with AI can already be addressed by current laws and regulations. For instance, our laws already allow us to issue correction notices and alerts for fake news being circulated online, regardless of whether or not it is AI-generated. Our current guidelines for fair employment and workplace practices will hold employers accountable for any bias occurring at their workplaces, even if these biased outcomes result from AI models. It may also be better to update existing laws to patch gaps rather than replace them. For instance, we updated the Penal Code recently to cover the specific offence of “sextortion” where someone threatens to distribute intimate images of a victim—ensuring that such practices would be illegal regardless of whether AI was used.
As such examples illustrate, Singapore is not defenceless against AI-enabled harms, nor are we starting from ground zero in AI governance. But it is one thing to deal with the harmful effects of AI, and quite another to prevent them from happening in the first place through proper design and upstream measures. Successful identification, development, and validation of risk-mitigating measures remain essential, and there are no shortcuts. We will continue to work towards a much stronger basis for new laws and regulations, grounded in evidence and an understanding of areas of concern regarding AI use.
Tools
Singapore is also one of the first countries in the world to develop tools to support the implementation of responsible AI practices. One of these is IMDA’s AI Verify platform: an AI governance testing framework and software toolkit. AI Verify outlines 11 AI ethics principles which enjoy broad global consensus, including: transparency, explainability, repeatability, and reproducibility. Organisations can use the platform’s standardised tests and process checks to validate the performance of their AI systems. AI Verify is by no means a perfect tool, but it fills a gap, and continues to be enhanced to better meet industry needs.
With AI testing technologies still nascent, IMDA has set up the AI Verify Foundation to harness the collective contributions of the global open-source community to help develop testing tools for the platform. As AI Verify was originally intended for Traditional AI, IMDA has also launched AI Verify Project Moonshot4 in open beta for testing large language models (LLMs), the cornerstone of many GenAI-driven solutions. The Project aims to provide intuitive results about the quality and safety of an AI model or application, in a way that even a non-technical user can easily understand.
NOTES
1. http://go.gov.sg/ai-gov-mf-2
2. Collectively found at https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governanceframework.
3. https://aiverifyfoundation.sg/resources/mgf-gen-ai/.
4. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024/project-moonshot.
Updating and Improving Our AI Governance Approaches
The effects of AI will be felt across sectors, possibly in novel and unexpected ways as the technology evolves. As One Public Service, we should stay ahead of developments and act coherently to mitigate diverse risks across the AI life cycle.
Our AI governance regime will always remain a work in progress, and the Ministry of Digital Development and Information (MDDI) and the Infocomm Media Development Authority (IMDA) look forward to working with other agencies to advance this endeavour. For instance, we seek feedback to help us refine our frameworks and toolkits, as well as to apply them in each agency’s own context.
Several sectors have already introduced sector-specific guidance for responsible AI use, closely aligned with IMDA’s Model AI Governance Framework. For instance, the Monetary Authority of Singapore has its Fairness, Ethics, Accountability and Transparency (FEAT) Principles, which provide guidance to firms offering financial products and services on the responsible use of AI and data analytics. Likewise, the Ministry of Health has co-developed its Artificial Intelligence in Healthcare Guidelines (AIHGle), which share good practices with AI developers and implementers in the healthcare sector. MDDI and IMDA will support other sector leads in rolling out similar guidelines tailored to the needs of their respective sectors.
Many AI-adjacent issues, including cybersecurity, intellectual property, and the allocation of responsibility, will also impact the core work of other agencies. Some of these are addressed in the MGF-GenAI; agencies are also pushing forward with plans to tackle these concerns. For instance, the Cyber Security Agency has conducted a public consultation on its proposed Guidelines on Securing AI Systems. Other concerns will continue to arise, at times in novel contexts. MDDI and IMDA seek agencies’ help in bringing these to our attention so that we can maintain a consistent regulatory baseline, which can then be clearly communicated to key stakeholders such as industry. We have coordinating platforms, such as the AI Governance Roundtable, to streamline such efforts and create feedback loops: we welcome agencies’ active participation in them.
Another big piece of the AI governance puzzle will be our efforts to push the boundaries in science and research on safety in the AI and digital space, across industry and national research institutes. We are building up our Centre for Advanced Technologies in Online Safety (CATOS), a research institute focusing on misinformation, online hate, and discrimination, including those generated by AI.1 We are also strengthening the AI Singapore platform2 to advance our efforts to strive for Responsible AI.
Meanwhile, Singapore has proactively established our own state-backed AI Safety Institute (AISI), contributing to ongoing global discussions and efforts in AI safety. In May 2024, we designated the Nanyang Technological University’s Digital Trust Centre (DTC) as our national AISI, supported by MDDI and IMDA.3 Today, it drives research and innovation in trust technologies and addresses gaps in the global science of AI safety, leveraging Singapore’s work in AI evaluation and testing.
Working with Global Partners
In the borderless digital realm, AI is an “issue without a passport”. Its impact cannot be easily contained within any single country; developments in other countries can impact Singapore. While Singapore continues to sharpen our domestic toolkits to ensure that AI governance in Singapore remains effective and up-to-date, increasing our resilience will require us to continue working with other countries to develop interoperable norms and rules that steer AI to be a force for good.
Singapore works closely with a range of key bilateral partners to align on AI governance, through substantive initiatives and technical cooperation. These allow us to start small and move quickly with like-minded partners as pathfinders for broad-based multilateral cooperation. We are also working with key jurisdictions to shape norms: with the US, we have announced collaborations in AI, including under the US-Singapore Critical and Emerging Technology Dialogue. Both countries are deepening information-sharing and consultations on international AI security, safety, trust, and standards development, while making rapid progress at the leading edge of AI. We are also stepping up cooperation with China, such as through the inaugural Singapore-China Digital Policy Dialogue, to enhance our mutual understanding of approaches to AI governance.
Beyond bilateral ties, Singapore participates in many multilateral and multi-stakeholder platforms that seek to achieve an inclusive, practical, and rules-based global environment for AI. One example is the United Nations (UN) High-Level Advisory Body (HLAB) on AI, a group of experts charged in 2023 with undertaking analysis and providing recommendations for international AI governance, which have fed into discussions on the UN's recently adopted Global Digital Compact. Singapore is represented on the UN HLAB and has even hosted a UN HLAB meeting: the only one held outside UN headquarters. We are also active in other international initiatives that amplify our voice in deliberations on AI-related matters. These include the G7 Hiroshima Process, the AI Safety Summit series and its State of the Science Report, the OECD AI Principles, the Global Partnership on AI (GPAI), and the World Economic Forum’s AI Governance Alliance.
Since AI will affect and could potentially disrupt all of humanity, the global conversation on its trajectory must be inclusive. It is important to broaden the conversation to take in views from all countries and stakeholders, and to build up the capacity for meaningful efforts to address AI’s impact. Singapore has led by example in gathering countries to advocate for inclusive approaches to AI governance. For instance, Singapore led the development of the ASEAN Guide on AI Governance and Ethics. This sends an important signal to the global community of AI developers, creators, and policymakers to keep the needs and expectations of Southeast Asian countries in mind as they design products and services or develop rules that will inevitably impact our people. Singapore also convenes the Forum of Small States (FOSS) at the UN, a grouping of 108 small states, which in 2022 introduced a Digital pillar to address issues such as AI. Through this initiative, we launched the world's first AI Playbook for Small States, in collaboration with Rwanda. The Playbook features best practices and strategies to help small states harness the potential of AI while addressing challenges related to governance, resources, and societal impact.
Singapore is determined to play our part as a responsible player in the international community. We will continue to participate constructively at international fora on AI and build thought leadership on emerging issues. However, AI is a burgeoning field and MDDI and IMDA cannot be everywhere at once. We need every public officer to be an ambassador for our efforts when engaging with counterparts overseas, in order to strengthen our network of partnerships and help Singapore achieve better outcomes for AI.
Conclusion
The state of play for AI governance around the world continues to evolve rapidly. In Singapore, we are tackling the potential harms of AI through a robust AI governance toolkit, ranging from frameworks to laws to partnerships. As no country can say with certainty what risks lie ahead, we are also committed to continuously tweaking our tools in an agile manner, in tandem with global developments. We believe that this approach best safeguards our interests in the AI domain, while propelling our AI ambitions to the next level.
This governance strategy calls for a collective effort, with everyone doing our part. We seek our fellow public officers’ support: to keep abreast of Singapore’s evolving national approach towards AI governance, suggest improvements to our governance frameworks, experiment with our toolkits, think comprehensively about AI’s impact on each agency’s respective sector, contribute to Whole-of-Government discourse on AI governance, and win friends and partners for Singapore on AI. By learning from each other and staying at the cutting edge, we can collectively build a better AI-enabled future for Singapore and Singaporeans, where AI is steered towards the Public Good.
NOTES
1. https://www.catos.sg/
2. https://aisingapore.org/
3. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/factsheets/2024-digital-trust-centre