AI in the Public Service: Here for Good
ETHOS Issue 27, Forthcoming
Breakthroughs in the AI Revolution
Singapore has been keeping abreast of Artificial Intelligence (AI) developments for some time. In the early 2010s, we observed a number of breakthroughs in computer vision, neural networks, natural language processing and other emerging technologies, in which machine algorithms sought to emulate how the human brain recognises and processes information. These advances prompted us to unveil our first National AI Strategy in 2019. Our focus then was on laying the foundations of our ecosystem, in terms of research, talent and infrastructure, and on catalysing innovation through specific and specialised use cases in identified sectors.
However, the advent of the transformers and large language models (LLMs) underlying ChatGPT and other forms of generative AI marked a paradigm shift in what AI means for us. AI is now more generalised, accessible, and applicable, offering significant capabilities without requiring as much specialist knowledge. This opened up opportunities for us to reap new gains, as well as to address problems we could not realistically solve before, but we needed to set new ambitions and create the broad-based conditions to seize these opportunities. This is what prompted our second National AI Strategy (NAIS 2.0).
For the public good, we want AI to help unlock and drive transformative impact in areas where there is significant potential for breakthroughs, such as cancer research, materials science or climate change. But we also want to raise the level of generalised adoption. For the user base in the public sector, we want to learn how best to use this new tool in ways that allow us not only to do things better, but to do better things.
This is not to suggest that AI is always the best solution: it is one of many tools in the digital toolkit. Sometimes, simpler computational methods will suffice. That said, AI represents new, untapped potential for the Public Service to enhance our daily work and deliver better outcomes that ultimately benefit Singapore and Singaporeans.
Leveraging AI in the Public Service
We want to empower our fellow public officers to harness AI creatively and effectively. How do we achieve this vision?
To promote general adoption, we made available AI tools, such as Pair,1 SmartCompose,2 and AIBots.3 They are useful to a wide range of public officers for many general tasks. Other common tools of this nature may include chatbots to support customer-facing and service delivery needs, translation, summarisation, and so on. Much of what public officers do involves words and language, which is an area that LLM-based AI technology can now help with.
Beyond improving the productivity of the Public Service, the real value lies in AI’s broader ability to transform our business and operating models to deliver greater impact. In driving adoption, we want to encourage public officers to experiment with different approaches to figure out where we can create new value by doing things differently, rather than settle for incremental value from doing things the same old way with new tools.
For example, we have seen how AI and automation have transformed language translation, software engineering, identity verification and border clearance. This is just the beginning and much more is possible in many other domains.
Ultimately, beyond technological readiness, we need to re-imagine business and operating models if we are to truly realise the gains—this means being prepared to question and change the mental models behind them, including fundamental assumptions about how things are done. The real challenge is therefore how we manage change, at both the individual and institutional levels.
Encouraging Innovation while Managing Risks
Some worry that AI, being dependent on vast quantities of data, may carry inherent risks to privacy and security that will hinder its adoption in government work, where sensitive information may be involved.
This concern is real, but we can tier the risks and manage them accordingly. Most of our tasks are not high risk: indeed, we have just reviewed our Government Instruction Manual 8 (IM8) rules to significantly reduce the IT compliance burden and increase the space to innovate solutions for lower-risk applications, which are the majority. Basic internet safety measures still apply: being mindful about data protection, basic cybersecurity, and redundancy in case of service disruption.
We are similarly making moves to ensure that as many public agencies and officers as possible can benefit from AI tools. We want everyone to be connected and using the different online tools they need, within a network that ensures sufficient trust and security. Sensitive content will be separated and worked on in a different environment. This network separation will take place over the next one to two years.
Of course, we do have higher-risk, critical services, for which we have to take a more conservative, calculated approach and put in place more robust, risk-appropriate measures. But even in these spaces, we are working on the safe and appropriate use of AI to reap its benefits. After all, those of our colleagues who work with more sensitive information every day do not want to be stuck in the Stone Age—they, too, want and need the more powerful tools that are available. These can be configured differently to balance the benefits with potential risks. It is not all or nothing.
There is also the grey category in between: here is where leaders in agencies must exercise judgement, because no one else can own the risk calculations or the potential impact of any given change. We want to equip leaders and practitioners with capability and access to expertise, so that ministries and agencies are confident enough to assess and make the trade-offs themselves.
If an officer or agency does not feel competent to make a decision or handle a tool, they are more likely to be risk-averse. And if they are not equipped to make sound risk assessments and manage risks, that is a risk in itself. This is why we emphasise the need to level up competence across the board.
Doing AI Together
We strive to strike a good balance: providing central support that is effective and enabling, while allowing the energy, ingenuity, and deep understanding of domain owners and agency leads in the public sector to flourish.
Nurturing our Government AI community is one way we are bringing agencies and enablers together to offer one-stop support. The community meets regularly on “AI Wednesdays”.4 We want the many experiments and use cases already underway to proliferate and be shared. We want excitement among practitioners to grow, and for them to push themselves after seeing what is possible among their peers. We want them to think: “If they can do it, so can I!”
We want them to know whom to call to resolve technical issues, such as securing computing resources and expertise, accessing parts of the tech stack they do not have, or simply clarifying the rules. We want to bring all these together in a way that is relevant to helping them solve problems in their field.
Like any technology, AI should not be a hammer in search of a nail. You must begin with a problem, and then design solutions to address that problem. What we will do is to ensure the tech stack (including but not only AI) is available, so that expertise, infrastructure, computing resources and so on are abstracted away, and agencies can focus on solving their problem well. An enabling environment and good policies remain central, and we adapt these as we observe how things are working out in practice.
Ensuring AI is a Force for Good
At its core, our investment in AI is to advance and ensure the public good. This has both progressive as well as defensive aspects.
On the progressive side, we can use AI to solve big, even wicked, problems that matter to us. Imagine how we might optimise our transport system and energy consumption, dramatically improve our healthcare outcomes, help the disadvantaged in society and serve citizens in ways that were never possible before. Many around the world are working on such problems today.
But these are also problems that, left wholly to commercial interests, could result in market failure. So there is a defensive role for governments, both nationally and internationally: to pull in the right expertise and ensure these problems are resolved in ways that serve the public good.
We will leave some problems for businesses to tackle, and we will partner them on many, but we also want to address any potential harms that may emerge. As much as we want to innovate, we need to assure people that when we apply this powerful new technology, it is subject to good governance frameworks, with good testing methodologies, and where needed, complies with regulations to protect people from its worst effects.
This is why we held the Singapore Conference on Artificial Intelligence for the Global Good (SCAI)5 in December 2023. We wanted to convene an international community of experts with the same convictions to harness AI for the public good and keep it a force for good.
Given that this technology is largely being built and deployed from outside Singapore, there is only so much our domestic laws can do. This is why a substantial part of our work is to plug into the international body of work around AI rules and norms, as early and as substantially as possible. And even within Singapore, many of the specific domain applications can only be effectively addressed and governed by the leads and agencies with the relevant sectoral and domain knowledge, each of whom has international counterparts and platforms to plug into. The governance of AI is therefore a whole-of-government effort that extends into the international dimension. Our Whole of Government AI Governance Roundtable meets regularly to share developments, collaborate, and match needs with help and expertise.
What we are doing from the centre to support AI governance is to establish frameworks and share best practices, such as for data security, and to make available helpful common tools for preserving privacy and security, for testing and benchmarking, and for detecting biases and abuses. We are also studying higher-impact uses of AI, such as deepfakes, to assess how best to manage them, not only with technological tools but also with appropriate legislation and regulation.
AI is but one tool, and a foretaste of things to come in the larger digital and technological space. The fundamental shift in our NAIS 2.0, and in our Smart Nation thinking, is that this is no longer just about reaping digitalisation opportunities—we are already in a digital world. The question is: how do we, as a society, live with the digital? How can we make digitalisation benefit us and help us grow, to become a trusted tool that allows us to preserve our cohesion and values as one people? How can we use technology as a force for good—that takes us forward and brings people together?
The Government must of course be proactive here, but there is no ready playbook. Leading nations and governments are learning by doing. If we are to succeed in this fast-moving, dynamic and often uncertain space, the Public Service will need to operate in a more agile working culture. We must cultivate experimental and pioneering mindsets—be prepared to start small, fail so that we discover, learn and iterate, and repeatedly build and progress this way. Most of us are not used to working this way, but this is a space where we often cannot define solutions upfront. We will only know and progress if we try. If we can do this at scale, the potential is enormous.
At its essence, the task ahead is to learn. We are all still learning and must learn relentlessly.
NOTES
1. https://www.open.gov.sg/products/pair/
2. SmartCompose is an AI writing assistant that helps public officers communicate with the public in a faster and more empathetic manner.
3. AIBots is a platform where agencies can create customised Generative AI chatbots.
4. AI Wednesdays is a dynamic series of community meetups dedicated to exploring the latest advancements, trends and applications in artificial intelligence. Organised for the whole of government, these sessions aim to foster collaboration, knowledge sharing and innovation.
5. https://www.scai.gov.sg/