Strategising AI at the National Level
ETHOS Issue 27, Forthcoming
Introduction
The transformative potential of artificial intelligence (AI) has prompted a growing number of countries vying to become de facto leaders in this emerging technology. In Asia, China has signalled its ambition to become ‘the world leader in AI’ by 2030,1 while the United Arab Emirates seeks to establish itself as an ‘AI destination’ by 2031.2 In Europe, Germany has ambitions for ‘AI made in Germany’ to be a mark of quality,3 while the Nordic countries are striving to lead in the field of AI ethics.4 Singapore's concerted investment in human capital, digital infrastructure, innovation, and AI governance has positioned it at the top of the IMF’s ‘AI Preparedness Index’.5
Desire to participate in the AI revolution is not limited to high-income countries. Lower and middle-income countries are increasingly looking to bridge the digital divide and reap the benefits that AI promises. To date, more than 60 countries have launched their respective national AI strategies, with over 1,000 AI policy initiatives being curated by the OECD.6
Policymakers are coming to grips with AI’s revolutionary possibilities—from its versatility as a general-purpose technology to its potential to disrupt existing norms and forge new ‘operational logics’ guiding our societies. The challenge is to reimagine the state by steering AI development and adoption along multiple, concurrent dimensions vital to both present and future governance contexts:
- In the socio-economic landscape, AI will drive productivity and accelerate critical innovations in the fields of education, healthcare, and public services. At the same time, there are concerns that such change will come at the cost of increased job insecurity for some, as well as a growing digital divide.
- AI's socio-political impact will be felt most keenly through its ability to transform decision-making processes across all levels of society. Two areas are of critical concern: the integrity and security of our information systems, and the biases inherent in AI-driven decisions.
- Approaches to governance and regulation will need to be multidisciplinary, to balance the vast potential of AI for public good with its inherent risks.
AI's Socio-Economic Dimension
National AI strategies and policies commonly outline how countries plan to invest in AI to leverage their comparative advantages, with some prioritising industry applications, such as in logistics and transportation, energy, health, and agriculture.7
Alongside vertical investments, there is also growing recognition of AI as a key horizontal enabler of productivity and competitiveness. For instance, comparing Singapore’s first National AI Strategy in 2019 with its second iteration in 2023 reveals a marked shift, from approaching AI with a specialist sectoral focus to viewing it as a broader enabler to be deployed across all sectors.8 Increasingly, AI strategies, alongside that of digital and data strategies, are underpinning the new industrial approaches of the 21st century.9
Across the globe, AI is fostering new forms of economic activity, reshaping the labour market, and redefining market competition. While this is promising for innovation and productivity, it also raises concerns regarding job displacement and worsening economic inequality, as seen in previous waves of technological transformation.10
Nevertheless, AI also has the potential to boost the productivity of lower-skilled, lower-expertise workers. A study of customer-support agents found that when call-centre operators used generative AI assistants in their work, they became 14 per cent more productive on average. The lowest-skilled agents saw an even larger increase of 34 per cent, as they moved up the learning curve faster with generative AI assistance.11 While insights from these recent studies have yet to make their way into formal strategies, countries like Singapore are already adopting and scaling AI across the public sector to assist with writing, research, coding and transcription tasks.12,13
Many national AI strategies are placing a strong emphasis on empowering individuals through re-training and up-skilling programmes, ensuring that those affected by AI transitions can pivot to new roles. Governments must act swiftly to harness the potential of AI as a social-levelling tool to narrow the digital divide,14 such as by improving access to personalised learning, making high-quality educational experiences accessible to people in remote or underserved areas, breaking down language barriers by harnessing multilingual capabilities,15 identifying and clarifying workforce gaps and skills needs,16 and improving pathways to upskilling.
Governments must act swiftly to harness the potential of AI as a social-levelling tool to narrow the digital divide.
Alongside the use of AI to enable socio-economic development, there is also growing interest in using AI to tackle societal, developmental, and environmental challenges. The innovative use of AI promises to help advance the Sustainable Development Goals:17 from using cognitive digital technologies to improve diagnostic capabilities in health, to improving financial inclusion for women, to enhancing disaster response with more accurate forecasting and real-time data analysis, among many other applications.
The adoption of AI in the public sector to improve service delivery for citizens, deliver greater public value and promote societal inclusion is high on the agenda of some governments. For example:
- Singapore’s National AI Strategy 2.0 (2023) aims to improve public service productivity with new value propositions for citizens
- Canada’s AI Strategy (2017) ambition is to advance equity, diversity and inclusion in the Canadian AI ecosystem and globally, and to ensure that AI is developed and deployed to the benefit of all citizens of the world
- South Korea’s AI Strategy (2019) seeks to improve the happiness and quality of life of its people, and to respond proactively to social changes, including in job markets
- The United Kingdom’s National AI Strategy (2021) wants to ensure that AI benefits all sectors and regions, and to establish the public sector as an exemplar for AI procurement and ethics, in order to deliver greater public value for money
Nevertheless, AI use has yet to become mainstream in many governments. A good first step would be to equip civil servants with the right tools and environment for responsible experimentation, helping to build the institutional knowledge and practical understanding needed to inform AI governance. Consider the example of ChatGPT: in its early days, countries like Italy banned it outright, and schools in several countries prohibited its use. The city of Boston chose to embrace it instead, issuing guidelines that encouraged staff to experiment with generative AI to understand its potential, while emphasising privacy, security, and public purpose. By shifting the focus from merely governing the technology to using the technology for governance, Boston tempered the initial alarmism and highlighted AI’s potential for social good.18
AI's Socio-Political Dimension
AI is reshaping decision-making processes across different verticals in society and is thus increasingly becoming a focal point for political and security considerations. Two main concerns arise: the first relates to the integrity of the information ecosystem, and the second involves the biases that accompany AI-enabled decision-making.
With the capacity to reveal hidden variables and suggest potential correlations or causal relationships, AI-driven analytics can aid fraud detection, optimise resource allocation, and support more informed decision-making across different government functions. To support this, the Canadian government, for instance, has implemented a ‘Directive on Automated Decision-Making’19 and created a ‘Pre-qualified AI Vendor Procurement Program’20 to improve the accountability of AI tools utilised by government agencies.
However, the same AI capabilities that streamline information processing could be misused to corrupt the decision-making process, particularly by manipulating public opinion or spreading disinformation. While disinformation is not a new phenomenon, the application of AI has greatly increased the scale and speed of its spread, surpassing the pace at which policymakers can effectively cope. AI-generated fake news and deepfakes are becoming increasingly sophisticated, making it harder for individuals to distinguish authentic from fabricated content. The rapid creation and dissemination of AI-generated disinformation exploits pre-existing societal fault-lines, each of which presents bad actors with an opportunity to accentuate differences and minimise similarities, thereby deepening divisions and eroding trust.21
While AI can indeed enhance decision-making, such capabilities also carry significant risk of biases stemming from skewed data sets and algorithms. Consider the example of COMPAS, an AI tool used to inform decisions in the American criminal justice system, which has been shown to exhibit racial biases, leading to claims that those it assesses have been ‘set up for a lifetime of biased assessment’.22 Biased AI might not be immediately apparent, but its impacts are profound: it can skew how both problems and solutions are perceived, leading to suboptimal outcomes. For policymakers, biases within AI could mean wrongly perceiving the needs and concerns of the populace, leading to policies that miss their mark, thereby eroding trust in public institutions and policy processes.
Biases within AI could mean wrongly perceiving the needs and concerns of the populace, leading to policies that miss their mark.
Faced with these challenges, policymakers must not only respond to present concerns but also anticipate the future implications of integrating AI more deeply within the socio-political sphere. National AI strategies should be conceptualised as dynamic frameworks that evolve with the operational milieu rather than remain static in the context in which they were originally formulated. As information ecosystems evolve, so too must the strategies and policy frameworks that support them.
National AI strategies should be conceptualised as dynamic frameworks that evolve with the operational milieu rather than remain static.
AI Governance and Regulation
As governments race to harness AI’s power, they must also establish clear guidelines to manage the dual-use nature of AI—both its vast potential for public good and its capacity to amplify negative outcomes—to ensure that its capabilities are directed towards societal benefit. Good AI governance should not be perceived as a handbrake on the path to progress. Instead, it is the opposite: a steering wheel that enables us to drive digital transformation at speed and scale, while supporting the development of appropriate guardrails.
Developing such oversight mechanisms can be tricky. Policymakers are not wrong to see the need to balance innovation with regulation, but the question is not whether to strike this balance, so much as how. Approaches to AI oversight commonly reference the need for adaptable frameworks that keep pace with the rapid development of AI (e.g. the UK23 and Singapore24), to embed AI models with certain ‘values’ or ethical considerations (e.g. the United States25 and the Netherlands26), or to ensure that AI-based decision-making is ‘transparent’ and auditable (e.g. Spain27 and Canada28).
As any experienced policymaker can attest, the distinction between acting as a gatekeeper and serving as a facilitator is often a subtle yet profoundly significant one. Broader regulations can encompass a wide range of issues but may lack the necessary precision, while narrower regulations can address specific problems more effectively but risk becoming outdated quickly or missing broader implications. Most AI regulation at present tends to be sector-specific and often voluntary in nature, leading to questions about its actual efficacy.
The current impulse of AI governance and regulation involves commitments to high-level ‘motherhood’ principles: virtually nobody will disagree with the need to ensure that AI is ‘fair’, or that it must respect ‘human privacy and dignity’. While such principles may readily command consensus, efforts to embed them into AI development beg the question: whose values? Most people will agree on the sanctity of human life (a high-level principle), yet such consensus does not resolve implementation-level dilemmas such as: ‘Should a self-driving car kill the baby or the grandma?’29 Modern societies, even culturally homogeneous ones, are rarely of a singular mind.
Although the challenges posed by AI are complex, they are not insurmountable, and certainly do not leave policymakers short of practical solutions. For instance, although it does not constitute an AI strategy itself, the European Union's General Data Protection Regulation (GDPR) sets a precedent for data privacy and protection and offers a foundation for ensuring that AI systems operate transparently and ethically.
The distinction between acting as a gatekeeper and serving as a facilitator is often a subtle yet profoundly significant one.
At its core, the AI governance and regulation challenge is a multidisciplinary one, and any proposed approach should reflect this. A robust starting point for AI governance and regulation harnesses the collective expertise of government, academia, and industry, in what is sometimes called a ‘triple-helix partnership’.30 Policymakers provide regulatory insights, scholars from both the natural and social sciences offer technical knowledge and societal perspectives, and industry leaders contribute innovation and practical experience. To be clear, this collaborative synergy is not a panacea for the challenges associated with AI governance and regulation. It does, however, help to ensure that policies are more comprehensive, balanced, and equipped to address the multifaceted impacts of AI across all sectors of society.
The AI governance and regulation challenge is a multidisciplinary one, and any proposed approach should reflect this.
We must acknowledge that human decision-making also has its own set of limitations. Cognitive bias, opacity, and inconsistency plague decision-making at the individual level, while challenges such as groupthink exist at the group level. In many cases, the yardstick for evaluating an AI system should move beyond a ‘good vs. bad’ dichotomy, to consider whether it improves existing decision-making processes along relevant dimensions such as efficiency, transparency, and fairness. The integration of AI should be viewed as an opportunity to enhance and refine decision-making, where technology is leveraged to fill the gaps left by human limitations, while still maintaining ethical and equitable standards aligned with the values of those impacted by AI.
Pathways Toward a Global Consensus
AI’s effects are not confined by national borders. AI services deployed by multinational companies are already global in reach. International collaboration will therefore inform a large part of national AI strategy setting. Countries can benefit greatly from exchanging information and sharing best practices with one another, which will in turn enable them to better align with international norms and standards. This could increase the interoperability of national AI ecosystems, affording governments better risk management options.
However, the current landscape is fragmented, and global AI governance is often clustered with its thematic cousin—global digital governance—despite being distinct from it.31 To date, there are more than 10 global AI governance initiatives and over 25 sets of AI principles in circulation worldwide. The proliferation of AI governance principles could make it more challenging for resource-poor countries to meaningfully participate in AI development and adoption, deepening the global digital divide.32
To be clear, it is highly unlikely that a unified global model of AI governance will be achieved anytime soon. That said, it may be possible to achieve some degree of harmonisation for global AI governance—a new form of ‘integrated internet internationalism’.33 A multipartite partnership approach could foster greater coherence in AI policies among countries worldwide and also amplify the collective global capacity to manage AI’s far-reaching implications.34 The private sector could leverage its technical expertise to help governments worldwide collectively navigate and address the ethical, operational, and strategic challenges posed by AI, strengthening global AI governance efforts.35
The yardstick for evaluating an AI system should move beyond a ‘good vs. bad’ dichotomy, to consider whether it improves existing decision-making processes.
Conclusion
AI is a multi-faceted domain, yet this need not deter policymakers. Effective interventions in each of the three dimensions—socio-economic impact, socio-political considerations, and governance and regulation—can help ensure AI delivers on its promise of innovation and societal benefit. The challenge now lies in transforming the strategic ingredients into a cohesive and successful AI ecosystem—proving the value of AI by realising its implementation in thoughtful and inclusive ways.
NOTES
- https://www.cnbc.com/2017/07/21/china-ai-world-leader-by-2030.html#:~:text=China%20laid%20out%20plans%20to,the%20military%20to%20smart%20cities.
- https://ai.gov.ae/wp-content/uploads/2021/07/UAE-National-Strategy-for-Artificial-Intelligence-2031.pdf
- https://mission-ki.de/en/
- https://www.nordicinnovation.org/news/new-expert-group-will-make-nordics-leading-region-ethical-ai
- https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-letsmake-sure-it-benefits-humanity
- https://oecd.ai/en/dashboards/overview
- https://goingdigital.oecd.org/data/notes/No14_ToolkitNote_AIStrategies.pdf
- https://www.institute.global/insights/tech-and-digitalisation/from-strategy-to-synergy-what-can-we-learn-from-singapores-ai-journey#footnote_list_item_9
- https://www.institute.global/insights/economic-prosperity/accelerating-the-future-industrialstrategy-in-the-era-of-ai
- https://www.imf.org/en/Publications/fandd/issues/2023/12/Macroeconomics-of-artificialintelligence-Brynjolfsson-Unger
- https://www.nber.org/papers/w31161
- Pair Chatbot is an experimental AI bot that will allow the secure and efficient use of large language models as a writing assistant within the government space.
- Transcribe is a secured Speech-to-Text (STT) platform with auto-transcription technologies that can be used to produce transcripts of interviews, speeches and meeting minutes, and was developed for the Singapore government.
- https://www.ft.com/content/8f7b9b52-9243-4c34-af80-223522273ab4
- https://www.khanmigo.ai/
- https://www.weforum.org/agenda/2023/05/ai-skills-gaps-future-jobs/
- https://repository.unescap.org/bitstream/handle/20.500.12870/6810/ESCAP-2024-FS-Seizing-Opportunity.pdf?sequence=3&isAllowed=y
- https://www.wired.com/story/boston-generative-ai-policy/
- https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592
- https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/list-interested-artificial-intelligence-ai-suppliers.html
- Several countries have implemented counter-disinformation initiatives to combat the spread of online disinformation in areas of public interest, such as the UK government’s National Security Online Information Team (NSOIT) unit.
- https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
- https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response
- https://file.go.gov.sg/nais2023.pdf
- https://asiatimes.com/2023/11/us-stresses-ethical-ai-use-in-its-latest-strategy
- https://data-en-maatschappij.ai/en/policy-monitor/nederland-impact-assessment-mensenrechtenen-algoritmes
- https://www.dataguidance.com/opinion/spain-agency-supervision-ai-overview
- https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592
- https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/
- For instance, Singapore has adopted a model, with the Government as ecosystem enabler, that convenes the research community, industry and public agencies to facilitate research collaborations, quickly commercialise fundamental AI research, and rapidly deploy AI solutions.
- https://eprints.lse.ac.uk/119637/1/Chan_multilateralism_in_the_digital_age_published.pdf
- https://www.institute.global/insights/tech-and-digitalisation/state-of-compute-access-how-to-bridge-the-new-digital-divide
- https://www.orfonline.org/wp-content/uploads/2023/08/TF7_InternetInternationalism.pdf
- https://www.smehorizon.com/singapore-sets-her-sights-on-being-a-global-hub-for-ai-solutions/
- https://eprints.lse.ac.uk/121603/1/Alden_et_al_Global_Digital_Governance_policy_brief.pdf