Three Ways Advancements in Artificial Intelligence will Change Learning & Development
ETHOS Issue 23, October 2021
Background: AI Is Changing Every Aspect of the Public Service
Singapore’s Digital Government Blueprint,1 recently revised in 2020, has set out a strategic plan to use data, connectivity, and computing to improve how every agency operates, delivers services, and engages stakeholders. Artificial Intelligence (AI) plays a big role in this plan, holding the promise of making public services seamless and integrated for our citizens.
AI also has the potential to transform Learning and Development (L&D), the field for which the Civil Service College (CSC) is the lead public sector agency, through new affordances brought about by advances in Machine Learning and Deep Learning. Singapore’s general- and higher-education sectors have outlined plans to personalise learning as part of the national push to use AI under the National AI Strategy, launched in 2019.2 While the education sector is on track to meet its goals, L&D functions in government—and adult learning in general—could benefit from AI-related concepts such as adaptive learning. With the COVID-19 pandemic accelerating the pivot to digital means of working and learning, it is timely to reimagine what learning in the Public Service could look like.
We believe AI-driven shifts in learning can happen in three ways, following three commonly known conceptual models of AI in education: from Creating to Curating in the Domain model, from Seeing to Knowing in the Learner model, and from Prescribing to Recommending in the Pedagogical model.
From Creating to Curating—AI and the Domain Model
AI can be used to describe particular fields of knowledge in the form of a domain model, which typically consists of a web of ‘knowledge points’ that are related to each other in some way. Domain models might use mathematical concepts like combinatorics and stochastic processes to define and track these ‘knowledge points’, which are the smallest possible conceptual blocks of values, skills or knowledge. Traditionally, these are defined by experts and skilled domain professionals, but a 2020 Institute for Adult Learning report suggests that machine learning techniques can collate the appropriate materials and refine the relationships between knowledge points, based on how cohorts of learners perform on assessments.3 This means L&D experts could in future define the curriculum for a field in the form of knowledge points, and let AI determine the strength of the relationships between its concepts. They could also let AI pick out the materials from a library of resources that best articulate each concept. The new role for human experts would then be to monitor the domain model and curate new concepts to be added.
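To make the idea concrete, the Python sketch below represents a domain model as a weighted graph of knowledge points, with edge weights nudged towards what cohort assessment data suggests. The class, names, and update rule are illustrative assumptions for this article, not a description of any actual system.

```python
# A minimal sketch of a domain model as a weighted graph of 'knowledge
# points'. All names and the update rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class DomainModel:
    # Edge weights: strength of the relationship between two knowledge points.
    edges: dict[tuple[str, str], float] = field(default_factory=dict)

    def update_edge(self, prereq: str, concept: str, observed: float,
                    lr: float = 0.1) -> None:
        """Nudge a relationship strength towards what cohort assessment data
        suggests (e.g., how often mastering `prereq` predicts success on
        `concept`)."""
        w = self.edges.get((prereq, concept), 0.5)
        self.edges[(prereq, concept)] = w + lr * (observed - w)


model = DomainModel()
# Experts seed the curriculum as knowledge points and tentative links...
model.edges[("supplier risk", "contract management")] = 0.5
# ...and cohort performance data refines the link strengths over time.
model.update_edge("supplier risk", "contract management", observed=0.9)
print(model.edges)
```

In this framing, the expert’s role shifts from specifying every link to seeding the graph and reviewing what the data-driven updates reveal.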
In L&D, the work of establishing the structure of a curriculum usually entails selecting an appropriate curriculum model, determining appropriate standards for pre- or post-requisites, and developing the content. Most currently available adaptive learning management systems do a good job of teaching well-defined theoretical knowledge and concepts, but are less effective with practical curriculum tasks, such as honing the ability to reason logically or master a skill. So, while there is great potential in using AI to codify well-defined knowledge, it is best used for less complex skills on the lower end of Bloom’s taxonomy (a ranking of task complexity commonly used in education). A good example of an AI-calibrated domain model might be a job role in procurement contract management, where in-class MCQ quizzes can accurately measure a learner’s competence in identifying supplier risk management strategies.
How might AI refine a domain model? The model must first contain a set of matrices that map assessment items of varying difficulty to behavioural indicators. These matrices, called Q-matrices, can then be refined using data-driven methods that compare the responses the model predicts with those learners actually give. In curriculum areas where reasoning, especially causal reasoning, matters, recent developments in AI show some promise in using qualitative reasoning techniques to refine domain models, even though such logical reasoning skills are usually ill-defined. Put simply, AI techniques can help curriculum designers intimately understand the relationships between skills in a competency map, thereby helping them evaluate and review the curriculum better.
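For readers curious what Q-matrix refinement might look like, here is a minimal sketch. It assumes a simple conjunctive model (a learner answers an item correctly only if they possess every skill the item requires) and a greedy search that keeps any single-entry flip improving agreement with observed responses. The simulated data and search strategy are illustrative, not the refinement methods used in the cited work.

```python
# Illustrative Q-matrix refinement under a simple conjunctive model.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_skills, n_learners = 4, 3, 50

# Hidden 'true' item-to-skill map, used here only to simulate responses.
true_Q = rng.integers(0, 2, size=(n_items, n_skills))
mastery = rng.integers(0, 2, size=(n_learners, n_skills))  # learner x skill
observed = (mastery @ true_Q.T == true_Q.sum(axis=1)).astype(int)

# The expert's initial (possibly wrong) Q-matrix to be refined.
Q = rng.integers(0, 2, size=(n_items, n_skills))

def fit(q: np.ndarray) -> float:
    """Share of responses the conjunctive model predicts correctly."""
    predicted = (mastery @ q.T == q.sum(axis=1)).astype(int)
    return float((predicted == observed).mean())

# Greedy refinement: keep any single-entry flip that improves agreement.
best, improved = fit(Q), True
while improved:
    improved = False
    for i in range(n_items):
        for k in range(n_skills):
            Q[i, k] ^= 1                    # try flipping one entry
            if (new := fit(Q)) > best:
                best, improved = new, True
            else:
                Q[i, k] ^= 1                # revert the unhelpful flip

print(f"agreement with observed responses: {best:.2f}")
```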
Of late, the lowest-hanging fruit seems to be in using AI to populate a map with content and resources. Advances in natural language understanding have given rise to a number of employee engagement platforms such as Microsoft Viva Topics (see box story on Through the Looking-Glass), which have enabled organisations to distil a repository full of documents into a map of topics, without the need for a human analyst. This helps people look for pieces of information and understand how they are related. Given a schema of competency frameworks, such systems will be able to automate the tagging not just of articles and learning objects but also of existing organisational knowledge to the competency framework.
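The internals of Viva Topics are not public, so the sketch below only illustrates the general idea of tagging documents to a competency framework by text similarity, with TF-IDF standing in for modern natural language understanding. The competency names and documents are hypothetical.

```python
# A minimal sketch of auto-tagging documents to a competency framework
# via text similarity. TF-IDF is a stand-in; production systems would use
# far richer language models. All names and texts are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

competencies = {
    "supplier risk management": "identify assess mitigate supplier and vendor risk",
    "contract management": "draft negotiate administer procurement contracts",
}
documents = [
    "Guide to assessing vendor risk before awarding a tender",
    "Checklist for administering a procurement contract after award",
]

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(competencies.values()) + documents)
comp_vecs = matrix[: len(competencies)]
doc_vecs = matrix[len(competencies):]

# Tag each document with its most similar competency in the framework.
sims = cosine_similarity(doc_vecs, comp_vecs)
names = list(competencies)
for doc, row in zip(documents, sims):
    print(f"{doc!r} -> {names[row.argmax()]}")
```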
Through the Looking-Glass: How A Public Officer Might Use the LEARN app in 2025
Alice’s boss asks her to relook at and streamline the current procurement processes in her agency. On a Saturday afternoon, she happens to read an article in The New York Times on how robotic process automation (RPA) is making finance operations more efficient.
As she reads and highlights key parts of the article, the LEARN app plug-in in her browser prompts her: “This can already be done in your agency. Click to find out how to get started or read these other related articles.”
She proceeds to save these articles in her reading list in the LEARN app, where she can choose to make it ‘visible to others’ or ‘private’. Behind the interface, the app curates a short selection of:
1. Short Coursera content on getting started with RPA (seeing that this is a new area for her, though she has some prior knowledge of what it does).
2. Harvard Business Review articles on the web and on the LEARN app that other public officers have read on this topic (seeing that she is reading an article assessed to be of advanced difficulty).
3. Short reflections/questions posted on Workplace that highlight the realities and difficulties of working with RPA bots (posts made by officers of similar seniority are selected).
4. A list of recent documents on her department’s Microsoft SharePoint that are related to RPA, drawn from the Microsoft Viva Topics service hosted on the Government Commercial Cloud (GCC) (prioritising those in the finance domain).
5. A mood meter of current sentiment on RPA, derived from Twitter and other social media that day (seeing that the topic has reached substantial media saturation).
6. Other officers across the Public Service who have worked on a project that made use of RPA (prioritising those in the finance domain).
Alice gets excited and sends a message to her teammates on SG-Teams, starting with: “Hey guys, I think I know how to shorten the procurement process by using RPA to scrape GeBIZ…” The LEARN app plug-in on M365 suggests quotes from articles she has read that she could append to her message.
Another team in Agency X then uses Miro to generate a timeline for a similar project and starts writing a task to ‘explore the use of RPA’, which is tagged as a to-do item. The LEARN app plug-in in Miro starts prompting the team with a note: “Alice from Agency Y has recently implemented an RPA project in a similar situation. Would you like to connect with her?”
An L&D specialist in CSC is alerted that many officers of a certain Ministry family are regularly researching and trying out RPA techniques. He emails a group of officers, inviting them to share informally how they are using these tools in their work with other officers identified as currently exploring their use. When the group comes together, the L&D specialist facilitates the conversation, knowing the materials the participants have read and drawing on their past experiences to make the discussion rich and highly relatable.
- Can you identify the domain, learner and pedagogical models at work at the different touchpoints?
- What do you notice about the role of technology in each of these typical moments of work?
- What assumptions does this illustration make and what needs to happen for this to be a reality?
From Seeing to Knowing—AI and the Learner Model
AI has proven adept at specific ‘sensing’ tasks, like identifying a person’s facial expression, tone in text messages, pose, and even speech. However, the emotions it perceives may not reflect how a person actually feels. There is no convincing evidence that facial expressions reveal a person’s feelings; indeed, a 2020 Nature article argues that there is “little to no evidence that people can reliably infer someone else’s emotional state from a set of facial movements”.4
Nevertheless, skilled instructors and trainers draw on these and a range of other signals to read learners’ reactions and feelings accurately in a lesson. What is so unique about this human ability? Is there a more objective way to do it than gut sense?
Will we get to a point where technology can know the learner well, cognitively and socio-emotionally? Currently, the answer is no. But in the hands of a skilled trainer, computer vision technology is surprisingly good at assessing and identifying performance issues in training simulations and situational tests. For instance, in MINDEF’s Murai Urban Training Facility,5 training areas are extensively outfitted with cameras and sensors to collect implicit learner-produced data, enabling high-fidelity after-action review sessions that accurately identify soldiers’ performance gaps and refine team strategy and tradecraft. In bus driver training, the Land Transport Authority has employed Advanced Driver Assistance Systems to detect driver fatigue and attention by tracking eye movements and other telematics of driving performance.6 This data is used to incentivise good driving habits and for training purposes.
At present, the use cases of AI in L&D stop at diagnosis: AI neither predicts how trainers, instructors, or learners will act, nor prescribes how they should modify their actions. This might be about to change. Further into the future, DL techniques used in gait and pose estimation, for example, could enhance the accuracy of sensor systems. Combined with expert-informed tagging of the multimodal data already being collected, we could see a day when learner models accurately predict when learners need an expert’s assistance, even before learners themselves know it, and warn them, with a high degree of certainty, of the errors they are likely to make. In other words, tools used by instructors, trainers, or learners could become highly certain of a learner’s knowledge state.
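One classic way a learner model estimates a hidden knowledge state from observed responses is Bayesian Knowledge Tracing. The sketch below shows its update step with illustrative parameter values; the article does not prescribe this or any particular technique.

```python
# A minimal sketch of Bayesian Knowledge Tracing: estimating a hidden
# "knows the skill" probability from observed right/wrong answers.
# Parameter values are illustrative, not calibrated to any real data.
P_LEARN, P_SLIP, P_GUESS = 0.15, 0.10, 0.20

def update(p_know: float, correct: bool) -> float:
    """Posterior P(knows skill) after one observed response, then allow
    for learning during the practice opportunity."""
    if correct:
        evidence = p_know * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_know) * P_GUESS)
    else:
        evidence = p_know * P_SLIP
        posterior = evidence / (evidence + (1 - p_know) * (1 - P_GUESS))
    return posterior + (1 - posterior) * P_LEARN

p = 0.3  # prior estimate that the learner already knows the skill
for answer in [False, True, True, True]:
    p = update(p, answer)
    print(f"P(knows skill) = {p:.2f}")
```

As the estimate sharpens with each response, a system could flag the moment a learner’s predicted knowledge state drops below the level a task requires.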
From Prescribing to Recommending—AI and the Pedagogical Model
AI is unlikely to ever fully replace the instructor, trainer, or coach. With the ever-expanding collection of data in the domain model (containing values, skills, knowledge, and how they relate to each other) and the learner model (containing cognitive and socio-emotional states), an AI-enabled system can intervene in a timely, contextual manner relevant to each learner, something that would be difficult for a trainer or coach to do for every learner all the time. Humans, however, are still needed for the tasks they are better at: reading the affective cues of each learner and tailoring the experience to their needs with empathy. For example, a PSD career coach could use the personal interest, prior work experience and knowledge data in the LEARN app to provide tailored advice and ask more pointed questions that help an officer come to their own conclusions about their skillsets and options, coherently weaving these, with appropriate encouragement and tact, into a meaningful career trajectory and narrative.
One of the most promising applications of AI in L&D is the personalisation of individual learning paths through a highly informed, adaptive learning management system. Adaptivity can be thought of in two ways: first, macro-adaptivity, where learners are presented with the activities, knowledge, or even peer groups they are predicted to learn best from or with; and second, micro-adaptivity, where adaptive systems intervene when necessary, for instance when learners show signs that they might not be able to perform a required competency or skill, by providing short guidance on how to proceed. With a constant check on learners’ prior knowledge and their confidence in using it, adaptive systems could keep them on the most efficient path to full mastery of the content.
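The sketch below illustrates the two kinds of adaptivity side by side, reusing a mastery estimate like the one a learner model might produce. The activity bands, thresholds, and hint text are illustrative assumptions, not features of any existing system.

```python
# Illustrative macro- and micro-adaptivity over a mastery estimate.
from typing import Optional

# Macro: content bands sequenced by estimated readiness (illustrative).
ACTIVITIES = {
    "worked example": (0.0, 0.4),
    "guided practice": (0.4, 0.7),
    "independent case study": (0.7, 1.0),
}
HINT_THRESHOLD = 0.5  # Micro: intervene when success looks unlikely.

def next_activity(p_mastery: float) -> str:
    """Macro-adaptivity: pick the activity the learner is predicted to be
    ready to learn from."""
    for name, (lo, hi) in ACTIVITIES.items():
        if lo <= p_mastery < hi:
            return name
    return "independent case study"

def maybe_hint(p_success: float) -> Optional[str]:
    """Micro-adaptivity: offer short guidance when the learner looks
    likely to struggle with the current step."""
    if p_success < HINT_THRESHOLD:
        return "Hint: revisit the supplier risk checklist before this step."
    return None

print(next_activity(0.35))  # -> worked example
print(maybe_hint(0.42))     # -> a short, just-in-time hint
```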
This idea has in fact been used successfully in general and higher education around the world,7 though it often takes the form of adaptive testing rather than adaptive learning systems. Micro-adaptive systems are also known as Intelligent Tutoring Systems in the L&D literature.8 They are becoming more widespread as domain models in some content areas become much better codified (for example, in many primary and secondary school mathematics syllabuses, content and assessments have been made adaptive).
Further in the future, learning companions or assistants could provide accurate answers and direct a learner to relevant resources, in their moment of learning need. Such companions could offer work-relevant links to bridge theory and practice, be personable, and create conversations where a trainer or expert cannot. An early example of such companions is Clippit, the paperclip-shaped office assistant that first appeared in Microsoft Office 97 to help users make fuller use of Office features. Since then, chatbots such as Google’s LaMDA (see box story on What We Mean by AI) have become much more capable.
What We Mean by AI
What is AI?
AI, or Artificial Intelligence, is a general term for computer programmes that can sense, reason, adapt and act. Such programmes, and their use in real-world applications, have advanced rapidly in recent years.1
What made it popular in the past decade?
AI has been researched since the 1950s as a science of building intelligent programmes to solve problems (think autopilot on planes). Work on Machine Learning (ML) began in the 1980s, when statistical ‘learning’ algorithms started improving machine performance without being explicitly programmed (think recommended titles in Netflix). To the layperson, AI remained largely a sci-fi topic until the 2010s, when faster and cheaper computing power and an exponential increase in data availability made Deep Learning (DL) possible. Inspired by the workings of the human brain, DL uses algorithms to identify and classify patterns in large amounts of data. From such analysis, programmes can then draw conclusions or take relevant actions (such as in self-driving cars).
What AI can’t (and shouldn’t) do…for now
Prominent researcher Yoshua Bengio has said that AI still needs to be extended “to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information”.2 Largely founded on mathematical concepts like linear regression, statistics and game theory, AI is very good at specific repetitive tasks such as moving goods in a warehouse, and at augmenting humans in more complex tasks such as detecting and treating cancer. However, it lacks empathy and other affective aspects of being human. Hence, the FATE aspects (Fairness, Accountability, Transparency, Ethics) should be addressed when planning and implementing AI, whatever the use case.
Notes
- To better grasp the power and speed at which AI has developed, watch this May 2021 conversation about a learning interaction between a child and a Google dialogue model called LaMDA, and notice how human-like the programme is in the way it handles open-ended dialogue. See https://youtu.be/aUSSfo5nCdM?t=40.
- W. Knight, “One of the Fathers of AI Is Worried about Its Future”, MIT Technology Review, November 17, 2018, https://www.technologyreview.com/2018/11/17/66372/one-of-the-fathers-of-ai-is-worried-about-its-future/.
Epilogue: Some Realities
Much of the vision outlined above relies on large swathes of data, which take time to amass and prepare for use by ML/DL algorithms. There is also an assumption that systems are developed with adequate risk assessment, so that they address the ethical, FATE aspects of AI; most risks can be mitigated by following the guidelines governing AI-augmented decision-making in IMDA’s Model AI Governance Framework.9
CSC has started work on integrating and cleaning existing data so that it is ready for building a recommender engine. While the first iteration may be a simple, non-personalised filter for course recommendations, deployed on CSC’s public-officer-facing learning programme portal, future iterations may use user-item interaction data to build a range of functionalities. These include content-based filtering as data becomes cleaner and more standardised, model- or memory-based collaborative filtering as more users act on the recommendations, and deep learning-based models that personalise recommendations as more types of data about the public officer become available. With a robust recommendation algorithm, informed by the learner and domain models to be built in the future, a truly personalised experience can be delivered to every public officer.
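As an illustration of the memory-based collaborative filtering iteration mentioned above, the sketch below recommends courses by weighting other officers’ interactions by their similarity to the current user. The interaction matrix is made-up toy data, not CSC’s, and the similarity measure is one common choice among many.

```python
# A minimal sketch of memory-based collaborative filtering using cosine
# similarity between officers' course-interaction vectors. Toy data only.
import numpy as np

# Rows: officers; columns: courses; 1 = completed/clicked, 0 = none.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def recommend(user: int, k: int = 1) -> np.ndarray:
    """Return the top-k unseen courses, scored by similar users' activity."""
    norms = np.linalg.norm(interactions, axis=1)
    sims = interactions @ interactions[user] / (norms * norms[user] + 1e-9)
    sims[user] = 0.0                            # ignore self-similarity
    scores = sims @ interactions                # weight by similar users
    scores[interactions[user] > 0] = -np.inf    # drop courses already taken
    return np.argsort(scores)[::-1][:k]

print(recommend(user=0))  # officer 0 resembles officer 1 -> course 2
```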
With time, we will get there, either sooner (responsible and trusted access to user data, together with efficient AI techniques like transfer learning, could shorten the time-to-market) or later (for example, if pushback over user privacy is significant, or learner data requires great effort to clean up for use). Even though there is generally high trust in government services, recently reported surveys have found, for instance, that senior citizens remain less receptive to having AI interpret medical results.10
It is a leap to say that public officers will react the same way to learning with and from an AI-enabled tool, but for now, the challenge of overcoming such adjustment hurdles remains. To use such smart systems effectively, change management and professional development for L&D practitioners, learners and other stakeholders will need to be worked through.
NOTES
- GovTech Singapore, “Digital Government Blueprint”, accessed August 18, 2021, https://www.tech.gov.sg/digital-government-blueprint/.
- Smart Nation Singapore, “National AI Strategy: The Next Key Frontier of Singapore’s Smart Nation Journey”, accessed August 18, 2021, https://www.smartnation.gov.sg/why-Smart-Nation/NationalAIStrategy.
- H. Bound, S. C. Tan, M. Y. Kan, and X. F. Bi, “Charting the Future of Adult Learning Research Agenda in Singapore: A Consultative Paper by the Subgroup on Innovative Technologies for Adult Learning Research”, March 31, 2020, accessed August 18, 2021.
- D. Heaven, “Why Faces Don’t Always Tell the Truth About Feelings”, Nature 578 (2020): 502-504, accessed August 18, 2021, https://www.nature.com/articles/d41586-020-00507-5.
- MINDEF Singapore, “Fact Sheet: Murai Urban Live Firing Facility (MULFAC)”, August 14, 2014, accessed August 18, 2021, https://www.mindef.gov.sg/web/portal/mindef/news-and-events/latest-releases/article-detail/2014/august/2014aug14-News-Releases-00058.
- ST Engineering, “Advanced Driver Assistance Systems”, accessed August 18, 2021, https://www.stengg.com/en/smart-city/urban-solutions/smart-mobility-road/mobility-services/agil-drivesafeplus/.
- Cengage, “The Benefits of Adaptive Learning Technology”, April 30, 2021, accessed August 18, 2021, https://latam.cengage.com/the-benefits-of-adaptive-learning-technology/; Dorrit Billman and Evan Heit, “Observational Learning from Internal Feedback: A Simulation of an Adaptive Learning Method”, Cognitive Science 12 (1988): 587–625, accessed August 18, 2021; N. Sharma, I. Doherty, and C. Dong, “Adaptive Learning in Medical Education: The Final Piece of Technology Enhanced Learning?” Ulster Med J. 86, no. 3 (September 2017): 198–200, accessed August 18, 2021, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5849979/.
- Arthur C. Graesser, Xiangen Hu, Benjamin D. Nye, and Robert A. Sottilare, “Intelligent Tutoring Systems, Serious Games, and the Generalized Intelligent Framework for Tutoring (GIFT)”, in Using Games and Simulations for Teaching and Assessment, eds. Harold F. O'Neil, Eva L. Baker, and Ray S. Perez (New York: Routledge, 2016), accessed August 18, 2021, https://www.taylorfrancis.com/chapters/edit/10.4324/9781315817767-5/intelligent-tutoring-systems-serious-games-generalized-intelligent-framework-tutoring-gift-arthur-graesser-xiangen-hu-benjamin-nye-robert-sottilare.
- Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), Model Artificial Intelligence Governance Framework Second Edition (2020), accessed August 18, 2021, https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf.
- S. Begu, “Seniors Less Receptive to Telemedicine and Uncomfortable with AI Interpreting Medical Results: S’pore Survey”, The Straits Times, July 4, 2021, https://www.straitstimes.com/singapore/health/seniors-less-receptive-to-telemedicine-and-uncomfortable-with-ai-interpreting.