What Digital Success Looks Like: Measuring & Evaluating Government Digitalisation
ETHOS Issue 21, July 2019
An adage often attributed to management guru Peter Drucker goes: “What gets measured gets done”. This is perhaps easier to grasp in its negative form: “What we cannot measure, we cannot manage”.
Despite Drucker’s insight, a growing number of governments claim to be “going digital” by incorporating digital technology into their internal administration and service delivery, yet measure these efforts imprecisely, if at all. For every government aiming to “digitalise”, one can find a different, often competing, definition of what that means. Consequently, measures of digitalisation are often patchy and poor.
Measuring Digitalisation
It is not that measurement is being ignored. There are many frameworks assessing e-governance, including efforts by the United Nations, Organisation for Economic Co-operation and Development (OECD), World Bank, European Union, Waseda University’s International Academy of Chief Information Officers, the Fletcher School of Law and Diplomacy at Tufts University, the World Wide Web Foundation, and the Open Knowledge Network.
The problem is that existing measurements are flawed and insufficient. Take, for example, the UN Department of Economic and Social Affairs (UNDESA)’s E-government Development Index (EGDI) and the World Bank’s Digital Adoption Index (DAI)—the most comprehensive measures currently available. They cover all countries, unlike other regional or sector-focused frameworks.
The EGDI includes three sub-indices:
I. an Online Services Index (OSI) indicating how far public services are delivered digitally;
II. a Telecommunication Infrastructure Index (TII) assessing a country’s number of internet users, mobile subscribers, telephone subscriptions, and wireless/fixed broadband subscriptions; and
III. a Human Capital Index (HCI) measuring adult literacy, enrolment ratios in primary, secondary and tertiary education, as well as expected and average years of schooling.
The DAI also includes three sub-indices:
I. a Business Indicator measuring Third Generation (3G) mobile coverage, download speeds and server security;
II. a People Indicator measuring Internet and mobile access in homes; and
III. a Government Indicator measuring the extent of online public services, digitalisation of core administrative systems and the presence of a digital identity (ID) system.
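Both the EGDI and the DAI are composite indices built from sub-indices. As a minimal, purely illustrative sketch of how such a composite can be computed (assuming a simple equal-weighted average of min-max normalised sub-indices; the country names and figures below are invented):

```python
# Minimal sketch of an EGDI-style composite index: each sub-index is
# min-max normalised across countries, and the composite is the
# unweighted mean of the normalised sub-indices.
# All figures below are invented purely for illustration.

raw_scores = {
    # country: (online_services, telecom_infrastructure, human_capital)
    "Country A": (0.92, 0.85, 0.88),
    "Country B": (0.61, 0.40, 0.75),
    "Country C": (0.33, 0.22, 0.58),
}

def min_max_normalise(values):
    """Rescale a list of values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Normalise each sub-index column across all countries.
columns = list(zip(*raw_scores.values()))
normalised_columns = [min_max_normalise(list(col)) for col in columns]

# Composite index: equal-weighted average of the normalised sub-indices.
for i, country in enumerate(raw_scores):
    composite = sum(col[i] for col in normalised_columns) / len(normalised_columns)
    print(f"{country}: composite index = {composite:.3f}")
```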
Both indices are problematic on several counts. In some ways, they include too much to reflect government performance precisely: the EGDI’s TII and HCI indicators cover far more than government or even business digitalisation, even if issues like mobile infrastructure and education system quality are important in a broader sense. The DAI assigns a very high weightage to digital ID systems, ignoring the fact that some highly digitalised governments do not have one. This leads to the anomalous situation where the UK, the top-ranked country in the 2016 EGDI, makes a mediocre showing on the DAI, with a score of 0.59 on the government indicator.
In other respects, the indices include too little. They focus on numerical measures of the presence of digital technology in government (output measures), but do not evaluate the quality of digitalised government (outcome measures). To be fair, most scholarly work has not done this either, and only one 2006 study, by Patrick Dunleavy, Helen Margetts, Jane Tinkler and Simon Bastow,1 has attempted a rigorous evaluation.
In that study, three particular aspects of digitalisation in seven governments (Australia, Canada, Japan, the Netherlands, New Zealand, the United Kingdom and the United States) were examined:
I. the success rate of government IT (measured inversely by how often key IT projects are scrapped);
II. the price comparability between public sector and private sector IT; and
III. the relative modernity of government IT systems, including hardware, software and network speed (compared to private sector systems).
Thirteen years on, countries now offer a richer empirical set of digitalisation experiences to test whether these three measures of success have proven relevant. Singapore’s own Digital Government Blueprint, for instance, suggests several key performance indicators by which Smart Nation policies and programmes should be evaluated—these include and expand on the major indicators suggested so far by both policymakers and scholars.
I suggest seven measures that can be used to evaluate public sector digital efforts, both in Singapore and more broadly. I argue that:
• the first two indicators in Dunleavy et al. (i.e., success and price) should be retained;
• the third needs to be rephrased, since a system’s modernity is not an unqualified good; and
• four new variables are needed: the usage, usability and usefulness of government digital platforms to citizens, businesses and other stakeholders; the security of government data; the timeliness of completion of major government IT projects; and the use of data and data analytics for broader policy purposes.
My seven suggested measures are as follows:
Success of digital projects
Following Dunleavy et al., a digital system’s success can be measured inversely, by the number of projects scrapped. This indicator matters because of the investment lost when projects are written off (a simple sketch of this measure follows the list below). Project scrappage is likely to be affected by factors including:
• size, scale and specificity (modularity) of projects;
• existence of rigorous techniques and/or institutional processes for IT sector planning;
• whether projects proceed in well-defined stages;
• whether projects are backed by senior leadership (either political or from key central agencies like Finance Ministries);
• whether there is internal IT expertise to assess and implement digital projects;
• whether there are meaningful contractual controls at the project selection stage; and
• use of pilot studies and phased rollouts, rather than moving immediately to implement new platforms at scale.
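A minimal sketch of how this inverse measure might be operationalised is below; the project records and field names are assumptions for illustration rather than an established methodology, and pilots and beta versions are excluded, reflecting the nuance noted after the sketch:

```python
# Sketch of an inverse success measure: the share of full-fledged
# projects scrapped over a review period. Pilots and beta versions are
# excluded, since these are intended to be iterative and adjustable.
# All records below are hypothetical.

projects = [
    {"name": "Tax e-filing revamp", "status": "live",     "is_pilot": False},
    {"name": "Licensing portal",    "status": "scrapped", "is_pilot": False},
    {"name": "Chatbot trial",       "status": "scrapped", "is_pilot": True},
    {"name": "Payments gateway",    "status": "live",     "is_pilot": False},
]

full_projects = [p for p in projects if not p["is_pilot"]]
scrapped = [p for p in full_projects if p["status"] == "scrapped"]

scrappage_rate = len(scrapped) / len(full_projects)
print(f"Scrappage rate: {scrappage_rate:.0%} "
      f"({len(scrapped)} of {len(full_projects)} full-fledged projects)")
```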
Two nuances are worth noting. First, non-scrappage of projects is not always positive: in Japan, for instance, a culture of avoiding public admission of project failure (for fear of reputational damage) means that additional resources are channelled into fixing problematic projects rather than scrapping them, so survival alone does not indicate success. Such instances will have to be factored into any assessment of different projects’ success. Governments should be measuring genuine project success, not merely project survival.
Second, instances of scrappage should be of full-fledged projects, not pilots or beta-versions, which are intended to be more iterative and adjustable, in line with more agile approaches that are increasingly the hallmark of digital systems.
Price competitiveness of government digital projects
This measure refers to the money cost of government projects, compared with similar efforts in other governments and other sectors. It measures whether governments are overpaying or obtaining value for money in their expenditure. Price competitiveness is likely to be influenced by:
• the presence of incentives for civil service leaders to secure defensible prices (e.g., “value for money” audits or requirements for smart commissioning);
• the degree of focus on modular, medium-sized projects instead of large-scale behemoths;
• government tendering processes and contract scrutiny within government, public scrutiny, and potential contractual challenges from losing firms: onerous processes can erode joint profits and returns, making firms highly circumspect about bidding for government projects unless the price is sufficiently high;
• whether and to what extent successful price dampening mechanisms like call-off contracts, electronic marketplaces and open-market purchases exist;
• the occurrence of forced outsourcing (creating a closed and concentrated market);
• the nature of relationships between firms and governments;
• firms specifying contractual terms in ways that protect them from future market changes; and
• long-term contracts that lead to only incumbent firms being able to deliver mid-contract modifications.
One nuance is to avoid simplistic quantitative comparisons between public and private sector projects. Government projects can be larger, more complex and differently structured from those in the private sector, which can result in price premiums (e.g., IT for some government agencies may require helpdesk support 24 hours a day, 7 days a week, throughout the year). The key question is whether these premiums are justifiable, or merely the result of excessively risk-averse behaviour by commissioners of government IT projects.
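One simple, hedged way of framing such comparisons is as a price premium over a benchmark of comparable projects; the unit of comparison (cost per user) and all figures below are assumptions for illustration:

```python
# Sketch of a price-competitiveness check: compare the unit cost of a
# government IT project against a benchmark of comparable private sector
# (or other-government) projects, and report the premium. Figures are
# invented for illustration; real comparisons would need like-for-like
# adjustment for scale, complexity and service-level requirements.

government_cost_per_user = 145.0                 # e.g., SGD per user per year
benchmark_costs_per_user = [98.0, 110.0, 120.0]  # comparable projects

benchmark = sum(benchmark_costs_per_user) / len(benchmark_costs_per_user)
premium = (government_cost_per_user - benchmark) / benchmark

print(f"Benchmark cost: {benchmark:.2f} per user")
print(f"Government cost: {government_cost_per_user:.2f} per user")
print(f"Price premium: {premium:.1%}")
# A positive premium is not automatically excessive: the question is
# whether it is justified by genuine requirements (e.g., 24/7 helpdesk
# support) or reflects risk-averse commissioning.
```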
Relative effectiveness of government IT systems
Dunleavy et al. consider the “relative modernity” of government IT systems, compared to private sector adoptions. This includes the quality of back office systems, large databases, front-office software, desktop systems, web-compatible systems, network speed and bandwidth, the development level of e-government services (compared to e-commerce, fintech and other Web-oriented services), existence of legacy problems, recency of transition to fully Web-enabled networks, pace of generating government websites, support to citizens in navigating e-bureaucracy, and adoption of up-to-date technical standards.
Such elements make sense as indicators of how successfully a digital system meets its aims, but the term “modernity” is unnecessarily loaded and problematic. Modernity is not an unqualified good: there can be many reasons why governments and even companies might avoid recent software or hardware, including the desire to avoid bugs in early versions. This is why many organisations, both public and private, ask employees not to download new versions of apps until they have been tested by IT departments. Nevertheless, the spirit of the modernity argument is correct, in the sense that IT systems cannot be so old that they become unwieldy and inefficient—as the email client Lotus Notes was for many governments by the late 1990s.
Whether systems are old or new, their capacity to deliver output and outcomes is key. Hence my suggestion that we should focus on the “effectiveness” of a system. This involves several aspects:
• Effectiveness as a digital system: including hardware reliability, software efficiency, bandwidth sufficiency;
• Effectiveness as a specific system: e.g., a digital tax system might focus on results like higher revenue collection, higher compliance levels and lower evasion/fraud, higher tax morale, reduced costs, increased audit efficiency, reduced occurrence of activities like money laundering, and more expeditious licensing and identification; and
• Dynamic effectiveness: e.g., through the use of technology-neutral approaches and regular reviews, to “future-proof” systems and mitigate against over-reliance on particular platforms (ICAEW 2016).2
This variable is likely to be affected by the fiscal resources needed to purchase quality systems, and whether there is a clear strategic process to move services and platforms online. It will also be influenced by provisions for change management and business process re-engineering as agencies move to new systems, and the extent to which a government is willing to overhaul systems or apply efficacious software patches.
In some cases, there may be trade-offs in different dimensions of effectiveness. For instance, the customisation of digital systems for government needs might also create structural “lock in”, which could impede future flexibility and system effectiveness.
Usage, usability and usefulness of government digital platforms
Usage, usability and usefulness of government digital systems signify the difference between “white elephant” systems and those genuinely serving citizen needs. They can be measured by the number of users of government digital platforms, the proportion of such users relative to the total number of users for a particular service (if the service has an “offline” delivery option), and user experiences of the systems (including time spent on particular functionalities and the ease of use of web services); a simple sketch of the basic uptake measure follows the list below. Usage, usability and usefulness will be determined by how personalised, simple, consistent, intuitive and real-time a system is. The ways in which these aspects are manifested include:
• the existence and prevalence of “digital by default” platforms;
• whether, and how easily, citizens and government officials can circumvent digital processes;
• reduction or eradication of time delays in operational processes like form-filling;
• the possibility of pre-populated forms and real-time information provision to government agencies;
• the presence or absence of digital accounts for government services that allow citizens and businesses to have a “single view”, e.g., of their tax position;
• provision of support services by digital platforms like online billing and refunds, e-invoicing, and Helpdesk facilities via webchats, webinars, chatbots, secure messaging, social networking sites, YouTube videos and other digital means;
• sufficient support for users with a range of disabilities (e.g., difficulties seeing websites) and other provisions for those unable to access online platforms like older citizens or those in areas with poor Internet coverage (e.g., a continuation of at least minimal provisions for paper-based filing, and/or a network of accessible and affordable tax agents who can file returns on behalf of taxpayers);
• ability of citizens and businesses to choose how they receive services;
• transparency and regular updates on government decisions about queries or particularly complex cases;
• the extent to which government is anticipatory rather than reactive, initiating recommendations or actual service delivery to citizens/businesses rather than waiting for requests or queries;
• platforms and provisions for co-created service delivery;
• the degree of data integration among government agencies, and between government and other bodies (e.g. banks, building societies, pension administrators, mortgage providers, and unions); and
• extent of digital system adoption by government employees themselves, e.g., mobile apps and social media for more flexible and agile customer support.
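As referenced above, a minimal sketch of the most basic usage measure, the digital uptake rate for a service that retains an offline channel, is as follows (the transaction counts are hypothetical):

```python
# Sketch of a basic usage measure: the digital uptake rate of a service
# that still offers an offline channel, i.e., the share of transactions
# completed on the digital platform. Counts are hypothetical.

digital_transactions = 182_000
offline_transactions = 48_000

total = digital_transactions + offline_transactions
uptake_rate = digital_transactions / total

print(f"Digital uptake: {uptake_rate:.1%} of {total:,} transactions")
# Usability and usefulness need complementary measures, e.g., task
# completion time, drop-off rates on particular forms and user
# satisfaction scores; uptake alone can mask poor experiences.
```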
The extent of internet penetration critically determines these outcomes. These use-related variables are important proxies for digital access, which is neither uniform nor universal in most countries. In Estonia, for instance, there are already documented instances of older, rural dwellers being left out of national digital frameworks, despite the country’s high level of aggregate digitalisation.
Security of government data
As concerns over individual privacy and collective data integrity rise, the overall security of such data will be a key new dimension of any successful digitalisation—as Singapore has experienced with health data in the past two years. We can measure the security of government data by the number of security breaches in the period under study, and the time taken to respond to them. Breaches could include phishing, man-in-the-middle attacks, identity theft, spear phishing, social engineering and other forms of cyber-security breaches (ICAEW 2016).
This indicator measures the quality of governments’ internal data governance processes, not just their citizen-facing delivery capacity. Observable dimensions include: the existence of protocols for data governance (e.g., classification, extraction); the use of stewardship models, change-control mechanisms, enhanced encryption, increased identification, identity and rights management, secure ID, and multi-factor authentication, including biometrics; and how effectively a government uses platforms like cloud computing or hybrid cloud.
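A minimal sketch of the two measures suggested above, breach counts and response times over a review period, using a hypothetical incident log:

```python
# Sketch of two security measures over a review period: the number of
# recorded breaches and the mean time taken to respond to them.
# The incident log below is hypothetical.

from datetime import datetime

incidents = [
    {"type": "phishing",       "detected": datetime(2019, 1, 4, 9, 0),
     "contained": datetime(2019, 1, 4, 15, 30)},
    {"type": "identity theft", "detected": datetime(2019, 3, 12, 11, 0),
     "contained": datetime(2019, 3, 13, 10, 0)},
]

response_hours = [
    (i["contained"] - i["detected"]).total_seconds() / 3600 for i in incidents
]
mean_response = sum(response_hours) / len(response_hours)

print(f"Breaches in period: {len(incidents)}")
print(f"Mean time to respond: {mean_response:.1f} hours")
```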
Timeliness of completion for major government IT projects
This measure indicates governments’ capacity to translate an initial idea for a digital project into tangible outcomes within a reasonable timeframe. It is likely to be influenced by the size of each project, as well as the presence of expertise able to anticipate and pre-empt potential obstacles early. Such expertise could be internal or outsourced to contractors with a direct link to internal decision-makers.
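A minimal sketch of how timeliness might be tracked, comparing planned against actual durations for hypothetical projects:

```python
# Sketch of a timeliness measure: the share of major projects delivered
# on schedule, and the average schedule overrun. Records are hypothetical.

projects = [
    {"name": "National digital ID",   "planned_months": 24, "actual_months": 30},
    {"name": "e-Payments platform",   "planned_months": 12, "actual_months": 12},
    {"name": "Data exchange gateway", "planned_months": 18, "actual_months": 21},
]

on_time = sum(1 for p in projects if p["actual_months"] <= p["planned_months"])
overruns = [p["actual_months"] / p["planned_months"] - 1 for p in projects]

print(f"Delivered on schedule: {on_time} of {len(projects)}")
print(f"Average schedule overrun: {sum(overruns) / len(overruns):.0%}")
```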
Data use in policy, strategy formulation, organisational design and delivery
Where earlier measures consider the use of digital systems by citizens and businesses, this measure indicates how government agencies themselves use the opportunities created by digitalisation. This includes harnessing digitally collected and synthesised data to understand broad fiscal patterns and generate insights specific to companies and key citizen segments. For instance, when digital technology is applied to taxation:
• data analytics platforms could be used as predictive models based on taxpayer information from e-invoices, identifying deviations in tax reporting, informing pre-populated tax returns and generating broad-based “fiscal intelligence” (a purely illustrative sketch follows this list);
• tax agencies could make appropriate data available as building blocks for other government agencies, like Ministries of Finance, and possibly third parties, to integrate broader service offerings (Microsoft and PwC 2018)3. Current examples include the OECD’s use of (i) consolidated databases and analysis of Inline eXtensible Business Reporting Language (iXBRL) tagged company accounts, and (ii) Standard Audit File for Tax (SAF-T) requirements to standardise tax reporting by businesses and generate mutually comparable data streams;
• the use of artificial intelligence (enabled by data-driven machine learning) to combat corporate tax evasion, e.g., work by the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology to develop the AI “STEALTH” (Simulating Tax Evasion and Law through Heuristics);
• cash registers at commercial establishments could be directly linked to a tax agency, facilitating more accurate estimates of revenues from Value Added Tax / Goods and Services Tax; and
• tax payment data could be combined with behavioural nudges reminding taxpayers to make payments, as was done in a project by the Danish Nudge Unit.
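As a purely illustrative sketch of the first point in the list above, and not any tax agency’s actual methodology, declared tax could be compared against the amount implied by e-invoice records, with large gaps flagged for audit (the tax rate, threshold and figures are assumptions):

```python
# Purely illustrative sketch: flag businesses whose declared VAT/GST
# deviates sharply from the amount implied by their e-invoice records.
# The tax rate, threshold and figures are assumptions for illustration,
# not any agency's actual methodology.

TAX_RATE = 0.07          # assumed GST rate
FLAG_THRESHOLD = 0.15    # flag gaps larger than 15%

returns = [
    {"business": "Firm A", "e_invoice_sales": 1_200_000, "declared_tax": 84_000},
    {"business": "Firm B", "e_invoice_sales": 900_000,   "declared_tax": 47_000},
    {"business": "Firm C", "e_invoice_sales": 450_000,   "declared_tax": 31_500},
]

for r in returns:
    expected_tax = r["e_invoice_sales"] * TAX_RATE
    gap = (expected_tax - r["declared_tax"]) / expected_tax
    if gap > FLAG_THRESHOLD:
        print(f"{r['business']}: declared {gap:.0%} below expected; flag for audit")
    else:
        print(f"{r['business']}: within tolerance")
```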
Potential responses to such data include restructuring an agency, crafting strategies or developing key performance indicators (KPIs) around citizen and business needs, rather than supply-driven factors (e.g., by function or region).
Such data analytics and data matching could contribute to increasing compliance and refining audit efforts. Continuing with the taxation-related examples, Bas Jacobs’ 2017 chapter in an International Monetary Fund publication identifies several ways in which tax agencies might use data to enhance their “tax enforcement technology”4: higher quality data on individual consumption (through digital platforms), better linked data on wage and capital income, better cross-border linkages of data on wages and capital, enabling financial institutions to be third party reporters on capital income and wealth, and enabling consumers as third-party reporters.
This variable is likely to be strongly influenced by the quality of the technological system in use by each agency, the complexity of the tax regime, and the presence of expertise to collect, connect, curate and communicate the data such that it is relevant to policy, strategy and organisational design.
Conclusion
Alone, none of these proposed indicators sufficiently captures the quality of a digitalisation effort. Together, however, they paint a rich composite picture, and could contribute to filling the substantial measurement gap in the literature on government digitalisation. As Singapore’s Smart Nation project develops and deepens, measures like these help to ensure that its outcomes are truly meaningful: not just by helping us to manage the process better, à la Peter Drucker, but also by enabling us to scrutinise our efforts honestly as we continually refine them.
NOTES
- Patrick Dunleavy, Helen Margetts, Simon Bastow, and Jane Tinkler, Digital Era Governance: IT Corporations, the State and E-Government (Oxford University Press, 2006).
- Institute of Chartered Accountants in England and Wales (ICAEW) Information Technology Faculty, Digitalisation of Tax: International Perspectives (ICAEW, 2016).
- Kuralay Baisalbayeva, Eelco van der Enden, Rita Tenan, and Raúl Flores, The Data Intelligent Tax Administration (Microsoft and PricewaterhouseCoopers, 2018).
- Bas Jacobs, “Digitalization and Taxation”, in Digital Revolutions in Public Finance, eds. Sanjeev Gupta, Michael Keen, Alpa Shah, and Genevieve Verdier (International Monetary Fund, 2017), chap. 22.