Artificial Intelligence: Impact on Public Safety & Security
ETHOS Issue 21, July 2019
In the American television drama Person of Interest, “The Machine” is an advanced computer system that functions as a vigilante crime-fighting tool: it employs pattern recognition to stop crimes before they materialise. This premise—crime prevention with the help of Artificial Intelligence (AI)—is no longer so far-fetched.
An increasingly sophisticated technology, AI could support preventive policing to bring about a safer community. But are there any downsides we need to be aware of? What are AI’s possibilities as well as potential risks in the context of public safety and security, and what can we do to mitigate potential downsides?
AI Enhances Operational Effectiveness
As a set of technologies that simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning,1 AI can enhance operational effectiveness through automation and augmentation. When combined, they complement human expertise, producing faster and better results. While AI can spot patterns that may escape the naked eye, humans can contextualise data insights and decision-making with intuition and experience.
The automation of data-heavy processing tasks, from visual inspections of public spaces to interpreting security video footage, can help to overcome resource constraints. This frees up scarce human capacity for higher-value work and more complex problem-solving, boosting workplace productivity and engagement.
Machine self-learning capabilities have predictive and prescriptive uses. AI creates new sense-making possibilities by quickly generating insights through deeper analysis of data.
Automating the Home Team’s Operational Capabilities
In Singapore, AI has already found its way into a variety of Home Team2 border security and homeland security applications. AI-driven perception, processing, and analysis are essential for collecting, sorting, and interpreting data to better inform human decision-making. A leading AI technology now being deployed is machine-learning-based computer vision. AI-backed biometric systems have also become more powerful than ever at spotting patterns in human physiology.
AI—at the intersection of machine learning and robotics—has also given rise to autonomous systems that can tackle more challenging tasks in a wider range of environments. While sensors can provide data inputs to systems, the AI element helps to filter and make sense of data, and can recommend particular actions. Unmanned Aerial Vehicles (UAVs) are robotic autonomous systems that give our officers a bird’s-eye view of a situation, so they can make better ground decisions. In the future, the UAVs could incorporate AI in the following forms:
a. "Computer Vision & Learning”—the
ability to analyse visual input;
b. "Machine Perception”—the ability to
processing input from a variety of
sensors; and
c. "Motion Planning”—the ability to break
down a desired path into smaller,
more manageable segments.
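To make the “Motion Planning” idea concrete, here is a minimal sketch of breaking a straight-line flight path into smaller, evenly spaced waypoints. It is written in Python purely for illustration; the function and parameter names are hypothetical and do not reflect any actual Home Team system.

```python
import math

def segment_path(start, end, max_segment_len):
    """Split a straight-line path from start to end into waypoints no
    further apart than max_segment_len (coordinates in metres)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    n = max(1, math.ceil(dist / max_segment_len))  # number of segments
    return [(start[0] + dx * i / n, start[1] + dy * i / n)
            for i in range(n + 1)]

# A UAV asked to fly 100 m east, in hops of at most 25 m:
print(segment_path((0.0, 0.0), (100.0, 0.0), 25.0))
# [(0.0, 0.0), (25.0, 0.0), (50.0, 0.0), (75.0, 0.0), (100.0, 0.0)]
```

Real motion planners must also handle obstacles and vehicle dynamics, but the core idea of decomposing a path into manageable segments is the same.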
The Singapore Civil Defence Force (SCDF) has deployed UAVs for outdoor monitoring in public spaces, such as fire tracking, surveillance, and Search and Rescue missions. These systems complement current operations and aim to improve operational effectiveness. One example is SCDF’s Red Rhino Robot (or 3R), which uses an automatic heat-seeking mechanism for autonomous fire detection, as sketched below. The robot can potentially reduce a traditional four-man crew to a team of three, and penetrate far deeper into the seat of a fire without risking a human firefighter.
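As an illustration of that heat-seeking logic, the sketch below scans a thermal-camera frame for the hottest pixels. It assumes a frame is a 2-D array of temperatures in degrees Celsius; the threshold and names are illustrative assumptions, not details of the 3R itself.

```python
import numpy as np

def find_hotspots(frame, threshold_c=150.0):
    """Return (row, col) pixel coordinates whose temperature exceeds
    threshold_c, hottest first."""
    rows, cols = np.where(frame > threshold_c)
    order = np.argsort(frame[rows, cols])[::-1]  # sort hottest first
    return [(int(r), int(c)) for r, c in zip(rows[order], cols[order])]

frame = np.full((4, 6), 30.0)   # ambient temperature ~30 deg C
frame[2, 3] = 400.0             # simulated seat of fire
frame[1, 1] = 200.0             # a smaller secondary heat source
print(find_hotspots(frame))     # [(2, 3), (1, 1)] -> steer here first
```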
Augmenting the Home Team’s Operational Capabilities
UAVs also augment police neighbourhood patrols. The UAVs can transmit a live aerial video feed to a Police Operations Command Centre (POCC), facilitating their dispatch to the crime scene. Advanced sensors, intelligent autonomous navigation and mapping algorithms may be progressively added to these UAVs to improve obstacle detection and avoidance.
The Home Team is well aware that AI is not a silver bullet that will solve all problems: different operations call for different degrees of technological intervention. While AI augments capability, it cannot entirely replace humans. The use of UAVs, for example, enhances the present force’s capabilities and effectiveness with the same manpower resources. But our frontline officers remain relevant to the community they serve: they bring a human touch, and an assuring sense of safety and security, that cannot easily be replaced by AI.
AI Integration in Singapore's Border Security Operations
Iris scans were introduced on a trial basis at the Woodlands Checkpoint in July 2018, enhancing the existing network of cameras with the facial recognition capabilities of the Automated Biometric and Behavioural Screening Suite. The Immigration & Checkpoints Authority plans to roll out the Suite progressively at all checkpoints.
Video analytics and screening capabilities identify suspicious objects and individuals, and conduct quick biometric identity verification. This reduces manpower requirements and also increases operational effectiveness. Since its introduction, the system has swiftly detected foreigners wanted for offences such as overstaying.
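While the Suite’s internal design is not public, biometric verification systems of this kind typically compare feature vectors (“embeddings”) extracted from a live scan against an enrolled template, accepting a match when the two are close enough. A minimal sketch of that comparison step, with an entirely illustrative threshold:

```python
import numpy as np

def verify(probe, enrolled, threshold=0.6):
    """Accept the identity claim if the cosine distance between the
    live scan's embedding and the enrolled template is small enough."""
    a = probe / np.linalg.norm(probe)
    b = enrolled / np.linalg.norm(enrolled)
    return (1.0 - float(a @ b)) < threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                    # template at enrolment
same = enrolled + rng.normal(scale=0.1, size=128)  # noisy re-scan, same person
other = rng.normal(size=128)                       # a different person
print(verify(same, enrolled), verify(other, enrolled))  # True False
```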
Case Study: Emergency Management
The Ministry of Home Affairs uses UAVs (also known as drones) to conduct aerial surveillance for forested operations, fire management and crowd monitoring for mass public events such as the New Year’s Eve Countdown. Equipped with high-definition cameras that transmit clear images of people and objects on the ground, the drones use thermal-imaging capabilities to help identify human heat signatures—a boon for spotting suspects in densely-forested areas, especially at night.
Since December 2016, the drones have helped police to nab three criminals conducting illegal activities in the forest; they have also monitored major pipelines, and even traffic congestion at checkpoints. In October 2018, police officers caught 125 illegal immigrants via night-time drone operations in the western and northern parts of Singapore.
Potential for Exploitation
Any emerging technology is a double-edged sword, with potential for abuse by malicious actors. Automation and augmentation through AI have contributed to such widely reported abuses as cybersecurity breaches and fake news distribution. Understanding how malicious agents can manipulate AI technologies to their advantage is crucial in mitigating potential threats.
The Thinking Malware
In 2017, 62% of attendees at Black Hat USA—the world’s leading information security conference—said they believed artificial intelligence would be used for cyberattacks in the near future.3 In fact, this has already happened. IBM security researchers have uncovered a new breed of AI-powered cyberattacks that can automatically target vulnerabilities with greater speed and accuracy.4 DeepLocker, a recent product of IBM Research, demonstrates how AI-powered malware can be highly successful at evading traditional detection.5 Automated to attack with peak effectiveness and equipped with self-learning capabilities, such malware makes each attempt more effective than the last.
The first observed example of an AI-backed malware hack was executed in 2017, on an India-based company.6 Embedded algorithms allowed the software first to observe and learn the typical user’s network behaviour, and then to mimic their digital footprints, evading detection long enough to complete the hack. Data breaches may now go undetected for longer as AI-powered attacks emulate this detection-evading mechanism.
The Role in Fake News
The ease of access to emerging technologies means AI is as readily available to malicious actors as to proper authorities. Deliberate online falsehoods, false stories often laced with social, economic and political biases and spread with the malicious intent of misleading audiences for gain, are becoming increasingly common. The growing realism of these falsehoods suggests how AI could be manipulated to fool more people, more effectively and quickly.
Neural networks underpinning AI technologies have augmented multimedia editing: near-perfect image and video manipulations are now achievable, creating photo-realistic images and mimicking voices seamlessly. These are known as “Deep Fakes”.7 Discerning between what is real and what is fake online is no longer straightforward. A viral video of Barack Obama, in which the former US President is seen and heard using expletives, was made using Adobe’s After Effects software and the AI face-swapping tool FakeApp. The fake footage was swiftly disseminated across many virtual platforms, garnering over 3.7 million views within a week.8 This shows just how attention-grabbing and persuasive fakes can be.
At present, even an AI of tremendous power will not be able to determine outcomes in a complex social system, the outcomes are too complex—even without allowing for free will by sentient agents...Strategy that involves humans, no matter that they are assisted by modular AI and fight using legions of autonomous robots, will retain its inevitable human flavor.9
—Kareem Ayoub and Kenneth Payne
Strengthening Our Resilience for The Future: AI and Beyond
Cybersecurity
For all the inherent risks AI presents in self-mutating malware, the answer might ironically lie in harnessing the power of AI itself to strengthen existing cybersecurity setups. SparkCognition, a US-based company, developed an entirely AI-based solution called Deep Armor in 2017.10 Billed as the first cognitive antivirus software, it leverages AI to identify mutating online viruses and to detect malware, including advanced masking techniques, keeping pace with ever more sophisticated cyberattacks. AI can therefore be tapped to upgrade cybersecurity capabilities not only in detection and response, but also in preventive defence.
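Deep Armor’s internals are proprietary, but AI-based antivirus tools broadly work by classifying files from extracted features rather than matching fixed signatures, which is what lets them flag previously unseen, mutated variants. The sketch below trains a classifier on synthetic, hypothetical file features purely to illustrate the approach:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Hypothetical per-file features: [entropy, n_imports, is_packed, size_kb]
benign = np.column_stack([rng.normal(5.0, 1.0, 200), rng.normal(80, 20, 200),
                          rng.integers(0, 2, 200), rng.normal(500, 200, 200)])
malware = np.column_stack([rng.normal(7.5, 0.5, 200), rng.normal(15, 10, 200),
                           np.ones(200), rng.normal(150, 80, 200)])
X = np.vstack([benign, malware])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
suspect = [[7.8, 9, 1, 120]]  # high entropy, few imports, packed: suspicious
print(clf.predict(suspect))   # [1] -> flag for quarantine and review
```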
In parallel, a deliberate talent strategy will be important to recruit and deploy those with the expertise to work with AI to boost cybersecurity. For example, Thailand’s government agencies have begun deploying sensors running AI algorithms, incorporating predictive analytics into cyber network monitoring systems.11 At the same time, a new digital forensics team is being developed specifically to investigate digital evidence from cyber-attacks.12 These projects accompany plans to raise existing employees’ digital literacy while recruiting experts from overseas. Such moves aim to combine the algorithmic decision-making of AI-enabled prevention and protection systems with flexible human interaction and supervision.
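Predictive network monitoring of this kind is often built on anomaly detection: learn what “normal” traffic looks like, then flag deviations. A hedged sketch with hypothetical traffic features follows; the actual Thai systems are not publicly specified.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical features per host per minute: [bytes_sent, connections]
normal_traffic = rng.normal(loc=[2000, 10], scale=[400, 3], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)  # learn the shape of ordinary traffic

tonight = np.array([[2100.0, 11.0],      # ordinary activity
                    [90000.0, 250.0]])   # exfiltration-like burst
print(detector.predict(tonight))         # [ 1 -1]: -1 marks the anomaly
```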
Dealing with Fake News
Research is already being carried out on how to deploy AI in detecting falsehoods. A machine can be trained to analyse text and determine how likely it is that a particular message is a real communication from an actual person, or a mass-distributed solicitation.13 Building on text-analysis techniques similar to those used in spam-fighting, AI systems can also be trained to evaluate how well a post’s text, or a headline, matches the actual content of the article being shared online. Another method examines related articles to see whether other news media report differing facts. Similar systems can identify specific accounts and source websites that spread fake news.
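One of the checks mentioned, comparing a headline against the article it claims to summarise, can be sketched with standard TF-IDF text similarity. The examples below are illustrative, not a description of any deployed detector:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def headline_body_similarity(headline, body):
    """Score (0 to 1) how well a headline matches the article body."""
    vec = TfidfVectorizer().fit([headline, body])
    m = vec.transform([headline, body])
    return float(cosine_similarity(m[0], m[1])[0, 0])

body = ("The ministry announced new drone trials at checkpoints, "
        "citing improvements in border screening times.")
honest = "Ministry announces new drone trials at checkpoints"
mismatch = "Celebrity scandal rocks the entertainment world"
print(headline_body_similarity(honest, body))    # relatively high
print(headline_body_similarity(mismatch, body))  # low: flag for review
```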
However, mitigation measures must go beyond technology: the response needs to be all-rounded, involving citizens and public-private collaborations. To inoculate the community against falsehoods, Singapore government agencies such as MCI14 and IMDA have begun efforts to promote better media literacy15 through educational forums, training users to critically evaluate and independently report suspicious information.16
A Broader Perspective
From the security perspective, a multi-agency effort is needed to establish a framework so that agencies understand the appropriate responses to different risks. Relevant agencies are also working together to anticipate and identify emerging security risks linked to such technology adoption, and to build up capabilities to address these risks.
As we gain a better understanding of AI, we will be better at mitigating its dangers. Exciting times are ahead—we have entered a brave new world.
NOTES
1. Personal Data Protection Commission, Infocomm Media Development Authority, Singapore, “A Proposed Model Artificial Intelligence Governance Framework” (Working Draft, November 28, 2018, revision).
2. The Home Team consists of the Ministry of Home Affairs Headquarters, Singapore Police Force, Immigration and Checkpoints Authority, Home Team Academy, Internal Security Department, Singapore Civil Defence Force, Singapore Prison Service, Central Narcotics Bureau, Casino Regulatory Authority and the Singapore Corporation of Rehabilitative Enterprises.
3. The Cylance Team, “Black Hat Attendees See AI as Double-Edged Sword”, August 1, 2017, accessed January 2, 2019, https://blogs.blackberry.com/en/2017/08/black-hat-attendees-see-ai-as-double-edged-sword.
4. Dan Patterson, “How Weaponized AI Creates a New Breed of Cyber-Attacks”, TechRepublic, August 16, 2018, accessed January 2, 2019, https://www.techrepublic.com/article/how-weaponized-ai-creates-a-new-breed-of-cyber-attacks/.
5. Marc Ph. Stoecklin, “DeepLocker: How AI Can Power A Stealthy New Breed of Malware”, Security Intelligence, August 8, 2018, accessed January 2, 2019, https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/.
6. Infosec Institute, “How Criminals Can Exploit AI”, May 1, 2018, accessed December 26, 2018, https://resources.infosecinstitute.com/criminals-can-exploit-ai/#gref.
7. Oscar Schwartz, “You Thought Fake News Was Bad? Deep Fakes Are Where Truth Goes to Die”, The Guardian, November 12, 2018, accessed December 26, 2018, https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth.
8. James Vincent, “Watch Jordan Peele Use AI To Make Barack Obama Deliver a PSA about Fake News”, The Verge, April 17, 2018, accessed December 26, 2018, https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed.
9. Kareem Ayoub and Kenneth Payne, “Strategy in the Age of Artificial Intelligence”, Journal of Strategic Studies 39, no. 5 (November 2015): 816.
10. SparkCognition, “Deep Armor: Endpoint Protection, Built from AI”, accessed December 26, 2018, https://www.sparkcognition.com/products/.
11. Michell Christopher, “Artificial Intelligence in Thailand: How It Started and Where It’s Headed”, OpenGov Asia, July 12, 2018, accessed December 26, 2018, https://www.opengovasia.com/artificial-intelligence-in-thailand-how-it-started-and-where-its-headed.
12. Nurfilzah Rohaidi, “How Thailand Is Using AI for Cybersecurity”, GovInsider, November 27, 2018, accessed December 26, 2018, https://govinsider.asia/digital-gov/how-thailand-is-using-ai-for-cybersecurity/.
13. Kai Shu, Amy Silva, Suhang Wang, Jiliang Tang and Huan Liu, “Fake News Detection on Social Media: A Data Mining Perspective”, SIGKDD Explorations 19, no. 1 (June 2017): 22–36, https://dl.acm.org/doi/10.1145/3137597.3137600.
14. Remarks by Mr S Iswaran, Minister for Communications and Information, at the Media Literacy Council’s Launch of the Fake News Campaign, November 2, 2018, accessed December 26, 2018, https://www.mci.gov.sg/pressroom/news-and-stories/pressroom/2018/11/remarks-by-minister-s-iswaran-at-the-media-literacy-council-launch-of-the-fake-news-campaign.
15. Lianne Chia, “National Framework to Build Information and Media Literacy to be Launched in 2019: S Iswaran”, CNA, November 2, 2018, accessed December 26, 2018, https://www.channelnewsasia.com/singapore.
16. Infocomm Media Development Authority (IMDA), “New Council to Oversee Cyber Wellness, Media Literacy Initiatives”, November 3, 2017, accessed December 26, 2018, https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/archived/mda/press-releases/2012/new-council-to-oversee-cyber-wellness-media-literacy-initiatives.