The Open Access Publisher and Free Library

CRIME

CRIME - VIOLENT & NON-VIOLENT - FINANCIAL - CYBER

Posts tagged AI
Testing human ability to detect ‘deepfake’ images of human faces 

By Sergi D. Bray, Shane D. Johnson, and Bennett Kleinberg

‘Deepfakes’ are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (uncurated output from the StyleGAN2 algorithm as trained on the FFHQ dataset) from a pool of non-deepfake images (a random selection of images from the FFHQ dataset), and to assess the effectiveness of some simple interventions intended to improve detection accuracy. Using an online survey, participants (N = 280) were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake images of human faces and 50 images of real human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Of equal concern, participants’ confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images easy to label correctly and certain images difficult, but reported similarly high confidence regardless of the image.
Thus, although overall participant accuracy was 62%, per-image accuracy ranged fairly evenly from 30% to 85%, falling below 50% for one in every five images. We interpret the findings as an urgent call to action to address this threat.

Journal of Cybersecurity, 2023, 1–18 
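As a side note, the headline figures in this abstract (62% accuracy against a 50% chance baseline, 280 participants, 20 images each) can be sanity-checked with an exact binomial calculation. The sketch below is illustrative only, using just the numbers reported in the abstract; it is not the study's own statistical analysis.

```python
from fractions import Fraction
from math import comb

def binom_sf(k: int, n: int) -> Fraction:
    """Exact P(X >= k) for X ~ Binomial(n, 1/2): the probability of getting
    at least k of n yes/no judgements right by guessing alone."""
    return Fraction(sum(comb(n, i) for i in range(k, n + 1)), 2**n)

# A single participant: 62% of 20 images is ~13 correct answers.
p_single = float(binom_sf(13, 20))

# The whole sample pooled: 280 participants x 20 images = 5600 trials.
p_pooled = float(binom_sf(round(0.62 * 5600), 5600))

print(f"one participant, 13/20 correct: p = {p_single:.3f}")
print(f"pooled sample, 62% of 5600 trials: p = {p_pooled:.2e}")
```

The contrast is the point: pooled over 5600 trials, 62% accuracy is overwhelmingly unlikely under pure guessing, yet a single participant scoring 13/20 cannot be distinguished from a guesser (p ≈ 0.13), which fits the abstract's concern that individual confidence was unrelated to actual accuracy.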

The Weaponisation of Deepfakes: Digital Deception by the Far-Right

By Ella Busch and Jacob Ware    

In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this policy brief, we explore the creation and distribution of synthetic media by malign actors, specifically a form of artificial intelligence/machine learning (AI/ML) known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes, specifically deepfake video content, in order to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far-right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological ‘out-groups.' We also expect FREs to deploy deepfakes for the purpose of creating compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning and prosecuting the use of harmful synthetic media. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves in order to “call off the dogs” and undermine the conditions allowing extremist groups to thrive.

The Hague: International Centre for Counter-Terrorism (ICCT), 2023.

Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

By Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif

From the document: "Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists' abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment. Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities--AI tools and technologies that enable the engineering of living systems--also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe. [...] To address the pressing need to govern AI-bio capabilities, this report explores three key questions: [1] What are current and anticipated AI capabilities for engineering living systems? [2] What are the biosecurity implications of these developments? [3] What are the most promising options for governing this important technology that will effectively guard against misuse while enabling beneficial applications? To answer these questions, this report presents key findings informed by interviews with more than 30 individuals with expertise in AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies."

Nuclear Threat Initiative. 2023. 88p.

Principles for Reducing AI Cyber Risk in Critical Infrastructure: A Prioritization Approach

By Christopher L. Sledjeski

From the document: "Artificial Intelligence (AI) brings many benefits, but disruption of AI could, in the future, generate impacts on scales and in ways not previously imagined. These impacts, at a societal level and in the context of critical infrastructure, include disruptions to National Critical Functions. A prioritized risk-based approach is essential in any attempt to apply cybersecurity requirements to AI used in critical infrastructure functions. The topics of critical infrastructure and AI are simply too vast to meaningfully address otherwise. The National Institute of Standards and Technology (NIST) defines cyber secure AI systems as those that can 'maintain confidentiality, integrity and availability through protection mechanisms that prevent unauthorized access and use.' Cybersecurity incidents that impact AI in critical infrastructure could impact the availability, reliability, and safety of these vital services. [...] This paper was prompted by questions presented to MITRE about to what extent the original NIST Cybersecurity Risk Framework, and the efforts that accompanied its release, enabled a regulatory approach that could serve as a model for AI regulation in critical infrastructure. The NIST Cybersecurity Risk Framework was created a decade ago as a requirement of Executive Order (EO) 13636. When this framework was paired with the list of cyber-dependent entities identified under the EO, it provided a voluntary approach for how Sector Risk Management Agencies (SRMAs) prioritize and enhance the cybersecurity of their respective sectors."

MITRE Corporation. 2023. 18p.