Open Access Publisher and Free Library
01-crime.jpg

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCLIAL-CYBER

Posts tagged AI
Crossing the Deepfake Rubicon The Maturing Synthetic Media Threat Landscape

By Di Cooke, Abby Edwards, Alexis Day, Devi Nair, Sophia Barkoff, and Katie Kelly

THE ISSUE

  • In recent years, threat actors have increasingly used synthetic media—digital content produced or manipulated by artificial intelligence (AI)—to enhance their deceptive activities, harming individuals and organizations worldwide with growing frequency.

  • In addition, the weaponization of synthetic media has also begun to undermine people’s trust in information integrity more widely, posing concerning implications for the stability and resilience of the U.S.’s information environment.

  • At present, an individual’s ability to recognize AI-generated content remains the primary defense against people falling prey to deceptively presented synthetic media.

  • However, a recent experimental study by CSIS found that people are no longer able to reliably distinguish between authentic and AI-generated images, audio, and video sourced from publicly available tools.

  • That human detection has ceased to be a reliable method for identifying synthetic media only heightens the dangers posed by the technology’s misuse, underscoring the pressing need to implement alternative countermeasures to address this emerging threat.

CSIS, 2024. 11p.

“Say it’s Only Fictional”: How the Far-Right is Jailbreaking AI and What Can Be Done About It  

By Bàrbara Molas and Heron Lopes

This research report illustrates how far-right users have accelerated the spread of harmful content by successfully exploiting AI tools and platforms. In doing so, it contributes to improving our understanding of the misuse of AI through new data and evidence-based insights that may inform action against the dissemination of hate culture through the latest technologies.  

  The Hague: The International Centre for Counter-Terrorism (ICCT), 2024. 27p.

Bytes and Battles: Inclusion of Data Governance in Responsible Military AI

By: Yasmin Afina and Sarah Grand-Clément

Data plays a critical role in the training, testing and use of artificial intelligence (AI), including in the military domain. Research and development for AI-enabled military solutions is proceeding at breakneck speed, and the important role data plays in shaping these technologies has implications and, at times, raises concerns. These issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, as well as their susceptibility to cyberattacks and interference (for example, data poisoning). Yet pathways and governance solutions to address these issues remain scarce and very much underexplored.

This paper aims to fill this gap by first providing a comprehensive overview on data issues surrounding the development, deployment and use of AI. It then explores data governance practices from civilian applications to identify lessons for military applications, as well as highlight any limitations to such an approach. The paper concludes with an overview of possible policy and governance approaches to data practices surrounding military AI to foster the responsible development, testing, deployment and use of AI in the military domain.

CIGI Papers No. 308 — October 2024

THE IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY: SHIFTING THE OFFENSE- DEFENSE BALANCE

By: Jennifer Tang, Tiffany Saade, and Steve Kelly

Cutting-edge advances in artificial intelligence (AI) are taking the world by storm, driven by a massive surge of investment, countless new start-ups, and regular technological breakthroughs. AI presents key opportunities within cybersecurity, but concerns remain regarding the ways malicious actors might also use the technology. In this study, the Institute for Security and Technology (IST) seeks to paint a comprehensive picture of the state of play— cutting through vagaries and product marketing hype, providing our outlook for the near future, and most importantly, suggesting ways in which the case for optimism can be realized.

The report concludes that in the near term, AI offers a significant advantage to cyber defenders, particularly those who can capitalize on their "home field" advantage and firstmover status. However, sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial. At this time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness.

This work is the foundation of a broader IST project to better understand which areas of cybersecurity require the greatest collective focus and alignment—for example, greater opportunities for accelerating threat intelligence collection and response, democratized tools for automating defenses, and/or developing the means for scaling security across disparate platforms—and to design a set of actionable technical and policy recommendations in pursuit of a secure, sustainable digital ecosystem.

The Institute for Security and Technology, October 2024

Hacking Generative AI

By Ido Kilovaty

Generative AI platforms, like ChatGPT, hold great promise in enhancing human creativity, productivity, and efficiency. However, generative AI platforms are prone to manipulation. Specifically, they are susceptible to a new type of attack called “prompt injection.” In prompt injection, attackers carefully craft their input prompt to manipulate AI into generating harmful, dangerous, or illegal content as output. Examples of such outputs include instructions on how to build an improvised bomb, how to make meth, how to hotwire a car, and more. Researchers have also been able to make ChatGPT generate malicious code. This article asks a basic question: do prompt injection attacks violate computer crime law, mainly the Computer Fraud and Abuse Act? This article argues that they do. Prompt injection attacks lead AI to disregard its own hard-coded content generation restrictions, which allows the attacker to access portions of the AI that are beyond what the system’s developers authorized. Therefore, this constitutes the criminal offense of accessing a computer in excess of authorization. Although prompt injection attacks could run afoul of the Computer Fraud and Abuse Act, this article offers ways to distinguish serious acts of AI manipulation from less serious ones, so that prosecution would only focus on a limited set of harmful and dangerous prompt injections.

Loyola of Los Angeles Law Review, Vol. 58, 2025, Kilovaty, Ido, Hacking Generative AI (March 1, 2024). Loyola of Los Angeles Law Review, Vol. 58, 2025,

Testing human ability to detect ‘deepfake’ images of human faces 

By Sergi D. Bray , Shane D. Johnson and Bennett Kleinberg

Deepfakes’ are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (these being uncurated output from the StyleGAN2 algorithm as trained on the FFHQ dataset) from a pool of non-deepfake images (these being random selection of images from the FFHQ dataset), and to assess the effectiveness of some simple interventions intended to improve detection accuracy. Using an online survey, participants (N = 280) were randomly allocated to one of four groups: a control group, and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake images of human faces and 50 images of real human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Of equal concern was the fact that participants’ confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images easy to label correctly and certain images difficult, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85 and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat. 

Journal of Cybersecurity, 2023, 1–18 

The Weaponisation of Deepfakes Digital Deception by the Far-Right

By Ella Busch and Jacob Ware    

In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this policy brief, we explore the creation and distribution of synthetic media by malign actors, specifically a form of artificial intelligence-machine learning (AI/ML) known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes– specifically deepfake video content–in order to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological ‘out-groups.' We also expect FREs to deploy deepfakes for the purpose of creating compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning and prosecuting the use of harmful synthetic media. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves in order to “call off the dogs” and undermine the conditions allowing extremist groups to thrive.  

The Hague?:  International Centre for Counter-Terrorism (ICCT), 2023.

Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

By Carter, Sarah R.; Wheeler, Nicole E.; Chwalek, Sabrina; Isaac, Christopher R.; Yassif, Jaime

From the document: "Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists' abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment. Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities--AI tools and technologies that enable the engineering of living systems--also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe. [...] To address the pressing need to govern AI-bio capabilities, this report explores three key questions: [1] What are current and anticipated AI capabilities for engineering living systems? [2] What are the biosecurity implications of these developments? [3] What are the most promising options for governing this important technology that will effectively guard against misuse while enabling beneficial applications? To answer these questions, this report presents key findings informed by interviews with more than 30 individuals with expertise in AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies."

Nuclear Threat Initiative. 2023. 88p.

Principles for Reducing AI Cyber Risk in Critical Infrastructure: A Prioritization Approach

By SLEDJESKI, CHRISTOPHER L.

From the document: "Artificial Intelligence (AI) brings many benefits, but disruption of AI could, in the future, generate impacts on scales and in ways not previously imagined. These impacts, at a societal level and in the context of critical infrastructure, include disruptions to National Critical Functions. A prioritized risk-based approach is essential in any attempt to apply cybersecurity requirements to AI used in critical infrastructure functions. The topics of critical infrastructure and AI are simply too vast to meaningfully address otherwise. The National Institute of Standards and Technology (NIST) defines cyber secure AI systems as those that can 'maintain confidentiality, integrity and availability through protection mechanisms that prevent unauthorized access and use.' Cybersecurity incidents that impact AI in critical infrastructure could impact the availability, reliability, and safety of these vital services. [...] This paper was prompted by questions presented to MITRE about to what extent the original NIST Cybersecurity Risk Framework, and the efforts that accompanied its release, enabled a regulatory approach that could serve as a model for AI regulation in critical infrastructure. The NIST Cybersecurity Risk Framework was created a decade ago as a requirement of Executive Order (EO) 13636. When this framework was paired with the list of cyber-dependent entities identified under the EO, it provided a voluntary approach for how Sector Risk Management Agencies (SRMAs) prioritize and enhance the cybersecurity of their respective sectors."

MITRE CORPORATION. 2023. 18p.