Open Access Publisher and Free Library
CRIME+CRIMINOLOGY.jpeg

CRIME

Violent-Non-Violent-Cyber-Global-Organized-Environmental-Policing-Crime Prevention-Victimization

Posts tagged AI
New Frontiers: The Use of Generative Artificial Intelligence to Facilitate Trafficking in Persons

Bennett, Phil; Cucos, Radu; Winch, Ryan

From the document: "The intersection of AI and transnational crime, particularly its application in human trafficking, represents an emerging and critically important area of study. This brief has been developed with a clear objective: to equip policymakers, law enforcement agencies, and the technology sector with the insights needed to anticipate and pre-emptively address the potential implications of AI on trafficking in persons. While we respond to the early instances of the use of AI by transnational criminal organisations, such as within Southeast Asia's cyber-scam centres, a more systemic approach is required. The potential for transnational criminal organisations to significantly expand their operations using AI technologies is considerable, and with it comes the risk of exponentially increasing harm to individuals and communities worldwide. It is imperative that we act now, before the most severe impacts of AI-enabled trafficking are realised. We have a unique time-limited opportunity--and indeed, a responsibility--to plan, train, and develop policies that can mitigate these emerging threats. This report aims to concretise this discussion by outlining specific scenarios where AI and trafficking could intersect, and to initiate a dialogue on how we can prepare and respond effectively. This document is not intended to be definitive, but rather to serve as a foundation for a broader, ongoing discussion. The ideas presented here are initial steps, and it will require innovative thinking, adequate resourcing, and sustained engagement from all sectors to build upon them effectively."

Organization For Security And Co-Operation In Europe. Office Of The Special Representative And Co-Ordinator For Combating Trafficking In Human Beings; Bali Process (Forum). Regional Support Office .NOV, 2024

Crossing the Deepfake Rubicon The Maturing Synthetic Media Threat Landscape

By Di Cooke, Abby Edwards, Alexis Day, Devi Nair, Sophia Barkoff, and Katie Kelly

THE ISSUE

  • In recent years, threat actors have increasingly used synthetic media—digital content produced or manipulated by artificial intelligence (AI)—to enhance their deceptive activities, harming individuals and organizations worldwide with growing frequency.

  • In addition, the weaponization of synthetic media has also begun to undermine people’s trust in information integrity more widely, posing concerning implications for the stability and resilience of the U.S.’s information environment.

  • At present, an individual’s ability to recognize AI-generated content remains the primary defense against people falling prey to deceptively presented synthetic media.

  • However, a recent experimental study by CSIS found that people are no longer able to reliably distinguish between authentic and AI-generated images, audio, and video sourced from publicly available tools.

  • That human detection has ceased to be a reliable method for identifying synthetic media only heightens the dangers posed by the technology’s misuse, underscoring the pressing need to implement alternative countermeasures to address this emerging threat.

CSIS, 2024. 11p.

Using Intelligence Analysis to Understand and Address Fentanyl Distribution Networks in America’s Largest Port City 

By Aili Malm, Nicholas Perez, Michael D. White

This publication represents the final research report of California State University, Long Beach’s (CSULB) evaluation of an intelligence-led problem-oriented policing (POP) project to better understand and address illicit fentanyl distribution networks in Long Beach, CA. The goals of this study were to: (1) employ problem-oriented policing to drive efforts to identify and disrupt fentanyl distribution networks in Long Beach, CA, and (2) use intelligence analysis to identify high-level distributors for investigation. To achieve these goals, researchers worked with a newly hired intelligence analyst and Long Beach Police Department (LBPD) Drug Investigation Section (DIS) detectives to improve their fentanyl distribution network investigations. The intervention included POP training, intelligence analyst support [cellular phone extractions, open-source intelligence (OSINT), social network analysis (SNA), etc.], and weekly interactions between the analyst and the research team. To assess the effectiveness of the project, we conducted both process and outcome evaluations. Primary data sources include: (1) interviews of detectives and the analyst; (2) DIS administrative data; (3) network data from three fentanyl distribution cases; and (4) fentanyl-related overdose data from the LBPD and the California Overdose Surveillance Dashboard. We identified findings across multiple analyses that, when taken together, represent a persuasive collection of circumstantial evidence regarding the positive effects of the project on two important outcomes: increased DIS activity and efficiency and effective fentanyl distribution network disruption. While fentanyl-related overdose rates did decrease substantially over the course of the project, there is no conclusive evidence that the project led to the reduction. The effects of COVID-19, the defund movement following George Floyd’s death, and the Los Angeles County District Attorney policy limiting the prosecution of drug offenses confounded our ability to draw a stronger connection between the project and enhanced DIS activity and efficiency, fentanyl distribution network disruption, and overdose rates.   

California State University, Long Beach; School of Criminology, Criminal Justice, and Emergency Management; 2024 77p. 

“Say it’s Only Fictional”: How the Far-Right is Jailbreaking AI and What Can Be Done About It  

By Bàrbara Molas and Heron Lopes

This research report illustrates how far-right users have accelerated the spread of harmful content by successfully exploiting AI tools and platforms. In doing so, it contributes to improving our understanding of the misuse of AI through new data and evidence-based insights that may inform action against the dissemination of hate culture through the latest technologies.  

  The Hague: The International Centre for Counter-Terrorism (ICCT), 2024. 27p.

Bytes and Battles: Inclusion of Data Governance in Responsible Military AI

By: Yasmin Afina and Sarah Grand-Clément

Data plays a critical role in the training, testing and use of artificial intelligence (AI), including in the military domain. Research and development for AI-enabled military solutions is proceeding at breakneck speed, and the important role data plays in shaping these technologies has implications and, at times, raises concerns. These issues are increasingly subject to scrutiny and range from difficulty in finding or creating training and testing data relevant to the military domain, to (harmful) biases in training data sets, as well as their susceptibility to cyberattacks and interference (for example, data poisoning). Yet pathways and governance solutions to address these issues remain scarce and very much underexplored.

This paper aims to fill this gap by first providing a comprehensive overview on data issues surrounding the development, deployment and use of AI. It then explores data governance practices from civilian applications to identify lessons for military applications, as well as highlight any limitations to such an approach. The paper concludes with an overview of possible policy and governance approaches to data practices surrounding military AI to foster the responsible development, testing, deployment and use of AI in the military domain.

CIGI Papers No. 308 — October 2024

THE IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY: SHIFTING THE OFFENSE- DEFENSE BALANCE

By: Jennifer Tang, Tiffany Saade, and Steve Kelly

Cutting-edge advances in artificial intelligence (AI) are taking the world by storm, driven by a massive surge of investment, countless new start-ups, and regular technological breakthroughs. AI presents key opportunities within cybersecurity, but concerns remain regarding the ways malicious actors might also use the technology. In this study, the Institute for Security and Technology (IST) seeks to paint a comprehensive picture of the state of play— cutting through vagaries and product marketing hype, providing our outlook for the near future, and most importantly, suggesting ways in which the case for optimism can be realized.

The report concludes that in the near term, AI offers a significant advantage to cyber defenders, particularly those who can capitalize on their "home field" advantage and firstmover status. However, sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial. At this time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness.

This work is the foundation of a broader IST project to better understand which areas of cybersecurity require the greatest collective focus and alignment—for example, greater opportunities for accelerating threat intelligence collection and response, democratized tools for automating defenses, and/or developing the means for scaling security across disparate platforms—and to design a set of actionable technical and policy recommendations in pursuit of a secure, sustainable digital ecosystem.

The Institute for Security and Technology, October 2024

Harnessing Artificial Intelligence to Address Organised Environmental Crime in Africa

By Romi Sigsworth 

Artificial intelligence (AI) offers innovative solutions for addressing a range of illegal activities that impact Africa’s environment. This report explores how AI is being used in Africa to provide intelligence on organised environmental crime, craft tools to assess its impact, and develop methods to detect and prevent environmental criminal activities. It discusses the challenges and opportunities AI poses for policing environmental crime in Africa, and proposes recommendations that would allow AI-powered policing to make a real difference on the continent. Recommendations • African governments and organisations should invest in gathering large, local data sets to allow AI models to produce appropriate and relevant solutions. • Investments in digital and communication infrastructure need to be made across Africa to improve and expand access to and affordability of AI solutions. • Police forces across Africa should include technology and AI skills capacity building into their basic and professional development training curricula. • Guardrails should be established through legislation to protect data, ensure privacy where necessary, and regulate the use of AI. • Public-private partnerships must be strengthened for law enforcement agencies across Africa to receive the technology and training they need to effectively embed AI tools into their methodologies to combat environmental (and other) organised crime

Enact Africa 2024. 28p.

Increasing Threat of DeepFake Identities

By U.S. Department of Homeland Security

Deepfakes, an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events that never happened. Many applications of synthetic media represent innocent forms of entertainment, but others carry risk. The threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people’s natural inclination to believe what they see, and as a result, deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation. Based on numerous interviews conducted with experts in the field, it is apparent that the severity and urgency of the current threat from synthetic media depends on the exposure, perspective, and position of who you ask. The spectrum of concerns ranged from “an urgent threat” to “don’t panic, just be prepared.” To help customers understand how a potential threat might arise, and what that threat might look like, we considered a number of scenarios specific to the arenas of commerce, society, and national security. The likelihood of any one of these scenarios occurring and succeeding will undoubtedly increase as the cost and other resources needed to produce usable deepfakes simultaneously decreases - just as synthetic media became easier to create as non-AI/ML techniques became more readily available. In line with the multifaceted nature of the problem, there is no one single or universal solution, though elements of technological innovation, education, and regulation must comprise part of any detection and mitigation measures. In order to have success there will have to be significant cooperation among stakeholders in the private and public sectors to overcome current obstacles such as “stovepiping” and to ultimately protect ourselves from these emerging threats while protecting civil liberties.

Washington, DC: DHS, 2021.  47p.


Hacking Generative AI

By Ido Kilovaty

Generative AI platforms, like ChatGPT, hold great promise in enhancing human creativity, productivity, and efficiency. However, generative AI platforms are prone to manipulation. Specifically, they are susceptible to a new type of attack called “prompt injection.” In prompt injection, attackers carefully craft their input prompt to manipulate AI into generating harmful, dangerous, or illegal content as output. Examples of such outputs include instructions on how to build an improvised bomb, how to make meth, how to hotwire a car, and more. Researchers have also been able to make ChatGPT generate malicious code. This article asks a basic question: do prompt injection attacks violate computer crime law, mainly the Computer Fraud and Abuse Act? This article argues that they do. Prompt injection attacks lead AI to disregard its own hard-coded content generation restrictions, which allows the attacker to access portions of the AI that are beyond what the system’s developers authorized. Therefore, this constitutes the criminal offense of accessing a computer in excess of authorization. Although prompt injection attacks could run afoul of the Computer Fraud and Abuse Act, this article offers ways to distinguish serious acts of AI manipulation from less serious ones, so that prosecution would only focus on a limited set of harmful and dangerous prompt injections.

Loyola of Los Angeles Law Review, Vol. 58, 2025, Kilovaty, Ido, Hacking Generative AI (March 1, 2024). Loyola of Los Angeles Law Review, Vol. 58, 2025,

Testing human ability to detect ‘deepfake’ images of human faces 

By Sergi D. Bray , Shane D. Johnson and Bennett Kleinberg

Deepfakes’ are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (these being uncurated output from the StyleGAN2 algorithm as trained on the FFHQ dataset) from a pool of non-deepfake images (these being random selection of images from the FFHQ dataset), and to assess the effectiveness of some simple interventions intended to improve detection accuracy. Using an online survey, participants (N = 280) were randomly allocated to one of four groups: a control group, and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake images of human faces and 50 images of real human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Of equal concern was the fact that participants’ confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images easy to label correctly and certain images difficult, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85 and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat. 

Journal of Cybersecurity, 2023, 1–18 

The Weaponisation of Deepfakes Digital Deception by the Far-Right

By Ella Busch and Jacob Ware    

In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this policy brief, we explore the creation and distribution of synthetic media by malign actors, specifically a form of artificial intelligence-machine learning (AI/ML) known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes– specifically deepfake video content–in order to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological ‘out-groups.' We also expect FREs to deploy deepfakes for the purpose of creating compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning and prosecuting the use of harmful synthetic media. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves in order to “call off the dogs” and undermine the conditions allowing extremist groups to thrive.  

The Hague?:  International Centre for Counter-Terrorism (ICCT), 2023.

Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

By Carter, Sarah R.; Wheeler, Nicole E.; Chwalek, Sabrina; Isaac, Christopher R.; Yassif, Jaime

From the document: "Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists' abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment. Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities--AI tools and technologies that enable the engineering of living systems--also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe. [...] To address the pressing need to govern AI-bio capabilities, this report explores three key questions: [1] What are current and anticipated AI capabilities for engineering living systems? [2] What are the biosecurity implications of these developments? [3] What are the most promising options for governing this important technology that will effectively guard against misuse while enabling beneficial applications? To answer these questions, this report presents key findings informed by interviews with more than 30 individuals with expertise in AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies."

Nuclear Threat Initiative. 2023. 88p.

Principles for Reducing AI Cyber Risk in Critical Infrastructure: A Prioritization Approach

By SLEDJESKI, CHRISTOPHER L.

From the document: "Artificial Intelligence (AI) brings many benefits, but disruption of AI could, in the future, generate impacts on scales and in ways not previously imagined. These impacts, at a societal level and in the context of critical infrastructure, include disruptions to National Critical Functions. A prioritized risk-based approach is essential in any attempt to apply cybersecurity requirements to AI used in critical infrastructure functions. The topics of critical infrastructure and AI are simply too vast to meaningfully address otherwise. The National Institute of Standards and Technology (NIST) defines cyber secure AI systems as those that can 'maintain confidentiality, integrity and availability through protection mechanisms that prevent unauthorized access and use.' Cybersecurity incidents that impact AI in critical infrastructure could impact the availability, reliability, and safety of these vital services. [...] This paper was prompted by questions presented to MITRE about to what extent the original NIST Cybersecurity Risk Framework, and the efforts that accompanied its release, enabled a regulatory approach that could serve as a model for AI regulation in critical infrastructure. The NIST Cybersecurity Risk Framework was created a decade ago as a requirement of Executive Order (EO) 13636. When this framework was paired with the list of cyber-dependent entities identified under the EO, it provided a voluntary approach for how Sector Risk Management Agencies (SRMAs) prioritize and enhance the cybersecurity of their respective sectors."

MITRE CORPORATION. 2023. 18p.