Open Access Publisher and Free Library

CRIME PREVENTION

CRIME PREVENTION-POLICING-CRIME REDUCTION-POLITICS

Posts tagged artificial intelligence
AI and the Evolution of Biological National Security Risks: Capabilities, Thresholds, and Interventions

DREXEL, BILL; WITHERS, CALEB

From the document: "In 2020, COVID-19 brought the world to its knees, with nearly 29 million estimated deaths, acute social and political disruptions, and vast economic fallout. However, the event's impact could have been far worse if the virus had been more lethal, more transmissible, or both. For decades, experts have warned that humanity is entering an era of potential catastrophic pandemics that would make COVID-19 appear mild in comparison. History is well acquainted with such instances, not least the 1918 Spanish Flu, the Black Death, and the Plague of Justinian--each of which would have dwarfed COVID-19's deaths if scaled to today's populations. Equally concerning, many experts have sounded alarms of possible deliberate bioattacks in the years ahead. [...] This report aims to clearly assess AI's impact on the risks of biocatastrophe. It first considers the history and existing risk landscape in American biosecurity independent of AI disruptions. Drawing on a sister report, 'Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security,' this study then considers how AI is impacting biorisks across four dimensions of AI safety: new capabilities, technical challenges, integration into complex systems, and conditions of AI development. Building on this analysis, the report identifies areas of future capability development that may substantially alter the risks of large-scale biological catastrophes worthy of monitoring as the technology continues to evolve. Finally, the report recommends actionable steps for policymakers to address current and near-term risks of biocatastrophes."

CENTER FOR A NEW AMERICAN SECURITY. 2024.

Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security

DREXEL, BILL; WITHERS, CALEB

From the document: "Since ChatGPT [Chat Generative Pre-Trained Transformer] was launched in November 2022, artificial intelligence (AI) systems have captured public imagination across the globe. ChatGPT's record-breaking speed of adoption--logging 100 million users in just two months--gave an unprecedented number of individuals direct, tangible experience with the capabilities of today's state-of-the-art AI systems. More than any other AI system to date, ChatGPT and subsequent competitor large language models (LLMs) have awakened societies to the promise of AI technologies to revolutionize industries, cultures, and political life. [...] This report aims to help policymakers understand catastrophic AI risks and their relevance to national security in three ways. First, it attempts to further clarify AI's catastrophic risks and distinguish them from other threats such as existential risks that have featured prominently in public discourse. Second, the report explains why catastrophic risks associated with AI development merit close attention from U.S. national security practitioners in the years ahead. Finally, it presents a framework of AI safety dimensions that contribute to catastrophic risks."

CENTER FOR A NEW AMERICAN SECURITY. JUNE 2024.

Artificial Intelligence Index Report 2024

MASLEJ, NESTOR; FATTORINI, LOREDANA; PERRAULT, RAYMOND; PARLI, VANESSA; REUEL, ANKA; BRYNJOLFSSON, ERIK

From the document: "Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI's influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI's impact on science and medicine. The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI." See pages 10 and 11 for a full list of contributors.

STANFORD UNIVERSITY. HUMAN-CENTERED ARTIFICIAL INTELLIGENCE. 2024. 502p.

Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement

By Richard A. Berk

There are widespread concerns about the use of artificial intelligence in law enforcement. Predictive policing and risk assessment are salient examples. Worries include the accuracy of forecasts that guide both activities, the prospect of bias, and an apparent lack of operational transparency. Nearly breathless media coverage of artificial intelligence helps shape the narrative. In this review, we address these issues by first unpacking depictions of artificial intelligence. Its use in predictive policing to forecast crimes in time and space is largely an exercise in spatial statistics that in principle can make policing more effective and more surgical. Its use in criminal justice risk assessment to forecast who will commit crimes is largely an exercise in adaptive, nonparametric regression. It can in principle allow law enforcement agencies to better provide for public safety with the least restrictive means necessary, which can mean far less use of incarceration. None of this is mysterious. Nevertheless, concerns about accuracy, fairness, and transparency are real, and there are tradeoffs between them for which there can be no technical fix. You can’t have it all. Solutions will be found through political and legislative processes achieving an acceptable balance between competing priorities.

Annu. Rev. Criminol. 2021. 4:209–37.

AI and Administration of Justice: Predictive Policing and Predictive Justice in the Netherlands

By Maša Galič, Abhijit Das and Marc Schuilenburg

There is great enthusiasm for the use of Artificial Intelligence (AI) in the criminal justice domain in the Netherlands. This enthusiasm is connected to a strong belief – at least on the side of the government – that experimenting with new technologies can enhance security as well as improve government efficiency. New digital systems are seen as leading to rational, scientific and value-neutral ways of generating knowledge and expertise within the criminal justice domain. AI in this domain therefore holds a central position not only in policy documents but also in numerous examples in practice. The Dutch police stand at the forefront of predictive policing, at least in Europe, having been the first to deploy an AI-based system for predictive policing nationwide, and they continue to set up a growing number of predictive policing projects. Facial recognition technology is increasingly used in public space, both by the police and by municipalities, often in public-private partnerships constituted within smart city initiatives. And AI-based systems, such as Hansken, are used to find evidence among the huge amounts of data gathered in contemporary criminal investigations. It should be noted, however, that in the Dutch public sector the term AI is often used broadly, covering algorithmic systems of varying complexity. The term is applied not only to data-driven algorithms (where algorithms are trained on input data) and rule-based algorithms (where the steps, methodologies and outcomes can be traced to pre-programmed instructions implemented by a human), but also to older and much simpler types of statistical analysis (e.g., actuarial risk assessment tools, which are based on the correlation between certain factors and past statistics concerning recidivism).
Because of this broad use of the term AI, and because of a lack of publicly available information on how many of the systems used in practice actually function, it is sometimes difficult to know whether a given system in the criminal justice domain is, strictly speaking, AI-based. In any case, older methods of statistical analysis should be seen as precursors of contemporary advanced AI techniques: the development of risk assessment technology, such as predictive policing and tools for assessing the risk of recidivism, has taken place on a continuum along which several generations can be discerned.

e-Revue Internationale de Droit Pénal. 2023. 57p.

‘Predictive Policing’, ‘Predictive Justice’, and the Use of ‘Artificial Intelligence’ in the Administration of Criminal Justice in Germany

By Johanna Sprenhrt and Dominik Brodowski

In ever more areas, it becomes evident that the transformative power of information technology – and so-called ‘artificial intelligence’ in particular – affects the administration of criminal justice in Germany. The legal framing of issues relating to the use of ‘AI technology’ in criminal justice lags behind, however, and is highly complex: in particular, it needs to take the European framework into account, and it has to cope with the German peculiarity that the prevention of crimes by the police is a separate branch of law, regulated mostly at the ‘Länder’ (federal state) level, while criminal justice is regulated mostly at the federal level. In this report, we shed light on the practice, on legal discussions, and on current initiatives relating to ‘predictive policing’ (1.), ‘predictive justice’ (2.), and evidence law and the use of ‘artificial intelligence’ in the administration of criminal justice (3.) in Germany.

e-Revue Internationale de Droit Pénal. 2023. 57p.