Open Access Publisher and Free Library

CRIMINAL JUSTICE

CRIMINAL JUSTICE-CRIMINAL LAW-PROCEDURE-SENTENCING-COURTS

Posts tagged AI in law enforcement
Artificial Intelligence in the Criminal Justice System: Demystifying artificial intelligence, its applications, and potential risks

By James Redden; Molly O'Donovan Dix

This technology brief is the first in a four-part series exploring artificial intelligence (AI) applications within the criminal justice system. This first brief frames AI, defines common AI terms, and offers a mental model for identifying AI use cases within the criminal justice system. While the brief provides examples of how AI might bring significant benefit to the criminal justice system, it also highlights risks that decision makers should consider when developing or deploying AI tools. The subsequent briefs give greater consideration to AI in law enforcement, the criminal courts, and corrections.

Key Takeaways

- AI will transform our personal, industrial, commercial, and civil realities in the years to come, enabling and challenging individuals involved in the justice system as well as in criminal activity.
- AI tools have the potential to improve efficiency, reduce costs, and expand capabilities across many criminal justice use cases; however, technical feasibility and operational realities need to be considered.
- AI systems carry inherent risk that decision makers need to understand. For example, AI technologies raise ethical and civil liberties questions that the criminal justice system and society at large will have to wrestle with in the years ahead.

AI will bring changes to nearly every industry over the next decade. In fact, AI is already impacting our daily lives and is being built into the background of many of our daily activities: from facial recognition technologies that unlock our smartphones, to algorithms that recommend movies we might like, to virtual chatbots that handle our customer service inquiries. For the criminal justice system, AI presents opportunities along with significant risks. AI tools have the potential to improve efficiency, reduce costs, and expand capabilities across many criminal justice use cases. Yet many criminal justice leaders have misconceptions about the capabilities of AI and the level of investment required to create or deploy AI solutions for specific use cases.

Research Triangle Park, NC: RTI International, 2020. 10p.

Minding the Machines On Values and AI in the Criminal Legal Space 

By Julian Adler, Jethro Antoine, Laith Al-Saadoon 

There was but one passing reference to “core values” over the course of a recent U.S. Senate Judiciary hearing on artificial intelligence [AI] in criminal investigations and prosecutions.[1] This is typical. Even in spaces like the criminal legal system, where the specters of racial injustice and inhumanity loom so large, the technological sublimity of AI can be awfully distracting. People have long looked to technology to duck the hard problem of values. “[W]e have tended to believe that if we just had more information, we could make better policy,” observes University of Nevada’s Lynda Walsh in Scientists as Prophets. “But no matter how much data we could lay hands to—even if it were Laplace’s Demon itself—values would still stand in the way.”[2] If anything is clear about advanced AI, it is that there is much we don’t know and even more that we can’t begin to predict. Consider that the “generative AI” we have witnessed over the past 18 months—AI which produces autonomous human-impersonating content—was largely unforeseen. It’s now being attributed to AI’s “emergent abilities.”[3] Across sectors, most observers acknowledge that AI is a game-changing technology. The Financial Industry Regulatory Authority is illustrative: using AI, it now processes “a peak volume of 600 billion transactions every day to detect potential abuses,” making the regulator “one of the largest data processors in the world.”[4] Tellingly, many of the people closest to the leading edges of AI development are sounding the loudest alarms about its capabilities.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” warned the Center for AI Safety in 2023.[5] AI has the potential to supercharge, not mitigate, the uglier sides of humanity—much like, as one journalist puts it, “a fun-house-style… mirror magnifying biases and stripping out the context from which their information comes.”[6] Advanced AI is “not just another technology,” contends Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford; it is not “another tool that will add incrementally to human capabilities.”[7] Echoing countless dystopian projections of the future, the Center for AI Safety predicts AI systems will likely “become harder to control” than previous forms of technology; among other disquieting scenarios, these systems could “drift from their original goals” and “optimize flawed objectives.”[8]

New York: Center for Court Innovation, 2024. 8p.