
CRIMINAL JUSTICE

CRIMINAL JUSTICE-CRIMINAL LAW-PROCEDURE-SENTENCING-COURTS

Posts in AI
TOWARDS AI?: “IMAGINED FUTURES” FOR PROBATION AND ELECTRONIC MONITORING IN THE INDEPENDENT SENTENCING REVIEW

By: Mike Nellis, Emeritus Professor of Criminal and Community Justice, University of Strathclyde

Abstract

The 2025 Independent Sentencing Review (the Gauke Report) famously placed great emphasis on the use of technology in what has traditionally been called “community supervision”, to provide a way out of the capacity crisis in England and Wales’ prisons. It favours a significant expansion of electronic monitoring (EM) and markedly more punitive forms of remote regulation, dubbed “prison outside prison” in press releases. It further encourages the use of emerging forms of AI to make monitoring and supervision more efficient. In this, the Review was largely elaborating the Ministry of Justice’s own emerging view of the penal future. Its call for EM to be more integrated with the Probation Service may have gone further, but the Review’s vision of the future Probation Service is of a punitive-surveillant agency with a rather ambiguous commitment to rehabilitation. Whether this imagined future is realised remains to be seen.

SherlockAI and the Sentencing Review: AI-Assisted Radical Help

By: Dave Nicholson and Helen Codd

The Independent Sentencing Review highlights the potential of AI for supporting behavioural change in criminal justice-involved people and identifies SherlockAI as deserving further exploration and evaluation to realise that potential (p.135). SherlockAI was co-founded by criminal justice-involved people in partnership with the authors, and in this article we explain how it offers a distinctive and innovative resource. Both authors are part of the SherlockAI team, and this short article offers insights into why the innovative approach of this particular app resonates with the findings of the Review, at a time of ongoing developments in mobile-based technologies to encourage and support desistance (Bartels, 2023; Knight et al., 2024; McGreevy, 2017; Morris and Graham, 2019) and continuing interest in Hilary Cottam’s work on radical help and radical care (Cottam, 2018; Cottam, 2021).

Generative AI as Courtroom Evidence: A Practical Guide

By: Neal Feigenson and Brian Carney

You are the lawyer in a case in which the crucial incident was captured by dozens of smartphone, surveillance, and other cameras. Imagine your forensic video expert putting all of those videos into a generative artificial intelligence (GenAI) model that quickly synchronizes the audio and video streams, links relevant documents, and provides an outline for the strategy of your case, enabling you to understand exactly what happened in minutes instead of weeks and then suggesting ways to prove it at trial. The expert could also employ GenAI to enhance those videos, making relevant facts clearer by rendering blurry images more legible and inaudible conversations more intelligible, or even by creating important camera angles showing views not found in the original images. Or imagine, in a complex commercial dispute, feeding masses of documents and other data into a GenAI model that produces timelines and other visualizations of the relevant events, as well as lists of inherent contradictions in the evidence, which you could then use to prepare your arguments and illustrate your theory of the case in court. All of these tools and more will soon be available. Much has been written in the last half-dozen or so years about the prospect of images, video, and audio created with GenAI being used in court. Most of the concern has focused on deepfakes: synthetic media generated from massive data sources (primarily the Internet) in response to a user’s prompt.