By Neal Feigenson and Brian Carney
You are the lawyer in a case in which the crucial incident was captured by dozens of smartphone, surveillance, and other cameras. Imagine your forensic video expert putting all of those videos into a generative artificial intelligence (GenAI)1 model that quickly synchronizes the audio and video streams, links relevant documents, and provides an outline of your case strategy—enabling you to understand exactly what happened in minutes instead of weeks—and then suggests ways to prove it at trial. The expert could also employ GenAI to enhance those videos, making relevant facts clearer by rendering blurry images more legible and inaudible conversations more intelligible, or even by generating new camera angles that show views not found in the original images. Or imagine, in a complex commercial dispute, feeding masses of documents and other data into a GenAI model that produces timelines and other visualizations of the relevant events, as well as lists of inherent contradictions in the evidence, which you could then use to prepare your arguments and illustrate your theory of the case in court. All of these tools and more will soon be available.

Much has been written in the last half-dozen or so years about the prospect of images, video, and audio created with GenAI being used in court. Most of the concern has focused on deepfakes: realistic but fabricated images, video, or audio that GenAI models generate from massive data sources—primarily the Internet—in response to a user's prompt.