
Posts tagged deepfakes
“One day this could happen to me”: Children, nudification tools and sexually explicit deepfakes

By the Children's Commissioner for England

“Maybe young girls will not post what they want to post or do something they would like to do just in case there’s this fear of ‘Oh I might be abused, this might be turned into a bit of sexual content’ when it shouldn’t have been.” – Girl, 17, focus group

Generative Artificial Intelligence (GenAI) is transforming the online world. AI models can generate text, images, and videos, and hold conversations in response to a handful of prompts, and they are rightly seen as a development with huge potential to enhance people’s lives. However, these tools are also being misused at an alarming cost to children’s online and offline safety. ‘Nudification’ tools are apps and websites that create sexually explicit deepfake images of real people, and at the time of writing this technology is legal in the UK. GenAI, which is often free to use and easy to programme, has supercharged the growth of these tools. Despite this being a relatively new technology, the high risk of harm it presents to children is increasingly evident. Children told the Children’s Commissioner’s Office (CCo) team that the very existence of technology that could strip people of their clothes frightened them. In a series of focus groups held with children in their schools (quoted throughout this report), the team heard girls describe how they were trying to reduce the chance of featuring in a sexually explicit deepfake by limiting their participation in the online world, a space which could enhance their social lives, play and learning if it were safe for them.

This report identifies the threat that sexually explicit deepfake technology presents to children. Currently, it is illegal to create a sexually explicit image of a child. Yet the technology used to do so remains legal and accessible through the most popular parts of the online world, including large social media platforms and search engines. After analysing what is known about this new technological threat, assessing what it looks like in the online landscape, and speaking to children about what it means for them, this report has found:

• Nudification tools and sexually explicit deepfake technologies present a high risk of harm to children:
  o Nudification tools target women and girls in particular, and many only work on female bodies. This is contributing to a culture of misogyny both online and offline.
  o The presence of nudification technology is having a chilling effect on girls’ participation in the online world. Girls are taking preventative steps to keep themselves safe from being victimised by nudification tools, in the same way that girls follow other rules to keep themselves safe in the offline world – like not walking home alone at night.
  o Children want action to be taken to tackle the misuse of AI technology. One girl questioned what the point of it was, if it only seemed to be used for bad intentions: “Do you know why deepfake was created? Like, what was the purpose of it? Because I don't see any positives” – Girl, 16.
• Nudification tools and sexually explicit deepfake technologies are easily accessible through popular online platforms:
  o Search engines and social media platforms are the most common way that users access nudification apps and technologies.
  o GenAI has made the development of nudification technology easy and cheap.
  o Open-source AI models that are not primarily designed to create overtly sexually explicit images or videos still present a risk of harm to children and young people.

The Children’s Commissioner wants GenAI technology, and future AI technology, to be made safe for children, and calls on the Government to:
1. Ban bespoke nudification apps.
2. Bring in specific legal responsibilities for the companies developing GenAI tools to screen their tools for nudifying risks to children and mitigate them.
3. Provide children with an effective route to have sexually explicit deepfake images of themselves removed from the internet.
4. Commit to making the online world safer for girls, by recognising sexually explicit deepfake abuse - and the bespoke services used to carry it out - as acts of violence against women and girls.

London: The Children's Commissioner, 2025. 34p.

Deepfake Nudes & Young People: Navigating a new frontier in technology-facilitated nonconsensual sexual abuse and exploitation

By Thorn in partnership with Burson Insights, Data & Intelligence 

Since 2019, Thorn has focused on amplifying youth voices to better understand their digital lives, with particular attention to how they encounter and navigate technology-facilitated forms of sexual abuse and exploitation. Previous youth-centered research has explored topics such as child sexual abuse material (CSAM), including that which is self-generated (“SG-CSAM”), nonconsensual resharing, online grooming, and the barriers young people face in disclosing or reporting negative experiences. Thorn’s Emerging Threats to Young People research series aims to examine emergent online risks to better understand how current technologies create and/or exacerbate child safety vulnerabilities, and to identify areas where solutions are needed. This report, the first in the series, sheds light specifically on young people’s perceptions of and experiences with deepfake nudes. Future reports in this initiative will address other pressing issues, including sextortion and online solicitations. Drawing on responses from a survey of 1,200 young people aged 13-20, this report explores their awareness of deepfake nudes, lived experiences with them, and their involvement in creating such content. Three key findings emerged from this research:

1. Young people overwhelmingly recognize deepfake nudes as a form of technology-facilitated abuse that harms the person depicted. Eighty-four percent of those surveyed believe that deepfake nudes cause harm, attributing this largely to the emotional and psychological impacts on victims, the potential for reputational damage, and the increasingly photorealistic quality of the imagery, which leads viewers to perceive—and consume—it as authentic.
2. Deepfake nudes already represent real experiences that young people have to navigate. Not only are many young people familiar with the concept, but a significant number report personal connections to this harm—either knowing someone targeted or experiencing it themselves. Forty-one percent of young people surveyed indicated they had heard the term “deepfake nudes,” including 1 in 3 (31%) teens. Additionally, among teens, 1 in 10 (10%) reported personally knowing someone who had deepfake nude imagery created of them, and 1 in 17 (6%) disclosed having been a direct victim of this form of abuse.
3. Among the limited sample of young people who admit to creating deepfake nudes of others, creators described easy access to deepfake technologies through their devices’ app stores, as well as via general search engines and social media.

El Segundo, CA: Thorn, 2025. 32p.

Multimedia Forensics

Edited by Husrev Taha Sencar, Luisa Verdoliva, Nasir Memon

Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content a preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.

Singapore: Springer Nature, 2022. 490p.

Deepfakes on Trial: A Call To Expand the Trial Judge’s Gatekeeping Role To Protect Legal Proceedings from Technological Fakery

By Rebecca A. Delfino

Deepfakes—audiovisual recordings created using artificial intelligence (AI) technology to believably map one person’s movements and words onto another—are ubiquitous. They have permeated societal and civic spaces from entertainment, news, and social media to politics. And now deepfakes are invading the courts, threatening our justice system’s truth-seeking function. Ways deepfakes could infect a court proceeding run the gamut and include parties fabricating evidence to win a civil action, government actors wrongfully securing criminal convictions, and lawyers purposely exploiting a lay jury’s suspicions about evidence. As deepfake technology improves and it becomes harder to tell what is real, juries may start questioning the authenticity of properly admitted evidence, which in turn may have a corrosive effect on the justice system. No evidentiary procedure explicitly governs the presentation of deepfake evidence in court. The existing legal standards governing the authentication of evidence are inadequate because they were developed before the advent of deepfake technology. As a result, they do not solve the urgent problem of how to determine when an audiovisual image is fake and when it is not. Although legal scholarship and the popular media have addressed certain facets of deepfakes in the last several years, there has been no commentary on the procedural aspects of deepfake evidence in court. Absent from the discussion is who gets to decide whether a deepfake is authentic. This Article addresses the matters that prior academic scholarship on deepfakes obscures. It is the first to propose a new addition to the Federal Rules of Evidence reflecting a novel reallocation of fact-determining responsibilities from the jury to the judge, treating the question of deepfake authenticity as one for the court to decide as an expanded gatekeeping function under the Rules. The challenges of deepfakes—problems of proof, the “deepfake defense,” and juror skepticism—can be best addressed by amending the Rules for authenticating digital audiovisual evidence, instructing the jury on its use of that evidence, and limiting counsel’s efforts to exploit the existence of deepfakes.

Hastings Law Journal, 2023. 57p.

Challenges Trial Judges Face When Authenticating Video Evidence in the Age of Deepfakes

By Taurus Myhand

The proliferation of deepfake videos has resulted in rapid improvements in the technology used to create them. Although the use of fake videos and images is not new, advances in artificial intelligence have made deepfakes easier to make and harder to detect. Basic human perception is no longer sufficient to detect deepfakes. Yet, under the current construction of the Federal Rules of Evidence, trial judges are expected to do just that. Trial judges face a daunting challenge when applying the current evidence authentication standards to video evidence in this new reality of widely available deepfake videos. This article examines the gatekeeping role trial judges must perform in light of the unique challenges posed by deepfake video evidence. It further examines why the jury instruction and rule change approaches proposed by other scholars are insufficient to combat the grave threat of false video evidence. The article concludes with a discussion of the affidavit of forensic analysis (AFA) approach, a robust response to the authentication challenges posed by deepfakes. The AFA approach preserves most of the current construction of the Federal Rules of Evidence while reviving the gatekeeping role of the trial judge in determining the admissibility of video evidence. The AFA will provide trial judges with the tools necessary to detect and exclude deepfake videos without leaving an everlasting taint on the juries that would otherwise have seen the falsified videos.

Widener Law Review, 2023. 19p.

The Weaponisation of Deepfakes: Digital Deception by the Far-Right

By Ella Busch and Jacob Ware    

In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this policy brief, we explore the creation and distribution of synthetic media by malign actors, specifically a form of artificial intelligence/machine learning (AI/ML) output known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes, specifically deepfake video content, in order to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far-right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological ‘out-groups.' We also expect FREs to deploy deepfakes for the purpose of creating compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning and prosecuting the use of harmful synthetic media. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves in order to “call off the dogs” and undermine the conditions allowing extremist groups to thrive.

The Hague: International Centre for Counter-Terrorism (ICCT), 2023.