
VICTIMIZATION


“One day this could happen to me”: Children, nudification tools and sexually explicit deepfakes

By the Children's Commissioner of the UK

“Maybe young girls will not post what they want to post or do something they would like to do just in case there’s this fear of ‘Oh I might be abused, this might be turned into a bit of sexual content’ when it shouldn’t have been.” – Girl, 17, focus group

Generative Artificial Intelligence (GenAI) is transforming the online world. AI models can generate text, images and videos, and hold conversations in response to a handful of prompts, and are rightly seen as a development with huge potential to enhance people’s lives. However, these tools are also being misused at an alarming cost to children’s online and offline safety. ‘Nudification’ tools are apps and websites that create sexually explicit deepfake images of real people, and at the time of writing this technology is legal in the UK. GenAI, which is often free to use and easy to programme, has supercharged the growth of these tools. Despite this being a relatively new technology, the high risk of harm it presents to children is increasingly evident.

Children told the Children’s Commissioner’s Office (CCo) team that the very existence of technology that could strip people of their clothes frightened them. In a series of focus groups held with children in their schools (quoted throughout this report), the team heard girls describe how they were trying to reduce the chance of featuring in a sexually explicit deepfake by limiting their participation in the online world, a space which could enhance their social lives, play and learning if it were safe for them.

This report identifies the threat that sexually explicit deepfake technology presents to children. Currently, it is illegal to create a sexually explicit image of a child. Yet the technology used to do so remains legal and accessible through the most popular parts of the online world, including large social media platforms and search engines. After analysing what is known about this new technological threat, assessing what it looks like in the online landscape, and speaking to children about what it means for them, this report has found:

• Nudification tools and sexually explicit deepfake technologies present a high risk of harm to children:
  o Nudification tools target women and girls in particular, and many only work on female bodies. This is contributing to a culture of misogyny both online and offline.
  o The presence of nudification technology is having a chilling effect on girls’ participation in the online world. Girls are taking preventative steps to keep themselves safe from being victimised by nudification tools, in the same way that girls follow other rules to keep themselves safe in the offline world, like not walking home alone at night.
  o Children want action to be taken to tackle the misuse of AI technology. One girl questioned what the point of it was if it only seemed to be used for bad intentions: “Do you know why deepfake was created? Like, what was the purpose of it? Because I don't see any positives” – Girl, 16.
• Nudification tools and sexually explicit deepfake technologies are easily accessible through popular online platforms:
  o Search engines and social media platforms are the most common way that users access nudification apps and technologies.
  o GenAI has made the development of nudification technology easy and cheap.
  o Open-source AI models that are not primarily designed to create overtly sexually explicit images or videos still present a risk of harm to children and young people.

The Children’s Commissioner wants GenAI technology, and future AI technology, to be made safe for children, and calls on the Government to:
1. Ban bespoke nudification apps.
2. Bring in specific legal responsibilities for the companies developing GenAI tools to screen their tools for nudifying risks to children and to mitigate them.
3. Provide children with an effective route to have sexually explicit deepfake images of themselves removed from the internet.
4. Commit to making the online world safer for girls by recognising sexually explicit deepfake abuse, and the bespoke services used to carry it out, as acts of violence against women and girls.

London: The Children's Commissioner, 2025. 34p.

Deepfake Nudes & Young People: Navigating a new frontier in technology-facilitated nonconsensual sexual abuse and exploitation

By Thorn in partnership with Burson Insights, Data & Intelligence 

Since 2019, Thorn has focused on amplifying youth voices to better understand their digital lives, with particular attention to how they encounter and navigate technology-facilitated forms of sexual abuse and exploitation. Previous youth-centered research has explored topics such as child sexual abuse material (CSAM), including that which is self-generated (“SG-CSAM”), nonconsensual resharing, online grooming, and the barriers young people face in disclosing or reporting negative experiences.

Thorn’s Emerging Threats to Young People research series aims to examine emergent online risks to better understand how current technologies create and/or exacerbate child safety vulnerabilities, and to identify areas where solutions are needed. This report, the first in the series, sheds light specifically on young people’s perceptions of and experiences with deepfake nudes. Future reports in this initiative will address other pressing issues, including sextortion and online solicitations.

Drawing on responses from a survey of 1,200 young people aged 13-20, this report explores their awareness of deepfake nudes, lived experiences with them, and their involvement in creating such content. Three key findings emerged from this research:

1. Young people overwhelmingly recognize deepfake nudes as a form of technology-facilitated abuse that harms the person depicted. Eighty-four percent of those surveyed believe that deepfake nudes cause harm, attributing this largely to the emotional and psychological impacts on victims, the potential for reputational damage, and the increasingly photorealistic quality of the imagery, which leads viewers to perceive, and consume, it as authentic.

2. Deepfake nudes already represent real experiences that young people have to navigate. Not only are many young people familiar with the concept, but a significant number report personal connections to this harm, either knowing someone targeted or experiencing it themselves. Forty-one percent of young people surveyed indicated they had heard the term “deepfake nudes,” including 1 in 3 (31%) teens. Additionally, among teens, 1 in 10 (10%) reported personally knowing someone who had deepfake nude imagery created of them, and 1 in 17 (6%) disclosed having been a direct victim of this form of abuse.

3. Among the limited sample of young people who admitted to creating deepfake nudes of others, creators described easy access to deepfake technologies through their devices’ app stores, general search engines, and social media platforms.

El Segundo, CA: Thorn, 2025. 32p.