
CRIME

Deepfake Nudes & Young People: Navigating a New Frontier in Technology-Facilitated Nonconsensual Sexual Abuse and Exploitation

By Thorn and Burson Insights, Data & Intelligence

Since 2019, Thorn has focused on amplifying youth voices to better understand their digital lives, with particular attention to how they encounter and navigate technology-facilitated forms of sexual abuse and exploitation. Previous youth-centered research has explored topics such as child sexual abuse material (CSAM, defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old), including self-generated material ("SG-CSAM"), nonconsensual resharing, online grooming, and the barriers young people face in disclosing or reporting negative experiences. Thorn's Emerging Threats to Young People research series examines emergent online risks to better understand how current technologies create and/or exacerbate child safety vulnerabilities, and to identify areas where solutions are needed. This report, the first in the series, sheds light specifically on young people's perceptions of and experiences with deepfake nudes. Future reports in this initiative will address other pressing issues, including sextortion and online solicitations. Drawing on responses from a survey of 1,200 young people aged 13 to 20, this report explores their awareness of deepfake nudes, their lived experiences with them, and their involvement in creating such content. Three key findings emerged from this research:

1. Young people overwhelmingly recognize deepfake nudes as a form of technology-facilitated abuse that harms the person depicted. Eighty-four percent of those surveyed believe that deepfake nudes cause harm, attributing this largely to the emotional and psychological impacts on victims, the potential for reputational damage, and the increasingly photorealistic quality of the imagery, which leads viewers to perceive, and consume, it as authentic.

2. Deepfake nudes already represent real experiences that young people have to navigate. Not only are many young people familiar with the concept, but a significant number report personal connections to this harm, either knowing someone targeted or experiencing it themselves. Forty-one percent of young people surveyed indicated they had heard the term "deepfake nudes," including 1 in 3 (31%) teens. Additionally, among teens, 1 in 10 (10%) reported personally knowing someone who had deepfake nude imagery created of them, and 1 in 17 (6%) disclosed having been a direct victim of this form of abuse.

3. Among the limited sample of young people who admitted to creating deepfake nudes of others, respondents described easy access to deepfake technologies, reporting that the tools could be reached through their devices' app stores, general search engines, and social media.

El Segundo, CA: Thorn, 2025. 32p.

Trends in and Characteristics of Cybercrime in NSW

By Ilya Klauzner, Amy Pisani

AIM: To examine the trends in, major characteristics of, and the police response to cybercrime in NSW.

METHOD: We extracted data from the ReportCyber Application Platform (RCAP), a national cybercrime reporting system operated by the Australian Cyber Security Centre. Data were analysed over a three-year period from 1 July 2019 to 30 June 2022 and restricted to incidents where the victim resided in NSW. We separated cybercrime into five offence categories: cyber-enabled fraud, identity theft, cyber-enabled abuse, online image abuse (OIA), and device offences. We conducted a descriptive analysis of victim, suspected perpetrator, and report characteristics to describe trends and characteristics of reported cybercrime, and estimated an ordinary least squares regression model to identify factors correlated with the referral of a reported cybercrime to police.

RESULTS: Over the three years to June 2022, there were 39,494 reports of cybercrime where the victim resided in NSW, and more than $404 million in reported losses. Cybercrime reports increased by 42%, with all offence categories increasing except cyber-enabled abuse. Increases in cyber-enabled fraud and identity crime spurred a corresponding increase in reported cybercrime-related financial losses by individuals. Most victims were individuals (89%), male (53%) and over 25 years of age (87%); however, differences in victim type were observed within offence categories. While a high proportion of victims had evidence about the incident (94%), the majority did not know their perpetrator, and few reports therefore included suspect details (28%). The majority (71%) of reports were closed by police in RCAP with no further investigation undertaken. Reports were, however, more likely to be referred to police when the incident involved a victim aged 17 years or younger, the suspect was known to the victim, money was lost, or an OIA offence was indicated.

CONCLUSION: Our results show that cybercrime in NSW largely follows the same increasing trend observed in national cybercrime studies. However, the statistics reported here offer only a partial view of reported cybercrime in NSW, as we do not capture data reported directly to police or to other national reporting systems. There are clear benefits in ongoing public reporting of cybercrime trends, both at the national level and separately for individual states and territories, which could be enabled by integrating reporting systems and enhancing police data.

Bureau Brief no. 165. 

Sydney: NSW Bureau of Crime Statistics and Research, 2023. 18p.

Cyberviolence Against Women in the EU

By Ionel Zamfir and Colin Murphy

The rise of digital technologies represents a double-edged sword for women's rights. On the one hand, the digital environment has enabled women to build networks and spread awareness about the abuse they suffer, such as through the #MeToo movement. On the other, it has provided abusers and misogynists with new tools with which they can spread their harmful content on an unprecedented scale. With the development of artificial intelligence, these trends, both positive and negative, are expected to continue. Against this backdrop, it has become clear that digital violence is as harmful as offline violence and needs to be tackled with the full force of the law, as well as through other non-legislative measures. Moreover, the digital content causing the harm (images, messages, etc.) needs to be erased. This is particularly important, as the impact on victims is profound and long-lasting. The European Union has adopted several pieces of legislation that aim to make a difference in this respect. The directive on combating violence against women, to be implemented at the latest by June 2027, sets minimum EU standards for criminalising several serious forms of cyberviolence and enhances the protection of and access to justice for victims. EU legislation on the protection of privacy is also having an impact on cyberviolence. For example, the new Digital Services Act imposes an obligation on big digital platforms in the EU to remove harmful content from their websites. This is instrumental in removing intimate or manipulated images that are disseminated on the internet without the person's consent; almost all such images portray women, according to existing data. Member States use a multiplicity of legal approaches to tackle this issue, combining the criminalisation of specific cyber offences with the use of general criminal law. In some Member States, an explicit gender dimension is also included.

Brussels: EPRS | European Parliamentary Research Service, 2024. 11p.

The Overlap between Viewing Child Sexual Abuse Material and Fringe or Radical Content Online

By Timothy Cubitt, Anthony Morgan and Rick Brown

Drawing on a survey of 13,302 online Australians, this study examines the characteristics and behaviours of respondents who viewed child sexual abuse material (CSAM) and fringe or radical content online, or both. In the past 12 months, 40.6 percent of respondents had viewed fringe or radical content and 4.5 percent had viewed CSAM. Among respondents who viewed CSAM, 64.7 percent had also viewed fringe or radical content, while 7.1 percent of those who viewed radical content had also viewed CSAM. Respondents who viewed only CSAM or only fringe or radical content were similar to one another. Respondents who viewed both were more likely to be younger and male and had higher rates of criminal justice system contact and diagnosed mental illness. Their online activity, including the platforms used, also differed.

Trends & issues in crime and criminal justice no. 708. Canberra: Australian Institute of Criminology, 2024. 16p.