Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Posts tagged case study
Understanding and Mitigating Cyberfraud in Africa

By Oluwatoyin E. Akinbowale, Mariann P. Mashigo and Mulatu Zerihun

The book opens with an overview of cyberfraud and the associated global statistics, then demonstrates practicable techniques that financial institutions can employ to make effective decisions geared toward cyberfraud mitigation. It covers emerging technologies, such as information and communication technologies (ICT), forensic accounting, and big data tools and analytics, employed in fraud mitigation, and it highlights the implementation of techniques such as the fuzzy analytical hierarchy process (FAHP) and the systems thinking approach to address information and security challenges. The book combines a case study, empirical findings, a systematic literature review, and theoretical and conceptual frameworks to provide practicable solutions for mitigating cyberfraud. Its major contributions include the demonstration of digital and emerging techniques, such as forensic accounting, for cyberfraud mitigation, along with in-depth statistics on cyberfraud, its causes and threat actors, practicable mitigation solutions, and the application of a theoretical framework for fraud profiling and mitigation.

Cape Town: AOSIS, 2024
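The abstract's mention of the fuzzy analytical hierarchy process (FAHP) can be made concrete with a small sketch. The snippet below uses Buckley's geometric-mean variant of FAHP to rank three fraud-mitigation criteria; the criteria names and pairwise judgements are illustrative assumptions, not figures from the book.

```python
# A minimal FAHP sketch using Buckley's geometric-mean method.
# Criteria and pairwise judgements are hypothetical, not from the book.
import numpy as np

# Triangular fuzzy numbers (l, m, u) comparing three hypothetical
# fraud-mitigation criteria: ICT controls, forensic accounting, staff training.
M = [
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]
n = len(M)

# Fuzzy geometric mean of each row.
geo = [tuple(np.prod([t[k] for t in row]) ** (1 / n) for k in range(3))
       for row in M]

# Fuzzy weights: each row mean times the inverse of the total,
# where the inverse of a triangular total (L, M, U) is (1/U, 1/M, 1/L).
Lsum = sum(g[0] for g in geo)
Msum = sum(g[1] for g in geo)
Usum = sum(g[2] for g in geo)
fuzzy_w = [(g[0] / Usum, g[1] / Msum, g[2] / Lsum) for g in geo]

# Defuzzify by centroid and normalise into crisp priority weights.
crisp = [sum(w) / 3 for w in fuzzy_w]
weights = [c / sum(crisp) for c in crisp]
print([round(w, 3) for w in weights])  # largest weight = highest priority
```

In a decision-support setting of the kind the book describes, the resulting weights would feed into resource-allocation choices among competing anti-fraud measures.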

How to implement online warnings to prevent the use of child sexual abuse material

By Charlotte Hunn, Paul Watters, Jeremy Prichard, Richard Wortley, Joel Scanlan, Caroline Spiranovic and Tony Krone

Online child sexual abuse material (CSAM) offending is a challenge for law enforcement, policymakers and child welfare organisations alike. The use of online warning messages to prevent or deter individuals as they actively search for CSAM is gaining traction as a response to some types of CSAM offending. Yet, to date, the technical question of how warning messages can be implemented, and by whom, has been largely unexplored. To address this, we use a case study to analyse the actions that individuals and organisations within the technology, government, non-government and private sectors could take to implement warning messages. We find that, from a technical perspective, there is considerable opportunity to implement warning messages, although further research into efficacy and cost is needed.

Trends & issues in crime and criminal justice no. 669. Canberra: Australian Institute of Criminology. 2023. 14p.
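To illustrate one of the implementation points the paper's case study considers, the sketch below shows a server-side interceptor that returns a warning page instead of results when a search query matches a curated watchlist. The endpoint, watchlist contents and message wording are hypothetical placeholders; an actual deployment would use lists and messaging developed with child-protection authorities, and the paper examines several other actors and layers at which warnings could sit.

```python
# A hypothetical sketch of a service-side warning interceptor.
# The watchlist and warning text are placeholders, not a real deployment.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

FLAGGED_TERMS = {"example-flagged-term"}  # placeholder for a curated list

WARNING_HTML = b"""<html><body>
<h1>Warning</h1>
<p>Searching for this material is illegal and harms children.
Confidential support is available at [helpline placeholder].</p>
</body></html>"""

class WarningHandler(BaseHTTPRequestHandler):
    """Stand-in for a search endpoint that can interpose a warning."""

    def do_GET(self):
        # Extract the search query, e.g. /search?q=...
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        if any(term in query.lower() for term in FLAGGED_TERMS):
            # Deterrence point: serve the warning instead of results.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(WARNING_HTML)
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"normal search results would render here")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), WarningHandler).serve_forever()
```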

Testing human ability to detect 'deepfake' images of human faces

By Sergi D. Bray, Shane D. Johnson and Bennett Kleinberg

'Deepfakes' are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and they pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (uncurated output from the StyleGAN2 algorithm as trained on the FFHQ dataset) from a pool of non-deepfake images (a random selection of images from the FFHQ dataset), and to assess the effectiveness of some simple interventions intended to improve detection accuracy. Using an online survey, participants (N = 280) were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake images of human faces and 50 images of real human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Of equal concern, participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images easy to label correctly and certain images difficult, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, accuracy across images ranged quite evenly between 85% and 30%, falling below 50% for one in every five images. We interpret these findings as an urgent call to action to address this threat.

Journal of Cybersecurity, 2023, 1–18
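As a rough illustration of the per-image analysis the abstract describes, the sketch below computes overall accuracy against chance, the spread of per-image accuracy, and the confidence-accuracy relationship. The data are randomly simulated stand-ins (with confidence drawn independently of correctness, mirroring the paper's null finding by construction), not the study's responses.

```python
# A minimal sketch of a per-image deepfake-detection analysis on
# simulated data; all numbers are stand-ins, not the study's responses.
import numpy as np
from scipy.stats import binomtest, pointbiserialr

rng = np.random.default_rng(0)
n_participants, n_images = 280, 100

# Simulated correctness (1 = correct) with per-image difficulty, and
# confidence ratings (1-7) drawn independently of correctness.
image_p_correct = rng.uniform(0.30, 0.85, n_images)
correct = rng.binomial(1, image_p_correct, (n_participants, n_images))
confidence = rng.integers(1, 8, (n_participants, n_images))

# Overall accuracy and a binomial test against 50% chance.
test = binomtest(int(correct.sum()), correct.size, p=0.5)
print(f"accuracy={correct.mean():.2f}, p vs chance={test.pvalue:.3g}")

# Per-image accuracy: what share of images falls below chance?
per_image = correct.mean(axis=0)
print(f"images below 50% accuracy: {(per_image < 0.5).mean():.0%}")

# Confidence-accuracy relationship (near zero here by construction).
r, p = pointbiserialr(correct.ravel(), confidence.ravel())
print(f"point-biserial r={r:.3f}, p={p:.3g}")
```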