Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Posts tagged deepfake
Crossing the Deepfake Rubicon: The Maturing Synthetic Media Threat Landscape

By Di Cooke, Abby Edwards, Alexis Day, Devi Nair, Sophia Barkoff, and Katie Kelly

THE ISSUE

  • In recent years, threat actors have increasingly used synthetic media—digital content produced or manipulated by artificial intelligence (AI)—to enhance their deceptive activities, harming individuals and organizations worldwide with growing frequency.

  • The weaponization of synthetic media has also begun to undermine people’s trust in information integrity more widely, with concerning implications for the stability and resilience of the U.S. information environment.

  • At present, an individual’s ability to recognize AI-generated content remains the primary defense against deceptively presented synthetic media.

  • However, a recent experimental study by CSIS found that people are no longer able to reliably distinguish between authentic and AI-generated images, audio, and video sourced from publicly available tools (a brief statistical sketch of such an accuracy-versus-chance test follows this entry).

  • That human detection has ceased to be a reliable method for identifying synthetic media only heightens the dangers posed by the technology’s misuse, underscoring the pressing need to implement alternative countermeasures to address this emerging threat.

CSIS, 2024. 11p.
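The finding above rests on comparing participants’ detection accuracy against the 50% expected from random guessing. The sketch below shows one common way to run such a test; it is not drawn from the CSIS study itself, and the counts used are hypothetical placeholders.

```python
# Minimal sketch: test whether observed detection accuracy exceeds chance.
# The counts below are hypothetical placeholders, not CSIS data.
from scipy.stats import binomtest

n_judgments = 2000   # hypothetical total real-vs-synthetic judgments collected
n_correct = 1040     # hypothetical number answered correctly

# Two-sided binomial test against the 50% accuracy expected from guessing
result = binomtest(n_correct, n_judgments, p=0.5)
print(f"observed accuracy: {n_correct / n_judgments:.1%}")
print(f"p-value against chance performance: {result.pvalue:.3f}")
```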

Confounds and overestimations in fake review detection: Experimentally controlling for product-ownership and data-origin

By Felix Soldner, Bennett Kleinberg, Shane D. Johnson

The popularity of online shopping is steadily increasing. At the same time, fake product reviews are published widely and have the potential to affect consumer purchasing behavior. In response, previous work has developed automated methods utilizing natural language processing approaches to detect fake product reviews. However, studies vary considerably in how well they succeed in detecting deceptive reviews, and the reasons for such differences are unclear. A contributing factor may be the multitude of strategies used to collect data, introducing potential confounds that affect detection performance. Two possible confounds are data-origin (i.e., the dataset is composed of more than one source) and product-ownership (i.e., reviews written by individuals who own or do not own the reviewed product). In the present study, we investigate the effect of both confounds on fake review detection. Using an experimental design, we manipulate data-origin, product-ownership, review polarity, and veracity. Supervised learning analysis suggests that review veracity (60.26–69.87%) is somewhat detectable, but reviews additionally confounded with product-ownership (66.19–74.17%) or with data-origin (84.44–86.94%) are easier to classify. Review veracity is most easily classified if confounded with product-ownership and data-origin combined (87.78–88.12%). These findings are moderated by review polarity. Overall, our findings suggest that detection accuracy may have been overestimated in previous studies, provide possible explanations as to why, and indicate how future studies might be designed to provide less biased estimates of detection accuracy.

PLoS ONE 17(12): 2022
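For readers less familiar with the supervised-learning setup the study evaluates, the sketch below shows one common way to train and cross-validate a text classifier for review veracity. The bag-of-words pipeline and toy data are illustrative assumptions, not the authors’ feature set, models, or corpus.

```python
# Minimal sketch of a supervised fake-review (veracity) classifier.
# Pipeline and data are illustrative assumptions, not the study's setup.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical labelled reviews (1 = deceptive, 0 = genuine), repeated only
# so that cross-validation has enough samples to split
texts = [
    "Absolutely love this blender, use it every morning.",
    "Best product ever, changed my life, buy it now!!!",
    "Worked fine for a month, then the motor died.",
    "Terrible, do not buy, total scam, worst purchase ever.",
] * 25
labels = [0, 1, 0, 1] * 25

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)

# 5-fold cross-validated accuracy; in the study, accuracy varied with which
# confounds (product-ownership, data-origin) were present in the data
scores = cross_val_score(clf, texts, labels, cv=5, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.2%} (+/- {scores.std():.2%})")
```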

Testing human ability to detect ‘deepfake’ images of human faces 

By Sergi D. Bray, Shane D. Johnson, and Bennett Kleinberg

‘Deepfakes’ are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and they pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (uncurated output from the StyleGAN2 algorithm trained on the FFHQ dataset) from a pool of non-deepfake images (a random selection of images from the FFHQ dataset), and to assess the effectiveness of some simple interventions intended to improve detection accuracy. Using an online survey, participants (N = 280) were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake images of human faces and 50 images of real human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Of equal concern, participants’ confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images easy to label correctly and certain images difficult, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, accuracy across individual images ranged fairly evenly between 85% and 30%, with accuracy below 50% for one in every five images. We interpret the findings as an urgent call to action to address this threat.

Journal of Cybersecurity, 2023, 1–18
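The overall and per-image accuracy figures reported above come from aggregating individual judgments. A minimal sketch of that aggregation, assuming long-format survey responses, is below; the column names and toy records are illustrative assumptions, not the study’s data.

```python
# Minimal sketch: compute overall and per-image detection accuracy from
# long-format survey responses. Column names and records are hypothetical.
import pandas as pd

# One row per (participant, image) judgment
responses = pd.DataFrame({
    "participant":   [1, 1, 2, 2, 3, 3],
    "image_id":      ["img_03", "img_17", "img_03", "img_17", "img_03", "img_17"],
    "is_deepfake":   [True, False, True, False, True, False],   # ground truth
    "said_deepfake": [True, True, False, True, True, False],    # participant answer
})
responses["correct"] = responses["is_deepfake"] == responses["said_deepfake"]

overall = responses["correct"].mean()
per_image = responses.groupby("image_id")["correct"].mean()

print(f"overall accuracy: {overall:.0%}")   # the study reports 62% overall
print(per_image.sort_values())              # study: per-image accuracy spanned roughly 30%-85%
```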