Crossing the Deepfake Rubicon: The Maturing Synthetic Media Threat Landscape

By Di Cooke, Abby Edwards, Alexis Day, Devi Nair, Sophia Barkoff, and Katie Kelly

THE ISSUE

  • In recent years, threat actors have increasingly used synthetic media—digital content produced or manipulated by artificial intelligence (AI)—to enhance their deceptive activities, harming individuals and organizations worldwide with growing frequency.

  • In addition, the weaponization of synthetic media has begun to undermine people’s trust in information integrity more broadly, with concerning implications for the stability and resilience of the U.S. information environment.

  • At present, an individual’s ability to recognize AI-generated content remains the primary defense against falling prey to deceptively presented synthetic media.

  • However, a recent experimental study by CSIS found that people are no longer able to reliably distinguish between authentic and AI-generated images, audio, and video sourced from publicly available tools.

  • Because human detection is no longer a reliable method for identifying synthetic media, the dangers posed by the technology’s misuse are heightened, underscoring the pressing need to implement alternative countermeasures to address this emerging threat.

CSIS, 2024. 11p.