
SOCIAL SCIENCES


The Next Paradigm-Shattering Threat? Right-Sizing the Potential Impacts of Generative AI on Terrorism

By David Wells

• Over the past year and a half, the rapid expansion in the availability and accessibility of generative artificial intelligence (AI) tools has prompted a range of national and international security concerns, including the possible abuse of generative AI by terrorists and violent extremists. Terrorists and violent extremists have already begun experimenting with generative AI, including by using a variety of tools to generate propaganda material. This experimentation has so far been relatively limited.

• An analysis of current or imminent iterations of generative AI tools suggests that they offer terrorists and violent extremists the potential to optimize some of their existing capabilities. Most obviously, generative AI can improve a range of propaganda-related tasks, including generating or modifying images, videos, audio, and text, as well as the use of translation and transcription tools. More worryingly, it may also allow terrorists and violent extremists to evade a key counter-measure used by major online platforms: the timely removal of terrorist content using its "digital fingerprint" (hash).

• In other areas of terrorist methodology, the potential benefits of generative AI appear overstated, or dependent on either a significant advancement in the technology itself or in the technical skills available to terrorist actors. For example, while generative AI can theoretically speed up and enhance research into terrorist targets or methodology, the frequency with which many generative AI programs provide inaccurate or fabricated information presents risks for terrorist users. Although early indications of violent extremists customizing basic chatbots are concerning, creating a comprehensive, fully functioning "terrorist GPT" to radicalize and recruit would currently require processing power and technical skills beyond those of most terrorist actors. Broader factors affecting how and when terrorists adopt new technologies must also be taken into account when considering the risks of generative AI being exploited.

• Although understanding (and ultimately responding to) these use cases will be important, any analysis of the potential impact of generative AI on terrorism and violent extremism must include the broader societal impacts of the technology. Many of these potential impacts, which range from significant job losses and a severely degraded information environment to the bolstering of authoritarian regimes and the large-scale perpetuation of discrimination and bias, are extremely worrying in and of themselves. But they are also likely to contribute to conditions that are conducive to radicalization, and in which terrorist and violent extremist narratives can thrive.

• The breadth of these direct and indirect challenges presents a compelling argument for the urgent development of a coordinated approach. A range of responses to the broader risks posed by AI are underway at the national, regional, and international levels, including draft regulation, consultations, and nascent bi- and multilateral agreements. But few have focused to any great extent on the risks associated with terrorist use of generative AI. Stakeholders must remind themselves that while generative AI technology is new, many of the challenges it poses are not; moreover, many of the lessons learned over the past two decades of counter-terrorism and preventing and countering violent extremism (P/CVE) remain extremely relevant. These include the importance of multilateral cooperation, the centrality of both public-private partnerships and engagement with civil society organizations, and the need to respect human rights.

Washington, DC: Middle East Institute, 2024. 18p.  
