Open Access Publisher and Free Library

TERRORISM

Terrorism-Domestic-International-Radicalization-War-Weapons-Trafficking-Crime-Mass Shootings

Posts tagged artificial intelligence
The Use of Artificial Intelligence in Countering Online Radicalisation in Indonesia

By Raneeta Mutiara

The digitalisation of the activities of the Islamic State in Iraq and Syria (ISIS) has been a longstanding issue in Southeast Asia. In recent years, the nature of this threat has become more widespread and complex. Even in countries like Indonesia, where radicalisation occurs primarily offline, online platforms play a role in spreading extremist ideas and maintaining ideological networks. The phenomenon of online radicalisation can erode social cohesion, highlighting the need for strategic measures to counter its destabilising impact.

Indonesia has made several attempts to combat online radicalisation. The National Counter Terrorism Agency of Indonesia (BNPT) initiated the Duta Damai Dunia Maya campaign to counter harmful content on the Internet. Other online initiatives, such as BincangSyariah and Islamidotco, have also been promoting Islamic literacy, moderating religious interpretations, and correcting misleading narratives.

Nevertheless, Indonesia still encounters online radicalisation cases. In July 2024, Indonesia’s elite counterterrorism unit, Densus 88, detained a 19-year-old student who had expressed allegiance to ISIS through social media and was believed to be planning attacks on religious sites before he was caught.

The swift progress of Artificial Intelligence (AI), especially in areas of machine learning (ML) and natural language processing (NLP), presents both opportunities and challenges in combating online radicalisation in Indonesia. AI, generally defined as machines mimicking human intelligence, enables systems to recognise patterns, analyse content, and produce outputs in text, images, and videos. Within this AI landscape, ML allows models to enhance themselves through data, while NLP, as a specific ML application, deals with understanding and generating human language. These advancements provide possibilities for creating early detection systems, content moderation tools, and sentiment analysis tools that can spot and counter extremist messages online.
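As a concrete illustration of what such tooling involves, the sketch below trains a toy text classifier of the kind that could sit inside an early-detection pipeline. It is a minimal sketch only: the handful of training examples are invented placeholders, scikit-learn is an assumed dependency, and nothing here reflects any system actually deployed in Indonesia.

```python
# A minimal sketch of an NLP-based flagging tool: TF-IDF features plus a
# linear classifier, a common interpretable baseline for text classification.
# The training texts and labels below are invented placeholders; a real
# system needs a large, audited, Indonesian-language corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join our struggle and pledge allegiance to the caliphate",  # hypothetical
    "community meeting on interfaith dialogue this weekend",
    "prepare for the coming battle against the unbelievers",
    "new recipes from our neighbourhood cooking class",
]
labels = [1, 0, 1, 0]  # 1 = potentially extremist, 0 = benign

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; high scores are queued for human review rather than
# actioned automatically.
post = "we call on our brothers to pledge allegiance online"
score = model.predict_proba([post])[0][1]
print(f"flag-for-review score: {score:.2f}")
```

In practice the linear baseline would give way to multilingual transformer models and, crucially, a human-review step, since false positives against legitimate religious speech carry real costs.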

For the research, the author conducted interviews with fifteen experts across different fields, including law enforcement officers, academics, representatives from civil society organisations (CSOs), and employees of AI start-ups in Indonesia. The qualitative data collected from this process were analysed through thematic analysis, and the preliminary findings reveal that AI can indeed complement conventional countering violent extremism (CVE) methods in the country, albeit not without challenges.

S. Rajaratnam School of International Studies, NTU Singapore, 2025. 6p.

Artificial Intelligence, Counter-Terrorism and the Rule of Law: At the Heart of National Security

By Arianna Vedaschi and Chiara Graziani

‘While states and terrorists have always used emerging technology in their endeavours, there has seldom been an emerging technology with the reach, implications, and possibilities of AI. In this masterful book, Vedaschi and Graziani skilfully merge law, computer science, psychology and more to provide the authoritative account of how AI enables terrorist actors, promises security, and challenges the rule of law.’

Cheltenham, UK; Northampton, MA: Edward Elgar, 2025. 168p.

More is More: Scaling up Online Extremism and Terrorism Research with Computer Vision 

By Stephane J. Baele, Lewys Brace, and Elahe Naserian

Scholars and practitioners investigating extremist and violent political actors’ online communications face increasingly large information environments containing ever-growing amounts of data to find, collect, organise, and analyse. In this context, this article encourages terrorism and extremism analysts to use computational visual methods, mirroring for images what is now routinely done for text. Specifically, we chart how computer vision methods can be successfully applied to strengthen the study of extremist and violent political actors’ online ecosystems. Deploying two such methods – unsupervised deep clustering and supervised object identification – on an illustrative case (an original corpus containing thousands of images collected from incel platforms) allows us to explain the logic of these tools, to identify their specific advantages (and limitations), and to subsequently propose a research workflow associating computational methods with the other visual analysis approaches traditionally leveraged in the field.
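To illustrate the first of the two methods named above, the sketch below embeds images with a pretrained CNN and clusters the embeddings. It is a minimal sketch, not the authors’ pipeline: it assumes PyTorch, torchvision, and scikit-learn are installed, and the corpus_images folder and cluster count of 10 are hypothetical placeholders.

```python
# A minimal sketch of unsupervised deep clustering for an image corpus:
# embed each image with a pretrained CNN, then group the embeddings.
# The "corpus_images" folder and the cluster count are placeholders.
from pathlib import Path

import torch
from PIL import Image
from sklearn.cluster import KMeans
from torchvision.models import ResNet18_Weights, resnet18

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
model.fc = torch.nn.Identity()  # drop the classifier head; keep 512-d embeddings
model.eval()
preprocess = weights.transforms()

embeddings, paths = [], []
for path in sorted(Path("corpus_images").glob("*.jpg")):  # hypothetical corpus
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        embeddings.append(model(image).squeeze(0).numpy())
    paths.append(path)

# Cluster visually similar images so the analyst labels whole clusters
# (e.g. flags, memes, screenshots) rather than individual files.
labels = KMeans(n_clusters=10, n_init=10).fit_predict(embeddings)
for path, cluster in zip(paths, labels):
    print(cluster, path.name)
```

A supervised object-identification pass would instead run a pretrained detector over the same corpus to count recurring symbols; the clustering step above is usually the cheaper first cut for mapping an unfamiliar image ecosystem.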

Perspectives on Terrorism, Volume XIX, Issue 1, March 2025.

Strategic competition in the age of AI: Emerging risks and opportunities from military use of artificial intelligence

By James Black, Mattias Eken, Jacob Parakilas, Stuart Dee, Conlan Ellis, Kiran Suman-Chauhan, Ryan J. Bain, Harper Fine, Maria Chiara Aquilino, Melusine Lebret, et al.

Artificial intelligence (AI) holds the potential to usher in transformative changes across all aspects of society, economy and policy, including in the realm of defence and security. The United Kingdom (UK) aspires to be a leading player in the rollout of AI for civil and commercial applications, and in the responsible development of defence AI. This necessitates a clear and nuanced understanding of the emerging risks and opportunities associated with the military use of AI, as well as how the UK can best work with others to mitigate or exploit these risks and opportunities.

In March 2024, the Defence AI & Autonomy Unit (DAU) of the UK Ministry of Defence (MOD), and the Foreign, Commonwealth and Development Office (FCDO) jointly commissioned a short scoping study from RAND Europe. The goal was to provide an initial exploration of ways in which military use of AI might generate risks and opportunities at the strategic level – conscious that much of the research to date has focused on the tactical level or on non-military topics (e.g. AI safety). Follow-on work will then explore these issues in more detail to inform the UK strategy for international engagement on these issues.

This technical report aims to set a baseline of understanding of strategic risks and opportunities emerging from military use of AI. The summary report focuses on high-level findings for decision makers.

Key Findings

One of the most important findings of this study is the deep uncertainty surrounding the impacts of AI; an initial prioritisation of risks and opportunities is possible, but it should be iterated as the evidence base improves.

The RAND team identified priority issues demanding urgent action. Whether these manifest as risks or opportunities will depend on how quickly and effectively states adapt to intensifying competition over and through AI.

RAND, 6 September 2024.

Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security

By Bill Drexel and Caleb Withers

From the document: "Since ChatGPT [Chat Generative Pre-Trained Transformer] was launched in November 2022, artificial intelligence (AI) systems have captured public imagination across the globe. ChatGPT's record-breaking speed of adoption--logging 100 million users in just two months--gave an unprecedented number of individuals direct, tangible experience with the capabilities of today's state-of-the-art AI systems. More than any other AI system to date, ChatGPT and subsequent competitor large language models (LLMs) have awakened societies to the promise of AI technologies to revolutionize industries, cultures, and political life. [...] This report aims to help policymakers understand catastrophic AI risks and their relevance to national security in three ways. First, it attempts to further clarify AI's catastrophic risks and distinguish them from other threats such as existential risks that have featured prominently in public discourse. Second, the report explains why catastrophic risks associated with AI development merit close attention from U.S. national security practitioners in the years ahead. Finally, it presents a framework of AI safety dimensions that contribute to catastrophic risks."

Center for a New American Security, 2024. 42p.

Terrorism, Extremism, Disinformation and Artificial Intelligence: A Primer for Policy Practitioners

By Milan Gandhi

From the document: "Focussing on current and emerging issues, this policy briefing paper ('Paper') surveys the ways in which technologies under the umbrella of artificial intelligence ('AI') may interact with democracy and, specifically, extremism, mis/disinformation, and illegal and 'legal but harmful' content online. The Paper considers examples of how AI technologies can be used to mislead and harm citizens and how AI technologies can be used to detect and counter the same or associated harms, exploring risks to democracy and human rights emerging across the spectrum. [...] Given the immense scope and potential impacts of AI on different facets of democracy and human rights, the Paper does not consider every relevant or potential AI use case, nor the long-term horizon. For example, AI-powered kinetic weapons and cyber-attacks are not discussed. Moreover, the Paper is limited in examining questions at the intersection of AI and economics and AI and geopolitics, though both intersections have important implications for democracy in the digital age. Finally, the Paper only briefly discusses how AI and outputs such as deepfakes may exacerbate broader societal concerns relating to political trust and polarisation. Although there is a likelihood that aspects of the Paper will be out-of-date the moment it is published given the speed at which new issues, rules and innovations are emerging, the Paper is intended to empower policymakers, especially those working on mis/disinformation, hate, extremism and terrorism specifically, as well as security, democracy and human rights more broadly. It provides explanations of core concerns related to AI and links them to practical examples and possible public policy solutions."

Institute for Strategic Dialogue, 2024.
