Open Access Publisher and Free Library

TERRORISM


Posts tagged artificial intelligence
Strategic competition in the age of AI: Emerging risks and opportunities from military use of artificial intelligence

By James Black, Mattias Eken, Jacob Parakilas, Stuart Dee, Conlan Ellis, Kiran Suman-Chauhan, Ryan J. Bain, Harper Fine, Maria Chiara Aquilino, Melusine Lebret, et al.

Artificial intelligence (AI) holds the potential to usher in transformative changes across all aspects of society, economy and policy, including in the realm of defence and security. The United Kingdom (UK) aspires to be a leading player in the rollout of AI for civil and commercial applications, and in the responsible development of defence AI. This necessitates a clear and nuanced understanding of the emerging risks and opportunities associated with the military use of AI, as well as how the UK can best work with others to mitigate those risks and exploit those opportunities.

In March 2024, the Defence AI & Autonomy Unit (DAU) of the UK Ministry of Defence (MOD), and the Foreign, Commonwealth and Development Office (FCDO) jointly commissioned a short scoping study from RAND Europe. The goal was to provide an initial exploration of ways in which military use of AI might generate risks and opportunities at the strategic level – conscious that much of the research to date has focused on the tactical level or on non-military topics (e.g. AI safety). Follow-on work will then explore these issues in more detail to inform the UK strategy for international engagement on these issues.

This technical report aims to set a baseline of understanding of strategic risks and opportunities emerging from military use of AI. The summary report focuses on high-level findings for decision makers.

Key Findings

One of the most important findings of this study is the deep uncertainty surrounding the impacts of AI; an initial prioritisation of risks and opportunities is possible, but it should be iterated as the evidence base improves.

The RAND team identified priority issues demanding urgent action. Whether these manifest as risks or opportunities will depend on how quickly and effectively states adapt to intensifying competition over and through AI.

RAND - Sep 6, 2024

Catalyzing Crisis: A Primer on Artificial Intelligence, Catastrophes, and National Security

By Bill Drexel and Caleb Withers

From the document: "Since ChatGPT [Chat Generative Pre-Trained Transformer] was launched in November 2022, artificial intelligence (AI) systems have captured public imagination across the globe. ChatGPT's record-breaking speed of adoption--logging 100 million users in just two months--gave an unprecedented number of individuals direct, tangible experience with the capabilities of today's state-of-the-art AI systems. More than any other AI system to date, ChatGPT and subsequent competitor large language models (LLMs) have awakened societies to the promise of AI technologies to revolutionize industries, cultures, and political life. [...] This report aims to help policymakers understand catastrophic AI risks and their relevance to national security in three ways. First, it attempts to further clarify AI's catastrophic risks and distinguish them from other threats such as existential risks that have featured prominently in public discourse. Second, the report explains why catastrophic risks associated with AI development merit close attention from U.S. national security practitioners in the years ahead. Finally, it presents a framework of AI safety dimensions that contribute to catastrophic risks."

CENTER FOR A NEW AMERICAN SECURITY, JUN. 2024. 42p.

Terrorism, Extremism, Disinformation and Artificial Intelligence: A Primer for Policy Practitioners

By Milan Gandhi

From the document: "Focussing on current and emerging issues, this policy briefing paper ('Paper') surveys the ways in which technologies under the umbrella of artificial intelligence ('AI') may interact with democracy and, specifically, extremism, mis/disinformation, and illegal and 'legal but harmful' content online. The Paper considers examples of how AI technologies can be used to mislead and harm citizens and how AI technologies can be used to detect and counter the same or associated harms, exploring risks to democracy and human rights emerging across the spectrum. [...] Given the immense scope and potential impacts of AI on different facets of democracy and human rights, the Paper does not consider every relevant or potential AI use case, nor the long-term horizon. For example, AI-powered kinetic weapons and cyber-attacks are not discussed. Moreover, the Paper is limited in examining questions at the intersection of AI and economics and AI and geopolitics, though both intersections have important implications for democracy in the digital age. Finally, the Paper only briefly discusses how AI and outputs such as deepfakes may exacerbate broader societal concerns relating to political trust and polarisation. Although there is a likelihood that aspects of the Paper will be out-of-date the moment it is published given the speed at which new issues, rules and innovations are emerging, the Paper is intended to empower policymakers, especially those working on mis/disinformation, hate, extremism and terrorism specifically, as well as security, democracy and human rights more broadly. It provides explanations of core concerns related to AI and links them to practical examples and possible public policy solutions."

INSTITUTE FOR STRATEGIC DIALOGUE. 2024.