By James Black, Mattias Eken, Jacob Parakilas, Stuart Dee, Conlan Ellis, Kiran Suman-Chauhan, Ryan J. Bain, Harper Fine, Maria Chiara Aquilino, Melusine Lebret, et al.
Artificial intelligence (AI) holds the potential to usher in transformative changes across all aspects of society, economy and policy, including in the realm of defence and security. The United Kingdom (UK) aspires to be a leading player in the rollout of AI for civil and commercial applications, and in the responsible development of defence AI. This requires a clear and nuanced understanding of the emerging risks and opportunities associated with the military use of AI, as well as of how the UK can best work with others to mitigate the risks and exploit the opportunities.
In March 2024, the Defence AI & Autonomy Unit (DAU) of the UK Ministry of Defence (MOD) and the Foreign, Commonwealth and Development Office (FCDO) jointly commissioned a short scoping study from RAND Europe. The goal was to provide an initial exploration of the ways in which military use of AI might generate risks and opportunities at the strategic level, recognising that much of the research to date has focused on the tactical level or on non-military topics (e.g. AI safety). Follow-on work will then explore these issues in more detail to inform the UK's strategy for international engagement on them.
This technical report aims to set a baseline of understanding of strategic risks and opportunities emerging from military use of AI. The summary report focuses on high-level findings for decision makers.
Key Findings
One of the most important findings of this study is the deep uncertainty surrounding the impacts of AI; an initial prioritisation is possible, but it should be iterated as the evidence base improves.
The RAND team identified priority issues demanding urgent action. Whether these manifest as risks or opportunities will depend on how quickly and effectively states adapt to intensifying competition over and through AI.
RAND - Sep 6, 2024