
Posts tagged Artificial Intelligence
Crossing the Deepfake Rubicon: The Maturing Synthetic Media Threat Landscape

By Di Cooke, Abby Edwards, Alexis Day, Devi Nair, Sophia Barkoff, and Katie Kelly

THE ISSUE

  • In recent years, threat actors have increasingly used synthetic media—digital content produced or manipulated by artificial intelligence (AI)—to enhance their deceptive activities, harming individuals and organizations worldwide with growing frequency.

  • In addition, the weaponization of synthetic media has begun to undermine people’s trust in information integrity more widely, with concerning implications for the stability and resilience of the U.S. information environment.

  • At present, an individual’s ability to recognize AI-generated content remains the primary defense against falling prey to deceptively presented synthetic media.

  • However, a recent experimental study by CSIS found that people are no longer able to reliably distinguish between authentic and AI-generated images, audio, and video sourced from publicly available tools.

  • That human detection has ceased to be a reliable method for identifying synthetic media only heightens the dangers posed by the technology’s misuse, underscoring the pressing need to implement alternative countermeasures to address this emerging threat.

CSIS, 2024. 11p.

Through the Chat Window and Into the Real World: Preparing for AI Agents

By: Helen Toner, John Bansemer, Kyle Crichton, Matthew Burtell, Thomas Woodside, Anat Lior, Andrew Lohn, Ashwin Acharya, Beba Cibralic, Chris Painter, Cullen O’Keefe, Iason Gabriel, Kathleen Fisher, Ketan Ramakrishnan, Krystal Jackson, Noam Kolt, Rebecca Crootof, and Samrat Chatterjee

The concept of artificial intelligence systems that actively pursue goals—known as AI “agents”—is not new. But over the last year or two, progress in large language models (LLMs) has sparked a wave of excitement among AI developers about the possibility of creating sophisticated, general-purpose AI agents in the near future. Startups and major technology companies have announced their intent to build and sell AI agents that can act as personal assistants, virtual employees, software engineers, and more. While current systems remain somewhat rudimentary, they are improving quickly. Widespread deployment of highly capable AI agents could have transformative effects on society and the economy. This workshop report describes findings from a recent CSET-led workshop on the policy implications of increasingly “agentic” AI systems.

In the absence of a consensus definition of an “agent,” we describe four characteristics of increasingly agentic AI systems: they pursue more complex goals in more complex environments, exhibiting independent planning and adaptation, and taking actions directly in virtual or real-world settings. These characteristics help to establish how, for example, a cyber-offense agent that could autonomously carry out a cyber intrusion would be more agentic than a chatbot advising a human hacker. A “CEO-AI” that could run a company without human intervention would likewise be more agentic than an AI acting as a personal assistant.

At present, general-purpose LLM-based agents are the subject of significant interest among AI developers and investors. These agents consist of an advanced LLM (or multimodal model) that uses “scaffolding” software to interface with external environments and tools such as a browser or code interpreter. Proof-of-concept products that can, for example, write code, order food deliveries, and help manage customer relationships are already on the market, and many relevant players believe that the coming years will see rapid progress.
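To make the “scaffolding” pattern described above concrete, the minimal Python sketch below shows one possible agent loop: a model decides whether to call a tool or give a final answer, and the scaffolding executes the tool and feeds the result back. The `call_llm` placeholder and the toy code-interpreter tool are illustrative assumptions, not the design of any particular product or of the CSET report.

```python
# Hedged sketch of LLM agent "scaffolding": a loop that routes model decisions
# to external tools. call_llm() is a stand-in for a real model API.
import json

def run_python(code: str) -> str:
    """Toy 'code interpreter' tool; real scaffolds sandbox execution."""
    local_vars: dict = {}
    exec(code, {}, local_vars)  # illustration only; never exec untrusted model output
    return json.dumps({k: repr(v) for k, v in local_vars.items()})

TOOLS = {"run_python": run_python}

def call_llm(history: list[dict]) -> dict:
    """Placeholder for a real model call; it hard-codes one tool use, then answers."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "run_python", "input": "x = 2 + 2"}
    return {"type": "final", "answer": "Done: the tool reported x = 4."}

def agent_loop(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision["type"] == "final":
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["input"])  # scaffolding runs the tool
        history.append({"role": "tool", "content": result})
    return "Stopped: step budget exhausted."

print(agent_loop("Compute 2 + 2 with the code tool."))
```

Production scaffolds add planning, memory, browsing, and safety checks, but the control flow above (model decision, tool execution, result fed back) is the core idea.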

In addition to the many potential benefits that AI agents will likely bring, they may also exacerbate a range of existing AI-related issues and even create new challenges. The ability of agents to pursue complex goals without human intervention could lead to more serious accidents; facilitate misuse by scammers, cybercriminals, and others; and create new challenges in allocating responsibility when harms materialize. Existing data governance and privacy issues may be heightened by developers’ interest in using data to create agents that can be tailored to a specific user or context. If highly capable agents reach widespread use, users may become vulnerable to skill fade and dependency, agents may collude with one another in undesirable ways, and significant labor impacts could materialize as an increasing range of currently human-performed tasks become automated.

To manage these challenges, our workshop participants discussed three categories of interventions:

  • Measurement and evaluation: At present, our ability to assess the capabilities and real-world impacts of AI agents is very limited. Developing better methodologies to track improvements in the capabilities of AI agents themselves, and to collect ecological data about their impacts on the world, would make it more feasible to anticipate and adapt to future progress.

  • Technical guardrails: Governance objectives such as visibility, control, trustworthiness, security, and privacy can be supported by the thoughtful design of AI agents and the technical ecosystems around them. However, there may be trade-offs between these objectives. For example, many mechanisms that would promote visibility into and control over the operations of AI agents may be in tension with design choices that prioritize privacy and security; a minimal sketch of one such mechanism follows this list.

  • Legal guardrails: Many existing areas of law—including agency law, corporate law, contract law, criminal law, tort law, property law, and insurance law—will play a role in how the impacts of AI agents are managed. Areas where contention may arise when attempting to apply existing legal doctrines include questions about the “state of mind” of AI agents, the legal personhood of AI agents, how industry standards could be used to evaluate negligence, and how existing principal-agent frameworks should apply in situations involving AI agents.
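As one hedged illustration of the visibility and control guardrails mentioned in the list above, the sketch below logs every action an agent proposes and blocks high-risk ones pending human approval. The tool names, the `RISKY_TOOLS` set, and the `approve` stub are hypothetical examples, not mechanisms proposed by the workshop; note that the audit log supplying visibility also records user activity, which is the kind of privacy trade-off the report flags.

```python
# Minimal sketch of an approval-gate guardrail around agent tool calls.
# All names here (RISKY_TOOLS, approve, the demo tools) are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
RISKY_TOOLS = {"send_email", "transfer_funds", "delete_file"}

def approve(tool: str, argument: str) -> bool:
    """Stand-in for a human-in-the-loop review; here it simply denies risky tools."""
    return tool not in RISKY_TOOLS

def guarded_execute(tool: str, argument: str, tools: dict) -> str:
    # Visibility: every proposed action leaves an audit trail (which itself
    # raises the privacy trade-off described above).
    logging.info("agent proposed %s(%r)", tool, argument)
    if not approve(tool, argument):
        return "blocked: requires human approval"
    return tools[tool](argument)

demo_tools = {"search": lambda q: f"results for {q}", "send_email": lambda m: "sent"}
print(guarded_execute("search", "AI agent governance", demo_tools))
print(guarded_execute("send_email", "hello", demo_tools))
```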

While it is far from clear how AI agents will develop, the level of interest and investment in this technology from AI developers means that policymakers should understand the potential implications and intervention points. For now, valuable steps could include improving measurement and evaluation of AI agents’ capabilities and impacts, deeper consideration of how technical guardrails can support multiple governance objectives, and analysis of how existing legal doctrines may need to be adjusted or updated to handle more sophisticated AI agents.

Center for Security and Emerging Technology, October 2024

Bytes and Battles: Inclusion of Data Governance in Responsible Military AI

By: Yasmin Afina and Sarah Grand-Clément

Data plays a critical role in the training, testing and use of artificial intelligence (AI), including in the military domain. Research and development for AI-enabled military solutions is proceeding at breakneck speed, and the role data plays in shaping these technologies has significant implications and, at times, raises concerns. These issues are under increasing scrutiny and range from the difficulty of finding or creating training and testing data relevant to the military domain, to harmful biases in training data sets, to their susceptibility to cyberattacks and interference (for example, data poisoning). Yet pathways and governance solutions to address these issues remain scarce and largely underexplored.
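For readers unfamiliar with the term, the toy sketch below shows what “data poisoning” can look like in practice: injecting mislabelled points into a training set shifts a simple nearest-centroid classifier’s decision boundary and degrades its accuracy. The data, model, and attack are synthetic illustrations and are not drawn from the paper.

```python
# Toy illustration of training-data poisoning via injected, mislabelled points.
# Everything here (data, classifier, attack) is synthetic and illustrative.
import random

random.seed(0)
clean = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(3.0, 0.5), 1) for _ in range(50)]

def fit_centroids(samples):
    return {label: sum(x for x, y in samples if y == label) /
                   sum(1 for _, y in samples if y == label)
            for label in (0, 1)}

def accuracy(samples, cents):
    hits = sum(1 for x, y in samples
               if min(cents, key=lambda c: abs(x - cents[c])) == y)
    return hits / len(samples)

# Attacker adds extreme points falsely labelled as class 0, dragging that
# centroid toward class 1 and pushing the decision boundary into class 1.
poison = [(6.0, 0)] * 40
poisoned_model = fit_centroids(clean + poison)

print("clean model accuracy:   ", round(accuracy(clean, fit_centroids(clean)), 2))
print("poisoned model accuracy:", round(accuracy(clean, poisoned_model), 2))
```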

This paper aims to fill this gap by first providing a comprehensive overview on data issues surrounding the development, deployment and use of AI. It then explores data governance practices from civilian applications to identify lessons for military applications, as well as highlight any limitations to such an approach. The paper concludes with an overview of possible policy and governance approaches to data practices surrounding military AI to foster the responsible development, testing, deployment and use of AI in the military domain.

CIGI Papers No. 308 — October 2024

THE IMPLICATIONS OF ARTIFICIAL INTELLIGENCE IN CYBERSECURITY: SHIFTING THE OFFENSE-DEFENSE BALANCE

By: Jennifer Tang, Tiffany Saade, and Steve Kelly

Cutting-edge advances in artificial intelligence (AI) are taking the world by storm, driven by a massive surge of investment, countless new start-ups, and regular technological breakthroughs. AI presents key opportunities within cybersecurity, but concerns remain regarding the ways malicious actors might also use the technology. In this study, the Institute for Security and Technology (IST) seeks to paint a comprehensive picture of the state of play: cutting through vagaries and product-marketing hype, providing our outlook for the near future, and, most importantly, suggesting ways in which the case for optimism can be realized.

The report concludes that in the near term, AI offers a significant advantage to cyber defenders, particularly those who can capitalize on their "home field" advantage and first-mover status. However, sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial. At the time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness.

This work is the foundation of a broader IST project to better understand which areas of cybersecurity require the greatest collective focus and alignment—for example, greater opportunities for accelerating threat intelligence collection and response, democratized tools for automating defenses, and/or developing the means for scaling security across disparate platforms—and to design a set of actionable technical and policy recommendations in pursuit of a secure, sustainable digital ecosystem.

The Institute for Security and Technology, October 2024

States of Surveillance: Ethnographies of New Technologies in Policing and Justice

Edited by Maya Avis, Daniel Marciniak and Maria Sapignoli   

Recent discussions on big data surveillance and artificial intelligence in governance have opened up an opportunity to think about the role of technology in the production of the knowledge states use to govern. The contributions in this volume examine the socio-technical assemblages that underpin the surveillance carried out by criminal justice institutions – particularly the digital tools that form the engine room of modern state bureaucracies. Drawing on ethnographic research in contexts from across the globe, the contributions to this volume engage with technology’s promises of transformation, scrutinise established ways of thinking that become embedded through technologies, critically consider the dynamics that shape the political economy driving the expansion of security technologies, and examine how those at the margins navigate experiences of surveillance. The book is intended for an interdisciplinary academic audience interested in ethnographic approaches to the study of surveillance technologies in policing and justice. Concrete case studies provide students, practitioners, and activists from a broad range of backgrounds with nuanced entry points to the debate.

London; New York: Routledge, 2025. 201p.

The Weaponisation of Deepfakes: Digital Deception by the Far-Right

By Ella Busch and Jacob Ware    

In an ever-evolving technological landscape, digital disinformation is on the rise, as are its political consequences. In this policy brief, we explore the creation and distribution of synthetic media by malign actors, specifically a form of artificial intelligence/machine learning (AI/ML) output known as the deepfake. Individuals looking to incite political violence are increasingly turning to deepfakes, specifically deepfake video content, to create unrest, undermine trust in democratic institutions and authority figures, and elevate polarised political agendas. We present a new subset of individuals who may look to leverage deepfake technologies to pursue such goals: far-right extremist (FRE) groups. Despite their diverse ideologies and worldviews, we expect FREs to similarly leverage deepfake technologies to undermine trust in the American government, its leaders, and various ideological “out-groups.” We also expect FREs to deploy deepfakes to create compelling radicalising content that serves to recruit new members to their causes. Political leaders should remain wary of the FRE deepfake threat and look to codify federal legislation banning and prosecuting the use of harmful synthetic media. On the local level, we encourage the implementation of “deepfake literacy” programs as part of a wider countering violent extremism (CVE) strategy geared towards at-risk communities. Finally, and more controversially, we explore the prospect of using deepfakes themselves to “call off the dogs” and undermine the conditions allowing extremist groups to thrive.

The Hague: International Centre for Counter-Terrorism (ICCT), 2023.

Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe

By Sarah R. Carter, Nicole E. Wheeler, Sabrina Chwalek, Christopher R. Isaac, and Jaime Yassif

From the document: "Rapid scientific and technological advances are fueling a 21st-century biotechnology revolution. Accelerating developments in the life sciences and in technologies such as artificial intelligence (AI), automation, and robotics are enhancing scientists' abilities to engineer living systems for a broad range of purposes. These groundbreaking advances are critical to building a more productive, sustainable, and healthy future for humans, animals, and the environment. Significant advances in AI in recent years offer tremendous benefits for modern bioscience and bioengineering by supporting the rapid development of vaccines and therapeutics, enabling the development of new materials, fostering economic development, and helping fight climate change. However, AI-bio capabilities--AI tools and technologies that enable the engineering of living systems--also could be accidentally or deliberately misused to cause significant harm, with the potential to cause a global biological catastrophe. [...] To address the pressing need to govern AI-bio capabilities, this report explores three key questions: [1] What are current and anticipated AI capabilities for engineering living systems? [2] What are the biosecurity implications of these developments? [3] What are the most promising options for governing this important technology that will effectively guard against misuse while enabling beneficial applications? To answer these questions, this report presents key findings informed by interviews with more than 30 individuals with expertise in AI, biosecurity, bioscience research, biotechnology, and governance of emerging technologies."

Nuclear Threat Initiative, 2023. 88p.

Principles for Reducing AI Cyber Risk in Critical Infrastructure: A Prioritization Approach

By Christopher L. Sledjeski

From the document: "Artificial Intelligence (AI) brings many benefits, but disruption of AI could, in the future, generate impacts on scales and in ways not previously imagined. These impacts, at a societal level and in the context of critical infrastructure, include disruptions to National Critical Functions. A prioritized risk-based approach is essential in any attempt to apply cybersecurity requirements to AI used in critical infrastructure functions. The topics of critical infrastructure and AI are simply too vast to meaningfully address otherwise. The National Institute of Standards and Technology (NIST) defines cyber secure AI systems as those that can 'maintain confidentiality, integrity and availability through protection mechanisms that prevent unauthorized access and use.' Cybersecurity incidents that impact AI in critical infrastructure could impact the availability, reliability, and safety of these vital services. [...] This paper was prompted by questions presented to MITRE about to what extent the original NIST Cybersecurity Risk Framework, and the efforts that accompanied its release, enabled a regulatory approach that could serve as a model for AI regulation in critical infrastructure. The NIST Cybersecurity Risk Framework was created a decade ago as a requirement of Executive Order (EO) 13636. When this framework was paired with the list of cyber-dependent entities identified under the EO, it provided a voluntary approach for how Sector Risk Management Agencies (SRMAs) prioritize and enhance the cybersecurity of their respective sectors."

MITRE Corporation, 2023. 18p.