The Open Access Publisher and Free Library

CRIME PREVENTION


Posts tagged artificial intelligence
Going cyberpunk: Conceptualizing the smart(er) artificially intelligent firearm for policing's Utopian future

By Mehzeb Chowdhury

As policing develops into a more professional, evidence-driven and technical endeavour, public concern regarding organizational competence and police culture-related fallacies has intensified, especially in the case of officer-involved shootings. The introduction of body-worn cameras, increased CCTV coverage, vehicle dash-cams and the advent of social media have provided avenues for investigation into misconduct, but institutional and individual failings such as racism, sexism and other forms of discrimination remain a concern. Technical innovations like smart guns, smart targeting and programmable projectiles have instigated conversations about traditional firearms and whether alternatives using cutting-edge technology could address some of these shortcomings. This article examines existing policing technologies, providing an overview of advanced computational and sensor systems, the risks and dangers of these mechanisms, as well as their potential benefits and drawbacks. It considers whether existing technologies can be transformed into a smarter, more efficient firearm, powered by artificial intelligence (AI). The premise of the AI-assisted firearm is the promise of a future in which unwanted outcomes in officer/citizen encounters can be counteracted through AI assisting in better decision-making. The article considers hardware and software, policy issues, associated risks and potential advantages of the firearms system, providing a wider perspective on the increasing use of computational technologies in policing practice, and highlighting areas for further research and discussion.

Newcastle upon Tyne, UK: International Journal of Police Science & Management, 2023. 14p.

Detecting AI Fingerprints: A Guide to Watermarking and Beyond

By Srinivasan, Siddarth

From the document: "Over the last year, generative AI [artificial intelligence] tools have made the jump from research prototype to commercial product. Generative AI models like OpenAI's ChatGPT [Chat Generative Pre-trained Transformer] and Google's Gemini can now generate realistic text and images that are often indistinguishable from human-authored content, with generative AI for audio and video not far behind. Given these advances, it's no longer surprising to see AI-generated images of public figures go viral or AI-generated reviews and comments on digital platforms. As such, generative AI models are raising concerns about the credibility of digital content and the ease of producing harmful content going forward. [...] There are several ideas for how to tell whether a given piece of content--be it text, image, audio, or video--originates from a machine or a human. This report explores what makes for a good AI detection tool, how the oft-touted approach of 'watermarking' fares on various technical and policy-relevant criteria, governance of watermarking protocols, what policy objectives need to be met to promote watermark-based AI detection, and how watermarking stacks up against other suggested approaches like content provenance."

Washington, DC: Brookings Institution, 2024.

Skating to Where the Puck is Going: Anticipating and Managing Risks from Frontier AI Systems

By Toner, Helen; Ji, Jessica; Bansemer, John; Lim, Lucy; Painter, Chris; Corley, Courtney D.; Whittlestone, Jess; Botvinick, Matt; Rodriguez, Mikel; Shankar Siva Kumar, Ram

From the document: "AI is experiencing a moment of profound change, capturing unprecedented public attention and becoming increasingly sophisticated. As AI becomes more powerful, and in some cases more general in its capabilities, it may become capable of posing novel risks in domains such as bioweapons development, cybersecurity, and beyond. Two features of the current AI landscape are especially challenging from a policy perspective: the rapid pace at which research is advancing, and the recent development of more general-purpose AI systems, which--unlike most AI systems, which are narrowly focused on a single task--can be adapted to many different use cases. These two elements add new layers of difficulty to existing AI ethics and safety problems. In July 2023, Georgetown University's Center for Security and Emerging Technology (CSET) and Google DeepMind hosted a virtual roundtable to discuss the implications and governance of the advancing AI research frontier, particularly with regard to general-purpose AI models. The objective of the roundtable was to help bridge the gap between the state of the current conversation and the reality of AI technology at the research frontier, which has potentially widespread implications for both national security and society at large."

Georgetown University, Walsh School of Foreign Service, Center for Security and Emerging Technology, 2023. 23p.

Surveillance for Sale: The Underregulated Relationship between U.S. Data Brokers and Domestic and Foreign Government Agencies

By Caitlin Chin

Ten years ago, when whistleblower Edward Snowden revealed that U.S. government agencies had intercepted bulk telephone and internet communications from numerous individuals around the world, President Barack Obama acknowledged a long-standing yet unsettled dilemma: “You can’t have 100 percent security and also then have 100 percent privacy and zero inconvenience. There are trade-offs involved.” Snowden’s disclosures reignited robust debates over the appropriate balance between an individual’s right to privacy and the state’s interest in protecting economic and national security—in particular, where to place limitations on the U.S. government’s ability to compel access to signals intelligence held by private companies. These debates continue today, but the internet landscape—and subsequently, the relationship between the U.S. government and private sector—has evolved substantially since 2013. U.S. government agencies still routinely mandate that private companies like Verizon and Google hand over customers’ personal information and issue non-disclosure orders to prevent these companies from informing individuals about such access. But the volume and technical complexity of the data ecosystem have exploded over the past decade, spurred by the rising ubiquity of algorithmic profiling in the U.S. private sector. As a result, U.S. government agencies have increasingly turned to “voluntary” mechanisms to access data from private companies, such as purchasing smartphone geolocation history from third-party data brokers and deriving insights from publicly available social media posts, without the formal use of a warrant, subpoena, or court order. In June 2023, the Office of the Director of National Intelligence (ODNI) declassified a report from January 2022—one of the first public efforts to examine the “large amount” of commercially available information that federal national security agencies purchase.
In this report, ODNI recognizes that sensitive personal information “clearly provides intelligence value” but also increases the risk of harmful outcomes like blackmail or harassment. Despite the potential for abuse, the declassified report reveals that some intelligence community elements have not established proper privacy and civil liberties guardrails for commercially acquired information and that even ODNI lacks awareness of the full scope of data brokerage contracts across its 18 units. Critically, the report recognizes that modern advancements in data collection have outpaced existing legal safeguards: “Today’s CAI [commercially available information] is more revealing, available on more people (in bulk), less possible to avoid, and less well understood than traditional PAI [publicly available information].” The ODNI report demonstrates how the traditional view of the privacy-security trade-off is becoming increasingly nuanced, especially as gaps in outdated federal law around data collection and transfers expand the number of actors and risk vectors involved. National Security Adviser Jake Sullivan recently noted that there are also geopolitical implications to consider: “Our strategic competitors see big data as a strategic asset.” When Congress banned the popular mobile app TikTok on government devices in the 2023 National Defense Authorization Act (NDAA), it cited fears that the Chinese Communist Party (CCP) could use the video-hosting app to spy on Americans. However, the NDAA did not address how numerous other smartphone apps, beyond TikTok, share personal information with data brokers—which, in turn, could transfer it to adversarial entities. In 2013, over 250,000 website privacy policies acknowledged sharing data with other companies; since then, this number has inevitably increased. In a digitized society, unchecked data collection has become a vulnerability for U.S. national security—not merely, as some once viewed, a strength.
The reinvigorated focus on TikTok’s data collection practices creates a certain paradox. While politicians have expressed concerns about Chinese government surveillance through mobile apps, U.S. government agencies have purchased access to smartphone geolocation data and social media images related to millions of Americans from data brokers without a warrant. The U.S. government has simultaneously treated TikTok as a national security risk and a handy source of information, reportedly issuing the app over 1,500 legal requests for data in 2021 alone. It is also important to note that national security is not the only value that can come into tension with information privacy, as unfettered data collection carries broader implications for civil rights, algorithmic fairness, free expression, and international commerce, affecting individuals both within and outside the United States.

Washington, DC: The Center for Strategic and International Studies (CSIS), 2023. 60p.

De-Risking Authoritarian AI: A Balanced Approach to Protecting Our Digital Ecosystems

By Gilding, Simeon

From the document: "It seems like an age since we worried about China's dominion over the world's 5G [fifth generation] networks. These days, the digital authoritarian threat feels decidedly steampunk--Russian missiles powered by washing-machine chips and stately Chinese surveillance balloons. And, meanwhile, our short attention spans are centred (ironically) on TikTok--an algorithmically addictive short video app owned by Chinese technology company ByteDance. More broadly, there are widespread concerns that 'large language model' (LLM) generative AI such as ChatGPT [Chat Generative Pre-Trained Transformer] will despoil our student youth, replace our jobs and outrun the regulatory capacity of the democracies. [...] This report is broken down into six sections. The first section highlights our dependency on AI-enabled products and services. The second examines China's efforts to export AI-enabled products and services and promote its model of digitally enabled authoritarianism, in competition with the US and the norms and values of democracy. This section also surveys PRC [People's Republic of China] laws compelling tech-sector cooperation and explains the nature of the threat, giving three examples of Chinese AI-enabled products of potential concern. It also explains why India is particularly vulnerable to the threat. In the third section, the report looks at the two key democratic responses to the challenge of AI: on the one hand, US efforts to counter both China's development of advanced AI technologies and the threat from Chinese technology already present in the US digital ecosystem; on the other, a draft EU Regulation to protect the fundamental rights of EU citizens from the pernicious effects of AI. The fourth section of the report proposes a framework for triaging and managing the risk of China's authoritarian AI-enabled products and services embedded in democratic digital ecosystems. 
The final section acknowledges complementary efforts to mitigate the PRC threat to democracies' digital ecosystems."

Scaling Trust on the Web

By Sugarman, Eli; Daniel, Michael; François, Camille; Chowdhury, A. K. M. Azam; Chowdhury, Rumman; Willner, Dave; Roth, Yoel

From the document: "Digital technologies continue to evolve at breakneck speed, unleashing a dizzying array of society-wide impacts in their wake. In the last quarter of 2022 alone: Meta, Accenture, and Microsoft announced a massive partnership to establish immersive spaces for enterprise environments; Elon Musk took over Twitter; the third-largest cryptocurrency exchange in the world collapsed overnight; the European Union's landmark Digital Services Act came into force; and generative artificial intelligence ('GAI') tools were released to the public for the first time. Within a fifty-day span, the outline of a new internet age came into sharper focus. In December 2022, the Atlantic Council's Digital Forensic Research Lab began to assemble a diverse array of experts who could generate an action-oriented agenda for future online spaces that can better protect users' rights, support innovation, and incorporate trust and safety principles--and do so quickly. [...] The task force specifically considered the emerging field of 'trust and safety' (T&S) and how it can be leveraged moving forward. That field provides deep insights into the complex dynamics that have underpinned building, maintaining, and growing online spaces to date. Moreover, the work of T&S practitioners, in concert with civil society and other counterparts, now rests at the heart of transformative new regulatory models that will help define how technology is developed in the twenty-first century. 'This executive report captures the task force's key findings and provides a short overview of the truths, trends, risks, and opportunities that task force members believe will influence the building of online spaces in the immediate, near, and medium term. It also summarizes the task force's recommendations for specific, actionable interventions that could help to overcome systems gaps the task force identified.'"

Atlantic Council Of The United States. Digital Forensic Research Lab. 2023. 150p.

Handbook of Digital Face Manipulation and Detection: From DeepFakes to Morphing Attacks

Edited by Christian Rathgeb, Ruben Tolosana, Ruben Vera-Rodriguez, and Christoph Busch

This open access book provides the first comprehensive collection of studies dealing with the hot topic of digital face manipulation, such as DeepFakes, face morphing, and reenactment. It combines the research fields of biometrics and media forensics, including contributions from academia and industry. Appealing to a broad readership, introductory chapters provide a comprehensive overview of the topic, addressing readers who wish to gain a brief overview of the state of the art. Subsequent chapters, which delve deeper into various research challenges, are oriented towards advanced readers. Moreover, the book provides a good starting point for young researchers as well as a reference guide pointing to further literature. Hence, the primary readership is academic institutions and industry currently involved in digital face manipulation and detection. The book could easily be used as a recommended text for courses in image processing, machine learning, media forensics, biometrics, and the general security area.

Cham: Springer Nature, 2022. 481p.