The Open Access Publisher and Free Library

SOCIAL SCIENCES


Posts tagged AI
Overview of the Impact of GenAI and Deepfakes on Global Electoral Processes

By Cervini, Enzo Maria Le Fevre; Carro, María Victoria

From the document: "Generative Artificial Intelligence's (GenAI) capacity to produce highly realistic images, videos, and text poses a significant challenge, as it can deceive viewers and consumers into accepting artificially generated content as authentic and genuine. This raises concerns about the dissemination of false information, disinformation, and its implications for public trust and democratic processes. Additionally, this phenomenon prompts critical ethical and legal inquiries, including issues surrounding the attribution of authority and accountability for the generated content. [...] This article delves into the impact of generative AI on recent and future political elections. We'll examine how deepfakes and other AI-generated content are used, along with their potential to sway voters. We'll also analyze the strategies various stakeholders are deploying to counter this growing phenomenon."

Italian Institute for International Political Studies. 22 Mar. 2024. 44p.

Artificial Intelligence in the Biological Sciences: Uses, Safety, Security, and Oversight [November 22, 2023]

By Kuiken, Todd

From the document: "Artificial intelligence (AI) is a term generally thought of as computerized systems that work and react in ways commonly thought to require intelligence. AI technologies, methodologies, and applications can be used throughout the biological sciences and biology R&D, including in engineering biology (e.g., the application of engineering principles and the use of systematic design tools to reprogram cellular systems for a specific functional output). This has enabled research and development (R&D) advances across multiple application areas and industries. For example, AI can be used to analyze genomic data (e.g., DNA sequences) to determine the genetic basis of a particular trait and potentially uncover genetic markers linked with those traits. It has also been used in combination with biological design tools to aid in characterizing proteins (e.g., 3-D structure) and for designing new chemical structures that can enable specific medical applications, including for drug discovery. AI can also be used across the scientific R&D process, including the design of laboratory experiments, protocols to run certain laboratory equipment, and other 'de-skilling' aspects of scientific research. The convergence of AI and other technologies associated with biology can lower technical and knowledge barriers and increase the number of actors with certain capabilities. These capabilities have potential for beneficial uses while at the same time raising certain biosafety and biosecurity concerns. For example, some have argued that using AI for biological design can be repurposed or misused to potentially produce biological and chemical compounds of concern."

Library of Congress. Congressional Research Service. 2023.

Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI

Marcellino, William M.; Beauchamp-Mustafaga, Nathan; Kerrigan, Amanda; Chao, Lev Navarre; Smith, Jackson

From the webpage: "In this Perspective, the authors argue that the emergence of ubiquitous, powerful generative AI poses a potential national security threat in terms of the risk of misuse by U.S. adversaries (in particular, for social media manipulation) that the U.S. government and broader technology and policy community should proactively address now. Although the authors focus on China and its People's Liberation Army as an illustrative example of the potential threat, a variety of actors could use generative AI for social media manipulation, including technically sophisticated nonstate actors (domestic as well as foreign). The capabilities and threats discussed in this Perspective are likely also relevant to other actors, such as Russia and Iran, that have already engaged in social media manipulation."

RAND Corporation. 2023. 42p.

Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

By Biden, Joseph R., Jr.

From the document: "Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society."

United States. Office of the Federal Register. 2023. 36p.

2023-2024 CISA Roadmap for Artificial Intelligence

By United States. Cybersecurity & Infrastructure Security Agency

From the document: "As noted in the landmark Executive Order 14110, 'Safe, Secure, And Trustworthy Development and Use of Artificial Intelligence (AI),' [hyperlink] signed by the President on October 30, 2023, 'AI must be safe and secure.' As the nation's cyber defense agency and the national coordinator for critical infrastructure security and resilience, CISA [Cybersecurity & Infrastructure Security Agency] will play a key role in addressing and managing risks at the nexus of AI, cybersecurity, and critical infrastructure. This '2023-2024 CISA Roadmap for Artificial Intelligence' serves as a guide for CISA's AI-related efforts, ensuring both internal coherence as well as alignment with the whole-of-government AI strategy. [...] The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer. Although AI software systems might differ from traditional forms of software, fundamental security practices still apply. Thus, CISA's AI roadmap builds on the agency's cybersecurity and risk management programs. Critically, manufacturers of AI systems must follow secure by design [hyperlink] principles: taking ownership of security outcomes for customers, leading product development with radical transparency and accountability, and making secure by design a top business priority. As the use of AI grows and becomes increasingly incorporated into critical systems, security must be a core requirement and integral to AI system development from the outset and throughout its lifecycle."

United States. Cybersecurity & Infrastructure Security Agency. Nov. 2023. 21p.

The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment

By Carlos Montemayor

In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral, and political implications and takes into account how epistemic agency differs from moral agency. Through his insightful comparisons between human and animal intelligence, Montemayor makes it clear why adopting a need-based attention approach justifies a humanitarian framework. This is an urgent, timely argument for developing AI technologies based on international human rights agreements.

London: Bloomsbury Academic, 2023. 297p.

HSAC Artificial Intelligence Mission Focused Subcommittee Final Report

By United States. Department of Homeland Security

From the document: "On March 27, 2023, Secretary Mayorkas requested that the Homeland Security Advisory Council (HSAC) form two subcommittees to develop Artificial Intelligence (AI) Strategy. The Secretary asked our subcommittee to focus on mission enhancing use cases of AI and recognized the rapid development and introduction of artificial intelligence and machine learning (AI/ML) programs into the workforce, markets, and daily life. In December 2020, the Department of Homeland Security published a comprehensive plan on addressing the implications of AI parallel to the rise of its prevalence. The 'Artificial Intelligence Strategy' highlighted AI/ML programs and their innumerous effects on workforce development, the value of investing in their optimization capabilities, and how to bolster the public's trust and understanding of their functioning. While defending against threats posed by this technology, it is the Department's intention to explore avenues by which these programs can be leveraged to improve its mission. Utilized in an ethical, informed, and responsible manner, AI/ML systems have the ability to improve transportation security, accelerate migrant processing timelines, bolster the functioning of supply chains, intercept illicit contraband, and more."

Washington, DC: United States. Department of Homeland Security. 2023. 21p.

Onboard AI: Constraints and Limitations

By Miller, Kyle A.; Lohn, Andrew J.

From the document: "This report highlights how constraints can create a gap between the AI that sets performance records and the AI implemented in the real world. We begin with a brief explanation of why one would run AI onboard a device, as opposed to a cloud or data center. Part two overviews constraints that can inhibit models and compute hardware from running onboard. Part three investigates three case studies to illuminate how these constraints impact AI performance: computer vision models on drones, satellites, and autonomous vehicles. These case studies are only meant to elucidate the constraints on various systems, and are not meant to be a comprehensive assessment of constraints across all or most systems that could use onboard AI. Part four provides a broad assessment of trends based on findings from the case studies, and considers how they might impact onboard AI functionality in the future. Part five concludes with recommendations to better manage the constraints of onboard AI."

Georgetown University. Walsh School of Foreign Service. Center for Security and Emerging Technology.


What Policymakers Need to Know About Artificial Intelligence

By Frana, Philip L.

From the webpage: "Generative AI language models currently operate only within the controlled environments of computer systems and networks, and their capabilities are constrained by training datasets and human uses. The generative transformer architecture [hyperlink] that is powering the current wave of artificial intelligence may reshape many areas of daily life. OpenAI CEO Sam Altman has been making a global tour to engage with legislators, policymakers, and industry leaders about his company's pathbreaking Generative Pre-trained Transformer (GPT) series of large language models (LLMs). While acknowledging that AI could inflict damage on the world economy, disrupt labor markets, and transform global affairs in unforeseen ways, he emphasizes that responsible use and regulatory transparency will allow the technology to make positive contributions [hyperlink] to education, creativity and entrepreneurship, and workplace productivity."

Atlantic Council of the United States. 2023. 21p.

The Democratization of Artificial Intelligence Net Politics in the Era of Learning Algorithms (Edition 1)

Edited by Andreas Sudmann 

After a long period of neglect, Artificial Intelligence is once again at the center of most of our political, economic, and socio-cultural debates. Recent advances in the field of Artificial Neural Networks have led to a renaissance of dystopian and utopian speculations on an AI-rendered future. Algorithmic technologies are deployed for identifying potential terrorists through vast surveillance networks, for producing sentencing guidelines and recidivism risk profiles in criminal justice systems, for demographic and psychographic targeting of bodies for advertising or propaganda, and more generally for automating the analysis of language, text, and images. Against this background, the aim of this book is to discuss the heterogeneous conditions, implications, and effects of modern AI and Internet technologies in terms of their political dimension: What does it mean to critically investigate efforts of net politics in the age of machine learning algorithms?

Bielefeld: transcript Verlag, 2019. 335p.

Artificial Intelligence Index Report 2023

By Stanford University. Human-Centered Artificial Intelligence

From the document: "Welcome to the sixth edition of the AI [artificial intelligence] Index Report! This year, the report introduces more original data than any previous edition, including a new chapter on AI public opinion, a more thorough technical performance chapter, original analysis about large language and multimodal models, detailed trends in global AI legislation records, a study of the environmental impact of AI systems, and more. The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI. The report aims to be the world's most credible and authoritative source for data and insights about AI."

Stanford University. 2023. 386p.

Algorithmic Reason: The New Government of Self and Other

By Claudia Aradau and Tobias Blanke

Are algorithms ruling the world today? Is artificial intelligence making life-and-death decisions? Are social media companies able to manipulate elections? As we are confronted with public and academic anxieties about unprecedented changes, this book offers a different analytical prism to investigate these transformations as more mundane and fraught. Aradau and Blanke develop conceptual and methodological tools to understand how algorithmic operations shape the government of self and other. While dispersed and messy, these operations are held together by an ascendant algorithmic reason. Through a global perspective on algorithmic operations, the book helps us understand how algorithmic reason redraws boundaries and reconfigures differences. The book explores the emergence of algorithmic reason through rationalities, materializations, and interventions. It traces how algorithmic rationalities of decomposition, recomposition, and partitioning are materialized in the construction of dangerous others, the power of platforms, and the production of economic value. The book shows how political interventions to make algorithms governable encounter friction, refusal, and resistance. The theoretical perspective on algorithmic reason is developed through qualitative and digital methods to investigate scenes and controversies that range from mass surveillance and the Cambridge Analytica scandal in the UK to predictive policing in the US, and from the use of facial recognition in China and drone targeting in Pakistan to the regulation of hate speech in Germany. Algorithmic Reason offers an alternative to dystopia and despair through a transdisciplinary approach made possible by the authors' backgrounds, which span the humanities, social sciences, and computer sciences.

Oxford, UK; New York: Oxford University Press, 2022. 289p.