SOCIAL SCIENCES

Posts tagged cybersecurity
Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI

Marcellino, William M.; Beauchamp-Mustafaga, Nathan; Kerrigan, Amanda; Chao, Lev Navarre; Smith, Jackson

From the webpage: "In this Perspective, the authors argue that the emergence of ubiquitous, powerful generative AI poses a potential national security threat in terms of the risk of misuse by U.S. adversaries (in particular, for social media manipulation) that the U.S. government and broader technology and policy community should proactively address now. Although the authors focus on China and its People's Liberation Army as an illustrative example of the potential threat, a variety of actors could use generative AI for social media manipulation, including technically sophisticated nonstate actors (domestic as well as foreign). The capabilities and threats discussed in this Perspective are likely also relevant to other actors, such as Russia and Iran, that have already engaged in social media manipulation."

RAND Corporation. 2023. 42p.

Seismic Shifts: How Economic, Technological, and Political Trends Are Challenging Independent Counter-Election-Disinformation Initiatives in the United States

By Jackson, Dean; Adler, William T.; Dougall, Danielle; Jain, Samir

From the document: "In March 2023, internet scholar Kate Klonick wrote a counterintuitive essay entitled 'The End of the Golden Age of Tech Accountability' in which she argues that '2021 was a heyday for trust and safety,' a time when tech companies felt public pressure to take a number of positive (if insufficient) self-regulatory steps. She laments that platforms are now backtracking as a result of economic headwinds and the failure of many governments to pass meaningful regulation while public outrage was at its peak. A few months later, in June 2023, the prominent technology journalist Casey Newton cited Klonick's argument in a newsletter, asking, 'Have we reached peak trust and safety?' The trends detailed in this report will probably tempt most readers to answer 'yes.' There are many reasons to be pessimistic about prospects for improvement. But improvement is possible if the field accepts that election disinformation is an environmental hazard to be managed, not a disease to be cured. Few signs in the near term point to huge gains in the health of the U.S. media ecosystem. Steps can be taken to protect and better support researchers, diminish the prevalence and severity of harm, achieve incremental improvements in tech accountability and transparency, and set up the trust and safety field for long-term success."

Center for Democracy and Technology. 2023. 108p.

Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

By Biden, Joseph R., Jr.

From the document: "Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society."

United States. Office of the Federal Register. 2023. 36p.

2023-2024 CISA Roadmap for Artificial Intelligence

By United States. Cybersecurity & Infrastructure Security Agency

From the document: "As noted in the landmark Executive Order 14110, 'Safe, Secure, And Trustworthy Development and Use of Artificial Intelligence (AI),' [hyperlink] signed by the President on October 30, 2023, 'AI must be safe and secure.' As the nation's cyber defense agency and the national coordinator for critical infrastructure security and resilience, CISA [Cybersecurity & Infrastructure Security Agency] will play a key role in addressing and managing risks at the nexus of AI, cybersecurity, and critical infrastructure. This '2023-2024 CISA Roadmap for Artificial Intelligence' serves as a guide for CISA's AI-related efforts, ensuring both internal coherence as well as alignment with the whole-of-government AI strategy. [...] The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer. Although AI software systems might differ from traditional forms of software, fundamental security practices still apply. Thus, CISA's AI roadmap builds on the agency's cybersecurity and risk management programs. Critically, manufacturers of AI systems must follow secure by design [hyperlink] principles: taking ownership of security outcomes for customers, leading product development with radical transparency and accountability, and making secure by design a top business priority. As the use of AI grows and becomes increasingly incorporated into critical systems, security must be a core requirement and integral to AI system development from the outset and throughout its lifecycle."

United States. Cybersecurity & Infrastructure Security Agency. Nov. 2023. 21p.

Onboard AI: Constraints and Limitations

By Miller, Kyle A.; Lohn, Andrew J.

From the document: "This report highlights how constraints can create a gap between the AI that sets performance records and the AI implemented in the real world. We begin with a brief explanation of why one would run AI onboard a device, as opposed to a cloud or data center. Part two overviews constraints that can inhibit models and compute hardware from running onboard. Part three investigates three case studies to illuminate how these constraints impact AI performance: computer vision models on drones, satellites, and autonomous vehicles. These case studies are only meant to elucidate the constraints on various systems, and are not meant to be a comprehensive assessment of constraints across all or most systems that could use onboard AI. Part four provides a broad assessment of trends based on findings from the case studies, and considers how they might impact onboard AI functionality in the future. Part five concludes with recommendations to better manage the constraints of onboard AI."

Georgetown University. Walsh School of Foreign Service. Center for Security and Emerging Technology.

Feminist Theorisation of Cybersecurity to Identify and Tackle Online Extremism

By Bengtsson Meuller, Elsa

From the document: "Online abuse and extremism disproportionately target marginalised populations, particularly people of colour, women and transgender and non‐binary people. The core argument of this report focuses on the intersecting failure of Preventing and Counter Violent Extremism (P/CVE) policies and cybersecurity policies to centre the experiences and needs of victims and survivors of online extremism and abuse. In failing to do so, technology companies and states also fail to combat extremism. The practice of online abuse is gendered and racialised in its design and works to assert dominance through male supremacist logic. Online abuse is often used by extremist groups such as the far right, jihadist groups and misogynist incels. Yet online abuse is not seen as a 'threat of value' in cybersecurity policies. Additionally, the discipline of terrorism studies has failed to engage with the intersection of racism and misogyny properly. Consequently, we fail to centre marginalised victims in our responses to extremism and abuse. Through the implementation of a feminist theorisation of cybersecurity to tackle extremism, this report proposes three core shifts in our responses to online extremism: Incorporate misogynist and racist online abuse into our conceptions of extremism. Shift the focus from responding to attacks and violence to addressing structural violence online. Empower and centre victims and survivors of online abuse and extremism."

Global Network on Extremism and Technology (GNET). 2023. 32p.