
SOCIAL SCIENCES

Social sciences examine human behavior, social structures, and interactions in various settings. Fields such as sociology, psychology, anthropology, and economics study social relationships, cultural norms, and institutions. By using different research methods, social scientists seek to understand community dynamics, the effects of policies, and factors driving social change. This field is important for tackling current issues, guiding public discussions, and developing strategies for social progress and innovation.

Posts tagged social media
Social Media's Role in the UK Riots

By The Center for Countering Digital Hate

Amidst the worst period of public disorder and violence targeting minority communities in recent history, social media platforms failed the British public. Worse still, they played a significant role in fomenting the lies, hate, extremist beliefs, and antipathy towards institutions that erupted over a series of warm summer nights into extraordinary spasms of violence across the United Kingdom. False claims about the Southport attacker’s identity – lies identifying him as a Muslim asylum-seeker – spread widely and quickly. Far-right agitators received millions of views on X, formerly Twitter. Towns and cities across the UK saw attacks on mosques and hotels housing asylum seekers, inspired by these online posts. My family was among those affected: my mother, sisters, and nieces experienced hate on British streets. While affected communities and authorities struggled to cope with violent attacks on- and offline, social media platforms did little to quell the spread of hate and even profited from it.

We have seen this before. In the immediate aftermath of tragic incidents, bad actors weaponise online spaces to spread disinformation and sow informational chaos. Before the facts are known, extremists capitalise on the opportunity to spread hate, mobilise new followers, and inject conspiracy into the public discourse at the moment of maximum vulnerability. Underpinning this cynical behaviour are powerful financial incentives: hate actors turn the grief that follows a tragic incident into online engagement, for which social media platforms reward them financially.

One platform stood out. The owner of X, Elon Musk, shared false information about the situation with his 195 million followers and made a show of attacking the UK Government’s response to the outbreak of violence.[i] Rather than ensuring risks and illegal content were mitigated on his platform, Musk recklessly promoted the notion of an impending “civil war” in the UK.[ii] CCDH found far-right figures, previously banned from Twitter but reinstated under Musk’s leadership, receiving millions of views per day on X. The platform ran ads alongside posts inciting hate and encouraging the mobs to “permanently remove Islam from Great Britain”.[iii] Musk has transformed Twitter, once the go-to source for journalists, politicians, and the public for real-time news, into X, a platform with imperceptible moderation and the morality of Telegram.

On the 16th of August, CCDH convened stakeholders from government departments, law enforcement, the online safety regulator, British advertisers, and frontline civil society groups to chart a path forward. The insights and policy proposals which emerged from that discussion are detailed in this paper. While recognising that there was undue criticism levied at the regulator for powers it cannot yet use under the Online Safety Act (OSA), there is also a case for action to ensure the OSA is fit for purpose. Future amendments will be needed to tackle its most glaring omissions.

London; Washington, DC: Center for Countering Digital Hate, 2024. 19p.

Fighting the Tide: Encounters with Online Hate Among Targeted Groups

By The Office of the eSafety Commissioner (Australia)

Online hate is one of the most prevalent forms of digital violence. It affects many internet users in Australia and globally, especially individuals from targeted groups, including sexually diverse individuals, Aboriginal and/or Torres Strait Islanders, individuals with disability, and those from other culturally and racially marginalised backgrounds. It can take the form of hateful posts or comments about a person based on discrimination or bias related to characteristics such as their sexual orientation, gender, race, disability, religion or ethnicity.

This report is the first in a series of two reports exploring encounters with online hate among adults in Australia. It explores the prevalence, nature and impact of online hate among adults who belong to one or more of the targeted groups, drawing on data from eSafety’s Australian Adults Online survey, conducted in November 2022.

Key findings

Adults who identify as sexually diverse, Aboriginal and/or Torres Strait Islander, with disability, and/or linguistically diverse are more likely to be targeted with online hate.

Adults from these targeted groups are more likely to experience online hate based on discrimination or bias related to at least one aspect of their identity.

Most targeted adults experience online hate on social media, with the hate most often perpetrated by a stranger.

Online hate has harmful effects on the wellbeing of adults from targeted groups.

A minority of targeted adults act after encountering online hate, but many refrain from acting because they don’t think anything will change.

Canberra: Commonwealth of Australia, 2025. 72p.

Online misinformation in Australia: Adults' Experiences, Abilities and Responses

By Sora Park, Tanya Notley, T. J. Thomson, Aimee Hourigan, Michael Dezuanni

The rapid uptake of social media, which Australians now use more than any other type of media, presents many opportunities for accessing information, but it also presents the highly significant challenge of misinformation. The sheer volume of information online can be overwhelming and difficult to navigate, and bad actors exploit this environment to undermine democratic processes and to target individuals. This has been widely recognised as a global problem, yet many Australians lack the confidence and ability to identify misinformation.

This report is based on analysis of four linked datasets and finds that the vast majority of adult Australians want to be able to identify misinformation and are trying to do so. It also finds that many adult Australians overestimate their ability to verify information online.

The research findings illustrate the need for media literacy initiatives. These might include videos that show people how to fact check online or how to identify high-quality news sources, quizzes or games that help people develop their digital media knowledge and skills, explainers that show how platform business models operate and how this relates to the spread of misinformation, or in-person media production training that can help people think critically, and accurately represent people, places and ideas.

Penrith, NSW: News and Media Research Centre, Western Sydney University, 2024. 82p.

Democracies Under Threat: How Loopholes for Trump’s Social Media Enabled the Global Rise of Far-Right Extremism

By Heidi Beirich, Wendy Via

The decision by multiple social media platforms to suspend or remove former U.S. President Donald Trump after he incited a violent mob to invade the U.S. Capitol on January 6, 2021, was too little, too late. Even so, the deplatforming was important, and it should become the standard for other political leaders and political parties around the world that have engaged in hate speech, disinformation, conspiracy-mongering, and generally spreading extremist material that results in real-world damage to democracies.

Montgomery, AL: The Global Project Against Hate and Extremism, 2021. 33p.

Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI

By William M. Marcellino, Nathan Beauchamp-Mustafaga, Amanda Kerrigan, Lev Navarre Chao, and Jackson Smith

From the webpage: "In this Perspective, the authors argue that the emergence of ubiquitous, powerful generative AI poses a potential national security threat in terms of the risk of misuse by U.S. adversaries (in particular, for social media manipulation) that the U.S. government and broader technology and policy community should proactively address now. Although the authors focus on China and its People's Liberation Army as an illustrative example of the potential threat, a variety of actors could use generative AI for social media manipulation, including technically sophisticated nonstate actors (domestic as well as foreign). The capabilities and threats discussed in this Perspective are likely also relevant to other actors, such as Russia and Iran, that have already engaged in social media manipulation."

Santa Monica, CA: RAND Corporation, 2023. 42p.

Making #BlackLivesMatter in the Shadow of Selma: Collective Memory and Racial Justice Activism in U.S. News

By Sarah J. Jackson

It is clear in news coverage of recent uprisings for Black life that journalists and media organizations struggle to reconcile the fact of ongoing racism with narratives of U.S. progress. Bound up in this struggle is how collective memory, or rather whose collective memory, shapes the practices of news-making. Here I interrogate how television news shapes collective memory of Black activism through analysis of a unique moment when protests over police abuse of Black people became newsworthy simultaneously with widespread commemorations of the civil rights movement. I detail the complex terrain of nostalgia and misremembering that provides cover for moderate and conservative delegitimization of contemporary Black activism. At the same time, counter-memories, introduced most often by members of the Black public sphere, offer alternative, actionable, and comprehensive interpretations of Black protest.

Communication, Culture and Critique, Volume 14, Issue 3, September 2021, Pages 385–404.

From Bad to Worse: Algorithmic Amplification of Antisemitism and Extremism

By The Anti-Defamation League, Center for Technology and Society

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies actually exacerbate virulent content themselves? The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies from responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL (the Anti-Defamation League) and TTP (Tech Transparency Project) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While there are many variables contributing to online hate, including individual users’ own behavior, our research demonstrates how these companies are taking things from bad to worse.

For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories as well as popular internet personalities, commentators, and video games across four of the biggest social media platforms, to test how these companies’ algorithms would work. In the first study, three of four platforms recommended even more extreme, contemptuously antisemitic, and hateful content. One platform, YouTube, did not take the bait. It was responsive to the persona but resisted recommending antisemitic and extremist content, proving that it is not just a problem of scale or capability. In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience, by autocompleting terms and, in some cases, even auto-generating content to fill in hate data voids. Notably, the companies didn’t autocomplete terms or auto-generate content for other forms of offensive content, such as pornography, proving, again, that this is not just a problem of scale or capability.

What these investigations ultimately revealed is that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism. As debates rage between legislators, regulators, and judges on AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more.

New York: The Anti-Defamation League, Center for Technology and Society, 2023. 36p.
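The audit method described above follows a recognizable pattern: fixed personas, a basket of seed search terms, and a count of how often the resulting recommendations are hateful. Below is a minimal Python sketch of such a harness; it is not ADL/TTP code, and fetch_recommendations and is_hateful are hypothetical stand-ins for platform access and content classification.

from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str                                   # e.g. "male", "female", "teen"
    seed_terms: list = field(default_factory=list)

def fetch_recommendations(persona, term):
    # Hypothetical stand-in: a real audit would drive a fresh account via
    # scraping or an API. Canned toy items are returned so the sketch runs.
    return [f"{term}-item-{i}" for i in range(3)]

def is_hateful(item):
    # Hypothetical stand-in for human coding or a trained content classifier.
    return item.endswith("-item-0")

def run_audit(personas):
    # Share of recommended items classified as hateful, per persona.
    results = {}
    for p in personas:
        hateful = total = 0
        for term in p.seed_terms:
            for item in fetch_recommendations(p, term):
                hateful += int(is_hateful(item))
                total += 1
        results[p.name] = hateful / total if total else 0.0
    return results

print(run_audit([Persona("teen", ["toy-term-a", "toy-term-b"])]))  # {'teen': 0.33...}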

Moralized language predicts hate speech on social media

By Kirill Solovev, Nicolas Pröllochs

Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts authored by societal leaders across three domains (politics, news media, and activism), together with ∼35.5 million corresponding replies on Twitter. Subsequently, we used textual analysis and machine learning to analyze whether moralized language carried in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media.

PNAS Nexus, Volume 2, Issue 1, January 2023, pgac281.
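The effect sizes in this abstract are per-word odds ratios from a logistic-style model, and it helps to see how they compound. A minimal sketch of the arithmetic, assuming only the odds ratios quoted above; the five-word count is a hypothetical illustration, not a figure from the paper.

import math

# Per-word odds ratios implied by the abstract: each additional moral word
# is associated with 10.76%-16.48% higher odds of receiving hate speech.
or_low, or_high = 1.1076, 1.1648

# Equivalent logistic-regression coefficients: beta = ln(odds ratio).
beta_low, beta_high = math.log(or_low), math.log(or_high)  # ~0.102, ~0.153

# Hypothetical example: a source tweet with 5 moral words versus an
# otherwise identical tweet with none; the odds multiply once per word.
k = 5
print(f"low estimate:  {or_low ** k:.2f}x the odds")   # ~1.67x
print(f"high estimate: {or_high ** k:.2f}x the odds")  # ~2.14x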

Plugged In: Problematic Instagram Use and Negative Outcomes

By Amy Prevost & Petra Jonas

Research on the negative outcomes of social media use has focused particularly on Facebook, with limited studies examining the relationship to Instagram use. This study explored the connection between Instagram use and six relevant themes related to overall well-being, including the potential for victimization. The study used both quantitative and qualitative methods. For the quantitative component, surveys were distributed to undergraduate students at two Canadian universities. The qualitative component consisted of two focus groups conducted at the University of the Fraser Valley, each with nine participants who discussed the preliminary themes identified from the survey data. The study revealed that Instagram use is correlated with escapism, frustration, fear of missing out, validation, anxiety, addiction, and vulnerability to cyber victimization. Consistent with other studies in this area, our results indicated that regular Instagram use has negative psychological outcomes for individual users. The research offers some important implications and recommendations for early education, increased awareness about the potential for victimization, and early intervention strategies.

Vancouver, BC: International Centre for Criminal Law Reform, 2020. 36p.

Young People, Ethics, and the New Digital Media: A Synthesis from the GoodPlay Project

By Carrie James

Social networking, blogging, vlogging, gaming, instant messaging, downloading music and other content, uploading and sharing their own creative work: these activities made possible by the new digital media are rich with opportunities and risks for young people. This report, part of the GoodPlay Project, undertaken by researchers at Harvard Graduate School of Education's Project Zero, investigates the ethical fault lines of such digital pursuits. The authors argue that five key issues are at stake in the new media: identity, privacy, ownership and authorship, credibility, and participation. Drawing on evidence from informant interviews, emerging scholarship on new media, and theoretical insights from psychology, sociology, political science, and cultural studies, the report explores the ways in which youth may be redefining these concepts as they engage with new digital media. The authors propose a model of "good play" that involves the unique affordances of the new digital media; related technical and new media literacies; cognitive and moral development and values; online and offline peer culture; and ethical supports, including the absence or presence of adult mentors and relevant educational curricula. This proposed model for ethical play sets the stage for the next part of the GoodPlay project, an empirical study that will invite young people to share their stories of engagement with the new digital media.

Cambridge, MA: The MIT Press, 2009. 127p.

Online Hate and Harassment: The American Experience 2023

By The Anti-Defamation League, Center for Technology & Society  

Over the past year, online hate and harassment rose sharply for adults and teens ages 13-17. Among adults, 52% reported being harassed online in their lifetime, the highest number we have seen in four years, up from 40% in 2022. Both adults and teens also reported being harassed within the past 12 months, up from 23% in 2022 to 33% in 2023 for adults and 36% to 51% for teens. Overall, reports of each type of hate and harassment increased by nearly every measure and within almost every demographic group. ADL conducts this nationally representative survey annually to find out how many American adults experience hate or harassment on social media; since 2022, we have surveyed teens ages 13-17 as well. The 2023 survey was conducted in March and April 2023 and spans the preceding 12 months. Online hate and harassment remain persistent and entrenched problems on social media platforms.

New York: ADL, 2023. 51p.

Red Pilled - The Allure of Digital Hate

By Luke Munn

Hate is being reinvented. Over the last two decades, online platforms have been used to repackage racist, sexist and xenophobic ideologies into new sociotechnical forms. Digital hate is ancient but novel, deploying the Internet to boost its allure and broaden its appeal. To understand the logic of hate, Luke Munn investigates four objects: 8chan, the cesspool of the Internet; QAnon, the popular meta-conspiracy; Parler, a social media site; and Gab, the “platform for the people.” Drawing together powerful human stories with insights from media studies, psychology, political science, and race and cultural studies, he portrays how digital hate infiltrates hearts and minds.

Bielefeld: Bielefeld University Press, 2023. 204p.

Right-Wing Extremists’ Persistent Online Presence: History and Contemporary Trends

By Maura Conway, Ryan Scrivens, Logan Macnair

This policy brief traces how Western right-wing extremists have exploited the power of the internet, from early dial-up bulletin board systems to contemporary social media and messaging apps. It demonstrates how the extreme right has been quick to adopt a variety of emerging online tools, not only to connect with the like-minded, but also to radicalise some audiences while intimidating others, and ultimately to recruit new members, some of whom have engaged in hate crimes and/or terrorism. Highlighted throughout is the fast pace of change of both the internet and its associated platforms and technologies, on the one hand, and the extreme right, on the other, as well as how these have interacted and evolved over time. Underlined too is the persistence, despite these changes, of right-wing extremists’ online presence, which poses challenges for effectively responding to this activity moving forward.

The Hague: International Centre for Counter-Terrorism, 2019. 24p.

Hate Contagion: Measuring the spread and trajectory of hate on social media

By John D. Gallacher and Jonathan Bright

Online hate speech is a growing concern, with minorities and vulnerable groups increasingly targeted with extreme denigration and hostility. The drivers of hate speech expression on social media are unclear, however. This study explores how hate speech develops on a fringe social media platform popular with the far right, Gab. We investigate whether users seek out this platform in order to express hate, or whether instead they develop these opinions over time through a mechanism of socialisation, as they interact with other users on the platform. We find a positive association between the time users spend on the platform and their hate speech expression. We show that while some users do arrive on these platforms with pre-existing hate stances, others develop these expressions as they are exposed to the hateful opinions of others. Our analysis reveals how hate speech develops online and the important role of the group environment in accelerating its development, and it gives valuable insight to inform the development of countermeasures.

Oxford, UK: University of Oxford, Oxford Internet Institute, 2021. 47p.
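Below is a minimal sketch of the kind of longitudinal measure the study describes, relating authors' time on the platform to the share of their posts classified as hate speech. The records and hate labels are toy assumptions, not the authors' data or code; in the study, labels would come from an upstream hate speech classifier.

from collections import defaultdict

# Toy records: (user_id, days since the author joined, hate label).
posts = [
    ("u1", 10, False), ("u1", 40, False), ("u1", 200, True),
    ("u2", 5, True), ("u2", 30, True), ("u3", 15, False),
]

# Bucket posts by the author's time on the platform (30-day buckets) and
# compute the hate speech rate per bucket; a rate that rises with tenure is
# consistent with the socialisation mechanism the study tests.
buckets = defaultdict(lambda: [0, 0])  # bucket -> [hate posts, total posts]
for _user, days, is_hate in posts:
    b = days // 30
    buckets[b][0] += int(is_hate)
    buckets[b][1] += 1

for b in sorted(buckets):
    hate, total = buckets[b]
    print(f"~{b} months on platform: hate rate = {hate / total:.2f}")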

Combatting Online Islamophobia and Racism in Australia: the case for an eSafety duty of care

By Umar Butler

This report, commissioned by the Islamic Council of Victoria (ICV), argues that the failure of social media platforms to improve their demonstrably ineffectual systems for the review of hateful material, coupled with the grave harms of online Islamophobia, necessitates government intervention.

While there are a number of competing approaches to the regulation of social media, the ICV’s preference is to reform the systems that have enabled and, indeed, at times encouraged the widespread and unchecked dissemination of hate speech, rather than attempt the practically impossible task of taking down hundreds of millions of individual pieces of anti-Muslim content.

To implement this approach, the ICV proposes that Australia place a statutory duty on platforms to take reasonable care to protect users from harm (the ‘eSafety duty of care’), similar to the regime set to be established by the UK’s Online Safety Bill 2021. Regardless of the particular regulatory response taken, however, this report makes clear that something must be done. If not to improve the mental wellbeing of Muslim users, then at least to ensure that the events of the Christchurch attacks are never repeated.

Melbourne: Islamic Council of Victoria, 2022. 23p.

Rethinking Social Media and Extremism

Edited by Shirley Leitch and Paul Pickering

Terrorism, global pandemics, climate change, wars and all the major threats of our age have been targets of online extremism. The same social media occupying the heartland of our social world leaves us vulnerable to cybercrime, electoral fraud and the ‘fake news’ fuelling the rise of far-right violence and hate speech. In the face of widespread calls for action, governments struggle to reform legal and regulatory frameworks designed for an analogue age. And what of our rights as citizens? As politicians and lawyers run to catch up to the future as it disappears over the horizon, who guarantees our right to free speech, to free and fair elections, to play video games, to surf the Net, to believe ‘fake news’?

Rethinking Social Media and Extremism offers a broad range of perspectives on violent extremism online and how to stop it. As one major crisis follows another and a global pandemic accelerates our turn to digital technologies, attending to the issues raised in this book becomes ever more urgent.

Canberra: ANU Press, 2022. 194p.

The Use of Social Media by United States Extremists

By Michael Jensen, Patrick James, Gary LaFree, Aaron Safer-Lichtenstein, Elizabeth Yates

Emerging communication technologies, and social media platforms in particular, play an increasingly important role in the radicalization and mobilization processes of violent and non-violent extremists (Archetti, 2015; Cohen et al., 2014; Farwell, 2014; Klausen, 2015). However, the extent to which extremists utilize social media, and whether it influences terrorist outcomes, is still not well understood (Conway, 2017). This research brief expands the current knowledge base by leveraging newly collected data on the social media activities of 479 extremists in the PIRUS dataset who radicalized between 2005 and 2016.[1] This includes descriptive analyses of the frequency of social media usage among U.S. extremists, the types of social media platforms used, the differences in the rates of social media use by ideology and group membership, the purposes of social media use, and the impact of social media on foreign fighter travel and domestic terrorism plots.

We define social media in the PIRUS dataset as any form of electronic communication through which users create online communities to share information, ideas, personal messages, and other content, such as videos and images. This form of online communication is distinct from other types of internet usage in that it emphasizes online user-to-user communication rather than passively viewing content hosted by an online domain. Additionally, our definition of social media does not include file-sharing sites (e.g., Torrent networks, Dropbox, P2P networks, etc.).

College Park, MD: START, University of Maryland, 2018. 20p.

Creating Chaos Online: Disinformation and Subverted Post-Publics

By Asta Zelenkauskaitė

With the prevalence of disinformation geared to instill doubt rather than clarity, Creating Chaos Online unmasks disinformation when it attempts to pass as deliberation in the public sphere and distorts democratic processes. Asta Zelenkauskaitė finds that repeated tropes justifying Russian trolling circulated not only across all analyzed media platforms’ comments but also across two analyzed sociopolitical contexts, suggesting orchestrated efforts behind the messaging. Through a dystopian vision of publics expected to navigate a sea of content of uncertain provenance, both authentic and orchestrated, pushed by human and nonhuman actors, Creating Chaos Online offers the concept of post-publics. The idea of post-publics is situated within the continuum of public, counter-public, and anti-public. This book argues that affect-instilled arguments used in public deliberation in times of uncertainty, along with whataboutism, constitute a playbook for chaos online.

Ann Arbor: University of Michigan Press, 2022. 318p.

"My Life Is Not Your Porn": Digital Sex Crimes in South Korea

By Human Rights Watch

This report, based on interviews with survivors and experts, and a survey, documents the spread and impact in South Korea of what are referred to there as “digital sex crimes.” Digital sex crimes are crimes involving non-consensual intimate images. These crimes are a form of gender-based violence, using digital images that are captured non-consensually and sometimes shared, captured with consent but shared non-consensually, or sometimes faked. These images are almost always of women and girls. This report explores how technological innovation can facilitate gender-based violence in the absence of adequate rights-based protections by government and companies.

New York: HRW, 2021. 103p.