
SOCIAL SCIENCES


Posts tagged online hate
Connecting, Competing, and Trolling: “User Types” in Digital Gamified Radicalization Processes

by Linda Schlegel

The concept of gamification is increasingly applied as a framework to understand extremist online subcultures and communications. Although a number of studies have been conducted, the theoretical and empirical basis to understand the role of gamification in extremist contexts remains weak. This article seeks to contribute to the development of a gamification of radicalization theory by exploring how Marczewski’s HEXAD, a user typology for gamified applications, may facilitate our understanding of individual variations in engagement with gamified extremist content. Five user types, named after their core motivational drivers for engagement, are discussed: Socializers, Competitors, Achievers, Meaning Seekers, and Disruptors. This typology may support future studies by providing a preliminary understanding of how different game elements may appeal to different users and increase their engagement with and susceptibility to extremist content in cyberspace.

Perspectives on Terrorism, Vol. 15, No. 4 (August 2021), pp. 54-64.

From Bad To Worse: Amplification and Auto-Generation of Hate

By The Anti-Defamation League, Center for Technology and Society

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies actually amplify virulent content themselves? The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies of responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL and TTP (Tech Transparency Project) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While there are many variables contributing to online hate, including individual users' own behavior, our research demonstrates how these companies are taking things from bad to worse.

For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories as well as popular internet personalities, commentators, and video games across four of the biggest social media platforms, to test how these companies' recommendation algorithms would respond. In the first study, three of the four platforms recommended even more extreme, contemptuously antisemitic, and hateful content. One platform, YouTube, did not take the bait: it was responsive to the persona but resisted recommending antisemitic and extremist content, proving that this is not just a problem of scale or capability.

In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience by autocompleting terms and, in some cases, even auto-generating content to fill hate-related data voids. Notably, the companies didn't autocomplete terms or auto-generate content for other forms of offensive content, such as pornography, proving, again, that this is not just a problem of scale or capability.

What these investigations ultimately revealed is that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism.

As debates rage between legislators, regulators, and judges on AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more. Based on our findings, here are three recommendations for industry and government:

Tech companies need to fix the product features that currently escalate antisemitism and auto-generate hate and extremism. Tech companies should tune their algorithms and recommendation engines to ensure they are not leading users down paths riddled with hate and antisemitism. They should also improve predictive autocomplete features and stop auto-generation of hate and antisemitism altogether.

Congress must update Section 230 of the Communications Decency Act to fit the reality of today’s internet. Section 230 was enacted before social media and search platforms as we know them existed, yet it continues to be interpreted to provide those platforms with near-blanket legal immunity for online content, even when their own tools are exacerbating hate, harassment and extremism. We believe that by updating Section 230 to better define what type of online activity should remain covered and what type of platform behavior should not, we can help ensure that social media platforms more proactively address how recommendation engines and surveillance advertising practices are exacerbating hate and extremism, which leads to online harms and potential offline violence. With the advent of social media, the use of algorithms, and the surge of artificial intelligence, tech companies are more than merely static hosting services. When there is a legitimate claim that a tech company played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court.

We need more transparency. Users deserve to know how platform recommendation engines work. This does not need to be a trade-secret-revealing exercise, but tech companies should be transparent with users about what they are seeing and why. The government also has a role to play. We've seen some success on this front in California, where transparency legislation was passed in 2022. Still, there's more to do. Congress must pass federal transparency legislation so that stakeholders (the public, researchers, and civil society) have access to the information necessary to truly evaluate how tech companies' own tools, design practices, and business decisions impact society.

Hate is on the rise. Antisemitism both online and offline is becoming normalized. A politically charged U.S. presidential election is already under way. This is a pressure cooker we cannot afford to ignore, and tech companies need to take accountability for their role in the ecosystem.

Whether you work in government or industry, are a concerned digital citizen, or are a tech advocate, we hope you find this pair of reports informative. There is no single fix for the scourge of online hate and antisemitism, but we can and must do more to create a safer and less hate-filled internet.

New York: ADL, 2023. 18p.

Moralized language predicts hate speech on social media

By Kirill Solovev, Nicolas Pröllochs

Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet, the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts and ∼35.5 million corresponding replies from Twitter that have been authored by societal leaders across three domains (politics, news media, and activism). Subsequently, we used textual analysis and machine learning to analyze whether moralized language carried in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media.

PNAS Nexus, Volume 2, Issue 1, January 2023, pgac281.
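To make the reported effect sizes concrete: in a logistic regression, a coefficient beta on the moral-word count multiplies the odds of receiving hate speech by exp(beta) per additional word, so the reported 10.76%-16.48% range corresponds to exp(beta) between roughly 1.11 and 1.16. The sketch below illustrates this kind of model only; it is not the authors' pipeline, and the word list and data are invented placeholders (the study used established moral-language dictionaries and a large Twitter corpus).

```python
# Illustrative sketch only -- not the authors' pipeline. It shows the kind
# of logistic regression behind the reported odds ratios: each additional
# moral word multiplies the odds that a reply contains hate speech by
# exp(beta). The word list and toy data are invented placeholders.
import math

import pandas as pd
import statsmodels.api as sm

# Placeholder lexicon; the study used established moral-language dictionaries.
MORAL_WORDS = {"justice", "betray", "honor", "evil", "duty", "corrupt"}

def moral_word_count(text: str) -> int:
    """Crude count of moral words in a post (whitespace tokenizer)."""
    return sum(tok.strip(".,!?").lower() in MORAL_WORDS for tok in text.split())

# Toy data: one row per reply -- the source post's moral word count and
# whether the reply was classified as hate speech (0/1). In a real analysis,
# `moral_words` would come from moral_word_count() over the source tweets.
df = pd.DataFrame({
    "moral_words": [0, 1, 2, 0, 3, 1, 2, 4, 0, 1],
    "hate_reply":  [0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
})

X = sm.add_constant(df[["moral_words"]])
fit = sm.Logit(df["hate_reply"], X).fit(disp=False)

beta = fit.params["moral_words"]
odds_ratio = math.exp(beta)  # e.g. 1.11 would mean ~11% higher odds per word
print(f"Odds multiplier per additional moral word: {odds_ratio:.2f}")
```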

The (moral) language of hate

By Brendan Kennedy, Preni Golazizian, Jackson Trager, Mohammad Atari, Joe Hoover, Aida Mostafazadeh Davani, Morteza Dehghani

Humans use language toward hateful ends, inciting violence and genocide, intimidating and denigrating others based on their identity. Despite efforts to better address the language of hate in the public sphere, the psychological processes involved in hateful language remain unclear. In this work, we hypothesize that morality and hate are concomitant in language. In a series of studies, we find evidence in support of this hypothesis using language from a diverse array of contexts, including the use of hateful language in propaganda to inspire genocide (Study 1), hateful slurs as they occur in large text corpora across a multitude of languages (Study 2), and hate speech on social-media platforms (Study 3). In post hoc analyses focusing on particular moral concerns, we found that the type of moral content invoked through hate speech varied by context, with Purity language prominent in hateful propaganda and online hate speech and Loyalty language invoked in hateful slurs across languages. Our findings provide a new psychological lens for understanding hateful language and point to further research into the intersection of morality and hate, with practical implications for mitigating hateful rhetoric online.

PNAS Nexus, Volume 2, Issue 7, July 2023.
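A crude way to approximate the foundation-specific analysis described above (Purity vs. Loyalty language) is dictionary matching, sketched below under loose assumptions: the word lists are invented placeholders (the studies used validated moral-foundations lexica), and real pipelines use stemming or embedding-based methods rather than exact token matches.

```python
# Illustrative sketch: crude dictionary matching to gauge which moral
# foundation a text invokes. The word lists are invented placeholders,
# not a validated moral-foundations lexicon.
from collections import Counter

FOUNDATION_LEXICON = {
    "purity":  {"pure", "filth", "disgust", "contaminate", "sacred", "dirty"},
    "loyalty": {"traitor", "loyal", "betray", "comrade", "outsider", "ally"},
}

def foundation_counts(text: str) -> Counter:
    """Count how many tokens in `text` match each foundation's word list."""
    tokens = [t.strip(".,!?;:\"'").lower() for t in text.split()]
    counts = Counter()
    for foundation, words in FOUNDATION_LEXICON.items():
        counts[foundation] = sum(t in words for t in tokens)
    return counts

print(foundation_counts("They are filth, a dirty contamination of our sacred land."))
# -> Counter({'purity': 3, 'loyalty': 0})  (matches: filth, dirty, sacred)
# Note "contamination" is missed: exact matching has no stemming, which is
# one reason the published studies use more robust methods.
```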

Hate in the Bay State: Extremism & Antisemitism in Massachusetts 2021-2022

By The Anti-Defamation League

Over the last two years, extremist activity in Massachusetts has mirrored developments on the national stage. Like the rest of the country, Massachusetts has seen white supremacists – including the Nationalist Social Club – increasingly make their presence known. The Bay State has also reported extensive propaganda distribution efforts, especially by Patriot Front, which resulted in Massachusetts recording the country’s second-highest number of white supremacist propaganda incidents in 2022.

Amidst increasing nationwide threats to the LGBTQ+ community, Massachusetts has also witnessed a spike in anti-LGBTQ+ activity, including waves of harassment against Boston Children’s Hospital, drag performances and LGBTQ+ events. And as the numbers of antisemitic incidents continue to rise across the country, Massachusetts was no exception. According to ADL’s annual Audit of Antisemitic Incidents, it was the sixth most affected state in the country in 2022.

This report will explore the full range of extremist groups and movements operating in Massachusetts and highlight the key extremist and antisemitic trends and incidents in the state in 2021 and 2022.

New York: ADL, 2022. 18p.

Hate is No Game: Hate and Harassment in Online Games, 2022

By The Anti-Defamation League, Center for Technology & Society

In 2021, ADL found that nearly one in ten gamers between ages 13 and 17 had been exposed to white-supremacist ideology and themes in online multiplayer games. An estimated 2.3 million teens were exposed to white-supremacist ideology in multiplayer games like Roblox, World of Warcraft, Fortnite, Apex Legends, League of Legends, Madden NFL, Overwatch, and Call of Duty. Hate and extremism in online games have worsened since last year. ADL’s annual report on experiences in online multiplayer games shows that the spread of hate, harassment, and extremism in these digital spaces continues to grow unchecked. Our survey explores the social interactions, experiences, attitudes, and behaviors of online multiplayer gamers ages 10 and above nationwide.

New York: ADL, 2022. 38p.

Pick the Lowest Hanging Fruit: Hate Crime Law and the Acknowledgment of Racial Violence

By Jeannine Bell

The U.S. has had remedies aimed at racial violence since the Ku Klux Klan Act was passed in the 1870s. Hate crime law, which is more than thirty years old, is the most recent incarnation. The passage of hate crime law, first at the federal level and later by the states, has done very little to slow the rising tide of bigotry. After a brief discussion of state and federal hate crime law, this Article will critically examine the country's approach to hate crime. The Article will then discuss one of the most prevalent forms of hate crime—bias-motivated violence that targets individuals in their homes. The Article will conclude with a discussion of the approach taken by the Justice Department in the Ahmaud Arbery case as a potentially positive solution for the handling of hate crime cases.

112 J. Crim. L. & Criminology 691 (2022).

Countering and Addressing Online Hate Speech: A Guide for policy makers and practitioners

By The United Nations with the Economic and Social Research Council (ESRC) Human Rights, Big Data and Technology Project at the University of Essex

Today, social media has become another vehicle for hate speech, with the ability to spread information at a speed never seen before, reaching potentially huge audiences within seconds. The manner in which many platforms operate feeds on hateful and discriminatory content and provides echo chambers for such narratives. Online hate speech has led to real-world harm, from incidents of identity-based violence in which the perpetrators were incited by online hate, to its widespread use to dehumanize and attack entire populations on the basis of identity. Unfortunately, the victims are often those already most marginalized in society, including ethnic, religious, national or racial minorities, refugees and migrants, women, and sexual orientation and gender identity minorities.

New York: United Nations, 2023. 20p.

Understanding Anti-Roma Hate Crimes and Addressing the Security Needs of Roma and Sinti Communities: A Practical Guide

By Organization for Security and Co-operation in Europe

The purpose of this Guide is to describe and analyze hate incidents and hate crimes faced by Roma and Sinti, as well as the corresponding security challenges. Considering cases from many of the 57 OSCE participating States, this Guide highlights measures that promote safety and security without discrimination, in line with OSCE commitments. This Guide provides relevant stakeholders – government officials, political representatives, civil society and the broader public – with an overview of the situations Roma and Sinti communities face, an analysis of their corresponding security needs, and areas where positive actions could improve their access to rights.

Warsaw: OSCE Office for Democratic Institutions and Human Rights (ODIHR), 2023. 138p.

Online Hate and Harassment: The American Experience 2023

By The Anti-Defamation League, Center for Technology & Society  

Over the past year, online hate and harassment rose sharply for adults and teens ages 13-17. Among adults, 52% reported being harassed online in their lifetime, the highest number we have seen in four years, up from 40% in 2022. Both adults and teens also reported being harassed within the past 12 months, up from 23% in 2022 to 33% in 2023 for adults and 36% to 51% for teens. Overall, reports of each type of hate and harassment increased by nearly every measure and within almost every demographic group. ADL conducts this nationally representative survey annually to find out how many American adults experience hate or harassment on social media; since 2022, we have surveyed teens ages 13-17 as well. The 2023 survey was conducted in March and April 2023 and spans the preceding 12 months. Online hate and harassment remain persistent and entrenched problems on social media platforms.

New York: ADL, 2023. 51p.

Social Media and Hate

By Shakuntala Banaji and Ramnath Bhat.

Using expert interviews and focus groups, this book investigates the theoretical and practical intersection of misinformation and social media hate in contemporary societies. Social Media and Hate argues that these phenomena, and the extreme violence and discrimination they initiate against targeted groups, are connected to the socio-political contexts, values and behaviours of users of social media platforms such as Facebook, TikTok, ShareChat, Instagram and WhatsApp. The argument moves from a theoretical discussion of the practices and consequences of sectarian hatred, through a methodological evaluation of quantitative and qualitative studies on this topic, to four qualitative case studies of social media hate and its effects on groups, individuals and wider politics in India, Brazil, Myanmar and the UK. The technical, ideological and networked similarities and connections between social media hate against people of African and Asian descent, indigenous communities, Muslims, Dalits, dissenters, feminists, LGBTQIA communities, Rohingya and immigrants across the four contexts are highlighted, stressing the need for an equally systematic political response.

London; New York: Routledge, 2022. 140p.

Online Hate and Harmful Content: Cross-National Perspectives

By Teo Keipi, Matti Näsi, Atte Oksanen, and Pekka Räsänen.

Over the past few decades, various types of hate material have caused increasing concern. Today, the scope of hate is wider than ever, as easy and often-anonymous access to an enormous amount of online content has opened the Internet up to both use and abuse. By providing possibilities for inexpensive and instantaneous access without ties to geographic location or a user identification system, the Internet has permitted hate groups and individuals espousing hate to transmit their ideas to a worldwide audience. Online Hate and Harmful Content focuses on the role of potentially harmful online content, particularly among young people. This focus is explored through two approaches: firstly, the commonality of online hate through cross-national survey statistics. This includes a discussion of the various implications of online hate for young people in terms of, for example, subjective wellbeing, trust, self-image and social relationships. Secondly, the book examines theoretical frameworks from the fields of sociology, social psychology and criminology that are useful for understanding online behaviour and online victimisation. Limitations of past theory are assessed and complemented with a novel theoretical model linking past work to the online environment as it exists today. An important and timely volume in this ever-changing digital age, this book is suitable for graduates and undergraduates interested in the fields of Internet and new media studies, social psychology and criminology. The analyses and findings of the book are also particularly relevant to practitioners and policy-makers working in the areas of Internet regulation, crime prevention, child protection and social work/youth work.

London; New York: Routledge, 2017. 154p.

Online Hate Speech in the European Union: A Discourse-Analytic Perspective

Edited by Stavros Assimakopoulos, Fabienne H. Baider, et al.

This open access book reports on research carried out as part of the European Union co-funded C.O.N.T.A.C.T. project, which targeted hate speech and hate crime across a number of EU member states. It showcases the bearing that discourse-analytic research can have on our understanding of this phenomenon, which is a growing global cause for concern. Although 'hate speech' is often incorporated in legal and policy documents, there is no universally accepted definition, which in itself warrants research into how hatred is both expressed and perceived. The research project synthesises discourse-analytic and corpus-linguistic techniques, and presents its key findings here. The focus is especially on online comments posted in reaction to news items that could trigger discrimination, as well as on the folk perception of online hate speech as revealed through semi-structured interviews with young individuals across the various partner countries.

Springer, 2020. 97p.