The Open Access Publisher and Free Library

SOCIAL SCIENCES

SOCIAL SCIENCES-SUICIDE-HATE-DIVERSITY-EXTREMISM-SOCIOLOGY-PSYCHOLOGY

Posts tagged hate speech
In Defense of Free Speech in Universities: A Study of Three Jurisdictions

By Amy Lai

In this book, Amy Lai examines the current free speech crisis in Western universities. She studies the origin, history, and importance of freedom of speech in the university setting, and addresses the relevance and pitfalls of political correctness and microaggressions on campuses where laws on harassment, discrimination, and hate speech are already in place, as well as other concepts that have gained currency in the free speech debate, including deplatforming, trigger warnings, and safe spaces. Looking at numerous free speech disputes in the United Kingdom, the United States, and Canada, the book argues for the equal application of the free speech principle to all expression in order to facilitate respectful debate. All in all, it affirms that the right to free expression is a natural right essential to the pursuit of truth, democratic governance, and self-development, and that this right is nowhere more important than in the university.

Ann Arbor: University of Michigan Press, 2023. 306p.

Hate Speech: Linguistic Perspectives

By Victoria Guillén-Nieto

Hate speech creates environments that are conducive to hate crimes and broad-scale conflict. This book discusses the mechanics of hate speech and its expression from a linguistic perspective. The author addresses the challenges that legal practitioners and linguists meet when dealing with hate speech, especially with the advent of social media, and offers the reader a comprehensive linguistic approach to the legal problem of hate speech.

Berlin/Boston: De Gruyter Mouton, 2023. 190p.

From Bad To Worse: Amplification and Auto-Generation of Hate

By The Anti-Defamation League, Center for Technology and Society

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies actually exacerbate virulent content themselves? The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies from responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL and TTP (Tech Transparency Project) show how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While there are many variables contributing to online hate, including individual users’ own behavior, our research demonstrates how these companies are taking things from bad to worse.

For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories as well as popular internet personalities, commentators, and video games across four of the biggest social media platforms, to test how these companies’ algorithms would work. In the first study, three of four platforms recommended even more extreme, contemptuously antisemitic, and hateful content. One platform, YouTube, did not take the bait. It was responsive to the persona but resisted recommending antisemitic and extremist content, proving that it is not just a problem of scale or capability.

In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience, by autocompleting terms and, in some cases, even auto-generating content to fill in hate data voids. Notably, the companies didn’t autocomplete terms or auto-generate content for other forms of offensive content, such as pornography, proving, again, that this is not just a problem of scale or capability.

What these investigations ultimately revealed is that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism.

As debates rage between legislators, regulators, and judges on AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more. Based on our findings, here are three recommendations for industry and government:

Tech companies need to fix the product features that currently escalate antisemitism and auto-generate hate and extremism. Tech companies should tune their algorithms and recommendation engines to ensure they are not leading users down paths riddled with hate and antisemitism. They should also improve predictive autocomplete features and stop auto-generation of hate and antisemitism altogether.

Congress must update Section 230 of the Communications Decency Act to fit the reality of today’s internet. Section 230 was enacted before social media and search platforms as we know them existed, yet it continues to be interpreted to provide those platforms with near-blanket legal immunity for online content, even when their own tools are exacerbating hate, harassment and extremism. We believe that by updating Section 230 to better define what type of online activity should remain covered and what type of platform behavior should not, we can help ensure that social media platforms more proactively address how recommendation engines and surveillance advertising practices are exacerbating hate and extremism, which leads to online harms and potential offline violence. With the advent of social media, the use of algorithms, and the surge of artificial intelligence, tech companies are more than merely static hosting services. When there is a legitimate claim that a tech company played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court.

We need more transparency. Users deserve to know how platform recommendation engines work. This does not need to be a trade secret-revealing exercise, but tech companies should be transparent with users about what they are seeing and why. The government also has a role to play. We’ve seen some success on this front in California, where transparency legislation was passed in 2022. Still, there’s more to do. Congress must pass federal transparency legislation so that stakeholders (the public, researchers, and civil society) have access to the information necessary to truly evaluate how tech companies’ own tools, design practices, and business decisions impact society.

Hate is on the rise. Antisemitism both online and offline is becoming normalized. A politically charged U.S. presidential election is already under way. This is a pressure cooker we cannot afford to ignore, and tech companies need to take accountability for their role in the ecosystem.

Whether you work in government or industry, are a concerned digital citizen, or are a tech advocate, we hope you find this pair of reports to be informative. There is no single fix to the scourge of online hate and antisemitism, but we can and must do more to create a safer and less hate-filled internet.

New York: ADL, 2023. 18p.

Hate in the Lone Star State: Extremism & Antisemitism in Texas

By The Anti-Defamation League, Center on Extremism

Since the start of 2021, Texas has experienced a significant amount of extremist activity. One driver of this phenomenon is Patriot Front, a white supremacist group that has distributed propaganda across Texas – and the rest of the U.S. – with alarming frequency, using the state as a base of operations. Two other factors are extremists who continue to target the LGBTQ+ community and QAnon supporters who have gathered for conferences and rallies across the state.

Texas has also seen a significant increase in antisemitic incidents over the last two years. It recorded the country’s fifth-highest number of antisemitic incidents in 2022, at a time when ADL has tracked the highest-ever number of antisemitic incidents nationwide.

This report explores a range of extremist groups and movements operating in Texas and highlights the key extremist and antisemitic trends and incidents in the state in 2021 and 2022. It also includes noteworthy events and incidents from the first half of 2023.

Key Statistics

  • Antisemitic Incidents: According to the ADL’s annual Audit of Antisemitic Incidents, Texas has seen a dramatic rise in antisemitic incidents in recent years. In 2022, the number of incidents increased by 89% from 2021 levels, rising from 112 to 212 incidents. Since 2021, ADL has tracked a total of 365 incidents in the state.

  • Extremist Plots and Murders: In 2021 and 2022, ADL documented two extremist murders and six terrorist plots in Texas. In 2023, a gunman who embraced antisemitism, misogyny and white supremacy opened fire in a mall parking lot in Allen, killing eight people and wounding seven more before police shot and killed him.

  • Extremist Events: Since 2021, ADL has documented 28 extremist events in Texas, including banner drops, flash demonstrations, training events, fight nights, protests, rallies and meetings.

  • White Supremacist Propaganda: In 2022, ADL documented 526 instances of white supremacist propaganda distributions across Texas, a 60% increase from 2021 (329). There have been 1,073 propaganda incidents since 2021. The groups responsible for the majority of the incidents were Patriot Front and the Goyim Defense League (GDL).

  • Hate Crimes Statistics: According to the latest FBI hate crimes statistics from 2021, there were 542 reported hate crimes in Texas in that year, an increase of 33% from the 406 incidents recorded in 2020. Hate and bias crime data in Texas and nationally highlights how hate crimes disproportionately impact the Black community.

  • Insurrection Statistics: Seventy-four of the 968 individuals logged by the George Washington University Program on Extremism who have been charged in relation to the January 6, 2021 attack on the U.S. Capitol are Texas residents, the second most in the nation.

  • ADL and Princeton’s Bridging Divides Initiative Threats and Harassment Dataset: The Threats and Harassment Dataset (THD) tracks unique incidents of threats and harassment against local U.S. officials between January 1, 2020, and September 23, 2022, in three policy areas (election, education and health). Texas recorded seven incidents of threats and harassment against local officials.

New York: ADL, Center on Extremism, 2023. 23p.

Hate in the Prairie State: Extremism & Antisemitism in Illinois

By The Anti-Defamation League

In May 2023, a man outraged over abortion rights set his sights on a building in Danville, Illinois, that was slated to become a clinic offering women’s health services, including abortions. The man, Philip Buyno of Prophetstown, allegedly filled containers with gasoline and loaded them into his car. His alleged efforts to destroy the clinic – by ramming his car into the building and throwing a gas can into the space – failed, and he was arrested. He later told the FBI he’d “finish the job” if given the chance.

Buyno was an extremist, intent on attacking his perceived enemy no matter the cost. Over the past several years, Americans have witnessed a barrage of extremist activity: attacks on our democratic institutions, antisemitic incidents, white supremacist propaganda efforts, vicious, racially motivated attacks, bias crimes against the LGBTQ+ community and violent threats to women’s healthcare providers.

Illinoisans have watched these same hatreds – and more – manifest in their own state.

This report explores a range of extremist groups and movements operating in Illinois and highlights the key extremist and antisemitic trends and incidents in the state in 2021 and 2022. It also includes noteworthy events and incidents from the first half of 2023.

There is no single narrative that tells the story of extremism and hate in Illinois. Instead, the impact is widespread and touches many communities. As in the rest of the country, both white supremacist and antisemitic activity have increased significantly over the last two years, but that’s not the whole story.

The Prairie State is also home to a sizeable number of current and former law enforcement officers who have at one point belonged to or associated with extremist organizations or movements. Our research additionally shows a continued threat to Illinois’s women’s health facilities, which have been targeted with arson and other violent plots by anti-abortion extremists. This reflects the broader, national threat to reproductive rights.

Key Statistics

Antisemitic Incidents: According to ADL’s annual Audit of Antisemitic Incidents, Illinois has seen a dramatic rise in antisemitic incidents in recent years. In 2022, the number of incidents increased by 128% from 2021 levels, rising from 53 to 121. The state’s total was the seventh-highest number of incidents in the country in a year when ADL tracked the highest-ever number of antisemitic incidents nationwide. This is a dramatic increase from 2016, when there were 10 incidents. Preliminary numbers through June 2023 indicate that there have been at least 33 additional antisemitic incidents in the state.

Extremist Plots and Murders: In 2021 and 2022, ADL documented one extremist murder in Illinois. In November 2022, a man allegedly intentionally drove the wrong way on an interstate highway and crashed into another car, killing the driver. The man said he wanted to kill himself after being convicted of crimes committed while participating in the January 6 insurrection, and he has been charged with additional crimes, including first-degree murder.

Extremist Events: Since 2021, ADL has documented four white supremacist extremist events in Illinois, predominantly marches and protests.

White Supremacist Propaganda: In 2022, ADL documented 198 instances of white supremacist propaganda distributions across Illinois, an increase of 111% from 2021 (94). Through May 2023, there have been an additional 64 white supremacist propaganda incidents. Patriot Front was responsible for a large majority of white supremacist propaganda throughout Illinois.

Hate Crimes Statistics: According to the latest FBI hate crimes statistics available, there were 101 reported hate crimes in Illinois that targeted a variety of communities, including Jewish, Black and Asian American and Pacific Islander. This total was an increase of 80% from the 56 incidents recorded in 2020.

Insurrection Statistics: Thirty-six of the 968 individuals logged by the George Washington University Program on Extremism who have been charged in relation to the January 6, 2021 attack on the U.S. Capitol are Illinois residents.

ADL and Princeton University’s Bridging Divides Initiative Threats and Harassment Dataset: The Threats and Harassment Dataset (THD) tracks unique incidents of threats and harassment against local U.S. officials between January 1, 2020, and September 23, 2022, in three policy areas (election, education and health). Illinois recorded six incidents of threats and harassment against local officials.

New York: ADL, Center on Extremism, 2023. 24p.

From Bad to Worse: Auto-generating & Autocompleting Hate

By The Anti-Defamation League, Center for Technology and Society

Executive Summary

Do social media and search companies exacerbate antisemitism and hate through their own design and system functions? In this joint study by the ADL Center for Technology and Society (CTS) and Tech Transparency Project (TTP), we investigated search functions on both social media platforms and Google. Our results show how these companies’ own tools – such as autocomplete and auto-generation of content – made finding and engaging with antisemitism easier and faster. In some cases, the companies even helped create the content themselves.

Our researchers compiled a list of 130 hate groups and movements from ADL’s Glossary of Extremism, picking terms that were tagged in the glossary with all three of the following categories: “groups/movements,” “white supremacist,” and “antisemitism.” The researchers then typed each term into the respective search bars of Facebook, Instagram and YouTube, and recorded the results (a minimal sketch of this term-selection step appears after the key findings).

Key Findings

  • Facebook, Instagram, and YouTube are each hosting dozens of hate groups and movements on their platforms, many of which violate the companies’ own policies but were easy to find via search. Facebook and Instagram, in fact, continue hosting some hate groups that parent company Meta has previously banned as “dangerous organizations.”

  • All of the platforms made it easier to find hate groups by predicting searches for the groups as researchers began typing them in the search bar.

  • Facebook automatically generated business Pages for some hate groups and movements, including neo-Nazis. Facebook does this when a user lists an employer, school, or location in their profile that does not have an existing Page – regardless of whether it promotes hate.

  • YouTube auto-generated channels and videos for neo-Nazi and white supremacist bands, including one with a song called “Zyklon Army,” referring to the poisonous gas used by Nazis for mass murder in concentration camps.

  • In a final test, researchers examined the “knowledge panels” that Google displays on search results for hate groups and found that Google in some cases provides a direct link to official hate group websites and social media accounts, increasing their visibility and ability to recruit new members.
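The term-selection step described above can be illustrated with a small, hedged sketch. The glossary entries, field names, and tag labels below are invented for illustration; ADL’s actual Glossary of Extremism is not distributed in this form.

```python
# Illustrative sketch (hypothetical data): keep only glossary terms tagged with
# all three required categories, as described in the study's methodology.
REQUIRED_TAGS = {"groups/movements", "white supremacist", "antisemitism"}

glossary = [  # stand-in for ADL's Glossary of Extremism; entries are made up
    {"term": "Example Group A", "tags": {"groups/movements", "white supremacist", "antisemitism"}},
    {"term": "Example Slogan B", "tags": {"slogan", "antisemitism"}},
]

# Keep only entries whose tag set contains every required tag.
search_terms = [e["term"] for e in glossary if REQUIRED_TAGS <= e["tags"]]
print(search_terms)  # each surviving term would then be typed into each platform's search bar
```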

New York: Anti-Defamation League, Center for Technology and Society, 2023. 18p.

Extremist Offender Management in Europe: Country Reports

By The International Center for the Study of Radicalization (ICSR)

The ten country papers in this volume are part of a project which has investigated policies and approaches towards extremist prisoners across Europe. They formed the empirical basis for our report, Prisons and Terrorism: Extremist Offender Management in 10 European Countries (London: ICSR, 2020), which was published in July 2020 and is available from www.icsr.info. Our aim was to identify trade-offs and dilemmas but also principles and best practices that may help governments and policymakers spot new ideas and avoid costly and counterproductive mistakes. In doing so, we commissioned local experts to write papers on the situation in their respective countries. To make sure that findings from the different case studies could be compared, each author was asked to address the same topics and questions (Appendix I), drawing on government statistics, reports, interviews with various stakeholders, and their own, previously published research.

The resulting data is inevitably imperfect. For example, there is a large ‘known unknown’ that relates to the post-release situation. It is possible that many inmates who adopt extremist ideas or associate with extremist networks in prison simply abandon and disassociate from them upon release. Likewise, some cases that are often portrayed as instances of prison radicalisation are difficult to verify. Anis Amri, the perpetrator of the 2016 Christmas market attack in Berlin, reportedly radicalised in Sicilian prisons between 2011 and 2015. Yet there are few details on his prison stay, and in any case, it is apparent that his subsequent involvement in the extremist milieus of Düsseldorf and Berlin was just as important as his time in prison.

Nevertheless, our contributors’ collective insight – often based on years of study of the countries in question – into this subject is our project’s unique strength. The picture they paint is one of European countries trying to grapple with a challenging – and rapidly changing – situation, as many European countries had to deal with an increase and diversification of their extremist offender populations, raising systemic questions about prison regimes, risk assessments, probation schemes, and opportunities for rehabilitation and reintegration that had previously often been dealt with on a case-by-case basis. Many of the questions raised in this volume will undoubtedly keep policymakers and societies busy for years. The papers – together with our report – are a first, systematic contribution towards tackling them.

London: ICSR, King’s College London, 2020. 104p.

Moralized language predicts hate speech on social media

By Kirill Solovev, Nicolas Pröllochs

Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet, the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts and ∼35.5 million corresponding replies from Twitter that have been authored by societal leaders across three domains (politics, news media, and activism). Subsequently, we used textual analysis and machine learning to analyze whether moralized language carried in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35% and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media.
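As a rough illustration of how a per-word change in the odds of receiving hate speech can be read off a regression coefficient, the hedged sketch below fits a logistic regression to synthetic data. The variable names, counts, and single-predictor setup are assumptions for illustration only, not the study’s actual pipeline.

```python
# Hedged sketch with synthetic data: relate a per-word logistic-regression
# coefficient to a percentage change in the odds of receiving hate speech.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
moral_words = rng.poisson(lam=2.0, size=n)        # hypothetical moral-word count per source post
true_beta = 0.12                                  # assumed log-odds increase per additional word
p = 1.0 / (1.0 + np.exp(-(-2.0 + true_beta * moral_words)))
got_hate_reply = rng.binomial(1, p)               # 1 = at least one hateful reply (simulated)

model = LogisticRegression().fit(moral_words.reshape(-1, 1), got_hate_reply)
beta_hat = model.coef_[0, 0]

# Each additional moral word multiplies the odds by exp(beta_hat); expressed as a
# percentage, this is the kind of figure reported in the abstract (e.g., ~10-16%).
pct_increase = (np.exp(beta_hat) - 1.0) * 100.0
print(f"Estimated increase in odds per additional moral word: {pct_increase:.2f}%")
```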

PNAS Nexus, Volume 2, Issue 1, January 2023, pgac281

The (moral) language of hate

By Brendan Kennedy, Preni Golazizian, Jackson Trager, Mohammad Atari, Joe Hoover, Aida Mostafazadeh Davani, Morteza Dehghani

Humans use language toward hateful ends, inciting violence and genocide, intimidating and denigrating others based on their identity. Despite efforts to better address the language of hate in the public sphere, the psychological processes involved in hateful language remain unclear. In this work, we hypothesize that morality and hate are concomitant in language. In a series of studies, we find evidence in support of this hypothesis using language from a diverse array of contexts, including the use of hateful language in propaganda to inspire genocide (Study 1), hateful slurs as they occur in large text corpora across a multitude of languages (Study 2), and hate speech on social-media platforms (Study 3). In post hoc analyses focusing on particular moral concerns, we found that the type of moral content invoked through hate speech varied by context, with Purity language prominent in hateful propaganda and online hate speech and Loyalty language invoked in hateful slurs across languages. Our findings provide a new psychological lens for understanding hateful language and point to further research into the intersection of morality and hate, with practical implications for mitigating hateful rhetoric online.

PNAS Nexus, Volume 2, Issue 7, July 2023.

Hate in the Bay State: Extremism & Antisemitism in Massachusetts 2021-2022

By The Anti-Defamation League

Over the last two years, extremist activity in Massachusetts has mirrored developments on the national stage. Like the rest of the country, Massachusetts has seen white supremacists – including the Nationalist Social Club – increasingly make their presence known. The Bay State has also reported extensive propaganda distribution efforts, especially by Patriot Front, which resulted in Massachusetts recording the country’s second-highest number of white supremacist propaganda incidents in 2022.

Amidst increasing nationwide threats to the LGBTQ+ community, Massachusetts has also witnessed a spike in anti-LGBTQ+ activity, including waves of harassment against Boston Children’s Hospital, drag performances and LGBTQ+ events. And as the numbers of antisemitic incidents continue to rise across the country, Massachusetts was no exception. According to ADL’s annual Audit of Antisemitic Incidents, it was the sixth most affected state in the country in 2022.

This report will explore the full range of extremist groups and movements operating in Massachusetts and highlight the key extremist and antisemitic trends and incidents in the state in 2021 and 2022.

New York: ADL, 2022. 18p.

Hate is No Game: Hate and Harassment in Online Games, 2022

By The Anti-Defamation League, Center for Technology & Society

In 2021, ADL found that nearly one in ten gamers between ages 13 and 17 had been exposed to white-supremacist ideology and themes in online multiplayer games. An estimated 2.3 million teens were exposed to white-supremacist ideology in multiplayer games like Roblox, World of Warcraft, Fortnite, Apex Legends, League of Legends, Madden NFL, Overwatch, and Call of Duty. Hate and extremism in online games have worsened since last year. ADL’s annual report on experiences in online multiplayer games shows that the spread of hate, harassment, and extremism in these digital spaces continues to grow unchecked. Our survey explores the social interactions, experiences, attitudes, and behaviors of online multiplayer gamers ages 10 and above nationwide.

New York: ADL, 2022. 38p.

Pick the Lowest Hanging Fruit: Hate Crime Law and the Acknowledgment of Racial Violence

By Jeannine Bell

The U.S. has had remedies aimed at racial violence since the Ku Klux Klan Act was passed in the 1870s. Hate crime law, which is more than thirty years old, is the most recent incarnation. The passage of hate crime law, first at the federal level and later by the states, has done very little to slow the rising tide of bigotry. After a brief discussion of state and federal hate crime law, this Article will critically examine the country’s approach to hate crime. The Article will then discuss one of the most prevalent forms of hate crime—bias-motivated violence that targets individuals in their homes. The Article will conclude with a discussion of the approach taken by the Justice Department in the Ahmaud Arbery case as a potentially positive solution for the handling of hate crime cases.

112 J. Crim. L. & Criminology 691 (2022).

Countering and Addressing Online Hate Speech: A Guide for policy makers and practitioners

By The United Nations with the Economic and Social Research Council (ESRC) Human Rights, Big Data and Technology Project at the University of Essex

Today social media has become another vehicle for hate speech, with the ability to spread information at a speed never seen before, reaching potentially huge audiences within a few seconds. The manner in which many platforms operate feeds on hateful and discriminatory content, and provides echo chambers for such narratives. Online hate speech has led to real-world harm. We have seen this from incidents of identity-based violence where the perpetrators were instigated through online hate, to its widespread use to dehumanize and attack entire populations on the basis of identity. Unfortunately, many times the victims are those already most marginalized in society, including ethnic, religious, national or racial minorities, refugees and migrants, women and men, sexual orientation and gender identity minorities.

New York: United Nations, 2023. 20p.

A Year of Hate: Anti-Drag Mobilisation Efforts Targeting LGBTQ+ People in the UK

By Aoife Gallagher

Research by the Institute for Strategic Dialogue (ISD) has found that in the year since June 2022, anti-drag mobilisation in the UK has become a key focus for a variety of groups and actors. Anti-vaxxers, white nationalist groups, influential conspiracy theorists and “child protection” advocates have at times formed an uneasy – even fractious – coalition of groups opposing all-ages drag events. The driving force behind these protests is a mix of far-right groups and COVID-19 conspiracists.

While public debate about what is appropriate entertainment for children, and at what ages, is absolutely legitimate and deserves fair hearing, the identified tactics used by these actors only serve to undermine that discussion, with chilling consequences for free expression, and create fertile ground for a potential uptick in violence. Furthermore, our analysis has found evidence that the UK is importing anti-LGBTQ+ rhetoric and strategies from similar movements in the US, with the “groomer” slur – used to frame LGBTQ+ people as a danger to children – becoming commonplace among anti-LGBTQ+ campaigners. Even though UK activity has not reached the level of violence seen in the US, abuse and harassment of hosts, performers and attendees at such events is a regular occurrence, and multiple events have been cancelled due to safety concerns. This report documents anti-drag activity in the UK by searching news reports, Twitter mentions and messages shared in relevant UK Telegram channels and groups. It outlines the actors involved, the tactics used and the impact of such activity between June 1, 2022, and May 27, 2023.

Amman; Berlin; London; Paris; Washington DC: Institute for Strategic Dialogue, 2023. 20p.

A Year of Hate: Anti-drag Mobilization Efforts Targeting LGBTQ+ People in the US

By Clara Martiny and Sabine Lawrence

This country profile provides an analysis of on- and offline anti-drag mobilization in the United States; key tactics used by groups and individuals protesting drag events; and principal narratives deployed against drag performers. Through ethnographic monitoring of relevant US-based Telegram channels, Twitter profiles, Facebook groups, and use of external resources such as the Armed Conflict Location and Event Data Project (ACLED), Crowd Counting Consortium, and previous reports on anti-drag activity by groups such as GLAAD and the Southern Poverty Law Center (SPLC), ISD analysts compiled, categorized and analyzed anti-drag protests or online threats against drag events from June 1, 2022 to May 20, 2023.

The findings of this research reveal that the first five months of 2023 have seen more incidents of anti-drag protests, online and offline threats, and violence (97 in total; average of 19.4 per month) than in the last seven months of 2022 (106 in total; average of 15.1 per month). Notably, ISD analysts find that the actors behind anti-drag activity are not just traditional anti-LGBTQ+ groups but include growing numbers of assorted other actors, from local extremists and white supremacists through to parents’ rights activists, members of anti-vaxxer groups, and Christian nationalists. ISD also finds an increasing number of incidents where online hate speech has manifested in offline activity – for example, a popular online slur being found spray painted on a location hosting a drag event. This report also shows the concerning upward trend of anti-drag mobilization across the US, and shows how it harms the LGBTQ+ community, small businesses, and parents, and poses serious risks to community security throughout the nation. And, while public debate about what is appropriate entertainment for children, and at what ages, is absolutely legitimate and deserves fair hearing, the identified tactics only serve to undermine that discussion, with chilling consequences for free expression, and create fertile ground for a potential uptick in violence.

Amman; Berlin; London; Paris; Washington DC: Institute for Strategic Dialogue, 2023. 24p.

A Year of Hate: Understanding Threats and Harassment Targeting Drag Shows and the LGBTQ+ Community

By Tim Squirrell and Jacob Davey

Internationally, rising hate and extremism pose an existential threat to human rights and democratic freedoms. LGBTQ+ communities are often the first group to come under attack, and understanding the contours of these assaults matters both for the protection of these communities and to be better able to safeguard human rights and democracy more broadly. In new research by ISD, including four country profiles, we examine the trends in anti-LGBTQ+ hate and extremism with a particular focus on harassment targeting all-ages drag shows. In this report, ISD analyses the narratives, themes, actors and tactics involved in anti-drag activism in the US, UK, Australia and France. It examines the footprint of 274 anti-drag mobilisations: 11 in Australia, 3 in France, 57 in the UK and 203 in the USA. Anti-drag activity was also found in Ireland, Finland, Sweden and Switzerland as well as other European countries during the reporting period, usually in isolated cases. Due to finite resources these instances were not analysed in depth, but would merit further research. This research draws on ethnographic monitoring of over 150 Telegram channels, Twitter profiles and Facebook groups, as well as external resources such as news reports, Armed Conflict Location & Event Data (ACLED) and the Crowd Counting Consortium, and previous reports on anti-drag activity by GLAAD and the Southern Poverty Law Center.

Amman; Berlin; London; Paris; Washington DC: Institute for Strategic Dialogue, 2023. 19p.

Understanding Anti-Roma Hate Crimes and Addressing the Security Needs of Roma and Sinti Communities: A Practical Guide

By Organization for Security and Co-operation in Europe

The purpose of this Guide is to describe and analyze hate incidents and hate crimes faced by Roma and Sinti, as well as the corresponding security challenges. Considering cases from many of the 57 OSCE participating States, this Guide highlights measures that promote safety and security without discrimination, in line with OSCE commitments. This Guide provides relevant stakeholders – government officials, political representatives, civil society and the broader public – with an overview of the situations Roma and Sinti communities face, an analysis of their corresponding security needs and areas where positive actions could improve their access to rights.

Warsaw: OSCE Office for Democratic Institutions and Human Rights (ODIHR) , 2023. 138p.

Online Hate and Harassment: The American Experience 2023

By The Anti-Defamation League, Center for Technology & Society  

Over the past year, online hate and harassment rose sharply for adults and teens ages 13-17. Among adults, 52% reported being harassed online in their lifetime, the highest number we have seen in four years, up from 40% in 2022. Both adults and teens also reported more harassment within the past 12 months, up from 23% in 2022 to 33% in 2023 for adults and from 36% to 51% for teens. Overall, reports of each type of hate and harassment increased by nearly every measure and within almost every demographic group. ADL conducts this nationally representative survey annually to find out how many American adults experience hate or harassment on social media; since 2022, we have surveyed teens ages 13-17 as well. The 2023 survey was conducted in March and April 2023 and spans the preceding 12 months. Online hate and harassment remain persistent and entrenched problems on social media platforms.

New York: ADL, 2023. 51p.

Auditing Elon Musk’s Impact on Hate Speech and Bots

By Daniel Hickey, Matheus Schmitz, Daniel Fessler, Paul E. Smaldino, Goran Muric, Keith Burghardt

On October 27th, 2022, Elon Musk purchased Twitter, becoming its new CEO and firing many top executives in the process. Musk listed fewer restrictions on content moderation and removal of spam bots among his goals for the platform. Given findings of prior research on moderation and hate speech in online communities, the promise of less strict content moderation poses the concern that hate will rise on Twitter. We examine the levels of hate speech and prevalence of bots before and after Musk’s acquisition of the platform. We find that hate speech rose dramatically upon Musk purchasing Twitter and the prevalence of most types of bots increased, while the prevalence of astroturf bots decreased.
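For readers who want to see how such a before-and-after comparison might be set up, the hedged sketch below contrasts the share of sampled tweets flagged as hateful before and after an acquisition date using a simple two-proportion z-test. The counts are invented, and the paper’s actual methodology (hate speech classification, bot detection, daily time series) is considerably more involved.

```python
# Hedged sketch with invented counts: compare the proportion of sampled tweets
# flagged as hate speech before vs. after the acquisition date.
import numpy as np
from scipy.stats import norm

hate_before, total_before = 1_200, 400_000   # hypothetical flagged / sampled tweets, pre-acquisition
hate_after, total_after = 2_100, 400_000     # hypothetical counts, post-acquisition

p1, p2 = hate_before / total_before, hate_after / total_after
p_pool = (hate_before + hate_after) / (total_before + total_after)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / total_before + 1 / total_after))
z = (p2 - p1) / se
p_value = 2 * norm.sf(abs(z))                # two-sided test of equal proportions

print(f"rate before: {p1:.4%}  rate after: {p2:.4%}  z = {z:.2f}  p = {p_value:.2g}")
```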

Pre-publication: 2023. 6p.