Open Access Publisher and Free Library

SOCIAL SCIENCES

EXCLUSION-SUICIDE-HATE-DIVERSITY-EXTREMISM-SOCIOLOGY-PSYCHOLOGY-INCLUSION-EQUITY-CULTURE

Posts tagged Hate
Hate of the Nation: A Landscape Mapping of Observable, Plausibly Hateful Speech on Social Media

By Jacob Davey, Jakob Guhl, and Carl Miller

As Ofcom prepared for its duties as the UK’s incoming social media regulator, it commissioned ISD to produce two reports to better understand the risk of UK users encountering online terrorists, incitement to violence, and hate content across a range of digital services. This report provides an overview of public English-language messages collected from Facebook, Instagram, Twitter, Reddit, and 4chan across the month of August 2022 which we class as ‘plausibly hateful’. This is where at least one of the reasonable interpretations of the message is that it seeks to dehumanize, demonize, express contempt or disgust for, exclude, harass, threaten, or incite violence against an individual or community based on a protected characteristic. Protected characteristics are understood to be race, national origin, disability, religious affiliation, sexual orientation, sex, or gender identity.

Amman | Berlin | London | Paris | Washington DC: Institute for Strategic Dialogue. 2023. 34p.

30 Years of Trends in Terrorist and Extremist Games 

By Emily Thompson and Galen Lamphere-Englund

Violent extremist, terrorist, and targeted hate actors have been actively exploiting video games to propagandize, recruit, and fundraise for more than 30 years. This report presents an analysis of that history using a unique dataset, the Extremist and Terrorist Games Database (ETGD), developed by the authors. It contains 155 reviewed entries of standalone games, modifications for existing games (mods), and browser‑based games dating from 1982 to 2024. The titles analyzed appear across the ideological spectrum: far right (101 titles), jihadist (24), far left (1), and other forms of extremism and targeted hate (29), including school‑massacre ideation (12). They span platforms ranging from simple standalone games for Atari in the 1980s to sophisticated mods for some of today’s most popular games. The number of titles has increased year on year, in line with global conflict and extremist ideological trends, revealing a continued push by malicious actors to exploit gaming. Meanwhile, the means of distribution have shifted from violent extremist organizations and marketplaces – such as white supremacist, neo‑Nazi, and jihadist organizations – to distributed repositories of extremist games hosted on internet archives, Ethereum‑hosted file‑sharing, Telegram, and, under subtly coded titles, mainstream platforms like Steam. While most of the titles in the ETGD are available for free, several that have been sold (often at symbolic prices like $14.88 or $17.76) appear to have generated revenue for groups ranging from Hezbollah to the National Alliance, an American neo‑Nazi group. Through new analysis of Steam data, we also show that a small number of extremist and targeted hate titles have generated an estimated total of almost $600,000 in revenue for small publishers on the platform.
Far from being a comprehensive analysis of the ETGD, we intend this preliminary launch report to form a basis for future research on the dataset and a framework for continued contributions to the ETGD from Extremism and Gaming Research Network (EGRN) members. Above all, we seek to contribute to sensible policymaking to prevent violent extremism that situates games as part of a wider contested and exploited information space, which deserves far more attention from those working towards peaceful ends. Complete recommendations are provided in the conclusion section of this report but include the following:

1. Prohibit and prevent violent extremist exploitation: Gaming platforms should explicitly prohibit violent extremist and terrorist behaviors and content. Leadership exists here from Twitch, Discord, Microsoft/Xbox, and the affiliated Activision‑Blizzard.
   a. Audio and video platforms, such as Spotify, Apple Music, and YouTube, should seek to identify extremist gaming content currently available under misleading titles and tags.
   b. Flag and remove extremist titles across platforms: Hashing and preventing outlinking to ETGD games and links should be a priority across platforms.
2. Improve reporting mechanisms: Platforms must improve reporting mechanisms to make it easier for players to report violative content found in games and in‑game conduct.
3. Understand and take down distributed repositories: Larger repositories of extremist gaming content readily available on the surface web accelerate user exposure.
4. Collaborate across sectors: Addressing the spread of extremist games requires a collaborative effort between tech companies, government agencies, and civil society organizations.
5. Educate across sectors: Programmes supporting educators and frontline community moderators should be developed.
6. Support research and innovation: Including cross‑sector initiatives like the Global Network on Extremism and Technology (GNET) and EGRN, which produced this database.
7. Enhance regulatory frameworks: Governments should update regulatory frameworks applying to digital platforms, recognizing the nuances of gaming platforms and complying with human rights.
8. Encourage positive community engagement: Thoughtful, well-designed community guidelines, moderation policies, and reporting mechanisms can support community‑building.

London: The Global Network on Extremism and Technology (GNET), 2024. 40p.

Communities of Hateful Practice: The Collective Learning of Accelerationist Right-Wing Extremists, With a Case Study of the Halle Synagogue Attack

By Michael Fürstenberg

In the past, far-right aggression predominantly focused on national settings and street terror against minorities; today, however, it is increasingly embedded in global networks and acts within a strategic framework aimed at revolution, targeting the liberal order as such. Ideologically combining antisemitism, racism, and anti-feminism/anti-LGBTQI, adherents of this movement see modern societies as degenerate and weak, with the only solution being a violent collapse that they attempt to accelerate with their actions. The terrorist who attacked the synagogue and a kebab shop in Halle, Germany, in October 2019 clearly identified with this transnational community and situated his act as a continuation of a series of attacks inspired by white supremacy in the past decade. The common term ‘lone wolf’ for these kinds of terrorists is in that sense a misnomer, as they are embedded in digital ‘wolf packs’. Although this movement is highly decentralized and heterogeneous, there are interactive processes that connect and shape the online milieu of extremists into more than the sum of its parts, forming a structure that facilitates a certain degree of cohesion, strategic agency, and learning. This paper uses the model of collective learning outside formal organizations to analyze how the revolutionary accelerationist right as a community of practice engages in generating collective identities and knowledge that are used in the service of their acts of death and destruction.

Halle (Saale), Germany: Max Planck Institute for Social Anthropology, 2022. 62p.

Defining and Identifying Hate Motives: Bias Indicators For The Australian Context

By Matteo Vergani,  Angelique Stefanopoulos, Alexandra Lee, Haily Tran, Imogen Richards, Dan Goodhardt, Greg Barton

Bias indicators – that is, facts, circumstances, or patterns that suggest that an act was motivated in whole or in part by bias – can be a useful tool for stakeholders working on tackling hate crimes. Government and non-government agencies can use them to improve and standardise data collection around hate crimes, which can have a cascade of positive effects. For example, they can help to demonstrate in court the prejudice motivation of a crime – and we know that this is often hard in Australia, because the legislation sets a very high threshold for proving hateful motivation. They can also improve the precision of measurements of the prevalence of hate crimes in communities, which is necessary for planning appropriate mitigation policies and programmes and for assessing their impact. Bias indicators can also help non-government organisations ensure that their data collection and research are reliable, consistent, and a powerful tool for advocacy and education. We acknowledge that bias indicators can be misused: our lists are not to be read as exhaustive, and users should take them as examples only. Also, incidents can present bias indicators from multiple lists, and coders should not stop at coding an incident as targeting one identity only. Importantly, our bias indicator lists should not be used by practitioners to assess whether an incident is bias motivated or not. The absence of bias indicators does not mean that an incident is not hate motivated – for example, where a victim or a witness perceives that there was a prejudice motivation. At the same time, the presence of a bias indicator does not necessarily demonstrate that an incident is bias motivated (as the term ‘indicator’ implies). Ultimately, a judge will make this decision.
In the Australian context, we propose that bias indicators be used to support data collection and to make sure that all potentially useful evidence is collected when an incident is reported. This report is structured in two parts: in Part 1, we introduce and discuss the concept of bias indicators, including their uses, benefits, and risks. In Part 2, we present a general list of bias indicators (which might be used to code a hate-motivated incident), followed by discrete lists of bias indicators for specific target identities. We also present a separate list of online bias indicators, which might apply to one or more target identities. We are keen to engage with government and non-government agencies that plan to use bias indicators and find this report useful, and we welcome opportunities to share additional insights from our research.

Melbourne: Centre for Resilient and Inclusive Societies. 2022. 40p.

Enhancing Support for Asian American Communities Facing Hate Incidents: Community Survey Results from Los Angeles and New York City

By Lu Dong, Jennifer Bouey, Grace Tang, Stacey Yi, Douglas Yeung, Rafiq Dossani, June Lim, Yannan Li, and Steven Zhang

Since the onset of the coronavirus pandemic, Asian American communities have faced a new wave of anti-Asian hate throughout the United States. Given diverse communication channels that are clustered by ethnicity, language preferences, and immigration generations within Asian American populations, there is a pressing need for culturally and linguistically appropriate strategies to raise awareness of available services to address anti-Asian hate. Community-based organizations (CBOs) play a crucial role in this regard, but they require tailored strategies to effectively reach and support Asian American communities. The authors conducted a community survey in Los Angeles (LA) and New York City (NYC) to provide CBOs that serve Asian and Asian American communities with important insights to enhance outreach and support strategies, ensuring that these strategies are accessible and effectively meeting the needs of community members who are affected by anti-Asian discrimination and violence.

Key Findings

  • Among survey respondents, who were mostly from Chinese, Korean, and Thai ethnic groups, 37 percent of participants reported experiencing an anti-Asian hate incident; rates were similar in LA and NYC.

  • English-speaking respondents, younger (18–24 years old) respondents, and respondents from higher income brackets were more likely to report experiencing an anti-Asian hate incident.

  • About 61 percent of respondents indicated that they would report a hate incident to the police, and 61 percent would also seek help from CBOs that provide support services to hate-crime victims. Only 37 percent of respondents would use local community service numbers (211 or 311), and 13 percent indicated that they would not take any action. First-generation immigrants were more likely to take action than were later generations.

  • Major barriers to reporting incidents include language issues, lack of time, and lack of awareness of available resources. Approximately 45 percent of participants were unaware of community-based resources available to address anti-Asian hate; there were more-significant knowledge gaps in LA than in NYC.

  • Despite most Asian Americans appreciating community-based counter-hate-incident services — such as medical support and counseling — actual use rates were low.

  • Respondents from later immigrant generations (1.5, second, and third or later generations) reported more barriers and expressed more concerns about seeking support from CBOs after experiencing anti-Asian hate incidents.

Recommendations

  • Strengthen services to meet the needs of members of two Asian American subgroups who might need more-tailored outreach and support: English-speaking later generations of Asian Americans who have more exposure to discrimination and older adults who might have difficulty recognizing and expressing their experiences of racism.

  • Leverage close family ties and use diverse linguistic and cultural social media platforms to enhance outreach and information dissemination about anti-hate resources at CBOs.

  • Empower first-generation community influencers to enhance outreach.

  • Enhance CBOs' policy advocacy through strengthened data collection.

Santa Monica, CA: RAND, 2024.

A Year of Hate: Anti-Drag Mobilisation Efforts Targeting LGBTQ+ People in Australia

By Elise Thomas

Drag Queen Story Hours (DQSH) and similar drag events for child audiences have been held in libraries across Australia for several years. In previous years these events were mostly uncontroversial and the response to them positive, despite some critical commentary from right-wing media and politicians. In late 2022 and over the course of 2023, however, the situation changed.  

Inspired by increasing transphobic and anti-drag rhetoric and conspiracy theories about drag performers emanating from the US, a loose network began to mobilise to disrupt all-ages drag events in Australia. At least a dozen events across the country were targeted with online harassment and/or offline protest between September 2022 and February 2024, and likely more were targeted that were not publicly reported. This occurred in the context of broader anti-LGBTQ+ hate and mobilisation, including incidents during WorldPride celebrations in Sydney, which ran from 17 February to 5 March 2023; a violent mass attack on pro-LGBTQ+ protesters on 21 March; and the attendance of neo-Nazis at an anti-trans rally in Melbourne on 18 March.

This country profile uses analysis of open sources including social media content (primarily from Facebook and Telegram), protest footage and media interviews to examine the growth of anti-drag hate and harassment in Australia. It breaks down the groups and influencers involved into four broad categories: fringe politicians and far right media; conspiracy theory groups left over from the anti-lockdown movement; neo-Nazis; and Christian groups active in anti-LGBTQ+ demonstrations.  

Amman; Berlin; London; Paris; Washington, DC: Institute for Strategic Dialogue, 2024. 22p.

The interaction between online and offline Islamophobia and anti-mosque campaigns

By Gabriel Ahmanideen

In the aftermath of the war on terror, mosques have become targets for hate groups, which leverage online platforms to amplify global anti-mosque campaigns. These groups link local protestors with international hate networks, fuelling both online and offline (i.e., onsite) anti-mosque campaigns. Thoroughly reviewing the literature on the interaction between online and offline Islamophobia, and examining an anti-mosque social media page that instils online and offline anti-mosque hate in the public, this article suggests a strong interaction between online and offline Islamophobia. In the case study of the Stop Mosque Bendigo (SMB) page, purposeful sampling was used to collect postings before and after the Christchurch mosque attacks to analyse the evolution of online anti-mosque campaigns in tandem with real-life hate cases. The literature and the case study reveal the interaction between local and global, digital and physical realms, as well as the convergence of everyday racism with extremist far-right ideologies like the Great Replacement theory. Relying on the present literature and indicative findings, the article advocates for systematic investigations to uncover the direct connection between online hate and physical attacks, and urges closer monitoring of, and accountability for, those online platforms and social media pages apparently contributing to onsite hate-driven actions.

Sociology Compass, 2023. 14p.