Open Access Publisher and Free Library

SOCIAL SCIENCES

EXCLUSION-SUICIDE-HATE-DIVERSITY-EXTREMISM-SOCIOLOGY-PSYCHOLOGY-INCLUSION-EQUITY-CULTURE

Vetting for Virtue: Democracy’s Challenge in Excluding Criminals from Office

By Sigurd S. Arntzen, Jon H. Fiva, Rune J. Sørensen   

This paper assesses the effectiveness of democratic systems in preventing individuals with criminal backgrounds from holding political office. Unlike many countries, Norway has no legal restrictions against felons running for office. We analyze local election candidates from 2003 to 2019, paired with administrative records of criminal offenses. We demonstrate that individuals with criminal records are systematically penalized at every stage of their political careers. Candidates are less likely to have criminal records than the general population, elected officials are less likely to have criminal backgrounds than their unelected peers, and mayors are the most lawful of all. Through a series of counterfactual exercises, we show that the most significant reduction in criminal involvement occurs at the nomination stage, especially within established local party organizations.

Munich: CESifo, 2024

NYC for Racial Justice

By New York City Racial Justice Commission

In March 2021, Mayor de Blasio announced the formation of the Racial Justice Commission and appointed 11 Commissioners, including Chair Jennifer Jones Austin and Vice Chair Henry Garrido, to focus on racial justice and reconciliation, with a mandate to identify and root out structural racism. The Racial Justice Commission (RJC) has the formal powers of a charter revision commission, including the ability to propose changes to the NYC Charter. The NYC Charter is the foundation of how our City functions and governs, and it has a direct impact on the way we live and work. The Commission set out to examine the NYC Charter to identify barriers to power, access, and opportunity for Black, Indigenous, Latinx, Asian, Pacific Islander, Middle Eastern and all People of Color (BIPOC*) in New York City and put forward ballot proposals aimed at removing those barriers and advancing racial equity. New Yorkers will vote on these proposed changes in November 2022. The Commission operates independently from the Mayor’s Office and other agencies. As a charter revision commission, the Racial Justice Commission was tasked with reviewing the entire City Charter and proposing amendments to be considered by voters in a general election. A charter revision commission can choose to make proposals that change the entire charter or a specific section. Given this authority, and the transformative potential of this moment in history, the Racial Justice Commission decided to focus on identifying and proposing structural changes in the NYC Charter that will advance racial justice and equity and begin to dismantle structural racism for all New Yorkers.
The Commission began by defining a vision for racial equity, one where the worth, talents, and contributions of all people in society are valued and recognized, and where race is not a determinant of economic, political, social, or psychological outcomes, as it neither confers privilege nor denies opportunities.

PUBLIC ENGAGEMENT

Charter revision commissions can perform research, speak with experts and community leaders, conduct public meetings and public input sessions as they collect information and ideas, and make decisions about what proposed charter changes to recommend. Engaging New Yorkers in the process undertaken by the Racial Justice Commission was critical. While most charter revision commissions seek input and specific proposals from the public, the Racial Justice Commission knew it would be important to also recognize the deep pain of racial trauma and the history of injustices suffered. So, the Commission heard from New Yorkers not only on their ideas, but also on their experiences—the challenges faced, systemic barriers in place, and the personal and community impact these injustices have had. The Commission employed a wide range of engagement tools in order to reach as broad a range of New Yorkers as possible, with an emphasis on reaching Black, Indigenous, Latinx, Asian, Pacific Islander, Middle Eastern New Yorkers and all People of Color (BIPOC*) who are not as effectively reached through traditional government engagement vehicles. The Commission held public input sessions in every borough and online; received input online from over 1,250 New Yorkers; heard from thought leaders and experts from a range of fields and backgrounds; spread the word to over 1,000 New Yorkers through presentations to community boards and civic groups; and conducted targeted interviews and focus groups with critical stakeholders working in racial equity and racial justice.

New York: NYC Racial Justice Commission, 2022. 147p.

Hamas’s Influence on US Campuses: A Study of Networks, Strategies, and Ideological Advocacy

By The Program on Extremism

Hamas has operated for decades in the US through fundraising, influence operations, and strategic adaptation, using charities and neutral rhetoric to conceal its true objectives. On college campuses, Hamas-linked networks have exploited academic freedom to further their agenda, a strategy that is the latest iteration of plans conceived as far back as the early 1990s. Groups like Samidoun and actors linked to the Iranian regime have cooperated with US-based Hamas networks, conducting similar influence and fundraising operations. 

Washington, DC: Program on Extremism at George Washington University, 2024. 23p.

The Surprising Decline of Workplace Sexual Harassment Incidence in the U.S. Federal Workforce

By Michael J. Rosenfeld

U.S. Merit Systems Protection Board (USMSPB) surveys document a decline of more than 50 percent between 1987 and 2016 in the percentage of women working for the federal government who have been sexually harassed (narrowly or broadly defined) in the prior two years. This decline has been underappreciated due to the infrequency of USMSPB surveys and the delayed release of the USMSPB report based on the 2016 survey. The decline in workplace sexual harassment of women has taken place across all federal agencies and at all workplace gender balances. While, in 1987, there was a strong positive correlation between male predominance in the workplace and women’s reports of sexual harassment, this association was greatly diminished by 2016. The formerly substantial gender divide in attitudes toward sexual harassment had also mostly closed by 2016. By extrapolating the USMSPB surveys of federal workers to the entire U.S. workforce, I estimate that 4.8 million U.S. women were harassed at work in 2016 (using a narrow definition of harassment) and 7.6 million U.S. women were harassed at work in 1987, when the female workforce was substantially smaller. More than 700 women were sexually harassed at work in the United States in 2016 for every sexual harassment complaint filed with the Equal Employment Opportunity Commission. The observed decline in sexual harassment has implications for theories about law and social change.

Sociological Science 11: 934-964, 2024.

Evaluating the Effectiveness of the Say Something Anonymous Reporting System

By Hsing-Fang Hsieh, Justin Heinze

Abstract. Anonymous tip lines have the potential to improve school safety by providing secure multi-modal reporting systems and enabling a coordinated response between schools, law enforcement, and crisis responders. The Say Something Anonymous Reporting System (SS-ARS), developed and implemented by the Sandy Hook Promise (SHP) Foundation, is an educational school safety initiative that trains youth, parents, schools, and communities to recognize warning signs in writing, speaking, or web content that could lead to harmful behaviors towards themselves or others, and to safely report potential threats. SS-ARS combines a school-wide violence prevention program that enhances risk recognition, empowers and engages school communities in violence prevention, and facilitates coordination between schools and law enforcement with a multi-modal ARS. SHP has implemented the Say Something program in schools across the U.S. and trained over 12 million students (Sandy Hook Promise Foundation). In a recent systematic review of anonymous reporting systems (ARS) in U.S. schools, Messman et al. (2021) identified just four empirical studies about the implementation or effectiveness of ARS, but none of these studies used experimental designs. To address this gap in the research, we examined the effectiveness of the SS-ARS program in improving school safety in a cluster randomized controlled trial in collaboration with the Miami-Dade County Public Schools (M-DCPS).

I. Major goals and objectives

The current project had four major goals. Our goals for the project were to:

1. Conduct a cluster randomized controlled trial to test the effectiveness of the SS-ARS intervention to improve participants’ ability to recognize signs of mental duress, violent antecedents, and other risk behaviors, increase reporting of risk behaviors, and improve school community response and school climate over time;
2. Examine changes in violence in school communities (e.g., fights, bullying) and student criminal justice involvement stemming from improved recognition and reporting of risk behaviors;
3. Identify key factors associated with program fidelity, reach, adoption, and sustainability;
4. Perform a cost-effectiveness analysis.

We had five main objectives under these goals. Our objectives were to:

1. Recruit 30 schools that will be randomly assigned to receive the SS-ARS program (intervention group) or to receive the usual school safety practices (control condition).
2. Conduct pre- and post-test surveys of students, teachers, and administrators attending both the intervention and control schools. Participants will be followed longitudinally over the study period (from baseline to 18-month post-test survey).
3. Conduct structured interviews with key program personnel at all treatment schools to assess program implementation factors and outcomes.
4. Extract administrative data from both intervention and control school records to assess violent incidents and school response. We will also work with the Miami-Dade Schools Police Department (M-DSPD) to extract geocoded crime data in surrounding neighborhoods.
5. Compare change over time between the intervention and control groups. Analyses will include both student and school-level data. We will examine the stability of change with three data points over 18 months post-intervention. Analyses will examine program effectiveness and the implementation factors associated with program effectiveness.

Ann Arbor: University of Michigan, 2023. 41p.

ROUGH JUSTICE: THE CHALLENGES FACED BY SCOTLAND’S JUSTICE WORKERS

By Unity Consulting Scotland

Over the past few years PCS has consistently heard from our members about the workload pressures facing them in the Crown Office and Procurator Fiscal Service (COPFS) and the Scottish Courts and Tribunals Service (SCTS). The growing understanding of the issues facing staff led PCS to commission UNITY Consulting Scotland to conduct research into the experiences of the COPFS and SCTS workforces. This report is based on the voices and viewpoints of workers in COPFS and SCTS. The views of the workers who took part in this research confirm what we already knew about the challenges they are facing and the impact this is having on them and the delivery of justice. They must be listened to, and changes must be made to improve their working environment.

Edinburgh: Unity Consulting, 2024. 132p.

A Discrimination Report Card

By Patrick M. Kline, Evan K. Rose, and Christopher R. Walters

We develop an empirical Bayes ranking procedure that assigns ordinal grades to noisy measurements, balancing the information content of the assigned grades against the expected frequency of ranking errors. Applying the method to a massive correspondence experiment, we grade the race and gender contact gaps of 97 U.S. employers, the identities of which we disclose for the first time. The grades are presented alongside measures of uncertainty about each firm’s contact gap in an accessible report card that is easily adaptable to other settings where ranks and levels are of simultaneous interest.

Chicago: University of Chicago, Becker Friedman Institute for Economics (BFI), 2024

Race and Gender Characteristics of Homicides and Death Sentences in Duval County, FL and in the State of Florida, 1973-2022

By Frank R. Baumgartner

I have compiled data from the FBI Supplemental Homicide Reports from 1976 through 2019 (the last data currently available) on homicides in Florida and in Duval County, and information about all death sentences imposed in those two jurisdictions since the modern system of capital punishment was created in Florida in 1973. This consists of a record of 1,103 death sentences imposed state-wide and 112 in Duval County. The corresponding numbers of homicide offenders are 20,831 (state-wide) and 1,742 (Duval County). I have used this data to calculate rates of death sentences per 100 homicides, in Florida and in Duval County, by race of offender, race of victim, gender of offender, and gender of victim. This report begins by describing the race and gender information I collected and how often it was missing. It next presents a detailed table to document the figures used to calculate the rates of death sentences per 100 homicides in Florida and Duval. My narrative analysis of these tables follows, after which I give a similar analysis limited to those cases in Florida resulting in execution. As will be seen, I ultimately conclude that neither death sentences nor executions are applied in an equal manner; they are instead driven powerfully by the race and gender of the victim, with the highest rates of death sentencing and executions, both in Florida and Duval County, reserved for black offenders who kill white victims, and highest of all for black men who kill white women.
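As a rough illustration (not taken from the report itself), the headline rates implied by the counts stated in this abstract can be computed directly; the report's own tables disaggregate these rates further by race and gender of offender and victim:

```python
# Death sentences per 100 homicide offenders, using the counts stated in
# the abstract: 1,103 sentences among 20,831 offenders (Florida statewide)
# and 112 sentences among 1,742 offenders (Duval County).
def rate_per_100(death_sentences: int, homicide_offenders: int) -> float:
    """Death sentences imposed per 100 homicide offenders."""
    return 100 * death_sentences / homicide_offenders

florida_rate = rate_per_100(1_103, 20_831)  # statewide
duval_rate = rate_per_100(112, 1_742)       # Duval County

print(f"Florida: {florida_rate:.1f} death sentences per 100 homicide offenders")
print(f"Duval:   {duval_rate:.1f} death sentences per 100 homicide offenders")
```

On these aggregate figures, Duval County's overall rate (about 6.4 per 100) exceeds the statewide rate (about 5.3 per 100); the report's central finding concerns how sharply such rates diverge once broken down by the race and gender of offender and victim.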

Washington, DC: American Civil Liberties Union, 2023. 42p.

Race and Gender Disparities in Capitally-Charged Louisiana Homicide Cases, 1976-2014

By Tim Lyman, Frank R. Baumgartner, and Glenn L. Pierce

Out of 6,512 homicides from 1976 through 2014, we review the outcomes of 1,822 capitally-charged homicide cases across eight judicial districts in Louisiana. In most cases, capital charges were reduced; but in 385 cases, the state sought death through the final stage of the prosecution. In 107 cases, a death sentence was imposed. We analyze these outcomes, looking at legally relevant factors, as well as legally irrelevant ones, in determining final capital charges and death sentences. Legally relevant factors include the number of victims as well as various statutory aggravating circumstances (e.g., victims under 12 or over 64, simultaneous felony circumstances, the type of weapon, the relationship between the victim and offender). Legally irrelevant factors include the judicial district and the race and gender of the offenders and victims, respectively. Many legally relevant factors have powerful impacts: the number of victims, certain felony circumstances, child victims, and elderly victims are all associated with higher rates of final capital charging or death sentencing. But we also show that factors which appear legally irrelevant in theory have powerful effects; rates of capital prosecution and death sentencing are substantially different based on the race of victim and the combined races of the offenders and the victims, for example. We found only modest differences across the eight judicial districts we studied, but especially significant differences in rates of final capital charges and death sentences in cases that involved white victims, particularly white females. No demographic combination was as likely to see a final capital charge or a death sentence as those cases with a black male offender and a white female victim, which were more than five times as likely to lead to a final capital charge or a death sentence, compared to the much more frequent crimes involving black offenders and black victims.
These findings come after a review of the bivariate relations as well as a series of multivariate logistic regressions. The Louisiana death penalty system is heavily weighted by a tendency to seek the harshest penalties in those cases with white female victims. Our powerful and consistent findings of racial and gender-based disparities hold in a multivariate analysis and are inconsistent with the equal protection of the law or any common understanding of equality or justice.

Southern University Law Review, 2021

The Online Ecosystem of the German Far-Right 

By Jakob Guhl, Julia Ebner and Jan Rau

On 9 October 2019, a 28-year-old man with self-made guns and body armour attacked a synagogue and a kebab shop in Halle, killing two people. He live-streamed the attack and published a ‘manifesto’ online. His intention was to kill Jews, whom he blamed for feminism and mass migration. He introduced himself as “Anon” (anonymous), a reference to ‘imageboard’ websites such as 4chan and 8chan. Shortly after, users on 4chan cynically joked about whether the attack had lived up to similar attacks in Pittsburgh, San Diego, and El Paso in the US and Christchurch in New Zealand. In each of these attacks, the perpetrators were found to have been immersed in far-right online sub-cultures. The presence of extremist and terrorist groups on mainstream platforms like Facebook, Twitter and YouTube has been the focus of much attention in recent years, but the attacks cited above have raised concerns about the far-right subcultures that have emerged on alternative platforms like 4chan and Telegram, chat forums like Gab, and gaming applications like Discord. With mainstream social media companies forced to make greater efforts to remove extremists and hate speech from their platforms in Germany with the NetzDG law, these alternative social platforms have become increasingly important to an international far-right community that includes anti-Muslim movements like PEGIDA, ‘Identitarian’ ethnonationalist groups like Generation Identity, and militant Neo-Nazis like the Atomwaffen Division.
In addition to being places where far-right terrorists are glorified, they have also become sites for activists to strategise and spread disinformation campaigns, coordinate harassment against female politicians and create meme campaigns to influence elections and political discourse. ISD research in the German national, Bavarian and European Parliamentary elections showed how these groups were coordinating in particular to support the right-wing populist party Alternative für Deutschland (AfD). Complementing these alternative social media platforms is an ecosystem of online alternative media outlets that masquerade as ‘news’ sources. Presenting themselves as alternatives to mainstream media, many of these outlets amplify far-right, anti-migrant and anti-progressive talking points through sensationalist ‘click-bait’ stories. Taken together, this toxic far-right ecosystem is potentially contributing to a rise in far-right motivated terrorism, which has increased 320% in the past five years, whilst also providing safe spaces and content for those who want to undermine democracy. Policymakers are increasingly asking what can be done, but at present too little is known about these communities. To address these issues, ISD’s Digital Analysis Unit undertook one of the most comprehensive mappings of this alternative ecosystem in Germany to date. While these platforms draw in a global audience, which we are consistently mapping and analysing to understand the international connectivity of the far-right, this report focuses specifically on the German-speaking communities within this ecosystem. The German government has been at the forefront of devising legislation to force the mainstream social media companies to remove illegal hate speech from their platforms. With the NetzDG bill, passed in 2017, social media companies face large fines if they do not remove illegal content within 24 hours.
While many have criticised the NetzDG bill as infringing on free speech or being ineffective by focusing on content removal, there is also the risk that it is driving extremist groups into more closed, alternative platforms which are currently not subject to the legislation. These alternative platforms present significant challenges for regulation. They may lack the resources to effectively monitor or remove extremist communities, or they may be ideologically committed to libertarian values and free speech and thus unwilling to moderate these communities. Drawing together ISD’s digital ethnographic work across dozens of closed forums and chat groups with the latest in machine learning and natural language processing, in this report we provide an initial glimpse into the size and nature of the far-right communities on these platforms. We present data gathered from user-generated surveys on these platforms, revealing the motivations for joining and the ideological views of those drawn to these groups. Using Method 52, a proprietary software tool for the analysis and classification of unstructured text, we trained an algorithm to identify antisemitic narratives. We also analyse the role of alternative ‘news’ outlets in disseminating far-right concepts, drawing on ISD’s partnership with the MIT Media Lab to create the ‘Hate Observatory’, based on its Media Cloud software, the world’s largest database of online media, containing 1.4 billion stories from 60,000 sources, to compare the frequency and types of coverage of far-right themes in mainstream and alternative media. Based on our research findings, we make a series of recommendations for tech companies, government, civil society and researchers about how to prevent these alternative platforms from being used to further radicalise or undermine democracy. Key Findings – We identified 379 far-right and right-wing populist channels across ten alternative platforms investigated for this report.
Alternative platforms with notable far-right presence included: the messaging application Telegram (129 channels), the Russian social network website VK (115 groups), video-sharing website Bitchute (79), and social networking sites Gab (38 channels), Reddit (8 groups), Minds (5 communities) and Voat (5 communities). Analysis of the community standards of these platforms shows that they can be divided into two groups. Firstly, those designed for non-political purposes, such as gaming, which have been hijacked by far-right communities. Secondly, those that are based on libertarian ideals and defend the presence of far-right communities on the basis of freedom of speech. While membership numbers in these groups were not always identifiable, our analysis suggests that there are between 15,000 and 50,000 German-speaking individuals with far-right beliefs using these platforms, with varying levels of activity. The channel with the most followers had more than 40,000 followers. Although we identified a few platforms that were created by right-wing populist influencers, such as video-sharing sites FreiHoch3 and Prometheus, the number of users was too small to merit inclusion in the analysis. – A spectrum of far-right groups are active on alternative platforms: while there are a greater number of anti-Muslim and neo-Nazi affiliated channels, ‘Identitarian’ groups appear to have the largest reach. Of the 379 groups and channels that we identified, 104 were focused on opposition to Islam and Muslims, immigration and refugees, and 92 channels expressed overt support for National Socialism. We identified 35 channels and groups associated with Identitarian and ethnonationalist groups. 117 communities and groups did not fall neatly into any specific category but instead contained a mix of content from the categories described above. It is important to note that a larger number of channels does not necessarily equate to a larger number of people reached.
For example, the largest Identitarian channel has more than 35,000 followers, which is significantly greater than the largest anti-Muslim channel (18,000) or the largest neo-Nazi channel (around 10,000).

London; Washington, DC; Beirut; Toronto: ISD - Institute for Strategic Dialogue, 2020. 76p.

Representation of Slave Women in Discourses on Slavery and Abolition, 1750-1838

By Henrice Altink

This book analyzes textual representations of Jamaican slave women in three contexts--motherhood, intimate relationships, and work--in both pro- and antislavery writings. Altink examines how British abolitionists and pro-slavery activists represented the slave women to their audiences and explains not only the purposes that these representations served, but also their effects on slave women’s lives.

London; New York: Routledge, 2005. 272p.

Towards an International Code of Conduct for Private Security Providers: A View From Inside a Multistakeholder Process

By Anne-Marie Buzatu

The use of private security companies (PSCs) to perform services that are traditionally associated with the state presents a challenge to regulatory and oversight frameworks. Analyzing developments leading to the International Code of Conduct for Private Security Providers (ICoC) and the ICoC Association, this paper argues that a multistakeholder approach to develop standards adapted for the private sector, and which creates governance and oversight mechanisms, fills some of the governance gaps found in traditional regulatory approaches.

London: Ubiquity Press, 2015. 51p.

It’s Not Funny Anymore. Far-Right Extremists’ Use of Humour

By Maik Fielitz and Reem Ahmed, Radicalisation Awareness Network

Humour has become a central weapon of extremist movements to subvert open societies and to lower the threshold towards violence. Especially within the context of a recent wave of far-right terrorist attacks, we witness “playful” ways of communicating racist ideologies. As far-right extremists strategically merge with online cultures, their approach changes fundamentally. This trend has been especially facilitated by the so-called alt-right and has spread globally. This predominantly online movement set new standards for rebranding extremist positions in an ironic guise, blurring the lines between mischief and potentially radicalising messaging. The result is a nihilistic form of humour that is directed against ethnic and sexual minorities and intended to inspire violent fantasies — and eventually action. This paper scrutinises how humour functions as a potential factor influencing far-right extremist violence. In doing so, we trace the strategic dissemination of far-right narratives and discuss how extremists conceal their misanthropic messages in order to deny ill intention or purposeful harm. These recent developments pose major challenges for practitioners: as a new generation of violent extremists emerges from digital subcultures without a clear organisational centre, prevention strategies need a renewed focus to cope with the intangible nature of online cultures.

Luxembourg: Publications Office of the European Union, 2021. 18p.

Hate of the Nation: A Landscape Mapping of Observable, Plausibly Hateful Speech on Social Media

By  Jacob Davey, Jakob Guhl, and Carl Miller

As Ofcom prepared for its duties as the UK’s incoming social media regulator, it commissioned ISD to produce two reports to better understand the risk of UK users encountering online terrorists, incitement to violence, and hate content across a range of digital services. This report provides an overview of public English-language messages collected from Facebook, Instagram, Twitter, Reddit, and 4chan across the month of August 2022 which we class as ‘plausibly hateful’. This is where at least one of the reasonable interpretations of the message is that it seeks to dehumanize, demonize, express contempt or disgust for, exclude, harass, threaten, or incite violence against an individual or community based on a protected characteristic. Protected characteristics are understood to be race, national origin, disability, religious affiliation, sexual orientation, sex, or gender identity.

Amman | Berlin | London | Paris | Washington DC: Institute for Strategic Dialogue, 2023. 34p.

Gaming and Extremism: The Extreme Right on DLive

By Elise Thomas

DLive is a live-streaming platform created in 2017 and acquired by BitTorrent in 2019. From late 2019 onward, the combination of lax content moderation and DLive’s in-built opportunities for monetisation using a blockchain-based cryptocurrency reportedly attracted significant numbers of extreme right and fringe streamers to the platform. In early 2021, at least nine channels are alleged to have live-streamed the January 6th incursion into the US Capitol on the platform. DLive has a policy of tagging channels that contain political or adult content as ‘X tag’ channels. In the wake of the events at the Capitol, DLive took the step of demonetising all X tag channels. They also suspended the accounts of users who had streamed the Capitol incursion, announced a content moderation review of all X tag channels with significant viewership, and temporarily suspended all use of their platform for those in the Washington DC area ahead of the Presidential Inauguration. This briefing details the results of an ethnographic analysis of the role which DLive plays in UK extreme right-wing mobilisation online, with specific attention paid to the overlap between extremist use of the platform and the targeting of gamers for radicalisation. In total, we watched 13.5 hours of live-streamed content and analysed the activity of 100 extreme right accounts. The time which ISD analysts spent scoping the platform overlapped with the removal of several high-profile extreme right-wing users of the platform. Importantly, this analysis helps document how extremists are using a multi-platform strategy to avoid the negative impacts that content moderation efforts can have on their communications strategies. Key Findings • A relatively wide range of extremist influencers, including British white nationalists, use DLive as part of a broader strategy to broadcast extreme right ideology to their audiences.
The monetisation provided by DLive means that, as well as providing a means to stream shows to audiences, the platform offers the opportunity of netting them funds. • Extremists have an ambivalent relationship with DLive, treating it as part of a multi-platform strategy designed to circumvent content moderation. We found that extremists used DLive opportunistically due to the relative freedom it afforded them to broadcast content that would not be allowed on other platforms. However, this was not out of any particular affection for the platform, with extremists often streaming across multiple platforms in a bid to avoid moderation efforts. • Efforts by DLive to implement more robust terms of service appear to be having an impact on extremist activity. Several of the accounts we monitored were removed by DLive over the course of our analysis. Additionally, the users we monitored often discussed using alternative platforms like Trovo and Odysee to broadcast, which they felt provided more permissive environments for extremist activity. • We found limited evidence to suggest that the live streaming of gaming is used as a strategy by extremists to radicalise new users on DLive. Out of the 100 extremist accounts analysed, only seven used DLive to stream gaming. Of these seven, only three appeared to use gaming to advance extreme right ideology and movements. Analysing the gaming content produced by these users, it appears that gaming primarily functions as a means for extreme right-wing influencers to reach established audiences and strengthen existing extremist communities, rather than to radicalise and recruit new members.

Beirut; Berlin; London; Paris; Washington, DC: Institute for Strategic Dialogue, 2021. 11p.

Transmisogyny, Colonialism and Online Anti‐Trans Activism Following Violent Extremist Attacks in the US and EU

By Anne Craanen, Charley Gleeson and Anna Meier

This report investigates the rise of online anti-trans activism following two prominent attacks involving LGBTQ+ communities: the October 2022 attack on a gay bar in Bratislava, Slovakia, and the March 2023 shooting at a school in Nashville, Tennessee, perpetrated by a trans man.

We use a postcolonial approach, through which we find that the transphobia espoused online following the attacks was predominantly transmisogynistic, a consequence of the colonial logics around gender which assign the monopoly of violence to white cisgender men. The main themes identified were the erasure of trans identities, particularly transmasculinity, the overlap between transmisogyny and other forms of discrimination, and the demonization of trans people. 

The most important conclusion from our research is that everyone – technology companies, policymakers and other stakeholders – must take transphobia and transmisogyny seriously. Too often transmisogyny is seen as a side problem, or as a complement to another set of more radical ideas, including but not limited to white nationalism or anti-government sentiment. Transphobia, alongside misogyny, hate speech, or other forms of discrimination, is often deemed “harmful but lawful” or described as “borderline content”, and thereby treated as not in need of online moderation. While simply removing such material from platforms may be neither appropriate nor advisable in all cases, there are other forms of content moderation that platforms can consider, depending on how online transphobia manifests itself. 

In the conclusion of our work, we provide practical recommendations to technology companies of all sizes for tackling transphobia more effectively. Key among these are the importance of knowledge-sharing between platforms and subject matter experts, defining transphobia and transmisogyny in platforms’ terms of service, and employing content moderation practices such as disinformation tags and algorithmic deprioritization. 

Recommendations for technology companies:

  1. Increase online monitoring following attacks that are directly relevant to the LGBTQ+  community as transphobic content is likely to increase, including material that violates terms of service, incites violence or is otherwise illegal. 

  2. Collaborate with experts to comprehend and classify transphobic rhetoric, and produce a taxonomy alongside subject-matter specialists, technology representatives, civil society, and government partners.

  3. Consider diverse moderation methods, removing illegal content and also using alternatives to removal such as fact-checking and algorithmic adjustments to mitigate exposure to transphobic channels and content.

  4. Define transphobia in terms of service to guide users as to what is allowed on platforms and enable user reporting. 

  5. Design clear reporting and appeal mechanisms for moderated content, including online transphobia, to protect digital and human rights.

London: Global Network on Extremism and Technology (GNET), May 2024. 26p.

Gaming and Extremism: The Extreme Right on Steam 

By Pierre Vaux, Aoife Gallagher, Jacob Davey

Steam is a video game distribution service, described as the “single largest distribution site for PC games”. At the start of February 2021, the platform set a new record as 26.4 million users signed into the platform simultaneously, breaking its previous record of 25.4 million set only the month before. In addition to its online store and game launcher, the Steam community feature allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat. These groups serve as a means to enable connectivity around a certain subject or game, forming hubs where users with shared interests can collaborate. Often, Steam groups facilitate interaction between groups of players known as ‘clans’ who play together in one or more multiplayer games. However, several groups have been created to allow networking between people supportive of right-wing extremism. In this chapter, we provide an analysis of 45 interconnected Steam community groups associated with the extreme right. This cohort is a sample of a larger network of potentially extremist groups on the platform, and as such should be seen as a snapshot indicating broader trends on the platform, rather than a comprehensive overview of extreme right activity. Key Findings • The extreme right uses Steam as a hub for individual extremists to connect and socialize. The Steam groups examined by ISD, which often have members in common, span the extreme right ideological spectrum. This network connects supporters of far-right political parties, such as the British National Party (BNP), with groups promoting neo-Nazi organizations, like the Misanthropic Division. • Steam seems to have an entrenched and long-lasting extreme right community. Many of the groups analyzed date back to 2016 or even earlier. Steam’s permissive attitude to this harmful activity means that these communities have a haven to promote and discuss extremist ideology and content. 
• In addition to connecting individuals who support the extreme right, some groups also provide off-ramps to ideological content and other social media platforms, suggesting that Steam is being used to recruit to specific movements. This includes links to far-right blogs, podcasts and articles, as well as invitations to join Telegram groups and vetted Discord servers. • Some groups provide platforms for groups of individuals to engage in trolling and harassment ‘raids’ against communities deemed to be political enemies. Users were seen naming target sites and asking fellow group members to join them in raiding or spamming them, with the result that these communities are making Steam a more toxic space for other users. • Our analysis suggests that gaming is largely used as a means of community building rather than as a deliberate strategy for radicalization or recruitment. Individuals who are already engaged with the extreme right appear to use Steam as a platform to connect with like-minded individuals over a shared hobby. However, we also found examples of political games, such as ‘Feminazi 3000’, being used as a means of advertising political identity, as well as historical strategy games being used as a means of living out extremist fantasies, such as winning World War II for Germany.  

Beirut; Berlin; London; Paris; Washington, DC: Institute for Strategic Dialogue, 2021. 14p.

30 Years of Trends in Terrorist and Extremist Games 

By Emily Thompson and Galen Lamphere-Englund

Violent extremist, terrorist, and targeted hate actors have been actively exploiting video games to propagandize, recruit, and fundraise for more than 30 years. This report presents an analysis of that history using a unique dataset, the Extremist and Terrorist Games Database (ETGD), developed by the authors. It contains 155 reviewed entries of standalone games, modifications for existing games (mods), and browser‑based games dating from 1982 to 2024. The titles analyzed appear across the ideological spectrum: far right (101 titles), jihadist (24), far left (1), and other forms of extremism and targeted hate (29), including school‑massacre ideation (12). They span platforms ranging from simple standalone games for Atari in the 1980s to sophisticated mods for some of today’s most popular games. The number of titles has increased year on year – in line with global conflict and extremist ideological trends – revealing a continued push by malicious actors to exploit gaming. Meanwhile, the means of distribution have shifted from violent extremist organizations and marketplaces – such as white supremacist, neo‑Nazi, and jihadist organizations – to distributed repositories of extremist games hosted on internet archives, Ethereum‑hosted file‑sharing, Telegram, and, under subtly coded titles, mainstream platforms like Steam. While most of the titles in the ETGD are available for free, several that have been sold (often at symbolic prices like $14.88 or $17.76) appear to have generated revenue for groups ranging from Hezbollah to the National Alliance, an American neo‑Nazi group. Through new analysis of Steam data, we also show that a small number of extremist and targeted hate titles have generated an estimated $600,000 in revenue for small publishers on the platform. 
Rather than offering a comprehensive analysis of the ETGD, we intend this preliminary launch report to form a basis for future research on the dataset and a framework for continued contributions to the ETGD from Extremism and Gaming Research Network (EGRN) members. Above all, we seek to contribute to sensible policymaking to prevent violent extremism, policymaking that situates games as part of a wider contested and exploited information space which deserves far more attention from those working towards peaceful ends. Complete recommendations are provided in the conclusion section of this report but include the following:

  1. Prohibit and prevent violent extremist exploitation: Gaming platforms should explicitly prohibit violent extremist and terrorist behaviors and content. Leadership exists here from Twitch, Discord, Microsoft/Xbox, and the affiliated Activision‑Blizzard.

    a. Audio and video platforms, such as Spotify, Apple Music, and YouTube, should seek to identify extremist gaming content currently available under misleading titles and tags.

    b. Flag and remove extremist titles across platforms: hashing ETGD games and preventing outlinking to them should be a priority across platforms.

  2. Improve reporting mechanisms: Platforms must improve reporting mechanisms to make it easier for players to report violative content found in games and in‑game conduct.

  3. Understand and take down distributed repositories: Larger repositories of extremist gaming content readily available on the surface web accelerate user exposure.

  4. Collaborate across sectors: Addressing the spread of extremist games requires a collaborative effort between tech companies, government agencies, and civil society organizations.

  5. Educate across sectors: Programmes supporting educators and frontline community moderators should be developed.

  6. Support research and innovation: Support cross‑sector initiatives like the Global Network on Extremism and Technology (GNET) and EGRN, which produced this database.

  7. Enhance regulatory frameworks: Governments should update regulatory frameworks applying to digital platforms, recognizing the nuances of gaming platforms and complying with human rights.

  8. Encourage positive community engagement: Thoughtful, well-designed community guidelines, moderation policies, and reporting mechanisms can support community‑building.  

London: The Global Network on Extremism and Technology (GNET), 2024. 40p.

Addressing Key Risk Factors for Suicide at a Societal Level

By Jane Pirkis, Jason Bantjes, Rakhi Dandona, Duleeka Knipe, Alexandra Pitman, Jo Robinson, Morton Silverman, Keith Hawton 

A public health approach to suicide prevention recognizes the powerful influence of social determinants. In this paper—the fifth in a Series on a public health approach to suicide prevention—we consider four major risk factors for suicide (alcohol use, gambling, domestic violence and abuse, and suicide bereavement) and examine how their influence on suicide is socially determined. Cultural factors and societal responses have an important role in all four risk factors. In the case of alcohol use and gambling, commercial entities are culpable. This Series paper describes a range of universal, selective, and indicated interventions that might address these risk factors and focuses particularly on key universal interventions that are likely to yield substantial population-level benefits.

The Lancet Public Health, available online 10 September 2024. In press, corrected proof.

Restriction of Access to Means Used for Suicide 

By Keith Hawton, Duleeka Knipe, Jane Pirkis 

One of the most effective public health measures to prevent suicide is the restriction of access to means used in suicidal acts. This approach can be especially effective if a method is common and readily accessible. Suicide methods vary widely, and there have been several examples where means restriction has been applied, often with considerable success. Factors contributing to the availability of suicide methods can include access to physical means as well as cognitive awareness of methods. In this paper, which is the second in a Series on a public health approach to suicide prevention, we focus primarily on examples of restricting access to physical means of suicide, such as pesticides, firearms, and medication. We also discuss restricting the cognitive availability of means through attention to media and other representations of suicide methods. There are challenges associated with restricting access to means, including resistance to measures required to change the availability of some methods (which might, in part, be commercially determined) and method substitution, whereby one suicide method is replaced by another. Nevertheless, means restriction must be an integral part of all national and local suicide prevention strategies.

The Lancet Public Health, online first, September 2024.