The Open Access Publisher and Free Library

SOCIAL SCIENCES


Posts tagged online
Hate in the Sunshine State: Extremism and Antisemitism in Florida 2020-2022

By The Anti-Defamation League, Center on Extremism

This report examines extremist and antisemitic trends and incidents in the state of Florida from 2020 to the present. The past two years have seen a significant increase in extremist-related incidents both nationwide and in Florida. These incidents have been driven, in part, by widespread disinformation and conspiracy theories that have animated extremists and fueled antisemitism. The result: unrest and violence, from the January 6 insurrection to white supremacist activity to a spike in hate crimes. In Florida, new white supremacist groups have formed, including White Lives Matter, Sunshine State Nationalists, NatSoc Florida and Florida Nationalists, while existing neo-Nazi and accelerationist groups have broadened their audiences online and expanded their on-the-ground activities. Other extremist groups, such as the Oath Keepers and the Proud Boys, have shifted their strategies to focus on the local level, disrupting school board meetings and even running for political office.

New York: ADL, 2022. 46p.

Very Fine People: What Social Media Platforms Miss About White Supremacist Speech

By Libby Hemphill

Social media platforms provide fertile ground for white supremacist networks, enabling far-right extremists to find one another, recruit and radicalize new members, and normalize their hate. Platforms such as Facebook and Twitter use content matching and machine learning to recognize and remove prohibited speech, but to do so, they must be able to recognize white supremacist speech and agree that it should be prohibited. Critics in the press and advocacy organizations still argue that social media companies haven’t been aggressive or broad enough in removing prohibited content. There is little public conversation, however, about what white supremacist speech looks like and whether white supremacists adapt or moderate their speech to avoid detection. Our team of researchers set out to better understand what constitutes English-language white supremacist speech online and how it differs from general or non-extremist speech. We also sought to determine whether and how white supremacists adapt their speech to avoid detection. We used computational methods to analyze existing sets of known white supremacist speech (text only) and compared those speech patterns to general or non-extremist samples of online speech. Prior work confirms that extremists use social media to connect and radicalize, and that they use specific linguistic markers to signal their group membership. We sampled data from users of the white nationalist website Stormfront and a network of “alt-right” users on Twitter. Then, we compared their posts to typical, non-extremist Reddit comments. We found that platforms often miss discussions of conspiracy theories about white genocide and Jewish power, as well as malicious grievances against Jews and people of color. Platforms also let decorous but defamatory speech persist. With all their resources, platforms could do better. With all their power and influence, platforms should do better.
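The corpus-comparison approach mentioned in this abstract can be sketched in code. The report does not publish its pipeline, so the snippet below is only a minimal, hypothetical illustration of one common technique for such comparisons: ranking terms by smoothed log-odds of appearing in an "extremist" sample versus a baseline sample. The file names, the crude tokenizer and the smoothing constant are all assumptions, not details from the report.

```python
# Hypothetical sketch: surface terms over-represented in one text corpus
# relative to a baseline, using smoothed log-odds ratios. This is NOT the
# pipeline used in the report, only an illustration of the general idea.
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase and split into word-like tokens (deliberately crude)."""
    return re.findall(r"[a-z']+", text.lower())

def log_odds(target: Counter, baseline: Counter, alpha: float = 0.5) -> dict[str, float]:
    """Smoothed log-odds of each term appearing in `target` vs `baseline`."""
    t_total, b_total = sum(target.values()), sum(baseline.values())
    vocab = set(target) | set(baseline)
    scores = {}
    for term in vocab:
        t = target[term] + alpha
        b = baseline[term] + alpha
        scores[term] = math.log(t / (t_total + alpha * len(vocab) - t)) - \
                       math.log(b / (b_total + alpha * len(vocab) - b))
    return scores

if __name__ == "__main__":
    # 'extremist_sample.txt' and 'baseline_sample.txt' are placeholder file names.
    with open("extremist_sample.txt") as f:
        extremist = Counter(tokenize(f.read()))
    with open("baseline_sample.txt") as f:
        baseline = Counter(tokenize(f.read()))
    top = sorted(log_odds(extremist, baseline).items(), key=lambda kv: kv[1], reverse=True)[:25]
    for term, score in top:
        print(f"{term}\t{score:.2f}")
```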

New York: ADL, 2022. 66p.

Online Hate and Harassment: The American Experience 2021

By Anti-Defamation League Center for Technology and Society

How safe are social media platforms now? Throughout 2020 and early 2021, major technology companies announced that they were taking unprecedented action against the hate speech, harassment, misinformation and conspiracy theories that had long flourished on their platforms. According to the latest results from ADL’s annual survey of hate and harassment on social media, despite the seeming blitz of self-regulation from technology companies, the level of online hate and harassment reported by users barely shifted compared with a year ago. This is the third consecutive year ADL has conducted its nationally representative survey. Forty-one percent of Americans said they had experienced online harassment over the past year, comparable to the 44% reported in last year’s “Online Hate and Harassment” report. Severe online harassment, comprising sexual harassment, stalking, physical threats, swatting, doxing and sustained harassment, also remained relatively constant, experienced by 27% of respondents, not a significant change from the 28% reported in the previous survey.

● LGBTQ+ respondents reported higher rates of overall harassment than all other demographics for the third consecutive year, at 64%.
● 36% of Jewish respondents experienced online harassment, comparable to 33% the previous year.
● Asian-American respondents experienced the largest year-over-year rise in severe online harassment of any group, with 17% reporting it this year compared to 11% last year.

This year, fewer respondents who experienced physical threats reported them to social media platforms than was the case the year before; these users also reported that platforms were doing less to address their safety.

● 41% of respondents who experienced a physical threat stated that the platform took no action on the threatening post, comparable to the 38% who reported a similar lack of action the year before.
● 38% said they did not flag the threatening post to the platform, not a statistically significant change from 33% the prior year.
● Only 14% of those who experienced a physical threat said the platform deleted the threatening content, a significant drop from 22% the prior year.
● Just 17% of those who experienced a physical threat stated that the platform blocked the perpetrator who posted the content, a sharp decrease from the prior year’s 28%.

New York: ADL, 2021. 46p.

Breaking the Building Blocks of Hate: A Case Study of Minecraft Servers

By Rachel Kowert, Austin Botelho and Alex Newhouse

The online game Minecraft, owned by Microsoft, has amassed 141 million active users since it was launched in 2011. It is used in school communities and among friend groups, and it has even been employed by the U.N. Despite its ubiquity as an online space, little has been reported on how hate and harassment manifest in Minecraft or on how the platform handles content moderation. To fill this research gap, Take This, ADL and the Middlebury Institute of International Studies, in collaboration with GamerSafer, analyzed hate and harassment in Minecraft based on anonymized data from January 1 to March 30, 2022, provided with consent by three private Minecraft servers (no other data was gathered from the servers beyond the anonymized chat and report logs used in this study). While this analysis is not representative of how all Minecraft spaces function, it is a crucial step toward understanding how important online gaming spaces operate, the forms that hate takes in these spaces, and whether content moderation can mitigate that hate.

New York: Anti-Defamation League Center for Technology and Society, 2022. 20p.

Extreme Digital Speech: Contexts, Responses and Solutions

Edited by Bharath Ganesh and Jonathan Bright

Extreme digital speech (EDS) is an emerging challenge that requires co-ordination between governments, civil society and the private sector. In this report, a range of experts on countering extremism consider the challenges that EDS presents to these stakeholders, the impact that EDS has and the responses taken by these actors to counter it. By focusing on EDS, our consideration of the topic is limited to the forms of extreme speech that take place online, often on social media platforms and multimedia messaging applications such as WhatsApp and Telegram. Furthermore, by focusing on EDS rather than explicitly violent forms of extreme speech online, we (as Matti Pohjonen writes in this report) ‘depart’ from a focus on violence and incorporate a broader range of issues such as hateful and dehumanising speech and the complex cultures and politics that have formed around EDS. This focus brings into view a much broader range of factors that help assemble extremist networks online.  

This perspective is necessary, given the role that hate speech plays in extreme right-wing networks and the complexity of Daesh propaganda, which uses videos to synthesise utopic images of life in the so-called ‘Khilafa’. Following JM Berger’s recent book, Extremism (2018), we can think of EDS as a core practice that produces an archive of online extremist resources that reinforce the sense of in-group belonging across a network of geographically dispersed users, whether this be the networks of jihadists connected on Telegram or the right-wing extremists who use trolling tactics to hack mainstream opinion on Twitter. All the same, while it is well known that EDS is prolific online, there is little understanding of what impact participation in these networks actually has on the likelihood of an individual’s engagement in political violence. Moreover, very little is known about what methods are being used to challenge EDS and which solutions are best suited to this problem. This report seeks to provide policymakers, researchers and practitioners with an overview of the context of EDS, its impact, and the responses and solutions being mobilised to counter it. To do so, it assembles a set of ten brief essays intended to serve as a starting point for further exploration of a range of topics related to the management of EDS across government, civil society and the private sector.

Dublin: VOX-Pol Network of Excellence, Dublin City University, 2019. 123p.

Reconciling Impact and Ethics: An Ethnography of Research in Violent Online Political Extremism

By Dounia Mahlouly

Gathering empirical evidence from interviews and focus groups, this study highlights some of the ethical dilemmas faced by the academic community tasked with developing new methodological tools and conceptual frameworks for the study of violent online political extremism. At the same time, it examines how academics position themselves in relation to a broad range of non-academic stakeholders involved in the public debate about where violent extremism, terrorism and the Internet intersect. It argues that these external actors are introducing a multisectoral ‘market’ for research on online violent extremism, which creates both opportunities and limitations for the academic community. Finally, it analyses how academics from across a range of disciplines will be able to secure access to data and competitive research tools, while also engaging in a critical reflection about the ethical considerations at stake.

Dublin: VOX-Pol Network of Excellence, Dublin City University, 2019. 35p.