Open Access Publisher and Free Library

SOCIAL SCIENCES


Posts tagged online extremism
The Gamification of (Violent) Extremism: An exploration of emerging trends, future threat scenarios and potential P/CVE solutions

By Suraj Lakhani, Jessica White and Claudia Wallner

The intersection between (violent) extremism and video-gaming – spanning jihadist, far-right, and other ideologies – is long-standing, though under-researched. Particularly scant attention has been paid to the concept of ‘gamification’, i.e. the application of gaming and game-design principles within non-gaming environments (Lakhani and Wiedlitzka, 2022). The primary objective of this paper is to provide an understanding of how (violent) extremism can be (and has been) gamified, what emerging trends and future scenarios might be, and the potential influence (or lack thereof) that gamification has within (violent) extremism. On this basis, the paper outlines relevant considerations for preventing and countering (violent) extremism (P/CVE) and offers policy (and broader) recommendations on how to account for the element of gamification and on potential actions to prevent and counter the phenomenon. Drawing on existing literature and open-source materials – including academic articles, research reports, policy documents, newspaper articles, investigative journalism, government inquiries and previous relevant Radicalisation Awareness Network (RAN) Policy Support (PS) deliverables – the paper investigates the following key questions: what is the gamification of (violent) extremism, what current and future threats does it present to the European Union (EU), and how can it be countered? To address these questions, the paper is organised into the following sections.

Section 1 (‘Conceptualisation of gamification’) provides a conceptual overview of gamification, including a working definition, as a foundation for the remainder of the paper. It also outlines the concept’s origins, examines how these can be applied to the context of (violent) extremism, and contextualises the phenomenon within the overall threat of (violent) extremism across EU Member States (MS).

Section 2 (‘Current and future threats’) discusses the potential ways in which (violent) extremism can be gamified, predominantly by outlining a range of current examples. These examples are by no means exhaustive, but they provide a sufficient overview of the types of gamification approaches taken in this context by both (violent) extremist organisations and individuals. The section concludes by considering emerging trends and conceivable future scenarios in this field.

Section 3 (‘Addressing gamification with P/CVE’) outlines how gamified (violent) extremism can be addressed in P/CVE programming and whether it requires specifically tailored responses. It also informs discussions on whether current responses are fit for purpose and how these approaches may need to be tailored, or to evolve, in order to deal more effectively with the threat posed by the gamification of (violent) extremism.

Section 4 (‘Policy and recommendations’) considers current policy relating to the gamification of (violent) extremism across EU MS, followed by a number of relevant recommendations for policymakers stemming from existing research and literature. This includes recommendations for P/CVE based on promising approaches. The section also discusses the current state of work in this area of study and makes relevant research-related recommendations.

Finally, a ‘Conclusions’ section discusses the potential value and limitations of gamification as a concept in relation to (violent) extremism. This is underpinned by the consideration of whether gamification is purposeful or reflects actions undertaken by those familiar with a particular subculture, i.e. gamers.

Luxembourg: Publications Office of the European Union, 2022. 25p.

Connecting, Competing, and Trolling: “User Types” in Digital Gamified Radicalization Processes

by Linda Schlegel

The concept of gamification is increasingly applied as a framework to understand extremist online subcultures and communications. Although a number of studies have been conducted, the theoretical and empirical basis to understand the role of gamification in extremist contexts remains weak. This article seeks to contribute to the development of a gamification of radicalization theory by exploring how Marczewski’s HEXAD, a user typology for gamified applications, may facilitate our understanding of individual variations in engagement with gamified extremist content. Five user types, named after their core motivational drivers for engagement, are discussed: Socializers, Competitors, Achievers, Meaning Seekers, and Disruptors. This typology may support future studies by providing a preliminary understanding of how different game elements may appeal to different users and increase their engagement with and susceptibility to extremist content in cyberspace.
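To make the typology concrete, here is a minimal sketch that encodes the five user types as a small data structure. The names and core drivers follow the abstract; the example game elements attached to each type are illustrative assumptions of ours, not a list taken from Schlegel or Marczewski.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserType:
    name: str
    core_driver: str         # motivational driver named in the abstract
    example_elements: tuple  # illustrative pairings only (our assumption)

# The five user types discussed in the article, named after their core drivers.
HEXAD_TYPES = [
    UserType("Socializer", "connecting with others", ("group chats", "guilds", "follower counts")),
    UserType("Competitor", "winning against others", ("leaderboards", "rankings", "duels")),
    UserType("Achiever", "mastery and accomplishment", ("badges", "levels", "challenges")),
    UserType("Meaning Seeker", "purpose and a larger cause", ("narrative framing", "epic quests")),
    UserType("Disruptor", "provoking and testing boundaries", ("trolling", "rule-breaking rewards")),
]

def types_drawn_to(element: str) -> list[str]:
    """Return the user types whose illustrative elements include the given one."""
    return [t.name for t in HEXAD_TYPES if element in t.example_elements]

print(types_drawn_to("leaderboards"))  # ['Competitor']
```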

Perspectives on Terrorism, Vol. 15, No. 4 (August 2021), pp. 54-64.

How anti-feminist and anti-gender ideologies contribute to violent extremism – and what we can do about it. Policy Brief

By The Violence Prevention Network and the Centre for Feminist Foreign Policy

Anti-feminist and anti-gender ideologies – and their basis in hostility and hatred towards women and LGBTQI* people – have long been an overlooked factor in analysing radicalisation and violent extremism. Both ideologies strongly appeal to groups organised around exclusionary principles because they provide language and a framework for the defence of hierarchical structures in society (Denkovski et al., 2021, 18). This trend is increasingly manifesting itself across a spectrum of violence. Despite the striking prevalence of anti-feminist and anti-gender attitudes within extremist worldviews, these motives have been considered at best secondary when analysing extremist attacks and groups (Wolf 2021). Yet, for extremist actors, they constitute a core element of their ideologies, a relevant area of recruitment within and outside extremist scenes, and an opportunity for strategic alliances.

Throughout right-wing attacks of the past decade, such as those in Christchurch, Hanau, and Halle, a clear pattern of anti-feminist and misogynistic beliefs can be detected. Within such attacks, the ideological basis for mass public violence is formed by adherence to multiple, overlapping exclusionary attitudes. For instance, one conspiracy theory popular among right-wing actors is that of the “Great Replacement”. According to this idea, feminism was invented by Jewish elites to lower birth rates and advance mass migration, with the goal of replacing white European populations with non-European, non-white people, specifically Muslims (Fedders 2018). The Christchurch attacker had uploaded an online “manifesto” titled “The Great Replacement” before the attack on two mosques that killed 51 people – illustrating how anti-feminism is often intricately interwoven with racist and anti-Semitic thinking. The issue of overlapping ideological codes, elements, and groups is becoming increasingly important as the right-wing landscape of radicalisation and violence grows more complex.

However, misogyny and anti-feminism are also integral to violent attacks outside of right-wing scenes. Several terrorist attacks by members of the incel community, such as those in the Californian city of Isla Vista in 2014 and the 2018 Toronto and 2019 Tallahassee attacks, have led to increased awareness of the incel threat and the beginning of its consideration as a security threat in Western countries (see, for instance, Moonshot 2021). While embedded in a much broader online misogynist scene, misogynist incel ideologies promote particularly extreme misogyny, anti-feminism, and sexism. Misogynist incels see women as depriving them of their natural entitlement to sex. The use of dehumanising and aggressive language – and, in parts, open calls to violence – provides the framework in which attacks such as those mentioned above occur. The most well-known incel attacker, for instance, called upon incels just weeks before the Isla Vista attack to “realise their true strength and numbers”, “overthrow this oppressive feminist system”, and “start envisioning a world where WOMEN FEAR YOU” (Glasstetter 2014). These attacks were broadly referenced and discussed within incel and misogynist scenes and the extreme right more specifically. In Halle, the right-wing extremist who killed two people and tried to enter a local synagogue was listening to music that makes explicit references, in name and content, to the 2018 incel attack in Toronto.

Berlin: Violence Prevention Network, 2021. 15p.

Bad Gateway: How Deplatforming Affects Extremist Websites

By Megan Squire

Deplatforming websites—removing infrastructure services they need to operate, such as website hosting—can reduce the spread and reach of extremism and hate online, but when does deplatforming succeed? This report shows that deplatforming can decrease the popularity of extremist websites, especially when done without warning. We present four case studies of English-language, U.S.-based extremist websites that were deplatformed: the Daily Stormer, 8chan/8kun, TheDonald.win/Patriots.win, and Nicholas Fuentes/America First. In all of these cases, the infrastructure service providers considered deplatforming only after highly publicized or violent events, indicating that at the infrastructure level, the bar to deplatforming is high. All of the site administrators in these four cases also elected to take measures to remain online after they were deplatformed. To understand how deplatforming affected these sites, we collected and analyzed publicly available data that measures website-popularity rankings over time.

We learned four important lessons about how deplatforming affects extremist websites:

  • It can cause popularity rankings to decrease immediately.

  • It may take users a long time to return to the website. Sometimes, the website never regains its previous popularity.

  • When deplatforming is unexpected, the website takes longer to regain its previous popularity levels.

  • Replicating deplatformed services such as discussion forums or live-streaming video products on a stand-alone website presents significant challenges, including higher costs and smaller audiences.

Our findings show that fighting extremism online requires not only better content moderation and more transparency from social media companies, but also cooperation from infrastructure providers like Cloudflare, GoDaddy, and Google, which have avoided attention and critique.
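As a rough illustration of the kind of popularity-rank analysis the report describes, the sketch below compares a site's median ranking before and after a deplatforming event and estimates time to recovery. The CSV file, its columns, and the event date are hypothetical placeholders, not the report's actual data or code.

```python
import pandas as pd

# Hypothetical input: one row per day, sorted by date, with a popularity
# rank where 1 = most popular site.
ranks = pd.read_csv("site_rank_history.csv", parse_dates=["date"])
event = pd.Timestamp("2019-08-05")  # hypothetical deplatforming date

before = ranks.loc[ranks["date"] < event, "rank"].median()
after = ranks.loc[ranks["date"] >= event, "rank"].median()

# Lower rank numbers mean higher popularity, so a positive change is a decline.
print(f"median rank before: {before:.0f}, after: {after:.0f}, change: {after - before:+.0f}")

# Days until the site first regains its pre-event median rank, if it ever does.
recovered = ranks[(ranks["date"] >= event) & (ranks["rank"] <= before)]
if recovered.empty:
    print("site never regained its previous popularity in the observed window")
else:
    print("days to recovery:", (recovered["date"].iloc[0] - event).days)
```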

New York: Anti-Defamation League, Center for Technology and Society, 2023. 37p.

From Bad to Worse: Algorithmic Amplification of Antisemitism and Extremism

By The Anti-Defamation League, Center for Technology and Society

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies actively amplify virulent content themselves? The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies of responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL (the Anti-Defamation League) and TTP (Tech Transparency Project) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While there are many variables contributing to online hate, including individual users’ own behavior, our research demonstrates how these companies are taking things from bad to worse. For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories, as well as popular internet personalities, commentators, and video games, across four of the biggest social media platforms, to test how these companies’ algorithms would respond.

In the first study, three of the four platforms recommended even more extreme, contemptuously antisemitic, and hateful content. One platform, YouTube, did not take the bait: it was responsive to the persona but resisted recommending antisemitic and extremist content, proving that this is not just a problem of scale or capability. In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience by autocompleting terms and, in some cases, even auto-generating content to fill hate-related data voids. Notably, the companies did not autocomplete terms or auto-generate content for other forms of offensive content, such as pornography, proving again that this is not just a problem of scale or capability.

What these investigations ultimately reveal is that tech companies’ hands are not tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism. As debates rage among legislators, regulators, and judges over AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more.
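The audit design described above can be pictured as a simple collection loop: each persona searches a basket of terms, and the recommendations served in response are logged for later coding. The sketch below is platform-agnostic and purely illustrative; the persona labels, term basket, and the get_recommendations stub are hypothetical stand-ins for the studies' actual instruments.

```python
import csv

PERSONAS = ["adult_male", "adult_female", "teen_unspecified"]
SEARCH_TERMS = ["conspiracy term A", "internet personality B", "video game C"]  # placeholder basket

def get_recommendations(persona: str, term: str) -> list[str]:
    """Stub collector: a real study would replace this with a platform-specific
    method, e.g. an instrumented browser session logged in as the persona."""
    return []  # placeholder so the sketch runs end to end

# Log every (persona, search term, recommendation) triple for later coding.
with open("audit_log.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["persona", "search_term", "recommendation"])
    for persona in PERSONAS:
        for term in SEARCH_TERMS:
            for item in get_recommendations(persona, term):
                writer.writerow([persona, term, item])
```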

New York: The Anti-Defamation League, Center for Technology and Society, 2023. 36p.

Online Hate and Harassment: The American Experience 2023

By The Anti-Defamation League, Center for Technology & Society  

Over the past year, online hate and harassment rose sharply for adults and for teens ages 13-17. Among adults, 52% reported being harassed online in their lifetime, the highest rate we have seen in four years, up from 40% in 2022. Reports of harassment within the past 12 months also rose, from 23% in 2022 to 33% in 2023 among adults, and from 36% to 51% among teens. Overall, reports of each type of hate and harassment increased by nearly every measure and within almost every demographic group. ADL conducts this nationally representative survey annually to find out how many American adults experience hate or harassment on social media; since 2022, we have surveyed teens ages 13-17 as well. The 2023 survey was conducted in March and April 2023 and covers the preceding 12 months. Online hate and harassment remain persistent and entrenched problems on social media platforms.

New York: ADL, 2023. 51p.

Feminist Theorisation of Cybersecurity to Identify and Tackle Online Extremism

By Elsa Bengtsson Meuller

From the document: "Online abuse and extremism disproportionately target marginalised populations, particularly people of colour, women and transgender and non‐binary people. The core argument of this report focuses on the intersecting failure of Preventing and Counter Violent Extremism (P/CVE) policies and cybersecurity policies to centre the experiences and needs of victims and survivors of online extremism and abuse. In failing to do so, technology companies and states also fail to combat extremism. The practice of online abuse is gendered and racialised in its design and works to assert dominance through male supremacist logic. Online abuse is often used by extremist groups such as the far right, jihadist groups and misogynist incels. Yet online abuse is not seen as a 'threat of value' in cybersecurity policies. Additionally, the discipline of terrorism studies has failed to engage with the intersection of racism and misogyny properly. Consequently, we fail to centre marginalised victims in our responses to extremism and abuse. Through the implementation of a feminist theorisation of cybersecurity to tackle extremism, this report proposes three core shifts in our responses to online extremism: (1) incorporate misogynist and racist online abuse into our conceptions of extremism; (2) shift the focus from responding to attacks and violence to addressing structural violence online; and (3) empower and centre victims and survivors of online abuse and extremism."

Global Network on Extremism and Technology (GNET), 2023. 32p.

Post-Digital Cultures of the Far Right: Online Actions and Offline Consequences in Europe and the US

Edited by Maik Fielitz and Nick Thurston

How have digital tools and networks transformed the far right's strategies and transnational prospects? This volume presents a unique critical survey of the online and offline tactics, symbols and platforms that are strategically remixed by contemporary far-right groups in Europe and the US. It features thirteen accessible essays by an international range of expert scholars, policy advisors and activists who offer informed answers to a number of urgent practical and theoretical questions: How and why has the internet emboldened extreme nationalisms? What counter-cultural approaches should civil societies develop in response?

Bielefeld, Germany:  transcript Verlag, 2019. 210p.

The European Far-right Online: An Exploratory Twitter Outlink Analysis of German & French Far-Right Online Ecosystems

By Stuart Macdonald, Kamil Yilmaz, Chamin Herath, J.M. Berger, Suraj Lakhani, Lella Nouri, & Maura Conway

Research focused on violent and non-violent activities and content in online spaces has yielded valuable insights into the evolution of extremist exploitation of social media and the internet. Over the past decade, much attention has been dedicated to understanding jihadist – particularly the so-called Islamic State's – use of popular social media platforms and encrypted messaging apps to spread propaganda and entice followers. In recent years, however, attention to far-right extremist exploitation of online spaces has been growing. As Conway, Scrivens, and Macnair have comprehensively documented, right-wing extremist (a subset of the broader far-right) online communities have a lengthy history, transitioning from dial-up bulletin board systems, to static websites and online forums, to social media platforms, messaging and other communication “apps.” While far-right online communities were and still are largely decentralized, these communities can be considered loosely interconnected, their online interdependence tracing back to the “hot-links” page on the original Stormfront internet forum, where outlinks to like-minded websites and forums were posted. Far-right, including right-wing extremist, online communities have since been described as an “ecosystem” consisting of various types of online spaces or “entities” (e.g., websites, social media platforms). Still, the actual extent to which these networks are interdependent or overlapping, as opposed to largely insulated groupings, platforms, and activities, remains to be fully interrogated. Doing so requires more localized research efforts focused on identifying the nature of content shared and the platforms used in far-right communities and ecosystems online, in order to more fully examine the interconnections between them.
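An outlink analysis of the kind the report undertakes can be approximated by extracting the external domains that a corpus of tweets links to and ranking them by frequency. The sketch below assumes a hypothetical tweets.jsonl file with a urls field of expanded link targets; it is not the authors' pipeline.

```python
import json
from collections import Counter
from urllib.parse import urlparse

domains = Counter()
with open("tweets.jsonl", encoding="utf-8") as f:
    for line in f:
        tweet = json.loads(line)
        for url in tweet.get("urls", []):  # assumed field: expanded outlink URLs
            host = urlparse(url).netloc.lower()
            if host.startswith("www."):
                host = host[4:]  # normalize so www.example.com == example.com
            if host:
                domains[host] += 1

# The most-linked external domains approximate the "entities" of the ecosystem.
for host, n in domains.most_common(20):
    print(f"{n:6d}  {host}")
```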

Resolve Network, 2022. 48p.


Hate, Extremism, and Terrorism in Alberta, Canada, and Beyond: The Shift from 2019 to 2022

By Michele St-Amant, David Jones, Michael King, & John McCoy

There have been significant changes in the three years since the Organization for the Prevention of Violence (OPV) published its first report about hate-motivated violence, extremism and terrorism in Alberta, Building Awareness, Seeking Solutions. The Covid-19 pandemic, protests against public health measures, tense elections in the United States, and the backlash to racial justice movements, among other events, have had broad social repercussions, some of which have changed the composition and scale of the threat of extremism and terrorism, which has become more diffuse and encompasses a broader set of grievances and ideologies.

This report is organized using the nomenclature developed by the Government of Canada to categorize different forms of extremist ideologies. As such, the findings related to ideologically motivated violent extremism and religiously motivated violent extremism are summarized first. Next, we summarize our findings about conspiracy theories and hate incidents, including crimes, within Alberta and across Canada.

Alberta: Organization for the Prevention of Violence, 2022. 115p.

The Online Extremist Ecosystem: Its Evolution and a Framework for Separating Extreme from Mainstream

By Heather J. Williams, Alexandra T. Evans, Jamie Ryan, Erik E. Mueller, Bryce Downing

In this Perspective, the authors introduce a framework for internet users to categorize the virtual platforms they use and to understand the likelihood that they may encounter extreme content online.

The authors first provide a landscape of the online extremist "ecosystem," describing how the proliferation of messaging forums, social media networks, and other virtual community platforms has coincided with an increase in extremist online activity. Next, they present a framework to describe and categorize the platforms that host varying amounts of extreme content as mainstream, fringe, or niche. Mainstream platforms are those for which only a small portion of the content would be considered inappropriate or extreme speech. Fringe platforms are those that host a mix of mainstream and extreme content—and where a user might readily come across extreme content that is coded or obscured to disguise its violent or racist underpinning. Niche platforms are those that openly and purposefully cater to an extreme audience.
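Expressed as code, the framework amounts to a simple decision rule over two properties of a platform. The numeric threshold below is an illustrative assumption; the authors describe the categories qualitatively and give no cut-offs.

```python
def categorize_platform(extreme_share: float, caters_to_extremists: bool) -> str:
    """Classify a platform by the estimated share of extreme content it hosts."""
    if caters_to_extremists:
        return "niche"        # openly and purposefully serves an extreme audience
    if extreme_share < 0.05:  # assumption: "only a small portion" read as under 5%
        return "mainstream"
    return "fringe"           # a mix of mainstream and extreme content

print(categorize_platform(0.01, False))  # mainstream
print(categorize_platform(0.20, False))  # fringe
print(categorize_platform(0.60, True))   # niche
```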

Santa Monica, CA: RAND Corporation, 2021. 44p.