
From Bad To Worse: Amplification and Auto-Generation of Hate

By The Anti-Defamation League, Center for Technology and Society

The question of who is accountable for the proliferation of antisemitism, hate, and extremism online has been hotly debated for years. Are our digital feeds really a reflection of society, or do social media platforms and tech companies actually amplify virulent content themselves? The companies argue that users are primarily responsible for the corrosive content soaring to the top of news feeds and reverberating between platforms. This argument serves to absolve these multi-billion-dollar companies of responsibility for any role their own products play in exacerbating hate.

A new pair of studies from ADL and TTP (Tech Transparency Project) shows how some of the biggest social media platforms and search engines at times directly contribute to the proliferation of online antisemitism, hate, and extremism through their own tools and, in some cases, by creating content themselves. While many variables contribute to online hate, including individual users’ own behavior, our research demonstrates how these companies are taking things from bad to worse.

For these studies, we created male, female, and teen personas (without a specified gender) who searched for a basket of terms related to conspiracy theories as well as popular internet personalities, commentators, and video games across four of the biggest social media platforms, to test how these companies’ recommendation algorithms would respond. In the first study, three of the four platforms recommended even more extreme, contemptuously antisemitic, and hateful content. One platform, YouTube, did not take the bait. It responded to the personas’ searches but resisted recommending antisemitic and extremist content, proving that this is not just a problem of scale or capability.

In our second study, we tested search functions at three companies, all of which made finding hateful content and groups a frictionless experience by autocompleting terms and, in some cases, even auto-generating content to fill hate-related data voids. Notably, the companies did not autocomplete terms or auto-generate content for other forms of offensive content, such as pornography, demonstrating, again, that this is not just a problem of scale or capability.

What these investigations ultimately revealed is that tech companies’ hands aren’t tied. Companies have a choice in what to prioritize, including when it comes to tuning algorithms and refining design features to either exacerbate or help curb antisemitism and extremism.

As debates rage between legislators, regulators, and judges on AI, platform transparency, and intermediary liability, these investigations underscore the urgency for both platforms and governments to do more. Based on our findings, here are three recommendations for industry and government:

Tech companies need to fix the product features that currently amplify antisemitism and auto-generate hate and extremism. Tech companies should tune their algorithms and recommendation engines to ensure they are not leading users down paths riddled with hate and antisemitism. They should also improve predictive autocomplete features and stop auto-generation of hate and antisemitism altogether.

Congress must update Section 230 of the Communications Decency Act to fit the reality of today’s internet. Section 230 was enacted before social media and search platforms as we know them existed, yet it continues to be interpreted as providing those platforms with near-blanket legal immunity for online content, even when their own tools exacerbate hate, harassment, and extremism. We believe that updating Section 230 to better define what type of online activity should remain covered and what type of platform behavior should not would help ensure that social media platforms more proactively address how recommendation engines and surveillance advertising practices exacerbate hate and extremism, fueling online harms and potential offline violence. With the advent of social media, the use of algorithms, and the surge of artificial intelligence, tech companies are more than merely static hosting services. When there is a legitimate claim that a tech company played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court.

We need more transparency. Users deserve to know how platform recommendation engines work. This does not have to mean revealing trade secrets, but tech companies should be transparent with users about what they are seeing and why. The government also has a role to play. We have seen some success on this front in California, where transparency legislation was passed in 2022. Still, there is more to do. Congress must pass federal transparency legislation so that stakeholders (the public, researchers, and civil society) have access to the information necessary to truly evaluate how tech companies’ own tools, design practices, and business decisions impact society.

Hate is on the rise. Antisemitism both online and offline is becoming normalized. A politically charged U.S. presidential election is already underway. This is a pressure cooker we cannot afford to ignore, and tech companies need to accept accountability for their role in the ecosystem.

Whether you work in government or industry, are a concerned digital citizen, or a tech advocate, we hope you find this pair of reports to be informative. There is no single fix to the scourge of online hate and antisemitism, but we can and must do more to create a safer and less hate-filled internet.

New York: ADL, 2023. 18p.