Open Access Publisher and Free Library

CRIME

CRIME-VIOLENT & NON-VIOLENT-FINANCIAL-CYBER

Posts tagged Social Media
What Role Does Social Media Play in Violence Affecting Young People? 

By Cassandra Popham, Ellie Taylor and William Teager

The Youth Endowment Fund surveyed over 10,000 teenage children (aged 13-17) in England and Wales about their experiences of violence. The findings are detailed across five reports, each focusing on a different aspect. In this second report, we examine teenage children’s experiences of violence on social media. We aim to understand its prevalence, the nature of the content children encounter and its impact on their lives. Here’s what we found.

Violence is widespread on social media. Exposure to real-life violence on social media has become the norm rather than the exception for teenage children. Our findings reveal that 70% of respondents have encountered some form of real-world violence online in the past 12 months. The most frequently observed content is footage of fights involving young people, with 56% of respondents reporting that they’ve seen such videos. Other common types of violence witnessed online include threats of physical harm (43%), content related to gang activity (33%) and weapons (35%). Notably, one in nine children who say they’ve encountered weapon-related content has seen footage involving zombie knives or machetes, a figure significantly higher than the 1% of 13-17-year-olds who report carrying such weapons, as highlighted in our first report. This suggests that social media may amplify fear by making certain behaviours appear more widespread than they are. Sexually violent content or threats have been reported by more than a quarter of teenage children (27%). For the second year in a row, TikTok is the platform where children are most likely to witness violent content.

While the majority of teenage children encounter violent content online, few actively seek it out. In fact, only 6% of those who’ve come across such content do so intentionally. Most are exposed to it inadvertently: half (50%) have seen it via someone else’s profile or feed, and just over a third (35%) have had it shared directly with them. Alarmingly, 25% of children report that the social media platforms themselves promote this violent content through features like ‘Newsfeed’, ‘Stories’ and the ‘For You Page’. This underscores the significant role social media companies play in amplifying exposure to violent content beyond what users might encounter by chance.

Seeing violence online has impacts that extend far beyond the screen. The vast majority (80%) of teenage children who encounter weapons-related content on social media say it makes them feel less safe in their local communities. This perceived threat has tangible consequences: two-thirds (68%) of teenagers who’ve seen weapons on social media say it makes them less likely to venture outside, and 39% admit that it makes them more likely to carry a weapon themselves. The influence of social media doesn’t stop there. Nearly two-thirds (64%) of teenagers who report perpetrating violence in the past year say that social media has played a role in their behaviour. Factors like online arguments and the escalation of existing conflicts are commonly cited as catalysts for real-world violence.

Children support limiting access to phones and social media. The widespread exposure to real-world violence online may partly explain why many teenagers believe that access to social media should come later than access to smartphones. Our findings highlight the responsibility of social media companies to remove or restrict harmful content.
They also point to the need for effective support and education to help children navigate these dangers while still benefiting from the positive aspects that social media can offer.  

London: Youth Endowment Fund, 2024. 28p.

Social Media: The Root Cause of Rising Youth Self‐Harm or a Convenient Scapegoat?

By Helen Christensen, Aimy Slade, Alexis Whitton

Recent events have reignited debate over whether social media is the root cause of increasing youth self‐harm and suicide. Social media is a fertile ground for disseminating harmful content, including graphic imagery and messages depicting gendered violence and religious intolerance. This proliferation of harmful content makes social media an unwelcoming space, especially for women, minority groups, and young people, who are more likely to be targeted by such content, strengthening the narrative that social media is at the crux of a youth mental health crisis. However, the parallel rise in social media use and youth mental health problems does not imply a causal relationship. Increased social media use may be a correlate, exacerbating factor, or a consequence of rising trends in youth self‐harm, which may have entirely separate causes. Despite its potential negative impacts, social media is also a source of information and support for young people experiencing mental health problems. Restricting young people's access to social media could impede pathways for help‐seeking. This complexity highlights the need for a considered approach.

Recommendations  

  • Understand why some individuals are more susceptible to social media harms.

  • Assess alternative explanations for youth self-harm trends.

  • Mitigate artificial intelligence (AI)-related risks.

  • Evaluate interventions that restrict social media and ensure they are evidence-based.

Medical Journal of Australia, Volume 221, Issue 10, November 2024, pages 524-526.

‘We Want You To Be A Proud Boy’: How Social Media Facilitates Political Intimidation and Violence

By Paul M. Barrett

The main finding of this report is that social science research reveals that social media platforms can be—and often are—exploited to facilitate political intimidation and violence. Certain features of social media platforms make them susceptible to such exploitation, and some of these features should be changed to reduce the danger.

Based on a review of more than 400 studies published in peer-reviewed journals and by think tanks, the report provides a platform-by-platform survey focusing on the particular features of each site that make it susceptible to exploitation by extremists promoting intimidation and violence and/or seeking recruits for their various causes. The report emphasizes that neither subjective observation nor social science research indicates that social media platforms are the sole or even primary cause of political intimidation and violence; other media and irresponsible political leaders play crucial roles. However, the use of social media can enable or facilitate violence in a fashion that deserves attention and mitigation. Most of this problem—extremism and the occasional use of force for political ends—occurs on the political right, but the left is not immune to these pathologies.

The platforms discussed in the following pages range from some of the best known, like Facebook and YouTube, to the more recently ascendant TikTok, to those on the right-wing fringe, such as Gab, Parler and 4chan. Among the features we examine are:

  • Facebook’s Groups, which helped the sometimes-violent QAnon to grow into a full-blown movement devoted to the delusion that former President Donald Trump has secretly battled “deep state” bureaucrats and Satanic pedophiles.

  • Instagram’s comments function, which has allowed the Iranian government to threaten dissidents with sexual assault and death as a way of silencing them.

  • TikTok’s powerful recommendation algorithm, which in one experiment promoted violent videos, including incitement of students to launch attacks at school.

After a case study of January 6 by our collaborators at Tech Policy Press, the report concludes with recommendations for industry and government.

New York: NYU Stern Center for Business and Human Rights, Leonard N. Stern School of Business, 2024. 32p.

Social Media and Digital Politics: Networked Reason in an Age of Digital Emotion

By James Jaehoon Lee and Jeffrey Layne Blevins

Informed by critical theory, this book employs Social Network Analysis (SNA) to examine the ever-increasing impact that social media has on politics and contemporary civic discourse. In just the past decade, social media platforms have been at the forefront of political discord that played out in the January 6th insurrection, the expulsion of a US President from major social media platforms, the attempted regulation of social media in various states, and the takeover of Twitter (now “X”) by one of the richest and (arguably) most financially influential persons in the world. This book examines these phenomena through a comprehensive and in-depth exploration of their meaning and implications for democratic society. Using SNA, James Jaehoon Lee and Jeffrey Layne Blevins examine several types of social and political commentary on one of the most influential social media networks and argue that the use of emotional appeals in these posts about social and political topics degrades the quality of civic discourse and encourages the abandonment of reasoning in democratic self-governance. It is a timely and vital text for upper-level students and scholars in a variety of disciplines, from media and communication studies, journalism and digital humanities to social network analysis, political science and sociology.

New York; London: Routledge, 2023. 161p.

Social Media Bots: Laws, Regulations, and Platform Policies

By Kasey Stricklin and Megan K McBride

Social media bots—simply, automated programs on social media platforms—affect US national security, public discourse, and democracy. As the country continues to grapple with both foreign and domestic disinformation, the laws and platform policies governing the use of social media bots are critically important. As part of CNA’s study, Social Media Bots: Implications for Special Operations Forces, our literature review found that the landscape of such regulations is difficult to piece together, and applicable provisions and policies are disparately catalogued. This CNA primer fills that gap by helping policy-makers and national security practitioners understand the laws and social media platform policies as they currently exist. We also consider the challenges and dilemmas faced by legislators and social media platforms as they attempt to craft relevant provisions to address social media bots and malign influence, and we conclude with a brief look at the consequences for breaking platform policies.

The Legal Framework: US policy-makers are constrained in their passage of bot-related laws by a number of factors. First, legislators must consider the free speech rights guaranteed by the First Amendment to the Constitution. Additionally, Section 230 of the Communications Decency Act (CDA 230) hinders the ability of policy-makers to hold social media platforms legally responsible for material posted on their sites. Further, the slow speed of congressional action compared to technological advancement and the barriers to obtaining reliable information on the social media bot threat have proved difficult to overcome. There are no US federal laws governing social media automation, although members of Congress have introduced several relevant pieces of legislation over the last few years. While there is some congressional interest in crafting bot-related legislation, the political will to pass such provisions has yet to materialize.

In the international arena, the European Union has been a leader in efforts to counter disinformation; it introduced a nonbinding Code of Practice in October 2018, to which many of the most prominent social media companies signed on. As a result, the platforms committed themselves to self-regulation aimed at stamping out disinformation on their sites, including closing fake accounts and labeling bot communications. In May 2020, the European Commission reported that, although there had been positive developments toward countering disinformation, there was still much room for improvement in labeling and removing bots. It is important to keep in mind, though, that the EU has a permanent bureaucracy to study problems and propose both legally binding and non-binding measures. Legislation works differently in the US, where a legislative champion with significant clout needs to emerge in order to push a proposal forward.

Platform Policies: The social media companies face their own dilemmas when thinking about the creation of effective bot regulations. Unlike policy-makers, platforms are beholden to shareholders, and higher platform engagement generally leads to higher share values. Because bots make up a large portion of monthly active users on some platforms, the companies may be reluctant to kick these automated accounts off. However, public pressure since the 2016 US election has created a greater financial incentive to ensure engagement is authentic. The companies also worry that regulating too extensively would amount to admitting they have an affirmative duty to moderate, which could lead to the revocation of their limited immunities under CDA 230. This tension is evident in the run-up to the US presidential election: as the social media companies seek to ensure the truthfulness of candidates on their sites, they also risk one side of the political spectrum regarding them as politically biased and seeking to regulate them in response.

Instead of focusing specifically on bot activity, the platforms tend to address bot behavior through broader policies on banned behavior. We broke out the policies relevant to bots into four categories: automation, fake accounts and misrepresentation, spam, and artificial amplification. Figure 1 of the report depicts the way these policies often overlap in detailing prohibited bot behaviors.

The consequences for breaking platform policies vary, with the sites often looking at the specific violation, the severity of the infraction, and the user’s history on the platform. While they may simply hand out a warning or restrict the post’s viewership, the sites also reserve the right to ban users or accounts, and can even go so far as to sue for violation of their terms.

The ever-evolving threats from disinformation and malicious bots will likely continue to cause consternation in the US government. However, experts are skeptical that Congress will find a legislative solution in the near future, despite enhanced attention to the problem. Therefore, the social media platforms are likely to shoulder much of the burden going forward, and it is an open question how and to what extent the platforms should police themselves. As they grapple with the prevalence of automated accounts operating on their sites, the platforms’ policies and enforcement provisions will continue to evolve to meet the threats of the day. However, it may ultimately be the attention of the press and American public, or the initiative of a regulatory agency like the Federal Trade Commission, that provides the needed impetus for change on these issues.

Arlington, VA: CNA, 2020. 40p.