More Transparency and Less Spin: Analyzing Meta’s Sweeping Policy Changes and Their Impact on Users
By The Center for Countering Digital Hate
Meta announced six key policy changes on January 7th:

1. Halting “proactive” enforcement of some policies on harmful content
2. Demoting less content “that might violate our standards”
3. Dropping policies on “immigration, gender identity and gender”
4. Replacing independent fact-checking with Community Notes
5. Demoting less content about “elections, politics or social issues”
6. Moving trust and safety teams from California to Texas

Meta intends for these policy changes to be “expanded beyond the US”

• Meta’s Chief Global Affairs Officer, Joel Kaplan, has said that changes to fact-checking and enforcement will be “expanded beyond the US” in time.
• Kaplan also said that changes to Meta’s hate speech policies announced on January 7th “have been implemented worldwide immediately.”

1) Meta will halt “proactive” enforcement of some policies on harmful content

• Meta will halt proactive enforcement (including automatic detection) for some policies on harmful content, instead acting only in response to user reports.
• Meta’s announcement explicitly states that proactive enforcement will continue for terrorism, child sexual exploitation, drugs, fraud and scams.
• Meta has not stated whether proactive enforcement will continue for these policy areas used in Meta’s transparency reports, which we call “at risk” policy areas:
  o Bullying & Harassment
  o Dangerous Orgs: Organized Hate
  o Hate Speech
  o Suicide and Self-Injury
  o Violence and Incitement
  o Violence & Graphic Content
• Meta previously credited its “proactive detection technology” as a key factor in reducing the prevalence of hate speech and harmful content on its platforms.

Meta could halt 97% of its enforcement in key policy areas such as hate speech

• We analyzed Meta’s transparency reports to examine the potential impact of Meta halting proactive enforcement in policy areas such as hate speech.
• Last year, over 97% of Meta’s enforcement actions in “at risk” policy areas were “proactive”, with less than 3% made in response to user reports.
• Even accounting for Meta’s claims about mistakes in proactive enforcement, Meta correctly acted on 277 million pieces of content in “at risk” policy areas.

Meta must tell users which policies it will no longer proactively enforce, and how it will keep them safe if it stops acting on millions of pieces of harmful content.
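The “over 97% proactive” figure above is derived by comparing proactive enforcement actions against user-reported ones across the “at risk” policy areas in Meta’s transparency reports. A minimal sketch of that calculation, using hypothetical placeholder counts (not Meta’s actual figures), might look like:

```python
# Illustrative proactive-share calculation. All counts below are
# hypothetical placeholders, NOT figures from Meta's transparency reports.
actions = {
    "Bullying & Harassment":   {"proactive": 50_000_000, "user_reported": 2_000_000},
    "Hate Speech":             {"proactive": 40_000_000, "user_reported": 1_000_000},
    "Violence and Incitement": {"proactive": 30_000_000, "user_reported": 500_000},
}

# Sum proactive and user-reported actions across all "at risk" policy areas.
total_proactive = sum(v["proactive"] for v in actions.values())
total_reported = sum(v["user_reported"] for v in actions.values())

# Proactive share = proactive actions as a fraction of all enforcement actions.
proactive_share = total_proactive / (total_proactive + total_reported)
print(f"Proactive share of enforcement actions: {proactive_share:.1%}")
```

With these placeholder numbers the proactive share comes out above 97%, mirroring the pattern the report describes: if proactive enforcement stops, only the small user-reported remainder of actions would continue.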
Washington, DC; London: Center for Countering Digital Hate, 2025. 31p.