Meta's Content Moderation Rollback Fuels Spread of Hate Speech and Terrorism, ADL Warns

A new report by the Anti-Defamation League (ADL) raises serious concerns that Meta's recent relaxation of content moderation policies has led to a significant increase in hateful, extremist, and pro-terrorist content on its platforms, particularly Instagram. The organization highlights a systemic failure by Meta to address flagged material, one that potentially jeopardizes user safety and the platforms' long-term viability.

Meta's shift in content moderation began last year and accelerated with CEO Mark Zuckerberg's January 2025 announcement that the company would dismantle its fact-checking program and remove speech restrictions in an effort to 'restore free expression.' Zuckerberg argued that the existing system had been plagued by errors and had eroded user trust. A new report compiled by the ADL Center on Extremism in collaboration with JLens, an ADL affiliate focused on shareholder advocacy, suggests that Meta may have overcorrected, with potentially dangerous consequences.

Researchers uncovered a disturbing proliferation of content promoting hatred, extremism, and terrorism across Meta's platforms, with Instagram a particular focal point. The findings reveal a stark enforcement gap: according to the ADL's analysis, Instagram removed only 7% of the hateful and extremist content the organization flagged. That statistic, the ADL argues, points to a systemic failure by the social media giant to protect its users from harmful material.

Alex Friedfeld, director of research and analysis at the ADL Center on Extremism, elaborated on the report's findings in an interview with Fox News Digital, explaining the dual focus of the investigation.
The first prong identified the prevalence of harmful content; the second examined Meta's response to it. Friedfeld emphasized that while Meta cannot entirely prevent individuals from posting such material, especially when they are adept at creating new accounts, the company has both the capability and the responsibility to remove it once identified.

The ADL's attempts to report problematic content through Instagram's standard user reporting system yielded disheartening results. Of 150 accounts and 103 posts reported, only 11 accounts and eight posts were ultimately removed. In 20 instances, Instagram cited a lack of 'bandwidth' to review the submitted reports. That assertion drew sharp criticism from ADL CEO Jonathan Greenblatt, who questioned how a company of Meta's immense scale and technological sophistication, with billions in revenue and a vast user base, could claim insufficient resources to moderate content that directly harms its users. Greenblatt's statement underscored the perceived disconnect between Meta's enormous profitability and its commitment to user safety, highlighting the company's capacity for technological advancement alongside its apparent reluctance to invest in robust moderation.

The report serves as a stark warning that the rollback of content moderation risks transforming Instagram into a breeding ground for hate and antisemitism, with potentially severe real-world ramifications. Greenblatt reiterated these concerns, stating that social media platforms have become significant conduits for antisemitism, anti-Zionism, and all forms of hate, with Meta's products, especially Instagram, an increasing source of worry. He highlighted the paradox of Meta's massive daily engagement, averaging 3 billion users, set against the company's considerable reduction in moderation policies over the past year and a half.
The researchers also found that content from banned far-right commentator Nick Fuentes continued to circulate on Instagram despite his platform ban, amplified by his followers. Fuentes himself acknowledged the trend, suggesting in a May 2025 post on X that Instagram had eased its censorship and was allowing clips from his shows to be posted. Even after initial bans on clipping accounts, the ADL identified 105 Instagram accounts linked to the 'Groyper movement,' collectively amassing approximately 1.4 million followers by January 2026.

Beyond hate speech, the investigation uncovered accounts actively disseminating propaganda for designated Foreign Terrorist Organizations (FTOs) and Specially Designated Global Terrorists (SDGTs), in direct violation of Meta's Community Standards, which explicitly prohibit organizations or individuals that proclaim a violent mission or engage in violence. In one egregious example, researchers found an ISIS execution video on the platform, posted alongside an innocuous caption about a clock tower in Mecca, and identified at least 23 accounts spreading propaganda for groups such as the Islamic State and al Qaeda. A further 33 accounts were found to have direct or indirect ties to the Popular Front for the Liberation of Palestine, a U.S.-designated FTO implicated in the Oct