Child advocacy groups and experts are raising concerns about the proliferation of low-quality, artificial intelligence (AI)-generated content on YouTube, particularly its impact on young viewers. This content, often referred to as “AI Slop,” is drawing criticism for its potential to harm children’s development.

Letter to YouTube Leadership

A letter addressed to YouTube CEO Neal Mohan and Google CEO Sundar Pichai, whose company owns YouTube, expresses “serious concern” regarding the spread of AI-generated videos on both YouTube and YouTube Kids. The letter, sent on Wednesday, was signed by more than 200 organizations and individual experts, including child psychiatrists and educators.

Impact on Child Development

According to the letter, “AI Slop content harms children’s development by distorting their sense of reality, saturating their learning processes, and hijacking their attention, thereby extending time online and displacing offline activities necessary for their healthy development.” The groups emphasize that these harms are particularly acute for young children.

What is 'AI Slop'?

“Slop” is used to describe content that is low-quality or essentially digital waste. In the context of AI, “AI Slop” refers to the mass generation of poor-quality content, similar to spam emails. The letter urges YouTube to clearly label all AI-generated content and prohibit it entirely on YouTube Kids.

Proposed Solutions

The advocates also propose preventing the recommendation of AI-generated videos to users under 18 and implementing a parental control option to disable AI-generated content, even if a child searches for it. The letter is supported by 135 organizations, including the American Federation of Teachers and the American Counseling Association, as well as approximately 100 individual experts like Jonathan Haidt, author of “The Anxious Generation.”

YouTube's Response and Current Policies

YouTube spokesperson Boot Bullwinkle stated that the platform maintains “high standards for content on YouTube Kids, including limiting AI-generated content…to a small set of high-quality channels.” They also offer parents the option to block channels and prioritize transparency by labeling content from their own AI tools and requiring creators to disclose realistic AI-generated content.

Currently, YouTube’s policy requires creators to disclose when “realistic” content is created using altered or synthetic media, including generative AI. However, disclosure is not required for AI-generated content that is clearly unrealistic, such as animations or videos with special effects.

Concerns Remain

Fairplay, a children’s advocacy group, argues that the voluntary disclosure policy and the limited definition of altered/synthetic content leave children exposed to a flood of unlabeled AI-generated videos. The group also points out that many young viewers cannot read or understand AI disclosures, leaving them vulnerable. Rachel Franz, Fairplay’s Director of the “Young Children Thrive Offline” program, stated that exposing young children to AI “slop” is further evidence of YouTube’s design to maximize children’s screen time.

Recent Investments and Legal Context

This campaign follows a $1 million investment by Google’s AI Futures Fund in Animaj, an AI animation studio creating videos for children. The effort also comes after a California jury found YouTube liable for designing its platform to addict young users without regard for their well-being, alongside a similar ruling against Meta. YouTube CEO Neal Mohan identified “managing AI-generated content” as a company priority for 2026, stating that the company is developing systems to combat spam, clickbait, and low-quality content.