YouTube has updated its rulebook for the era of deepfakes. Starting today, anyone uploading video to the platform must disclose certain uses of synthetic media, including generative AI, so viewers know what they’re seeing isn’t real. YouTube says the policy applies to “realistic” altered media such as “making it appear as if a real building caught fire” or swapping “the face of one individual with another’s.”
The new policy shows YouTube taking steps that could help curb the spread of AI-generated misinformation as the US presidential election approaches. It is also striking for what it permits: AI-generated animations aimed at kids are not subject to the new synthetic content disclosure rules.
YouTube’s new policies exclude animated content altogether from the disclosure requirement. This means that the emerging scene of get-rich-quick, AI-generated content hustlers can keep churning out videos aimed at children without having to disclose their methods. Parents concerned about the quality of hastily made nursery-rhyme videos will be left to identify AI-generated cartoons by themselves.
YouTube’s new policy also says creators don’t need to flag use of AI for “minor” edits that are “primarily aesthetic,” such as beauty filters or cleaning up video and audio. Use of AI to “generate or improve” a script or captions is also permitted without disclosure.
There’s no shortage of low-quality content on YouTube made without AI, but generative AI tools lower the bar to producing video and accelerate its production. YouTube’s parent company Google recently said it was tweaking its search algorithms to demote the recent flood of AI-generated clickbait made possible by tools such as ChatGPT. Video generation technology is less mature but is improving fast.
Established Problem
YouTube is a children’s entertainment juggernaut, dwarfing competitors like Netflix and Disney. The platform has struggled in the past to moderate the vast quantity of content aimed at kids. It has come under fire for hosting content that looks superficially suitable or alluring to children but on closer viewing contains unsavory themes.
WIRED recently reported on the rise of YouTube channels targeting children that appear to use AI video-generation tools to produce shoddy videos featuring generic 3D animations and off-kilter iterations of popular nursery rhymes.
The exemption for animation in YouTube’s new policy could mean that parents cannot easily filter such videos out of search results or keep YouTube’s recommendation algorithm from autoplaying AI-generated cartoons after setting up their child to watch popular and thoroughly vetted channels like PBS Kids or Ms. Rachel.
Some problematic AI-generated content aimed at kids does require flagging under the new rules. In 2023, the BBC investigated a wave of videos targeting older children that used AI tools to push pseudoscience and conspiracy theories, including climate change denialism. These videos imitated conventional live-action educational videos (showing, for example, the real pyramids of Giza), so unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid videos went on to suggest that the structures can generate electricity.) The new policy would crack down on that type of video.
“We require kids content creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic,” says YouTube spokesperson Elena Hernandez. “We don’t require disclosure of content that is clearly unrealistic and isn’t misleading the viewer into thinking it’s real.”
The dedicated kids app YouTube Kids is curated using a combination of automated filters, human review, and user feedback to find well-made children’s content. But many parents simply use the main YouTube app to cue up content for their kids, relying on eyeballing video titles, listings, and thumbnail images to judge what is suitable.
So far, most of the apparently AI-generated children’s content WIRED found on YouTube has been poorly made in ways similar to more conventional low-effort kids’ animations. The videos have ugly visuals, incoherent plots, and zero educational value, but they are not uniquely ugly, incoherent, or pedagogically worthless.
AI tools make it easier to produce such content, and in greater volume. Some of the channels WIRED found upload lengthy videos, some well over an hour long. Requiring labels on AI-generated kids content could help parents filter out cartoons that may have been published with minimal human vetting, or none at all.