Icons for the YouTube Kids and YouTube apps are displayed on a smartphone. (Jenny Kane/AP)

Recent reporting has drawn attention to an alarming new trend: video content aimed at young kids that is generated by artificial intelligence and is popping up on YouTube at a shocking rate. These videos feature garbled text, made-up words, disfigured people and animals, nonsensical songs and, sometimes, downright scary imagery. This is AI slop for kids, and it’s dangerous. And technology companies’ proposed solution isn’t good enough. 

According to a New York Times report, up to 40% of videos recommended to children on YouTube now appear to be AI-generated. The video titles, descriptions and opening sequences often give the illusion that the content is educational and beneficial for toddlers and preschoolers. It’s anything but. The content in these videos isn’t just mindless — in many cases, it’s actively harmful. Experts and reporters at the investigative news magazine Mother Jones have found videos showing toddlers swallowing whole grapes (a choking hazard), infants eating honey (which carries a risk of botulism), and children riding unrestrained in the front seat of a moving car. One video all about vowels shows consonants on screen, while another about the 50 states teaches children about “Ribio Island,” “Conmecticut” and “Louggisslia.”

Our children are being fed toddler misinformation. And it’s being produced at an industrial scale. The risk here is not “brain rot,” the atrophy of cognitive skills that is afflicting adults and adolescents who outsource an increasing amount of their mental exercise to AI. In young children, whose brains are still being built, the effect is much worse. I call it “brain stunt.” Because every experience a child has during their early years helps create new neural connections, wiring the brain for all future learning and connecting, encounters with AI slop may literally wire the brain incorrectly. 

This is an enormous problem that demands a bold, urgent solution. 

Perhaps unsurprisingly, that is not what YouTube is offering. After a recent investigation uncovered several of the videos I describe above, YouTube terminated six channels for violating its terms of service. This amounts to a Whac-a-Mole response to a firehose problem. 

More than 200 organizations and individual experts (including me) signed a letter to the CEOs of YouTube and its parent company, Google, expressing concern about AI slop. In response, a YouTube spokesperson issued a statement explaining that the platform requires content creators to disclose when AI was used to create realistic-looking content, and that it gives parents the option to block channels.

The implicit message: Parents should manage this themselves.

Unfortunately, evidence does not support parental controls as a sufficient or effective means of keeping kids safe online. To begin with, less than half of parents report using these tools at all. A meta-analysis of dozens of studies found that the effects of parental controls were mixed, with evidence of beneficial, null and even adverse effects on children and families.

Most troubling to me is the fact that this kind of “opt-in” safety model doesn’t protect children equally. It protects children whose parents have the time, digital literacy and awareness to navigate platform settings — which is not most parents. All too often, these differences fall along socioeconomic lines, meaning the children who already face the steepest disadvantages are least protected.

Research has found that parents with lower incomes tend to perceive fewer digital risks and, as a result, underuse active mediation such as parental controls — relying instead on surveillance and nonintrusive inspection. Furthermore, economically advantaged families have been found to address digital media concerns by having open conversations about values and media use, while economically disadvantaged families focus more on potential hazards in their physical surroundings. As a result, the risks of the digital environment fall disproportionately on children who can least afford them. 

We would never accept a food safety system that required parents to individually test every product for toxins before feeding it to their child. Instead, we regulate the food supply. We would never accept a policy that ensured car safety for wealthy children but not their low-income peers. Instead, we require car seats and seatbelts. 

We don’t outsource public health to individual families. And make no mistake: The potential developmental harms of AI slop are a public health concern. 

Rather than leaving it up to parents to navigate this risk (or not) on their own, we need universal, platform-level solutions. Those include removal of all AI-generated content from YouTube Kids and from all algorithms feeding recommendations to kids, and mandatory labeling of all AI content, with rigorous enforcement protocols — not as opt-in features but as the default. 

Rather than leaving it up to technology companies to enact these policies (or not), Congress should demand them. There is a short window of time to act before the harm is hardwired into a generation of children. We don’t ask parents to build their own car seats, and we shouldn’t ask them to build their own content filters either.

Every child’s brain deserves protection — not just the ones whose parents know their way around an app’s settings.  

Dr. Dana Suskind is a pediatric cochlear implant surgeon at the University of Chicago, where she runs The TMW Center for Early Learning + Public Health, a research institute that works to support parents and caregivers in fostering healthy brain development.
