November 23, 2024

Kids Are Getting the Full Blast of Generative AI

[Illustration of children climbing into a giant toy robot]

This spring, the Los Angeles Unified School District—the second-largest public school district in the United States—introduced students and parents to a new “educational friend” named Ed. A learning platform that includes a chatbot represented by a small illustration of a smiling sun, Ed is being tested in 100 schools within the district and is accessible at all hours through a website. It can answer questions about a child’s courses, grades, and attendance, and point users to optional activities.

As Superintendent Alberto M. Carvalho put it to me, “AI is here to stay. If you don’t master it, it will master you.” Carvalho says he wants to empower teachers and students to learn to use AI safely. Rather than “keep these assets permanently locked away,” the district has opted to “sensitize our students and the adults around them to the benefits, but also the challenges, the risks.” Ed is just one manifestation of that philosophy; the school district also has a mandatory Digital Citizenship in the Age of AI course for students ages 13 and up.

Ed is, according to three first graders I spoke with this week at Alta Loma Elementary School, very good. They especially like it when Ed awards them gold stars for completing exercises. But even as they use the program, they don’t quite understand it. When I asked them whether they knew what AI is, they demurred. One asked me if it was a supersmart robot.

Children are once again serving as beta testers for a new generation of digital tech, just as they did in the early days of social media. Different age groups will experience AI in different ways—the smallest children may hear bedtime stories generated via ChatGPT by their parents, while older teens may run into chatbots on the apps they use every day—but this is now the reality. A confusing, sometimes inspiring, and frequently problematic technology is here and rewiring online life.

Kids can encounter AI in plenty of places. Companies such as Google, Apple, and Meta are interweaving generative-AI models into products such as Google Search, iOS, and Instagram. Snapchat—an app that has been used by 60 percent of all American teens and comparatively few older adults—offers a chatbot called My AI, an iteration of ChatGPT that had purportedly been used by more than 150 million people as of last June. Chromebooks, the relatively inexpensive laptops used by tens of millions of K–12 students in schools nationwide, are getting AI upgrades. Get-rich-quick hustlers are already using AI to make and post synthetic videos for kids on YouTube, which they can then monetize.

Whatever AI is actually good for, kids will probably be the ones to figure it out. They will also be the ones to experience some of its worst effects. “It is kind of a social fact of nature that kids will be more experimental and drive a lot of the innovation” in how new tech is used culturally, Mizuko Ito, a longtime researcher of kids and technology at UC Irvine, told me. “It’s also a social fact of nature that grown-ups will kind of panic and judge and try to limit.”

That may be understandable. Parents and educators have worried about kids leaning on these tools for schoolwork: according to one poll, students who use ChatGPT are three times as likely to turn to it for schoolwork as they are to search engines like Google. If chatbots can write entire papers in seconds, what’s the point of a take-home essay? How will today’s kids learn how to write? Another worry is bad information via bot: AI chatbots can spit out biased or factually incorrect responses. Privacy is an issue too; these models need lots and lots of data to work, and children’s data have reportedly already been used without consent.

And AI enables new forms of adolescent cruelty. In March, five students were expelled from a Beverly Hills middle school after fake nude photos of their classmates made with generative AI began circulating. (Carvalho told me that L.A. has not seen “anything remotely close to that” incident within his district of more than 540,000 kids.) The New York Times has reported that students’ use of AI to create such images of their classmates has in fact become an “epidemic” in schools across the country. In April, top AI companies (including Google, Meta, and OpenAI) committed to new standards to prevent sexual harms against children, including responsibly sourcing their training material to avoid data that could contain child sexual abuse material. (The Atlantic has a corporate partnership with OpenAI. The editorial division of The Atlantic operates independently from the business division.)

Kids, of course, are not a monolith. Different ages will experience AI differently, and every child is unique. Participants in a recent survey from Common Sense that sought to capture perspectives on generative AI from “teens and young adults”—all of whom were ages 14 to 22—expressed mixed feelings: About 40 percent said they believe that AI will bring both good and bad into their lives in the next decade. The optimistic respondents believe that it will assist them with work, school, and community, as well as supercharge their creativity, while the pessimistic ones are worried about losing jobs to AI, copyright violations, misinformation, and—yes—the technology “taking over the world.”

But I’ve wondered especially about the youngest kids, who may encounter AI without any real concept of what it is. For them, the line between what media are real and what aren’t is already blurry. When it comes to smart speakers, for example, “really young kids might think, Oh, there’s a little person in that box talking to me,” Heather Kirkorian, the director of the Cognitive Development and Media Lab at the University of Wisconsin at Madison, told me. Even more humanlike AI could further blur the lines for them, says Ying Xu, an education professor at the University of Michigan—to the point where some might start talking to other humans the way they talk to Alexa: rudely and bossily (well, more rudely and bossily).

Older children and teens are able to think more concretely, but they may struggle to separate reality from deepfakes, Kirkorian pointed out. Even adults are struggling to spot AI-generated material. “It’s going to be even harder for kids to learn that,” Kirkorian explained, citing the need for more media and digital literacy. Teens in particular may be vulnerable to some of AI’s worst effects, given that they’re possibly among the biggest users of AI overall.

More than a decade on, adults are still trying to unravel what smartphones and social media did—and are doing—to young people. If anything, anxiety about their effect on childhood and mental health has only grown. The introduction of AI means today’s parents are dealing with multiple waves of tech backlash all at once. (They’re already worried about screen time, cyberbullying, and whatever else—and here comes ChatGPT.) With any new technology, experts generally advise that parents talk with their children about it and become trusted partners in their exploration of it. Kids themselves can also help adults figure out the path forward. “There’s a lot of work happening on AI governance. It’s really great. But where are the children?” Steven Vosloo, a UNICEF policy specialist who helped develop the organization’s AI guidelines, told me over a video call. Vosloo argued that kids deserve to be consulted as rules are made about AI. UNICEF has created its own list of nine requirements for “child-centered AI.”

Ito noted one thing that feels distinct from previous moments of technological anxiety: “There’s more anticipatory dread than what I’ve seen in earlier waves of technology.” Young people led the way with phones and social media, leaving adults stuck playing regulatory catch-up in the years that followed. “I think, with AI, it’s almost like the opposite,” she said. “Not much has happened. Everybody’s already panicked.”