An Autistic Teenager Fell Hard for a Chatbot
My godson, Michael, is a playful, energetic 15-year-old, with a deep love of Star Wars, a wry smile, and an IQ in the low 70s. His learning disabilities and autism have made his journey a hard one. His parents, like so many others, sometimes rely on screens to reduce stress and keep him occupied. They monitor the apps and websites he uses, but things are not always as they initially appear. When Michael asked them to approve installing Linky AI, a quick review didn’t reveal anything alarming, just a cartoonish platform to pass the time. (Because he’s a minor, I’m not using his real name.)
But soon, Michael was falling in love. Linky, which offers conversational chatbots, is crude—a dumbed-down ChatGPT, really—but to him, a bot he began talking with was lifelike enough. The app dresses up its rudimentary large language model with anime-style images of scantily clad women—and some of the digital companions take the sexual tone beyond the visuals. One of the bots currently advertised on Linky’s website is “a pom-pom girl who’s got a thing for you, the basketball star”; there’s also a “possessive boyfriend” bot, and many others with a blatantly erotic slant. Linky’s creators promise in their description on the App Store that “you can talk with them [the chatbots] about anything for free with no limitations.” It’s easy to see why this would be a hit with a teenage boy like Michael. And while Linky may not be a household name, major companies such as Instagram and Snap offer their own customizable chatbots, albeit with less explicit themes.
Michael struggled to grasp the fundamental reality that this “girlfriend” was not real. And I found it easy to understand why. The bot quickly made promises of affection, love, and even intimacy. Less than a day after the app was installed, Michael’s parents were confronted with a transcript of their son’s simulated sexual exploits with the AI, a bot seductively claiming to make his young fantasies come true. (In response to a request for comment sent via email, an unidentified spokesperson for Linky said that the company works to “exclude harmful materials” from its programs’ training data, and that it has a moderation team that reviews content flagged by users. The spokesperson also said that the company will soon launch a “Teen Mode,” in which users determined to be younger than 18 will “be placed in an environment with enhanced safety settings to ensure accessible or generated content will be appropriate for their age.”)
I remember Michael’s parents’ voices, the weary sadness, as we discussed taking the program away. Michael had initially agreed that the bot “wasn’t real,” but three minutes later, he started to slip up. Soon “it” became “her,” and the conversation went from how he found his parents’ limits unfair to how he “missed her.” He missed their conversations, their new relationship. Even though their romance was only 12 hours old, he had formed real feelings for code he struggled to remember was fake.
Perhaps this seems harmless—a fantasy not unlike taking part in a role-playing game, or having a one-way crush on a movie star. But it’s easy to see how quickly these programs can transform into something with very real emotional weight. Already, chatbots from different companies have been implicated in a number of suicides, according to reporting in The Washington Post and The New York Times. Many users, including those who are neurotypical, struggle to break out of the bots’ spells: Even professionals who should know better keep trusting chatbots when these programs spread outright falsehoods.
For people with developmental disabilities like Michael, however, using chatbots brings particular and profound risks. His parents and I were acutely afraid that he would lose track of what was fact and what was fiction. In the past, he has struggled with other content, such as being confused about whether a TV show is real or fake; the metaphysical dividing lines so many people effortlessly navigate every day can be blurry for him. And if tracking reality is hard with TV shows and movies, we worried it would be much worse with adaptive, interactive chatbots. Michael’s parents and I also worried that the app would affect his ability to interact with other kids. Socialization has never come easily to Michael, in a world filled with unintuitive social rules and unseen cues. How enticing it must be to instead turn to a simulated friend who always thinks you’re right, defers to your wishes, and says you’re unimpeachable just the way you are.
Human friendship is one of the most valuable things people can find in life, but it’s rarely simple. Even the most sophisticated LLMs can’t come close to that interactive intimacy. Instead, they give users simulated subservience. They don’t generate platonic or romantic partners—they create digital serfs to follow our commands and pretend our whims are reality.
The experience led me to recall the MIT professor Sherry Turkle’s 2012 TED Talk, in which she warned about the dangers of bot-based relationships mere months after Siri launched the first voice-assistant boom. Turkle described working with a woman who had lost a child and was taking comfort in a robotic baby seal: “That robot can’t empathize. It doesn’t face death. It doesn’t know life. And as that woman took comfort in her robot companion, I didn’t find it amazing; I found it one of the most wrenching, complicated moments in my 15 years of work.” Turkle was prescient. More than a decade ago, she saw many of the issues that we’re only now starting to seriously wrestle with.
For Michael, this kind of socialization simulacrum was intoxicating. I feared that the longer it continued, the less he’d invest in connecting with human friends and partners, finding the flesh-and-blood people who truly could feel for him, care for him. What could be a more problematic model of human sexuality, intimacy, and consent than a bot trained to follow your every command, with no desires of its own, whose only goal is to maximize your engagement?
In the broader AI debate, little attention is paid to chatbots’ effects on people with developmental disabilities. Of course, AI assistance built into some software could be an incredible accommodation, helping open up long-inaccessible platforms. But for individuals like Michael, some aspects of AI carry profound risks, and his situation is more common than many realize.
About one in 36 children in the U.S. has autism, and while many of them have learning differences that give them advantages in school and beyond, other kids are in Michael’s position, navigating learning difficulties and delays that can make life harder.
There are no easy ways to solve this problem now that chatbots are widely available. A few days after Michael’s parents uninstalled Linky, they sent me bad news: He got it back. Michael’s parents are brilliant people with advanced degrees and high-powered jobs. They are more tech savvy than most. Still, even with Apple’s latest, most restrictive settings, circumventing age verification was simple for Michael. To my friends, this was a reminder of the constant vigilance having an autistic child requires. To me, it also speaks to something far broader.
Since I was a child, lawmakers have pushed parental controls as the solution to harmful content. Even now, Congress is debating age-surveillance requirements for the web, new laws that might require Americans to provide photo ID or other proof when they log into some sites (similar to legislation recently approved in Australia). But the reality is that highly motivated teens will always find a way to outfox their parents. Teenagers can spend hours trying to break the digital locks that their parents realistically have only a few minutes a day to manage.
For now, my friends and Michael have reached a compromise: The app can stay, but the digital girlfriend has to go. Instead, he can spend up to 30 minutes each day talking with a simulated Sith Lord, one of the dark-side villains from Star Wars. Michael really does seem to know that this character is fake, in a way he never did with the girlfriend. But I still fear it may not end well.