A Majority Of Teens Are Turning To "AI Companions" For Advice & Flirting (But Don't Panic Yet)
Common Sense Media examined how more than 1,000 teens used AI programs, and the findings are troubling, but it's not all bad news...

If you’re an Elder Millennial, then you probably remember ELIZA and SmarterChild, two of the OG chatbots. There’s also a good chance that you and a couple of friends gathered around a family computer at some point and started “talking” to one of them. But since those programs were primitive, conversations were bland and repetitive, so you got bored quickly and started trying to shock the bot with expletives. (The poor robot could only politely ask you to stop being rude in return.)
Nowadays, kids are still interested in toying around with “AI companions,” but what does that mean in a more advanced digital age? The nonprofit Common Sense Media, using survey data from NORC at the University of Chicago, recently examined how more than 1,050 teens were using programs like Chai, Character.AI, Nomi, Replika, and other AI chat apps.
“Adolescence is a critical time for developing identity, social skills, and independence in relationship building,” the study reads. “As AI companions become part of this stage of life, important questions arise about their impact on social development, emotional well-being, and digital literacy.”
The study, released last week, drew a distinction between AI programs used primarily for information (ChatGPT) or creating “art” (Claude), which can serve as AI companions but are mostly used for other purposes, and dedicated companion apps. An AI companion was defined as being “like digital friends or characters you can text or talk with whenever you want ... these companions are designed to have conversations that feel personal and meaningful.” The study also noted that while most AI companions are marketed to adults, some (like Character.AI) are open to children as young as 13.
First, the neutral news: the vast majority of kids (72%) have interacted with AI companions. A healthy 52% majority use them on a regular basis, defined as at least a few times a month, and about 21% interact with these programs a few times a week. Generally, teens do this for entertainment (30%) or out of curiosity about the technology (28%), though some uses are more serious, such as practicing social skills (39%) for real human conversations.
Next, the questionable news: a lot of kids are turning to AI companions for advice or as a substitute for human interaction. Some seek out these conversations over human ones, whether out of fear of judgment (14%) or because “it’s easier than talking to real people” (9%). In fact, about a third of respondents found talking to an AI companion at least as satisfying as talking to a person: 21% said it was equally satisfying, and 10% even thought it was better.
In a statement, Common Sense Media notes that part of the danger lies in the sycophantic nature of chatbots: “[They have] a tendency to agree with users and provide validation, rather than challenging their thinking.” That tendency, combined with a lack of safeguards (including meaningful age assurance), troubles experts, since teens are still developing critical thinking skills and emotional regulation. Additionally, a third of teens said something an AI companion “said” to them had made them uncomfortable.
But it’s not all bad news. The study offered some reassurance that the kids are all right. To begin with, teens overwhelmingly (80%) still prefer and value human interaction over digital. They also show healthy skepticism toward the conversations and information they get from AI companions: 50% said they do not trust these programs, and another 27% only “somewhat” trust them. Only a minority, 23%, described themselves as trusting AI, and that trust declined with age.
Ultimately, based on this research, Common Sense Media concluded that the technology presents “unacceptable risks” for users under 18. Its potential benefits, the organization argues, are outweighed by “responses ranging from sexual material and offensive stereotypes to dangerous advice that, if followed, could have life-threatening or deadly real-world impacts.”
“In one case,” the study notes, “an AI companion shared a recipe for napalm.”
Again, most kids aren’t using these programs for much beyond novelty entertainment... but the potential for harm persists, and parents should be aware.