Teens Are Choosing AI Over People – And It’s Troubling

According to Futurism, a new survey by UK youth charity OnSide found that 39% of English teens aged 11-18 have used AI chatbots for advice, support, or company. Breaking that down, 11% specifically seek mental health support from AI, 12% use chatbots for company, and 14% turn to them for friendship and social advice. Perhaps most concerning is that 19% of teens—nearly one in five—say talking to AI is “easier than talking to a real person.” The survey also revealed that 13% value chatbots for their perceived anonymity, while 6% say they don’t have anyone else to talk to and another 6% trust AI more than humans.

The human connection crisis

Here’s the thing that really worries me about these numbers. We’re not just talking about kids using AI for homework help or entertainment—we’re talking about genuine emotional and psychological needs being met by algorithms. When nearly 20% of teens find it easier to talk to a machine than a person, that signals something fundamental shifting in how the next generation forms relationships and processes emotions. And let’s be honest—these chatbots aren’t equipped for this level of emotional labor. A recent Stanford and Common Sense Media report found that leading chatbots are “fundamentally unsafe” for teens seeking mental health support and “cannot safely handle the full spectrum of mental health conditions.”

Why teens are turning to AI

The reasons teens give for preferring AI are both understandable and deeply troubling. Over half say chatbots are faster—which makes sense when you’re comparing 24/7 availability against human schedules. But dig into the other responses and you find more concerning patterns. Six percent say they don’t have anyone else to talk to. Another six percent trust AI more than humans. That’s not just a technology story—that’s a story about isolation and broken trust in human relationships. And the 13% who value anonymity? That’s particularly ironic given that AI companies routinely collect user inputs for training and personalization. These kids think they’re getting privacy when they’re actually feeding data into corporate systems.

The regulatory wild west

We’re in completely uncharted territory here. The full OnSide report describes this as a “regulatory Wild West,” and they’re absolutely right. Google and OpenAI are already facing child welfare lawsuits connected to minor users’ suicides. Meanwhile, the technology is advancing faster than our ability to understand its psychological impacts on developing minds. As OnSide CEO Jamie Masraff noted, “AI will play a growing role in school and the workplace, and young people must learn to navigate that—but not at the expense of rich, human connection and the development of social skills.” The question is: how do we ensure that balance?

What happens next

Basically, we’re at a crossroads. The genie isn’t just out of the bottle—it’s already become a trusted confidant for millions of young people. The solution isn’t to ban AI chatbots entirely (good luck with that), but to approach this with the urgency it deserves. We need better AI literacy education so teens understand what these systems can and can’t do. We need robust safety measures and age-appropriate interfaces. And we need to ask ourselves why so many young people are finding human connection so difficult that they’re turning to algorithms instead. This isn’t just a technology problem—it’s a human one that technology is exposing in ways we can no longer ignore.
