As a single mum in London, I’ve caught myself wondering: if my tween won’t talk to me about her anxieties, would she talk to a friendly robot?
In a world where there aren’t enough therapists to go around, the idea of an AI “friend” who listens 24/7 is both intriguing and unsettling.
AI mental health chatbots – apps that simulate therapeutic conversations – are being hailed as a possible lifeline for families struggling to find support. With child mental health services overbooked and waiting lists stretching for months, it's no surprise desperate parents might eye these apps as a stopgap.
As a mum, I can certainly see the appeal. When my 11-year-old son had a panic attack last year, I would have welcomed any tool to help soothe him at 2am while we waited for our referral to CAMHS (Child and Adolescent Mental Health Services). AI advocates argue these chatbots could act as a bridge – a way to support kids in the interim, or alongside traditional therapy, especially for those who might otherwise slip through the cracks.
Accessibility and personalisation are the buzzwords: an app that’s always there, that perhaps even adapts to your child’s needs over time, sounds like a compassionate use of technology.
Even some experts see a role for these digital helpers. The team behind the Cambridge study, for example, suggest that robots or chatbots could be a useful addition to mental health support – a supplement to catch subtle issues early – though not a replacement for professional care.
Used wisely, AI chatbots might function like "training wheels," helping kids practise talking about feelings in a low-stakes way. They could also guide them through evidence-based exercises (many apps use cognitive-behavioural therapy techniques) to manage stress or negative thoughts. A teenager in one recent report felt that texting her feelings to a chatbot was easier than speaking to an adult and helped her cope with pandemic loneliness – up to a point.
The promise, then, is of an empathetic robot friend who’s always available when your child needs to vent or seek comfort. For parents who have watched their kids suffer in silence or languish on waitlists, that promise is hard to ignore.
As someone who writes about careers and psychology, I’m all for innovative solutions to human problems. But as a parent, I also have to ask: at what cost does this convenience come?
In the rush to deploy AI helpers for kids, experts warn we may be overlooking a key fact: children are not just small adults. “Children are particularly vulnerable. Their social, emotional, and cognitive development is just at a different stage than adults,” explains Bryanna Moore, a bioethicist who has studied this trend. In other words, what works for a grown-up seeking therapy on an app may not be safe or effective for a child.
Experts have raised a number of worries about what these apps could mean for children's development, and reading through them, I feel my parental protectiveness kicking in. I remember when my son's goldfish died; it was heartbreaking, but also a learning moment about loss. If, instead, his source of comfort were an AI programmed to never die and never leave, would that stunt his ability to deal with real life? It's an open question.
Experts like Moore stress that we must consider how children’s minds work and grow before we throw technology at their problems. Childhood is when we form our understanding of trust, empathy, and communication. Do we really want part of that shaped by lines of code?
Beyond the personal psychological risks, there’s a broader ethical landscape to navigate. The rise of AI therapists for kids is outpacing our ability to put guardrails in place. Unlike medicines or even toys, most mental health apps aren’t subject to strict regulation or quality control. That Wild West environment raises several red flags:
First of all, safety and efficacy are unproven. These chatbots are not magic pills with years of trials behind them. In fact, most AI therapy apps are completely unregulated – essentially wellness products rather than clinically approved treatments. In the U.S., the FDA has cleared only one AI-based mental health app (and it was for adult depression). For children’s use, there’s no dedicated oversight ensuring the advice given is safe or age-appropriate. As a result, there’s no guarantee an app won’t inadvertently make things worse or fail to help when a real crisis hits. It’s all very new and experimental.
Then there’s the bias in the machine to consider. “AI is only as good as the data it’s trained on,” notes Jonathan Herington, a co-author with Moore on a Journal of Pediatrics commentary.
If these chatbots learn from adult conversations or a narrow set of users, they might not understand a child from a different background. A shy 8-year-old in London may use language or express sadness in ways a system trained on, say, American teenagers wouldn’t catch.
Moreover, if the training data doesn't include diverse cultures or family situations, the chatbot's responses could reflect subtle biases. For example, it might not recognise slang a working-class British kid uses, or it might assume certain family structures. This "one size fits all" issue means some children could be poorly served or even alienated by the bot's advice. Herington emphasises that without deliberate efforts to build representative datasets, these AI tools "won't be able to serve everyone".
And of course, there are the privacy and data concerns. When kids spill their feelings to a chatbot, where does that data go? Sensitive information about mental health is arguably as private as it gets. Many of these apps likely collect chat transcripts or mood logs. Without strict regulations, there’s nothing to stop that data from being used for who-knows-what – targeted ads? Research?
It’s unsettling to imagine my child’s confessions sitting on a server, potentially vulnerable to leaks or misuse. And unlike a human therapist who is legally bound by confidentiality, an app’s privacy policy (often written in legalese no child would understand) is the only safeguard.
This raises questions about consent: Can a 10-year-old truly consent to how an AI uses their data? Do parents even realise what they’re agreeing to when they download the “free” chatbot?
As a digital communications professional, I’ve always championed tech innovation – but I’ve also seen how tech can outpace our ethical frameworks.
With AI chatbots for kids, it feels like we've opened Pandora's box before fully understanding what's inside. Bryanna Moore and her colleagues, including bioethicist Serife Tekin, have called for exactly this kind of reflection. They aren't anti-technology Luddites; in fact, Moore explicitly says she's not advocating to nix therapy bots altogether. Instead, these experts urge that we "be thoughtful in how we use them, particularly when it comes to children". That means involving paediatric therapists and child psychologists in design, testing rigorously, and developing child-specific regulations and guidelines.
Should an AI mental health app be certified like a medical device? Should there be an age limit or parental supervision required? These are the kinds of questions we need to answer now, before the technology becomes too widespread to reel in.
For the moment, it feels like we have more questions than answers. As Moore noted, “there are so many open questions that have not been answered or clearly articulated” about children’s AI therapy. We’re in uncharted territory, and moving forward without caution could mean exposing kids to unanticipated harms.
The ethical onus is on developers, policymakers and, yes, parents to proceed carefully. After all, our children's well-being is at stake, and that's one area where society can't afford to just wing it and hope for the best.
At the end of the day, I find myself torn. The tech optimist in me sees the real advantages that AI chatbots offer to young people in pain. I think of a teenager alone with intrusive thoughts at 3am, who might just find comfort texting with a bot when no human is available. I think of my own kids in the future, navigating stresses I can’t always fix – would I rather they talk to something than nothing at all? Probably, yes.
In an ideal world, every child would have immediate access to a qualified, caring human therapist. In reality, that’s far from true. So if an AI can lend an ear and maybe even save a life by encouraging a lonely child to hold on until morning, that matters.
Yet the mother in me remains deeply wary. I know how nuanced and individual each child is. Can a mass-produced chatbot ever truly understand those quirks and needs? I also think about the intangible healing power of human connection – the gentle reassurance of a real person saying “I hear you” and a hand to hold. Can a robot replicate the warmth in a therapist’s smile or the creative spontaneity of a counselling session that goes off-script because that’s what the child needs? So far, I’m not convinced.
Perhaps the answer lies in a middle path. Maybe AI chatbots could serve as a scaffold, giving support when human help isn’t available, but then gracefully stepping aside when a flesh-and-blood therapist can take over. Or maybe they’ll remain simple tools – like fancy mood journals – rather than full-on “robot therapists” for kids.
The ethical imperative is that we, as parents and a society, set the boundaries. Tech companies shouldn't be the ones deciding how much emotional care to delegate to machines. Paediatric experts and ethicists like Moore, Herington, and Tekin are already raising the right flags, but their voices need to be part of mainstream parenting conversations too.
As AI companions inch further into our kids’ lives, it’s on all of us – parents, professionals, and policymakers – to keep the discussion going. We owe it to our kids to ask the hard questions now, so that whatever role robot therapists may play in the future, it’s one that truly benefits the next generation. After all, the goal isn’t just to ease our children’s anxieties today, but to help them grow into healthy, resilient adults tomorrow.
The post The ethics of robot therapists: Should AI chatbots be talking to our kids? appeared first on DMNews.