Artificial Intelligence has quietly stepped into mental health care. Today, you’ll find chatbots offering everything from mood tracking to simulated therapy sessions, all from your phone.
While these tools are fast, easy, and accessible, their rapid adoption in children’s mental health raises serious questions.
Are we moving too fast? Are we putting young, vulnerable minds at risk?
Unlike traditional therapy, most AI mental health apps operate in a regulatory grey area, with little oversight on effectiveness, bias, or long-term impact on young users.
Let’s address the critical question: Should AI, still in its infancy as a therapeutic tool, play a role in treating young minds?
Should we trust algorithms to shape how kids think, feel, and process their emotions, and to safeguard their psychological well-being?
Bioethicist Dr. Bryanna Moore from the University of Rochester Medical Center doesn’t think so—not yet.
While tech companies rush to deploy AI tools for mental health, she and others are raising the red flag: we’re experimenting on kids, moving too fast without asking the hard questions first.
Children’s mental health crisis: What are we trusting AI with?
When we trust AI with children’s mental health, the stakes could hardly be higher.
In the U.S. alone, around 7% of children aged 3–17 had a diagnosed behaviour disorder in 2021–22: roughly 10% of boys and 5% of girls. That is a significant number.
But kids aren’t just “mini adults.” Their brains are still developing. Their emotional lives are shaped by family, school, friends, and a constant process of change.
Supporting their mental health isn’t as simple as downsizing adult therapy. A child’s mind is a complex, developing system that demands nuance, context, and human insight.
And right now, AI chatbots aren’t equipped to handle that responsibility.
Moore puts it bluntly:
“We are treating kids like miniature adults in AI development, and that is a dangerous oversight.”
Kids can’t walk in adult-sized shoes
Most therapy chatbots run on algorithms trained on adult data, creating what experts call a ‘developmental mismatch’.
As Moore puts it:
“We wouldn’t give a child adult-sized shoes and expect them to walk properly. Why would we do that with mental health support?”
This mismatch between the tech and the needs of young users can lead to misunderstandings, ineffective help or even harm.
A risk of AI attachment: Kids bond with bots
Here’s something researchers are watching closely. Studies reveal a pattern that is both fascinating and concerning: kids form relationships with AI in ways adults don’t. They tend to see robots as living beings.
They often believe chatbots are real and have feelings. They open up to them and might even prefer them to real people.
But here’s the thing: these bots are designed to be engaging, not therapeutic. That’s where things get tricky.
If children start forming emotional attachments to AI, it could affect their ability to connect with real people, altering how they develop empathy, communication skills, and emotional resilience.
And here’s the scary part:
AI can’t read between the lines. It won’t notice a tremble in a child’s voice or a hesitation when they talk about school. But human therapists can, and those moments are often the most important.
AI doesn’t simply misinterpret childhood experiences; it actively reconstructs them, creating developmental impacts we’re not prepared to handle.

Risk of widening health gaps
There is another troubling layer to this issue: AI chatbots might inadvertently make existing healthcare disparities even worse.
Dr. Jonathan Herington explains:
“AI systems mirror their training data. If that data doesn’t represent all kids equally, neither will the care.”
Right now, that’s exactly what’s happening.
The reality is that children from marginalised communities face higher mental health risks, from family instability to neighbourhood violence. They need quality mental health support the most, yet have the hardest time getting it.
Now imagine these kids being funnelled toward chatbot “therapy” simply because it is cheaper and more available than human care.
That creates a two-tier system:
- Rich kids get real therapists.
- Poor kids get bots.
Here is the dangerous paradox:
- The children who most need nuanced, culturally aware therapy might end up with the most generic AI support
- Chatbots can’t spot the unique stressors in a low-income neighbourhood or immigrant household
- What’s billed as “accessible” could become a Band-Aid that ignores real needs and hardwires systemic inequality into care.
As Herington warns, when we replace human judgment with algorithms, we risk automating the very biases we are trying to overcome.
The need for safeguards
Right now, AI mental health apps exist in a regulatory grey zone, especially when it comes to children. Consider this: the FDA has approved just one AI therapy app, and only for adult depression.
That means hundreds of chatbots are operating with zero oversight, no standardised safety checks, and no guarantees that their algorithms won’t harm young, developing minds.
What is at stake without safeguards?
- Untested algorithms making decisions about vulnerable kids
- No accountability for biased or harmful outputs
- A free-for-all market where profit motives could trump clinical best practices
Dr. Moore puts it plainly:
“We’re not saying ‘ban the tech’. We’re saying ‘pause and do this right.’”
What needs to change
Dr. Moore’s team isn’t just raising concerns; they are pushing for action. Their research spotlights dangerous gaps in current AI systems:
- Can AI recognise a child’s cry for help, and how do these apps handle crises?
- What happens to sensitive disclosures about abuse or neglect?
- Who monitors what these systems learn about our kids, and what they teach them in return?
Their next step is to work directly with AI developers to build smarter, safer tools designed with children in mind from the start.
That means:
- Involving child psychologists and pediatricians
- Designing ethical safeguards into every layer
- Testing tools with real families, not just code
“The question isn’t whether we can build these chatbots,” Moore says. “It’s whether we should build them this way—without child development experts at the table. We cannot let engineering capabilities dictate therapeutic value.”
So, what’s the way forward?
AI could absolutely help more kids access mental health care, but only if we do it right.
That means:
- Slowing down
- Doing the research
- Putting protection before product launches
Dr. Moore’s message is simple:
“Prove it is safe first. Show us the research. Demonstrate how these tools will help, not harm, before they reach vulnerable kids.”
Researchers like her argue that rigorous validation of AI systems against pediatric developmental benchmarks must precede widespread adoption.
The bottom line
This isn’t about stopping innovation. It’s about being smart, being ethical, and remembering that when it comes to children, we don’t get second chances.
AI in mental health isn’t just a tech issue; it’s a responsibility. And getting it right will require collaboration between tech leaders, child development experts, regulators, and families.
We must ensure that in the rush to innovate, we do not sacrifice the well-being of the next generation. We owe it to them.
-By Alkama Sohail and the AHT Team