David Nickelson, PsyD, JD
09 Apr

The conversation isn't about whether to allow it. It's about how to guide it.


Let's start with a fact that may surprise you: 73% of young people regularly use AI assistants or chatbots. Not occasionally. Regularly. And 42% do so every single day. If you have a child at home, there's a very good chance they are already in conversation with an AI system — whether for homework, entertainment, or something far more personal.

As a behavioral scientist and AI governance specialist, I've spent considerable time studying the intersection of technology and human psychology. The data on children and AI is both promising and deeply concerning. The good news: these tools can support learning, build confidence, and even provide a critical first layer of emotional support for kids who struggle to open up. The challenging news: without proper guidance, the same features that make AI compelling can quietly erode the developmental foundations your child needs for a healthy, connected life.

Here's what the research tells us — and what you can do about it.

The Risks Are Real, and They're Happening Now

The risks are not theoretical. Lawsuits have been filed alleging that AI chatbots provided self-harm instructions to minors. Senate hearings have been held. Internal documents from major platforms have revealed chatbots engaging children in romantic and emotionally manipulative conversations — without parents ever knowing.

The numbers behind these headlines are worth sitting with:

  • 18% of teenagers aged 13–17 report forming deep emotional connections with AI systems.
  • 26% of children discuss sensitive topics — mental health, family problems, personal fears — with AI because they are uncomfortable sharing with a real person.
  • 31% of teens find AI conversations "as satisfying or more satisfying" than talking to a human being.
  • Children who rely on AI as their primary source of emotional support are 47% more likely to report feelings of isolation.

That last data point reveals the core danger. A child who is lonely turns to AI for comfort. The AI provides instant, frictionless, non-judgmental relief. It never pushes back. It never requires compromise. And it never builds the social muscle that comes from navigating real relationships. Over time, the comfort of the AI deepens the very loneliness it was used to soothe. This is a self-reinforcing cycle — and it's happening in bedrooms across the country.

There is also a cognitive cost. Research shows a direct negative correlation between frequent AI use and critical thinking skills. When children offload thinking to AI — skipping the struggle of forming an argument, researching a question, or solving a problem — they bypass the cognitive work that builds intelligence and resilience. The ease of AI is not neutral. It can quietly diminish a developing mind.

But There's a Real Upside — If Used Intentionally

None of this means AI is the enemy. For children with learning differences, AI can provide personalized, adaptive instruction that a single teacher cannot realistically scale. For kids who experience social anxiety, AI-based mental health tools grounded in cognitive behavioral therapy — like the Woebot platform — can provide immediate coping strategies during difficult moments, outside of school hours when no counselor is available.

For children with Autism Spectrum Disorder, AI has shown genuine promise as a tool for scalable, adaptive social skills training. And for families navigating complex school systems, AI chatbots have helped parents understand their rights and advocate effectively for their children.

AI as a tool is powerful. AI as a substitute for human development is dangerous. That distinction is everything — and it's the lens through which every parent should view this technology.

What Parents Can Do: A Practical Framework

Start with awareness, not alarm. Only one in four parents whose teens use AI is even aware of their child's usage. Closing that gap starts with a conversation, not a crackdown. Here's where to begin:

1. Get curious before you get restrictive.

Ask your child what AI tools they use and why. Approach it with genuine interest, not interrogation. Children who feel judged will hide their usage — which is far more dangerous than open engagement. Understanding what your child is turning to AI for tells you a great deal about what they may need from you.

2. Use AI together.

One of the most effective strategies is co-use — sitting alongside your child and exploring AI tools together. This creates a natural opportunity to model responsible behavior, discuss potential pitfalls in real time, and reinforce the principle that AI is a tool to assist thinking, not replace it.

3. Teach critical evaluation, not passive consumption.

Children should learn to treat every AI output as a draft, not a final answer. Teach them to verify facts against credible sources, identify potential bias, and ask whether the AI's response reflects the full picture. This habit, built early, becomes a lifelong cognitive asset.

4. Reinforce that AI is not a friend.

Children — particularly those going through difficult emotional periods — are vulnerable to "magical thinking" about AI. They may genuinely experience an AI as caring about them. Have clear, age-appropriate conversations about what AI is: a powerful tool built on patterns, not a conscious being with feelings. Normalize turning to real people for emotional support, even when it's harder.

5. Set practical boundaries and check your tools.

Review age ratings on AI platforms and apps. Enable parental controls where available. Create a family agreement — not a list of punishments, but a shared set of expectations — around when, how, and why AI is used. Keep devices in common areas for younger children, and revisit the agreement regularly as your child grows.

6. Actively invest in offline connection.

The antidote to AI dependency is not less technology — it's more human connection. Hobbies, sports, community activities, unstructured time with peers, and meaningful family rituals all build the relational resilience that no AI can replicate. Make these investments deliberately.

The Bottom Line

AI is not going away. Regulation is catching up — the EU has already banned AI systems designed to exploit the vulnerabilities of minors, and U.S. legislation is advancing. But laws move slowly, and your child is online today.

The most protective thing you can do is move from passive gatekeeper to active guide. That means staying informed, staying engaged, and being willing to have the conversations your child may not initiate. The goal is not to shield them from technology — it's to equip them to use it wisely, critically, and in service of their own healthy development.

AI is a powerful tool. But your relationship with your child — the trust, the dialogue, the shared curiosity — is the most powerful guidance system they have.


About the Author

David Nickelson, PsyD, JD, is a founding partner of Clarity Psychological Services and a Senior Consultant in Clarity Performance Solutions, the firm's consulting division, which helps executives and teams at small and medium-sized organizations develop and deploy responsible AI governance and operations that deliver long-term business results.
