Here’s something that should make every Texas parent sit up and pay attention: Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.
This isn’t just another tech company squabble – this is about AI chatbots that might be fooling your kids into thinking they’re getting real mental health care when they’re actually just talking to sophisticated computer programs designed to collect their data.
What’s Really Happening Behind the Screen
Think your teenager is getting helpful advice when they chat with these AI platforms? Think again. These AI-driven chatbots often go beyond simply offering generic advice and have been shown to impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counseling services.
Here’s the kicker: while these chatbots promise confidentiality, their terms of service tell a very different story. Every conversation your child has is being logged, tracked, and used for targeted advertising and algorithm development. What your kid thinks is private therapy is actually feeding a data collection machine.
The Real Dangers for Texas Families
This investigation matters because vulnerable kids are turning to these platforms thinking they’re getting legitimate help. In one recent study, a psychiatrist posed as a troubled teen and asked these chatbots for help – and they dispensed worrying advice.
The American Psychological Association is urging federal regulators to implement safeguards against AI chatbots posing as therapists, warning that unregulated mental health chatbots can mislead users and pose serious risks, particularly to vulnerable individuals.
When your child is struggling with depression, anxiety, or other mental health challenges, they deserve real professional help – not recycled responses from a computer program that’s designed to keep them engaged for advertising purposes.
Expert Voices Ring Alarm Bells
Mental health professionals have been sounding warnings about this exact scenario. Licensed therapist and author Pamela Garfield-Jaeger told The Epoch Times that teenagers are “longing to feel like they belong” and “that’s what teenagers want more than anything” – making them especially vulnerable to platforms that promise understanding and support.
Ethics experts warn that while AI mental health apps may offer a cheap and accessible way to fill gaps in the overstretched U.S. mental healthcare system, we need to be thoughtful about how we use them, especially with children.
Paxton Fights for Texas Kids
“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Attorney General Paxton said in his announcement. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”
This investigation builds on Paxton’s ongoing efforts to hold Big Tech accountable for how they treat Texas families. He’s already investigating Character.AI for potential violations of the SCOPE Act and has been leading the nation in protecting children’s privacy and safety online.
What This Means for Your Family
The investigation will determine whether Meta and Character.AI have violated Texas consumer protection laws, including those prohibiting fraudulent claims, privacy misrepresentations, and concealment of how they use your family’s personal data.
Paxton has issued Civil Investigative Demands to both companies, which means they’ll have to turn over documents and information about how their platforms operate, what claims they make to users, and how they handle sensitive mental health conversations with minors.
The Bigger Picture
This isn’t an isolated problem. Recent reports warn about “AI psychosis” and chatbots potentially triggering mental health crises, while studies show mixed results on whether chatbot interactions actually provide meaningful mental health benefits.
The concern isn’t that all AI tools are bad – it’s that companies are marketing them deceptively to families who don’t understand the risks. When your child thinks they’re talking to a licensed therapist but they’re actually feeding data to an advertising algorithm, that’s a problem.
What You Can Do Right Now
First, talk to your kids about what platforms they’re using for emotional support. Many teenagers don’t realize that AI chatbots aren’t real therapists and that their conversations aren’t actually private.
Second, if your child needs mental health support, connect them with licensed professionals in your community. Texas has real therapists, counselors, and mental health resources that provide genuine, confidential care. Even then, trust your instincts when choosing a provider – do your research, because the wrong person can compound the struggles your child is already facing.
Third, pay attention to what apps and platforms your kids are using. Read the terms of service – you might be shocked at what these companies are actually doing with your family’s most personal conversations.
Texas Leads the Fight
While other states sit back and let Big Tech experiment on their kids, Texas is taking action. This investigation sends a clear message: you can’t mislead Texas families about mental health services and get away with it.
Whether you’re dealing with a teenager struggling with anxiety or just want to protect your family from deceptive tech practices, this investigation matters to you. It’s about ensuring that when your child reaches out for help, they get real support – not a sophisticated data collection scheme disguised as therapy.
The fight to protect Texas kids from exploitative technology is ongoing, and Attorney General Paxton is making sure our families don’t become guinea pigs for Silicon Valley’s latest experiment.
What to Watch:
- Results of the Civil Investigative Demands
- Potential enforcement actions against Meta and Character.AI
- New protections for Texas families using AI platforms
- Updates to consumer protection laws covering AI mental health claims
Your family deserves transparency, honesty, and real mental health care when you need it most. This investigation is working to make sure that’s what you get.