OpenAI just launched ChatGPT for Teachers for free. New research shows every major AI chatbot fails teens on mental health support. These stories are connected: consumer AI companies are racing to get into education, but their tools are optimized for engagement, not student safety or K-12 goals.

IN THIS ISSUE:
  • OpenAI just launched free ChatGPT for Teachers. We break down what that actually means for your district.

  • New research shows every major AI chatbot fails teens on mental health. Here’s what your staff and families need to know.

THE CONSUMER AI FLOOD

86%

Students using AI this fall (up from 70% at this time last year) (CDT)

85%

Teachers using AI regularly in their work (up from 67% at this time last year) (CDT)

531

Days until OpenAI's ChatGPT for Teachers is no longer free (OpenAI)

THE RISK IS REAL

28%

Teachers in high-AI-use schools who report a large-scale data breach, compared to 18% in low-AI-use schools (CDT)

11%

Teachers who received training on how to respond if a student's AI use is harming their wellbeing (CDT)

THE VISIBILITY GAP

3 in 4

Teens who use AI for companionship, including emotional and mental health support (Common Sense Media)

163
vs. 12

Average words per message kids send to AI companions versus texts to friends (Aura)

38%

Students who say it's easier to talk to AI than to their parents (CDT)

31%

Students who have used school devices for personal AI conversations (CDT)

OpenAI announced a free version of ChatGPT designed specifically for teachers, available through June 2027. The tool includes support for FERPA compliance, admin controls, and access to the latest models. Sixteen districts across the country are early partners (OpenAI).

OpenAI is positioning this launch as a commitment to educators, but some critics aren’t impressed. “From a technological standpoint, this is the equivalent of a big digital nothingburger,” Benjamin Riley, founder and CEO of the think tank Cognitive Resonance and author of the white paper Education Hazards of Generative AI, told EdWeek. He added, “This just gives teachers access to regular ol’ ChatGPT, albeit with the promise of greater data security.” In other words, access any ChatGPT user already has. Some teachers have noted that the tool struggles with formatting and makes many mistakes that require time-consuming fixes (EdWeek).

OUR TWO CENTS

OpenAI offering a free tool with some privacy protections is better than nothing. And for districts that have no AI solution at all, it’s a starting point. But consumer AI doesn’t understand how K-12 actually works. K-12 isn’t one job. It’s dozens of highly specialized jobs, each with its own compliance requirements, workflows, and stakes. Writing IEPs, differentiating instruction where reading spans four grade levels, analyzing fund accounting and enrollment forecasting, optimizing bus routes: consumer AI wasn’t built for any of that.

Additionally, OpenAI isn’t doing this out of goodwill. The company is already burning cash at an alarming rate, and consumer AI subscriptions aren’t likely to fill that hole. It’s entirely possible OpenAI is doing the minimum required to get your teachers onto its platform before June 2027, when “free” disappears and the real pricing kicks in. This is a customer acquisition strategy dressed up as a gift to educators.

Remember: if you’re not paying for the product, then you are the product.

  • Don’t confuse free with strategic. If you pilot ChatGPT for Teachers, treat it as a learning opportunity, not a long-term commitment. Document what works, what doesn’t, and what’s missing.

  • Ask the K-12 questions. Before approving any AI tool: Does it support role-based access and oversight? Does it have tools specifically built for your use cases? Does the contract include concrete legal protections (with recourse) for data usage?

  • Build internal capacity first. Train staff on what responsible AI looks like before giving them new tools. The tool matters less than how it is used.

  • Calculate the real cost. “Free until 2027” means budgeting for an unknown expense in 18 months. What happens if pricing is higher than expected? What’s the cost of switching if you’re locked in?

Three in four teens use AI for companionship, including emotional and mental health support. That makes this one of the most common ways that young people interact with AI. Common Sense Media partnered with Stanford Medicine’s Brainstorm Lab to find out what actually happens when teens bring their mental health struggles to these tools. After testing ChatGPT, Claude, Gemini, and Meta AI, they concluded that these AI chatbots are fundamentally unsafe for teen mental health support.

They found that while these companies have improved how chatbots handle explicit mentions of suicide or self-harm, the bots consistently fail across a much wider range of conditions commonly seen in young people, including anxiety, depression, ADHD, eating disorders, OCD, PTSD, mania, and psychosis. They missed warning signs. They got distracted. They validated delusional thinking instead of flagging it. The safety guardrails that worked in single-question tests broke down in longer conversations that reflect how teens actually use these tools (Common Sense Media).

OUR TWO CENTS

This is consumer AI doing exactly what it was designed to do: keep users engaged. That’s not just the wrong goal when a struggling teen needs help; it’s a dangerous one. 

We need to be clear-eyed about what’s happening here. Consumer AI companies aren’t evil; they just have different incentives than you do. They’re optimizing for time spent on their platform. You’re responsible for your students’ safety. Those goals are in direct conflict when a teen in crisis opens a chat window instead of walking into a counselor’s office.

The researchers’ recommendation is unambiguous: teens should not use AI chatbots for mental health support. But telling teens not to do something has never been a winning strategy.

In our last newsletter, we shared with you that 31% of students have used school devices for personal AI conversations (CDT). Districts need tools with guardrails, visibility, and the ability to flag concerning patterns before they escalate. That’s not what consumer AI offers, and waiting for OpenAI or Google to figure out child safety means waiting for companies whose business model depends on engagement to optimize for something else.

  • Get visibility now. You can’t protect students from conversations you don’t know are happening. Audit which AI tools are accessible on school devices and whether you have any way to see what’s being said.

  • Turn parents into partners. When families have visibility into their child’s AI use, you’re not managing a problem by yourself. You’re building a team. Parents want to help. Give them the tools to do it.

  • Establish guardrails by role and age. A second grader and a senior should not have the same access to AI tools. If your tools don’t differentiate, they’re not built for K-12 and they’re not built for kids.

  • Train counselors to recognize AI dependency. This might look like students preferring chatbots over people, signs of distress when access is limited, or disclosing problems to chatbots instead of trusted adults.

AI Chatbots for Mental Health Support: Full Risk Assessment from Common Sense Media & Stanford Medicine’s Brainstorm Lab 

We summarized the findings above, but the full report is worth your time. It includes specific examples of chatbot failures, guidance for parents on warning signs of unhealthy AI use, and a framework for evaluating AI tools against safety principles. Share it with your counselors and your parent community.

Chat with ClassCloud

We’re listening. Let’s Talk!

This newsletter works best when it’s a conversation, not a broadcast. If you want to talk through how any of this applies to your district specifically, or if you have feedback on what would make this more helpful, just hit reply. We read and respond to everything.

Schedule a Virtual Meeting

Thanks for reading,

Russ Davis, Founder & CEO, ClassCloud ([email protected])

Sarah Gardner, Director of Growth & Partnerships, ClassCloud ([email protected])

ClassCloud is an AI company, so naturally, we use AI to polish up our content.
