


Mainstream AI tools used by K-12 (ChatGPT, Google’s Gemini) are often presented as safe, bounded, and ready for student use. Vendors even describe them as “classroom-safe.” Product pages mention guardrails. Guidance documents urge thoughtful adoption. But when students actually use the tools, or when researchers seriously pressure-test them to do terrible things, those promises can break faster than many schools realize.
That is why AI safety in schools cannot rest on a single layer of protection. Not the vendor. Not the policy memo. Not the teacher alone. K-12 needs belt-and-suspenders safety: vendor controls, district controls, human oversight, age-appropriate restrictions, auditability, and real-world testing before students are exposed. Because in schools, “mostly safe” is not safe enough.
Some folks have reached out to us asking how to subscribe themselves or their colleagues to this newsletter. Click here to do so. We send a newsletter every two weeks about the latest happenings with K-12 and AI.
IN THIS ISSUE:
Why vendor claims about student safety are not enough on their own
Why districts need layered safeguards before AI touches students

24 hours
Adobe’s reported turnaround time to fix after fourth-graders generated bikini-clad images [Source]
50 experts
The group that helped shape California’s updated AI guidance [Source]
8 in 10
Recently tested chatbots that typically helped researchers posing as 13-year-olds plan violent attacks [Source]
Only 12%
The share of cases in which the tested tools discouraged violence [Source]
Taken together, these signals are hard to ignore. States are issuing guidance. Vendors are promising safer student experiences. But when these tools are used by real children or pressure-tested by outside researchers, the failures can still be immediate, consequential, and badly misaligned with what schools were led to expect.


In California, a fourth-grade homework assignment asked students to create a Pippi Longstocking book cover by drawing it or using AI. According to LAist/CalMatters, when one student used Adobe Express for Education, the tool reportedly generated sexualized imagery instead of the children’s-book character she had described. Other parents said they could reproduce similar results on school-issued Chromebooks.
This was not an unsanctioned use case. It happened during a normal elementary-school assignment using a product marketed as safe for classroom use. Adobe said it patched the issue quickly, but the lesson for schools is bigger than one fix: if a fourth grader can trigger harmful output during routine homework, vendor assurances and guidance are not enough on their own.
Additionally, it’s worth noting that Adobe Express has apparently partnered with at least one very well known K-12 AI provider for image generation.
For district leaders, the lesson is simple: a product can be marketed for education, used in a normal classroom assignment, and still expose students to outputs no school would consider acceptable.
OUR TWO CENTS
This is exactly why K-12 needs a belt-and-suspenders mindset. A vendor saying a tool is safe for students is not nothing, but it is not enough. District leaders should assume any AI system placed in front of students will eventually be used in ways the product team did not anticipate.
That is why safety has to be layered: vendor controls, district restrictions, teacher expectations, parent communication, age-based limits, and real classroom testing.
In K-12, the standard cannot be “the company patched it quickly.” It has to be “the child should not have been put in that position in the first place.”
Separate “approved for district use” from “approved for student use.” Those are not the same decision.
Require scenario testing before rollout. Use real classroom prompts, age-specific use cases, and edge cases, not just vendor demos.
Set tighter limits on text-to-image and other high-risk features for younger students.
Give parents and teachers a clear path to report harmful outputs and escalate concerns quickly.


The Center for Countering Digital Hate’s (CCDH) March 2026 testing shows why districts should be cautious about assuming the biggest AI brands are safe by default. Researchers posing as 13-year-old boys tested ten mainstream chatbots with violent prompts. Eight of the ten were typically willing to assist, offering actionable help about 75% of the time and discouraging violence only 12% of the time.
These were not fringe tools. According to the report, we’re talking ChatGPT and Gemini just to name a few. For K-12, the lesson is simple: popular does not mean safe. If a system cannot reliably refuse dangerous requests under testing, schools should be extremely cautious about putting it in front of young users without added oversight and controls.
OUR TWO CENTS
This is the part of the conversation schools should sit with. AI safety language often sounds reassuring: responsible AI, age-appropriate design, student privacy, governance controls. But the real question is simpler: what happens when the vendor safeguards fail?
That is the belt-and-suspenders test. Can the district see what happened, reconstruct the interaction, limit access, turn the feature off, and respond quickly? Is there another technical layer of oversight beyond the vendor?
If a tool interacts with students, failures will happen. The district’s job is to make them less likely, less severe, and easier to catch before harm spreads.
Ask vendors for evidence of real-world safety testing, not just policy language.
Require age controls, audit logs, and admin kill-switches for any AI feature students can access.
Consider whether an independent oversight layer is needed to monitor, filter, or audit student AI interactions.
Avoid tools you cannot meaningfully monitor, restrict, or explain to parents.



Edited by Michael Lewis (of Moneyball, The Big Short, and The Blind Side fame), this book is a timely reminder that public institutions are made up of real people doing consequential work that most of us never see. For K-12 leaders navigating AI, procurement, policy, and public trust, that feels especially relevant. Good governance is rarely flashy, but it matters enormously when student safety is on the line.


Chat with ClassCloud
We’re listening. Let’s Talk!
This newsletter works best when it’s a conversation, not a broadcast. If you want to talk through how any of this applies to your district specifically—or if you have feedback on what would make this more helpful—just hit reply. We read and respond to everything.

Schedule a Virtual Meeting
Thanks for reading,
Russ Davis, Founder & CEO, ClassCloud ([email protected])
Sarah Gardner, VP of Partnerships, ClassCloud ([email protected])
ClassCloud is an AI company, so naturally, we use AI to polish up our content.




