


Special education teachers are stretched thin, and AI is available and free. That combination is producing something most district leaders haven’t caught up with yet: teachers using consumer AI tools to draft, and in some cases fully generate, Individualized Education Programs (IEPs) for students with disabilities. The legal standard for those documents is high, the data involved is among the most sensitive a district holds, and the vendor agreements governing most of those tools were written for a general consumer audience, not a federal compliance environment.
This matters right now because the nation’s largest school district, New York City Public Schools, just published a framework that explicitly takes AI-generated IEPs off the table. NYC’s answer is instructive even for districts a fraction of its size: some uses of AI warrant a hard look before they become routine, and that conversation is easier to have before an incident than after.
Some folks have reached out to us asking how to subscribe themselves or their colleagues to this newsletter. Click here to do so. We send a newsletter every two weeks about the latest happenings with K-12 and AI.
IN THIS ISSUE:
Why AI-generated IEPs raise unresolved legal questions under IDEA and FERPA
Why NYC’s new AI red list is a governance model worth studying, regardless of your state

7 million
Children receiving federally funded special education services under IDEA — each entitled to a plan that must, by law, be individualized and developed in concert with relevant stakeholders [Source]
“Not yet clear”
Whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law [Source]
906,248
Students enrolled in New York City Public Schools — the nation’s largest district — which just published a framework explicitly prohibiting AI for grading, discipline, IEP development, and counseling [Source]
0
States that have enacted binding legal requirements specifically governing AI use in IEP development, despite it being one of the most legally protected processes in K-12
These numbers describe a landscape where one of the most federally regulated processes in education is being quietly automated with unvetted consumer tools, while governance has barely begun to catch up.


A January 2026 analysis by Seth King, an associate professor in special education, puts a clear frame around something most districts have overlooked: special educators are turning to AI not out of carelessness, but out of exhaustion. Persistent workforce shortages mean teachers are completing assessments, writing IEPs, and documenting services in whatever time they have left. AI tools promise relief, and many teachers are taking it.
Some teachers are using general-purpose consumer tools like ChatGPT and Claude. Others are using purpose-built products marketed to special educators. Most of this is happening without district oversight, without a signed data processing agreement, and without any review of what data those tools are receiving or storing.
That is where the legal exposure sits. IDEA requires Individualized Education Programs to be exactly that: individualized. In its 2017 decision in Endrew F. v. Douglas County School District, the Supreme Court rejected the idea that an IEP offering “merely more than de minimis” educational benefit satisfies the law, holding instead that IEPs must be “reasonably calculated to enable a child to make progress appropriate in light of the child’s circumstances.” King’s analysis notes that AI-generated IEPs that produce template-like goals, or that replicate language across students given similar prompts, may face a meaningful challenge under that standard. A due process hearing in which a family introduces side-by-side IEPs with near-identical AI-generated language would be very difficult for a district to defend.
There is also the data problem. Writing a genuinely individualized IEP requires entering a student’s disability classification, evaluation history, behavioral patterns, medical conditions, present levels of performance, and family input. That data is protected under both FERPA and IDEA. Consumer AI tools accessed through a personal account or a free-tier subscription have no signed data processing agreement with the district. Entering that information into those tools is, at minimum, an unauthorized third-party disclosure.
OUR TWO CENTS
We want to say this directly: the teachers doing this are not acting in bad faith. Special education is one of the hardest jobs in K-12, the paperwork burden is real, and AI offers a shortcut that makes an impossible workload feel manageable. That context matters.
But the district’s liability does not disappear because the intent was good. An IEP drafted by a consumer AI tool, without a signed vendor agreement, loaded with protected student data, and reviewed too quickly to catch generic language, is a document with potential problems in at least three directions: FERPA, IDEA, and the standard of care required under Endrew F.
The answer is not to tell teachers to stop using AI for IEP work. The answer is to build a managed pathway that actually addresses the workload problem. That means approved tools with signed data processing agreements, clear guidance on what information can and cannot be entered, and a review process that ensures AI-drafted language is genuinely individualized before it becomes a legal document. The districts that build that pathway will be ahead of both the compliance exposure and the workforce problem. The districts that respond by simply banning AI from IEP work without offering an alternative will just push teachers back to the same consumer tools, except now without any institutional visibility.
Ask your special education director one question: do we have any guidance, approved tools, or restrictions governing AI use in IEP development? If the answer is no, that conversation should happen before the next due process complaint arrives.
Review whether any AI tools being used for IEP-related work have signed data processing agreements with your district. Consumer tiers of general-purpose AI tools almost certainly do not.
Treat IEP development as a high-stakes use category — the same category your district would apply to anything involving discipline decisions or medical records — and govern it accordingly.
If teachers are using AI to draft IEP language, build a review process that requires documented professional judgment on individualization before the plan is finalized.


In March, New York City Public Schools released preliminary AI guidelines for the nation’s largest school system and its more than 900,000 students. The framework is built around a traffic-light model, and the most useful part for other districts is the red list.
Permanently prohibited: grading, discipline decisions, counseling and crisis intervention, IEP and 504 plan development, behavioral monitoring, and academic placement. The guidelines describe these as uses where the stakes are too high, the data too sensitive, or the required human judgment too essential for AI to play a role.
The green list covers low-stakes productivity work: drafting non-critical communications, brainstorming lesson plans, formatting, and scheduling. The yellow list covers uses that require review by a trained professional before anyone acts on the output — including student research, data trend identification, translation for bilingual learners, and adapting materials for students with disabilities.
Every tool must clear a 10-step vetting process before staff can use it with student data. Student PII cannot be entered into unapproved tools. Student data cannot be used to train AI models.
The framework was developed by a 76-member internal task force. A comprehensive playbook is expected in June 2026 (Chalkbeat).
OUR TWO CENTS
NYC’s red list is useful to district leaders not just as a policy reference but as a conversation starter. Bring it to your next cabinet meeting and ask one question: do we have any of these uses happening in our district right now? If the answer is yes to any of them, it is worth understanding exactly what tools are being used, under what agreements, and with what oversight.
What NYC got right is recognizing that governance is not a single layer. You need approved tools, a vetting process, a set of uses that require extra caution or prohibition, monitoring, and a clear path for staff to raise concerns. The traffic-light model is something teachers, administrators, and parents can actually understand. That matters. Governance frameworks that only live in the IT department do not change behavior.
Notice what made NYC’s red list: IEP development. That is not a coincidence. It is a direct acknowledgement that the legal standard under IDEA requires something AI cannot reliably provide on its own, and that the downside of getting it wrong falls on the most vulnerable students in the building. Whether your district draws that same line is a judgment call. But it is a judgment call that deserves to be made deliberately, not by default.
Build your own red list. Start with the uses NYC prohibited and confirm with your special education director, counselors, principal supervisors, and general counsel whether any are currently happening with AI tools in your district.
Use the traffic-light framework as a board communication tool. It is something a school board can understand and endorse without a 50-page policy document.



Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential by Ray Eitel-Porter, Paul Dongha, and Miriam Vogel
Three practitioners at the forefront of responsible AI deliver a step-by-step framework for leaders who need to build an AI governance program from the ground up. The book covers nine core risk categories and walks through how to define principles, assess risk, and build practical safeguards. For district leaders reading this issue who are thinking about their own red list, approved-tool process, or vendor vetting criteria, this is the operational playbook for getting there.


Chat with ClassCloud
We’re listening. Let’s Talk!
This newsletter works best when it’s a conversation, not a broadcast. If you want to talk through how any of this applies to your district specifically—or if you have feedback on what would make this more helpful—just hit reply. We read and respond to everything.

Schedule a Virtual Meeting
Thanks for reading,
Russ Davis, Founder & CEO, ClassCloud ([email protected])
Sarah Gardner, VP of Partnerships, ClassCloud ([email protected])
ClassCloud is an AI company, so naturally, we use AI to polish up our content.



