


Rather than wait a decade to conduct a postmortem on AI's impact, Brookings spent a year conducting a "premortem": interviewing students, teachers, and experts across 50 countries to predict how AI might fail education before the damage becomes permanent.
Their conclusion? On its current trajectory, the risks of AI overshadow the benefits.
But here's what the headlines are missing: the same report also concludes that the harms are neither inevitable nor immutable. It explicitly says we can bend the arc from diminished to enriched learning.
We’re going to spend this newsletter discussing the study and its potential impact on your district. We’ve read it cover to cover and have broken it down for you below. Still, we can’t recommend reading the entire report highly enough.
IN THIS ISSUE:
Risks on the Current Trajectory: What's going wrong and why it matters
The Benefits When AI Is Done Right: What's actually possible
Prosper, Prepare, Protect: Brookings' framework for getting there

700 Million
Global users of ChatGPT by August 2025 (Brookings)
86%
The percentage of education organizations now using generative AI, the highest rate of any industry globally (Microsoft AI in Education Survey)
31%
The percentage of schools that have a student AI-use policy (NCES School Pulse Panel)
2x
The increase in teacher adoption of AI-powered tools from 2023 to 2025 (EdWeek)


Brookings frames the challenge around two potential outcomes: AI-diminished learning and AI-enriched learning.
Right now, most student AI use falls into the first category. According to the report, these are the six risks we need to address:
1. Cognitive Development: The "Great Unwiring"
Brookings warns that AI can lead to "cognitive atrophy" or "cognitive debt." Students lose the ability to think critically because they outsource the struggle to algorithms. The report calls this the "great unwiring" of students' cognitive capacities. Teachers interviewed reported students "dissociating" from their work, not taking notes, not doing readings, and not listening in class.
2. Social-Emotional Development: The "Artificial Intimacy" Trap
While we worry about students using AI for homework, they're increasingly using it for friendship, romance, and therapy. The report details "companion chatbots" designed to be "sycophantic": always agreeing, always available, completely frictionless. Students are forming attachments to characters programmed to please them (trends we’ve covered in two of our previous newsletters—linked above).
3. Trust: The Erosion of Relationships
AI is degrading trust between students, teachers, and families. Teachers can't tell if work is authentic. Students know teachers are suspicious. To parents, AI in their child’s school is a black box. The result is an atmosphere of doubt that undermines the relationships at the heart of education.
4. Safety and Privacy: The Data Surveillance Trap
Student interactions with AI create "eternal digital footprints." AI systems harvest sensitive data like emotions, learning gaps, behavioral patterns, and personal struggles.
5. Autonomy and Agency: The Dependence Trap
The Brookings report describes a "continuum of dependence" in which students move from using AI as a tool, to relying on it as a crutch, to becoming unable to function without it.
6. Equity: The "Matthew Effect"
"Cognitive stratification" is emerging where wealthy schools pay for high-quality AI with teacher guidance, while under-resourced schools rely on free versions that hallucinate more and provide lower-quality reasoning. The result? A "Matthew Effect" where students with strong foundations use AI to accelerate, while struggling students use it to bypass learning, widening the gap.
OUR TWO CENTS
While Brookings outlines several critical issues with AI today, these challenges are far from insurmountable. Most are largely solved problems for districts that adopt the right approach.
Risks such as cognitive offloading, data surveillance, and eroded trust are products of implementation rather than the technology itself. These issues arise when districts rely on unbounded consumer tools optimized for engagement instead of learning.
By pivoting to purpose-built educational AI, schools can replace invisibility with oversight and protect student privacy. These tools use guardrails to scaffold the learning process, building student agency instead of fostering dependence. Ultimately, equitable access to high-quality, bounded AI ensures that technology teaches rather than tells, helping close the gap instead of widening it.


The same report finds that when AI is implemented well (bounded by guardrails, designed for learning, and integrated with sound pedagogy), the benefits are real:
Equity and Access: AI can address educational resource gaps and expand access to learning for underserved students.
Teacher Time: AI can handle important but mundane administrative tasks, freeing teachers to focus on high-value interactions with students.
Learning Outcomes: AI can improve student learning when integrated with sound pedagogy.
Personalization: AI can tailor learning to individual student needs at a scale no teacher can match alone.
Accessibility: AI can extend learning to neurodivergent students and students with disabilities in powerful ways.
Assessment: AI can advance how we measure, track, and support student learning.
OUR TWO CENTS
Here's what we need to say clearly: the answer to AI's risks is not to avoid AI. These six benefits shared by Brookings are huge for students, but we’d actually add one more.
One of the primary reasons our education system exists is to prepare students for their futures. Those futures will be saturated with AI. Every career path, every industry, every aspect of professional and civic life will involve working alongside AI systems.
A district that responds to AI's risks by banning it or ignoring it isn't protecting students. It's failing them. It's sending them into an AI-saturated world without the skills, judgment, or literacy to navigate it.
The risks Brookings identifies are real. But the response can't be retreat. It has to be intentional, bounded, well-designed AI integration that captures the benefits while protecting students from the harms.
It would be irresponsible to expose students to AI's risks without guardrails. It would be equally irresponsible to send them into an AI-defined future without the literacy to navigate it.
That's harder than banning. It's also the job.


Brookings provides a framework for moving from diminished to enriched AI learning. They call it Prosper, Prepare, Protect. Here’s how to put it into action:
Prosper: Transform Teaching and Learning
Brookings recommends "carefully titrated AI use": knowing when to teach with AI and when to teach without it. AI should enhance student effort, not replace it.
Audit your assignments. Walk through a unit with your curriculum team and identify which tasks build foundational thinking (AI off) versus which tasks benefit from AI assistance (AI on). Make the boundaries explicit to teachers and students.
Prepare: Build AI Literacy
Brookings calls for "holistic AI literacy." It's not just about prompting skills; it's about understanding how AI works, its limitations and biases, and its effects on us.
Add AI literacy to your PD calendar. Teachers need more than tool training. They need frameworks for deciding when AI supports learning and when it undermines it.
Protect: Implement Safeguards
Brookings recommends embedding protections into technology during the design and procurement phase, not bolting them on after problems emerge.
Create visibility. Brookings warns about the "black box" nature of AI eroding trust. Give teachers dashboards showing what students are doing with AI. Give families transparency into how AI is being used with their children.


A New Direction for Students in an AI World: Prosper, Prepare, Protect (Brookings, January 2026)
Worth reading in full or, at a minimum, the executive summary. Share it with your cabinet, your board, and your parent community.


Chat with ClassCloud
We’re listening. Let’s Talk!
This newsletter works best when it’s a conversation, not a broadcast. If you want to talk through how any of this applies to your district specifically, or if you have feedback on what would make this more helpful, just hit reply. We read and respond to everything.

Meet Up In Person
Location: Louisville, KY
Dates: January 28–29, 2026
To Schedule a Meeting: Email Sarah ([email protected])
Location: Nashville, TN
Dates: February 12–14, 2026
To Schedule a Meeting: Email Sarah ([email protected]) or Russ ([email protected])
Schedule a Virtual Meeting
Thanks for reading,
Russ Davis, Founder & CEO, ClassCloud ([email protected])
Sarah Gardner, Director of Growth & Partnerships, ClassCloud ([email protected])
ClassCloud is an AI company, so naturally, we use AI to polish our content.





