Permission to Teach, Not Permission to Burn Out
A scenario to explore how preservice teachers can engage with professional networks and broader communities
How to cite this learning scenario
Arantes, J. (2025). Permission to Teach, Not Permission to Burn Out. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study exposes the dangers of relying on generative AI tools that produce false and misleading educational content. It follows a preservice teacher working under a Permission to Teach (PTT) contract, who, overwhelmed by unsustainable workload demands, uses an AI tool to generate lesson materials. The AI produces a hallucinated slide claiming that “many scientists now believe the Earth may be flat,” complete with fabricated sources and misrepresented scientific consensus. The slide is posted online by a student and quickly goes viral, subjecting the teacher to ridicule and reputational harm. This incident is not the result of teacher error or institutional failure; it is a direct consequence of flawed AI outputs being presented as credible. The case highlights the risks of commercial power that has effectively normalised AI tools in education even though they are neither accurate nor safe, especially when preservice teachers are expected to use them under time pressure. Ultimately, it challenges the assumption that AI can be trusted in the classroom and calls for scrutiny of the tools themselves, not the humans encouraged to use them.
“When AI makes a mistake, it’s the teacher who’s blamed. Without a professional community, I felt really alone with the pressure to ‘innovate with GenAI’ while also just trying to make it through the day.”
Not My Words
At Fairfield Secondary College, Amina is a final-year preservice teacher working under a Permission to Teach (PTT) contract. With a full load of Year 7 and 8 humanities, two university assignments due, formal observations approaching, and lesson documentation piling up, she is barely keeping her head above water. There’s no release time, no mentoring structure, and no capacity to pause, reflect, or recover.
Out of necessity—not choice—Amina turns to a generative AI tool recommended by a peer. It promises fast, curriculum-aligned, student-friendly lesson content. It feels like the only way to survive.
The first error is minor: a fabricated quote from a feminist thought leader slips into a civics lesson. A student quietly points it out. Amina apologises and moves on.
The second mistake draws more attention. The AI inserts outdated, misleading statistics on refugee policy, sparking confusion in class and a curt email from a parent. Her mentor sighs: “That’s AI for you.”
The third time, though, makes her want to disappear.
In a Year 8 Geography lesson, the AI-generated slideshow includes a detailed, seemingly credible explanation of why “many scientists now believe the Earth may be flat.” It references non-existent research papers, misquotes real scientists, and even includes a fabricated NASA controversy. A student records the slide and posts it online with the caption: “My teacher thinks the Earth is flat 💀.”
It spreads.
Amina’s inbox floods with screenshots, memes, and sarcasm. Her professional credibility dissolves in real time. She tries to explain—to her mentor, to the principal, to herself—that she was trying to stay afloat. That the LLM hallucinated it. That she didn’t have the time to cross-check every sentence. But it doesn’t matter. The story has already taken hold.
For a moment, she thinks about quitting.
But later that night, desperate and searching for answers, Amina stumbles into an online GenAI educators’ support group. There, she finds story after story like her own: teachers who unknowingly taught hallucinated content, were humiliated by screenshots, or lost trust over errors they didn’t create. Not one of them failed as educators. The failure was structural and technological: the GenAI system is error-prone, and expecting a preservice teacher (PST) on a PTT contract to check every sentence is too much.
She realises that this isn’t just about her. It’s happening everywhere. Overworked teachers are promised that AI will ‘save them time’, but in this instance it not only failed to save time; it caused harm.
Potential Research Topics
The impact of generative AI hallucinations on early-career teacher credibility
Permission to Teach (PTT) and the challenges of digital trust in the classroom
GenAI as an unreliable pedagogical partner in teacher education
Professional online communities as support networks for teachers navigating AI challenges
Misalignment between AI-generated educational content and curriculum accuracy
Navigating digital reputational harm during school placements
Potential Research Questions
How do preservice teachers respond when generative AI produces inaccurate or harmful teaching content?
What coping strategies do teachers use when GenAI failures lead to classroom misunderstandings or reputational harm?
How do professional networks and online communities support teachers navigating the risks of using GenAI tools?
In what ways does over-reliance on GenAI reflect broader structural pressures on teachers under PTT arrangements?
What expectations are placed on preservice teachers to evaluate the accuracy of AI-generated materials, and are they realistic?
Data Collection Prompts
Activity 1: AI Gone Wrong – Case Reflection
Task: Reflect on Amina’s story.
Where did things go wrong?
What assumptions were made about AI, teaching, and responsibility?
What safeguards—technical, social, or pedagogical—might have helped?
Activity 2: Peer Pulse – Professional Community Analysis
Task: Explore posts or scenarios from an online teacher discussion board (real or simulated) focused on AI-related teaching challenges.
What patterns of experience emerge?
How do teachers express the failures of GenAI?
How are teaching communities responding to GenAI hallucinations?
Activity 3: Time, Trust, and Tools
Task: In groups, map out the workload of a PTT teacher in a typical week.
Argue for or against whether this question is appropriate for supporting PSTs: “Where might GenAI seem helpful? Where is its use risky?”
What systems would need to be in place to ensure that PSTs don’t feel the ‘need’ to rely on GenAI?