Permission to Teach, Not Permission to Burn Out

A scenario to explore how preservice teachers can engage with professional networks and broader communities

How to cite this learning scenario

Arantes, J. (2025). Permission to Teach, Not Permission to Burn Out. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study exposes the dangers of relying on generative AI tools that produce false and misleading educational content. It follows a preservice teacher working under a Permission to Teach (PTT) contract who, overwhelmed by unsustainable workload demands, uses an AI tool to generate lesson materials. The AI produces a hallucinated slide claiming that “many scientists now believe the Earth may be flat,” complete with fabricated sources and misrepresented scientific consensus. A student posts the slide online, it quickly goes viral, and the teacher is subjected to ridicule and reputational harm. This incident is not the result of teacher error or institutional failure; it is a direct consequence of flawed AI outputs being presented as credible. The case highlights the risks of commercial power that has effectively normalised AI tools in education despite their being neither accurate nor safe, especially when preservice teachers are expected to use them under time pressure. Ultimately, it challenges the assumption that AI can be trusted in the classroom and calls for scrutiny of the tools themselves, not the humans encouraged to use them.

“When AI makes a mistake, it’s the teacher who’s blamed. Without a professional community, I felt really alone with the pressure to ‘innovate with GenAI’ while also just trying to make it through the day.”

Not My Words

At Fairfield Secondary College, Amina is a final-year preservice teacher working under a Permission to Teach (PTT) contract. With a full load of Year 7 and 8 humanities, two university assignments due, formal observations approaching, and lesson documentation piling up, she is barely keeping her head above water. There is no release time, no mentoring structure, and no capacity to pause, reflect, or recover. Out of necessity, not choice, Amina turns to a generative AI tool recommended by a peer. It promises fast, curriculum-aligned, student-friendly lesson content. It feels like the only way to survive.

The first error is minor: a fabricated quote from a feminist thought leader slips into a civics lesson. A student quietly points it out. Amina apologises and moves on. The second mistake draws more attention. The AI inserts outdated, misleading statistics on refugee policy, sparking confusion in class and a curt email from a parent. Her mentor sighs: “That’s AI for you.”

The third time, though, makes her want to disappear. In a Year 8 Geography lesson, the AI-generated slideshow includes a detailed, seemingly credible explanation of why “many scientists now believe the Earth may be flat.” It references non-existent research papers, misquotes real scientists, and even includes a fabricated NASA controversy. A student records the slide and posts it online with the caption: “My teacher thinks the Earth is flat 💀.” It spreads. Amina’s inbox floods with screenshots, memes, and sarcasm. Her professional credibility dissolves in real time.

She tries to explain, to her mentor, to the principal, to herself, that she was trying to stay afloat. That the LLM hallucinated the content. That she didn’t have time to cross-check every sentence. But it doesn’t matter. The story has already taken hold. For a moment, she thinks about quitting. But later that night, desperate and searching for answers, Amina stumbles into an online GenAI educators’ support group.

There, she finds story after story like her own: teachers who unknowingly taught hallucinated content, were humiliated by screenshots, or lost trust over errors they didn’t create. Not one of them failed as educators. The failure was structural and technological. The GenAI system is error-prone, and expecting a preservice teacher on PTT to check every sentence is too much. She realises that this isn’t just about her. It’s happening everywhere: overworked teachers promised that AI will ‘save them time’ when, in this instance, it not only failed to save time but caused harm.

Potential Research Topics

    • The impact of generative AI hallucinations on early-career teacher credibility
    • Permission to Teach (PTT) and the challenges of digital trust in the classroom
    • GenAI as an unreliable pedagogical partner in teacher education
    • Professional online communities as support networks for teachers navigating AI challenges
    • Misalignment between AI-generated educational content and curriculum accuracy
    • Navigating digital reputational harm during school placements

Potential Research Questions

    • How do preservice teachers respond when generative AI produces inaccurate or harmful teaching content?
    • What coping strategies do teachers use when GenAI failures lead to classroom misunderstandings or reputational harm?
    • How do professional networks and online communities support teachers navigating the risks of using GenAI tools?
    • In what ways does over-reliance on GenAI reflect broader structural pressures on teachers under PTT arrangements?
    • What expectations are placed on preservice teachers to evaluate the accuracy of AI-generated materials, and are they realistic?

Data Collection Prompts

Activity 1: AI Gone Wrong – Case Reflection
Task: Reflect on Amina’s story. Where did things go wrong? What assumptions were made about AI, teaching, and responsibility? What safeguards (technical, social, or pedagogical) might have helped?

Activity 2: Peer Pulse – Professional Community Analysis
Task: Explore posts or scenarios from an online teacher discussion board (real or simulated) focused on AI-related teaching challenges. What patterns of experience emerge? How do teachers describe the failures of GenAI? How are teaching communities responding to GenAI hallucinations?

Activity 3: Time, Trust, and Tools
Task: In groups, map out the workload of a PTT teacher in a typical week. Debate whether the question “Where might GenAI seem helpful? Where is its use risky?” is appropriate for supporting preservice teachers. What systems would need to be in place so that a preservice teacher doesn’t feel the ‘need’ to rely on GenAI?

Do you want to know more?
Acknowledgement of Country
We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners’ living culture and practices continue to have a unique role in the life of this region.
