GenAI Tells a Different Story: Teaching Science when AI Hallucinates
ETHICAL AI
How to cite this learning scenario
Arantes, J. (2025). GenAI Tells a Different Story. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This scenario explores the epistemic and ethical tensions arising as GenAI becomes embedded in pre-service science education. When a GenAI system presents fabricated or biased scientific claims as fact, pre-service teachers must critically interrogate the outputs, confronting the shifting nature of ‘truth’ and ‘objectivity’ in science teaching. The scenario foregrounds technical agonism as a lens to examine how future educators negotiate competing accounts of knowledge, evidence, and authority. It invites inquiry into the evolving skills, literacies, and ethical dispositions required to teach science in a post-truth, GenAI-mediated world.
“In a world where even our machines can invent convincing falsehoods, the mark of a true science educator is not in knowing all the answers, but in teaching students to question, verify, and think together.”
Teaching Science when AI Hallucinates
In a methods class, pre-service science teachers are tasked with designing lesson plans using a popular GenAI tool. Maria, a student, prompts the GenAI to generate an explanation of photosynthesis suitable for Year 7. The output is polished but subtly incorrect, attributing a central role in plant energy conversion to a fictional compound. During a class review, some students trust the output because “AI knows more than us,” while others spot inconsistencies based on their foundational knowledge.
Before the lesson, the class had read Arantes (2024), “Understanding Intersections Between GenAI and Pre-Service Teacher Education: What Do We Need to Understand About the Changing Face of Truth in Science Education?” (Journal of Science Education and Technology, 1–12). The article introduced the concept of technical agonism—a way of describing the ongoing struggle or debate between different sources of knowledge, especially when technology is involved. In simple terms, technical agonism means not taking any one answer (even from AI) at face value, but instead actively questioning, comparing, and weighing competing explanations and sources. It is about embracing disagreement and discussion as a natural part of understanding, rather than looking for one easy ‘correct’ answer.
Prompted by this reading, the lecturer intervenes, guiding the group to use technical agonism in practice:
“How do we verify scientific ‘facts’ in an era when GenAI can confidently fabricate plausible-sounding content? Who, or what, becomes the arbiter of truth when machine-generated authority challenges established knowledge and evidence?”
The group debates objectivity, the limits of evidence, and the role of critical discussion when GenAI is part of classroom knowledge. They negotiate their responsibilities as future science educators, learning that healthy skepticism, dialogue, and collaborative inquiry are essential skills—not only for themselves, but for the students they will one day teach.
Research Topics
Epistemic agency and critical literacy in GenAI-mediated science education
Technical agonism and its role in teacher preparation for AI-rich classrooms
The impact of GenAI hallucinations on pre-service teachers’ understanding of scientific objectivity
Gender, authority, and trust in AI-generated science content
Pedagogical frameworks for teaching ‘post-truth’ science in teacher education programs
The ethics of GenAI use in curriculum design and knowledge validation
Research Questions
How do pre-service teachers evaluate and respond to GenAI-generated scientific inaccuracies in lesson planning?
What skills and dispositions do teacher educators identify as essential for navigating epistemic uncertainty introduced by GenAI?
How does technical agonism shape discussions of evidence, authority, and objectivity among pre-service science teachers?
In what ways do GenAI hallucinations affect pre-service teachers’ trust in technology versus traditional sources of scientific knowledge?
How are gendered perceptions of expertise and authority reinforced or challenged in AI-mediated science education settings?
Data Collection
Vignette-based group discussions: Facilitated discussions using vignettes (like the one above) to prompt reflection on truth, evidence, and authority.
Document analysis: Systematic examination of lesson plans created with and without GenAI assistance to identify patterns in fact-checking and citation.
Reflective journals: Pre-service teachers keep diaries detailing their decision-making processes when confronting conflicting or inaccurate AI-generated content.
Semi-structured interviews: In-depth interviews with pre-service teachers and educators about their experiences, beliefs, and strategies for navigating GenAI in science education.
Pre/post surveys: Quantitative measures to assess changes in epistemic beliefs, trust in AI, and critical digital literacy before and after exposure to GenAI-based tasks.