MONITORING AI IN EDUCATION
AI Myths
Objectivity
Unmasking AI: Can It Really Know Everything?
How to cite this learning scenario
Arantes, J. (2025). AI Myths: Objectivity. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
abstract
This scenario-based learning activity invites K–12 students to investigate the myth that Artificial Intelligence (AI) is always right. Through a guided exploration, students will learn that AI tools are trained on historical data, can make mistakes (including hallucinations), and may reinforce existing biases. The scenario encourages critical thinking, digital literacy, and ethical awareness. Students engage in a classroom activity where AI tools are used to answer questions, then evaluate those answers through discussion and comparison with trusted sources. The goal is to foster a balanced understanding of AI’s capabilities and limitations in education.
Just because it’s fast and sounds
confident doesn’t mean it’s right.
The Case of the Misinformed Machine
Your Year 8 class is preparing a presentation on Australian history. Your teacher introduces a new AI-powered assistant to help with research. The AI seems amazing—it answers questions instantly. One student asks, “Who discovered Australia?” The AI replies, “Captain James Cook discovered Australia in 1770.” Everyone nods, until Maya, a First Nations student, raises her hand. “That’s not quite right. My elders told me our people have lived here for tens of thousands of years.”
The teacher pauses and asks the class to dig deeper. You work in groups to verify the AI’s claims, consulting books and academic websites and talking to your local Aboriginal liaison officer. Your class identifies that the AI was perpetuating colonial thinking and learns about Aboriginal and Torres Strait Islander history, engaging with the ways First Nations peoples have lived in Australia for over 60,000 years. Captain Cook did not “discover” Australia; his arrival in 1770 marked the beginning of colonisation of a continent with a far longer human history. The AI’s answer wasn’t just wrong; it erased a whole culture’s history.
Later, another student uses the AI to get ideas for a science project. The tool recommends using plastic straws to show chemical reactions. The student is surprised—it contradicts the school’s sustainability policy. Again, the AI failed to understand local context.
Your class realises that AI gives fast answers, but not always right ones. It doesn’t know who is in the room, what is culturally appropriate, or whether its facts are up to date or respectful.
Together, you make a class poster: “AI is a tool—not a teacher. Ask. Check. Think.”
research topics
Critical digital literacy in K–12 education
Ethical AI use in the classroom
Cultural representation and AI bias
Student agency in AI-mediated learning
research questions
How do students interpret the accuracy of AI-generated information in class?
What critical thinking skills can be developed by challenging AI-generated outputs?
How do cultural perspectives influence what is considered “correct” in educational content?
data collection
- Student Reflection Journals (1.4, 2.6, 3.3, 4.1) Responses to guided questions post-activity.
- Classroom Observation Notes (1.6, 2.4, 3.5, 4.4) Teacher records of student engagement and discussion.
- Focus Groups (1.3, 1.4, 2.6, 6.3) Small student group interviews on AI use and trust.
- Artifact Analysis (2.1, 3.4, 5.2) Posters, presentations, and worksheets produced during the activity.