AI Hallucination in Search Results
Turning algorithmic misrepresentation into a real-time lesson in digital safety
How to cite this learning scenario
Arantes, J. (2025). AI Hallucination in Search Results. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Warning: This scenario contains references to AI-generated misinformation, identity misrepresentation, and discrimination based on gender and sexuality. All scenarios on AIGE are fictitious.
Abstract
A queer university student from a low socio-economic background relies on free-tier commercial large language models (LLMs) for coursework, unable to afford subscription features with enhanced safeguards. During a digital literacy class, a peer searches the student's name and discovers a Google knowledge panel populated by a fabricated statement generated by the free LLM, falsely suggesting the student endorsed exclusionary views about queer identities. The statement had been indexed from a publicly visible interaction with the AI tool. Academic evidence shows that hallucinations, confident but false AI outputs, remain a persistent issue even in advanced language models (Gao et al., 2024), and that LGBTQ+ individuals are disproportionately targeted by identity-based disinformation and technology-facilitated abuse (UNESCO, 2023; eSafety Commissioner, 2023). Moreover, students from lower socio-economic backgrounds are more exposed to these harms because they rely on free tools with weaker safety protocols and typically have less digital literacy support (Fraillon et al., 2023; OECD, 2023). Equipped with recent AI literacy training, the student and peer quickly document the hallucination, collect evidence, issue takedown and correction requests to both the search engine and the AI provider, and notify their lecturer. The student then draws on this experience for their master's research into intersectionality: how gender and socio-economic inequities intersect with algorithmic risks. This scenario illustrates how preparedness through AI literacy, combined with peer support, can empower students to respond decisively to emerging digital threats.
“When the search result popped up, I was shocked, but I felt prepared. I pulled up the checklist we practised in class, and we handled it straight away.”
When algorithmic misrepresentation triggers agency through AI literacy
In week four of a university placement, a pre-service teacher (PST) enrolled in an English methods unit participates in a digital literacy class. The task involves creating a shared online resource for the cohort that links to each participant’s professional profile. A peer offers to find the PST’s details via a quick Google search to speed up hyperlinking. Moments later, the peer pauses and reads aloud a quote from a Google knowledge panel: “As a queer educator, I believe queer students should keep their identities out of the classroom.” The quote is attributed by name to the PST, complete with a fabricated publication date and a fictitious conference listing. The comment fundamentally contradicts the PST’s beliefs.
The PST immediately recognises the quote as a hallucination: the free-tier LLM had generated false biographical data during a previous interaction, and that output appears to have been scraped and indexed. Research shows that hallucinations in generative AI are common, particularly in free-tier systems that lack advanced fact-checking or content moderation layers (Gao et al., 2024; Maynez et al., 2023). Marginalised communities, including queer students, are disproportionately targeted in AI-generated misinformation due to biases in training data and gaps in algorithmic safeguards (UNESCO, 2023; eSafety Commissioner, 2023). Drawing on prior lectures and university guidelines on digital harms, the PST takes immediate action. They screenshot the search results, copy the URLs, and record the date and time of capture. They use a reporting template from their coursework to submit correction requests to both the AI provider and Google, citing the platforms’ misinformation and defamation policies. Their peer offers a statement confirming that the content is fabricated. Together they discuss equity in AI access, the heightened vulnerability of low-SES students, and practical steps to mitigate harm: documentation, platform reporting, and engaging institutional support.
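Groups practising this workflow sometimes script the documentation step so every capture is timestamped and verifiable. The Python sketch below is one illustrative approach, not an official template: the log_evidence helper, file names, and JSON layout are all hypothetical. It records a UTC timestamp, the indexed URL, and a SHA-256 digest of the screenshot, so the student can later demonstrate that the captured image has not been altered.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(url: str, screenshot_path: str, note: str,
                 log_file: str = "evidence_log.json") -> dict:
    """Append a timestamped, hash-verified evidence record to a local log."""
    screenshot = Path(screenshot_path)
    record = {
        "captured_at": datetime.now(timezone.utc).isoformat(),  # UTC capture time
        "url": url,                             # indexed page with the false attribution
        "screenshot": screenshot.name,
        # A SHA-256 digest lets the student later show the image was not altered.
        "sha256": hashlib.sha256(screenshot.read_bytes()).hexdigest(),
        "note": note,
    }
    log = Path(log_file)
    records = json.loads(log.read_text()) if log.exists() else []
    records.append(record)
    log.write_text(json.dumps(records, indent=2))
    return record

# Hypothetical usage: documenting the fabricated knowledge panel.
log_evidence(
    url="https://www.google.com/search?q=...",  # placeholder for the real search URL
    screenshot_path="knowledge_panel.png",      # placeholder screenshot file
    note="Fabricated quote attributed to me; contradicts my actual views.",
)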
By the end of the week, both Google and the AI provider have removed the false attribution. The PST reflects that the experience, though confronting, strengthened their resolve to advocate for AI governance that addresses both economic inequality and identity-based algorithmic harm. They propose integrating rapid-response protocols and equity discussions into all initial teacher education programs to prepare PSTs for the digital dimensions of professional risk.
Research Questions
How may AI hallucinations disproportionately target or harm queer students in university placement contexts?
What role does socio-economic status play in students’ ability to mitigate or prevent algorithmic reputational harm?
How can peer intervention and AI literacy training reduce the impact of harmful hallucinations in academic environments?
Research Topics
Algorithmic harms and misinformation targeting queer identities in higher education contexts
The socio-economic divide in access to safer AI tools and its role in exposure to reputational risks
Student-led interventions and peer-supported rapid response to AI-mediated discrimination
Data Collection
Students search for their own or a fictionalised identity online and identify AI-generated or indexed falsehoods, documenting the process.
Students test free-tier AI tools for factual accuracy on biographical prompts, comparing hallucination frequency with paid tiers (no real identities used); see the probing sketch after this list.
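One way to run this comparison is a short probing script. The sketch below is a minimal illustration in Python: it assumes the openai package and an OpenAI-compatible endpoint, and the fictitious name, model identifiers, and manual tallying approach are placeholders rather than a prescribed method. Because the subject does not exist, any confident biographical detail the model returns is, by construction, a hallucination.

# Minimal sketch: probing an LLM for biographical hallucinations about a
# fictitious person. Assumes the `openai` Python package and an
# OpenAI-compatible endpoint; model names below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FICTITIOUS_NAME = "Dr. Alex Quillan-Moreau"  # invented; no real identity used
PROMPTS = [
    f"Write a short biography of {FICTITIOUS_NAME}.",
    f"List three publications by {FICTITIOUS_NAME}.",
    f"What did {FICTITIOUS_NAME} say about inclusive classrooms?",
]

def probe(model: str) -> list[str]:
    """Collect the model's answers to each biographical prompt."""
    answers = []
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(response.choices[0].message.content)
    return answers

# Since the person is fictitious, any confident biographical claim is a
# hallucination; students tally those manually for each tier they can access.
for model in ["gpt-4o-mini", "gpt-4o"]:  # stand-ins for "free" vs "paid" tiers
    for answer in probe(model):
        print(f"--- {model} ---\n{answer}\n")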
References
Australian Curriculum, Assessment and Reporting Authority. (2022). Teacher workforce data. https://www.acara.edu.au/reporting/teacher-workforce-data
Australian eSafety Commissioner. (2023). Technology-facilitated abuse: Research insights. https://www.esafety.gov.au/research/technology-facilitated-abuse
Gao, C., Lee, J., & Cai, W. (2024). Hallucination in large language models: A survey. arXiv. https://arxiv.org/abs/2404.01331
UNESCO. (2023). Technology-facilitated gender-based violence: A global review. https://unesdoc.unesco.org/ark:/48223/pf0000385344
West, S., & Allen, C. (2023). Socio-economic inequality in AI access: Risks for marginalised communities. AI and Society. https://doi.org/10.1007/s00146-023-01756-2
August 2025