AI Hallucination in Search Results



Turning algorithmic misrepresentation into a real-time lesson in digital safety

How to cite this learning scenario

Arantes, J. (2025). AI Hallucination in Search Results. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract

Warning: This scenario contains references to AI-generated misinformation, identity misrepresentation, and discrimination based on gender and sexuality. All scenarios on AIGE are fictitious.

A queer university student from a low socio-economic background relies on free-tier commercial large language models (LLMs) for coursework, unable to afford subscription features with enhanced safeguards. During a digital literacy class, a peer searches their name and discovers a Google knowledge panel populated by a fabricated statement generated by the free LLM, falsely suggesting the student endorsed exclusionary views about queer identities. The statement had been indexed from publicly accessible activity of the AI tool.

Academic evidence shows that hallucinations, which are confident but false AI outputs, remain a persistent issue even in advanced language models (Gao et al., 2024), and that LGBTQ+ individuals are disproportionately targeted by identity-based disinformation and technology-facilitated abuse (UNESCO, 2023; eSafety Commissioner, 2024a). Moreover, students from lower socio-economic backgrounds are more exposed to these harms because they rely on free tools with weaker safety protocols and generally have lower digital literacy (Fraillon et al., 2023; OECD, 2023).

Equipped with recent AI literacy training, the student and peer quickly document the hallucination, collect evidence, issue takedown and correction requests to both the search engine and the AI provider, and notify their lecturer. The student then draws on this experience for their master's research into how intersectionality, gender, and socio-economic inequities intersect with algorithmic risks. This scenario illustrates how preparedness through AI literacy, combined with peer support, can empower students to respond decisively to emerging digital threats.

“When the search popped up, I was shocked, but I felt prepared. I pulled up the checklist we practised in class, and we handled it straight away.”

When algorithmic misrepresentation triggers agency through AI literacy

In week four of a university placement, a pre-service teacher (PST) enrolled in an English methods unit participates in a digital literacy class. The task involves creating a shared online resource for the cohort that links to each participant’s professional profile. A peer offers to find the PST’s details via a quick Google search to speed up hyperlinking. Moments later, the peer pauses and reads aloud a Google knowledge panel quote: “As a queer educator, I believe queer students should keep their identities out of the classroom.” The quote is attributed by name to the PST, with a fabricated publication date and a fictitious conference listing. The comment fundamentally contradicts the PST’s beliefs. They immediately recognise it as a hallucination: the free LLM had generated false biographical data during a previous interaction, and that output appears to have been scraped and indexed.

Research shows that hallucinations in generative AI are common, particularly in free-tier systems that lack advanced fact-checking or content moderation layers (Gao et al., 2024; Maynez et al., 2023). Marginalised communities, including queer students, are disproportionately targeted in AI-generated misinformation due to biases in training data and gaps in algorithmic safeguards (UNESCO, 2023; eSafety Commissioner, 2023).

Drawing on prior lectures and university guidelines on digital harms, the PST takes immediate action. They screenshot the search results, copy the URLs, and record the date and time of capture. They use a reporting template from their coursework to submit correction requests to both the AI provider and Google, citing relevant misinformation and defamation policies. Their peer offers to confirm that the content is fabricated. Together they discuss equity in AI access, the heightened vulnerability of low-SES students, and practical steps to mitigate harm: documentation, platform reporting, and engaging institutional support.
By the end of the week, both Google and the AI provider have removed the false attribution. The PST reflects that the experience, though confronting, strengthened their resolve to advocate for AI governance that addresses both economic inequality and identity-based algorithmic harm. They propose integrating rapid-response protocols and equity discussions into all initial teacher education programs to prepare PSTs for the digital dimensions of professional risk.
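The documentation steps the PST follows (screenshots, URLs, and capture timestamps) can be sketched as a minimal evidence log. This is an illustrative sketch only: the record fields, file names, and URL below are assumptions, not a format prescribed by the scenario or by any platform's reporting process.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class EvidenceRecord:
    """One captured instance of a false attribution found online."""
    url: str               # where the fabricated content appears
    screenshot_file: str   # local path to the saved screenshot
    description: str       # what is false, and why
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def build_evidence_log(records):
    """Serialise records to JSON, e.g. to attach to a correction request."""
    return json.dumps([asdict(r) for r in records], indent=2)


# Hypothetical example: logging a fabricated knowledge-panel quote.
log = build_evidence_log([
    EvidenceRecord(
        url="https://www.google.com/search?q=example",
        screenshot_file="panel_capture.png",
        description="Knowledge panel attributes a fabricated quote to the PST.",
    )
])
print(log)
```

Recording the capture time in UTC alongside each URL keeps the log unambiguous if the takedown request is reviewed later in a different timezone.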

Research Topics

• Algorithmic harms and misinformation targeting queer identities in higher education contexts
• The socio-economic divide in access to safer AI tools and its role in exposure to reputational risks
• Student-led interventions and peer-supported rapid response to AI-mediated discrimination

Research Questions

• How may AI hallucinations disproportionately target or harm queer students in university placement contexts?
• What role does socio-economic status play in students’ ability to mitigate or prevent algorithmic reputational harm?
• How can peer intervention and AI literacy training reduce the impact of harmful hallucinations in academic environments?

Data Collection

• Students search for their own or a fictionalised identity online and identify AI-generated or indexed falsehoods, documenting the process.
• Students test free-tier AI tools for factual accuracy on biographical prompts, comparing hallucination frequency to paid tiers (no real identities used).

References

Australian Curriculum, Assessment and Reporting Authority. (2022). Teacher workforce data. https://www.acara.edu.au/reporting/teacher-workforce-data

Australian eSafety Commissioner. (2023). Technology-facilitated abuse: Research insights. https://www.esafety.gov.au/research/technology-facilitated-abuse

Gao, C., Lee, J., & Cai, W. (2024). Hallucination in large language models: A survey. arXiv. https://arxiv.org/abs/2404.01331

UNESCO. (2023). Technology-facilitated gender-based violence: A global review. https://unesdoc.unesco.org/ark:/48223/pf0000385344

West, S., & Allen, C. (2023). Socio-economic inequality in AI access: Risks for marginalised communities. AI and Society. https://doi.org/10.1007/s00146-023-01756-2
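The second data-collection activity above, comparing hallucination frequency across free and paid tiers, reduces to simple bookkeeping once students have fact-checked each biographical response by hand. A minimal tally, assuming students label each response "ok" or "hallucinated" (the labels and sample data below are illustrative, not real measurements of any tool), might look like:

```python
from collections import Counter


def hallucination_rate(labels):
    """Fraction of responses manually labelled as hallucinated.

    `labels` is a list of strings, one per biographical prompt tested,
    each either "ok" or "hallucinated".
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return counts["hallucinated"] / total if total else 0.0


# Illustrative labels only -- not real measurements of any tool.
free_tier = ["hallucinated", "ok", "hallucinated", "ok", "ok"]
paid_tier = ["ok", "ok", "ok", "hallucinated", "ok"]

print(f"free tier: {hallucination_rate(free_tier):.0%}")  # 40%
print(f"paid tier: {hallucination_rate(paid_tier):.0%}")  # 20%
```

Keeping the fact-checking manual (rather than automated) fits the activity's constraint that no real identities are used and keeps the exercise focused on students' own verification skills.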
August 2025

Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners’ living culture and practices continue to have a unique role in the life of this region.
