‘Flagged and Forgotten’: A Case to Explore Bias, Harm, and Risk Mitigation in AI for Education

ENGAGEMENT & DEMOCRATIC GOVERNANCE IN AI

How to cite this learning scenario

Arantes, J. (2025). Bias. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores how AI tools used in education can produce unintended but serious harms when bias is embedded in their design, data, or deployment. It centers on a predictive risk assessment platform used in secondary schools to identify students “at risk” of disengagement or failure. While intended as an early intervention tool, the system disproportionately flagged students from low-income backgrounds, First Nations communities, and those in out-of-home care. Educators began to notice patterns of systemic bias, where interventions were based on flawed predictions and students were labeled rather than supported. This case considers what ethical, technical, and governance safeguards are needed to prevent AI from reinforcing existing inequalities in education.

Bias in AI is not neutral—it reflects the values, assumptions, and exclusions of those who create it. If we don’t interrogate the logic behind the algorithm, we risk replicating the very harms education seeks to undo.

Flagged and Forgotten

In 2023, an education department rolled out a predictive analytics platform, FutureTrack, across high schools to identify students at risk of academic disengagement. The platform used historical data—attendance, grades, behavior logs, and socio-economic indicators—to generate risk profiles. These profiles were shared with teachers and wellbeing teams to inform intervention planning.

However, it quickly became clear that the system disproportionately flagged students from marginalized backgrounds—particularly those from First Nations communities, refugee families, and students in foster care. Teachers reported that the platform lacked context; students who missed school due to cultural obligations or trauma-related absences were flagged as high-risk with no opportunity for nuanced understanding or student voice. In some cases, the label followed students across schools, even when their situation had improved.

A group of teachers and researchers conducted an independent review, revealing the algorithm had been trained on biased historical data—data shaped by systemic disadvantage and punitive disciplinary practices. It had no safeguards to detect or correct for these embedded inequities. Parents were never consulted. Students, once labeled, struggled to shake the stigma. Some withdrew from school altogether, citing increased surveillance and pressure.

Following community outcry, the education department suspended the program and convened an inquiry into ethical oversight. The review led to stronger governance frameworks requiring bias audits, community consultation, and redress mechanisms for affected students. This case offers an urgent reminder of how data-driven tools—without critical reflection—can exacerbate harm instead of supporting change.

Research Topics

    • Identify how AI bias can originate in data, design, and deployment.
    • Analyze how predictive tools may unintentionally profile students and reinforce inequality.
    • Develop strategies for conducting bias audits and community consultations before adopting AI systems.
    • Consider mechanisms for redress, oversight, and transparency when harm occurs.

Research Questions

    • What are the sources of bias in AI systems, and how might they go unnoticed in educational contexts?
    • How can historical data reinforce systemic inequities when used to train AI models?
    • What red flags should educators, leaders, and developers look for when reviewing AI tools for school use?
    • How can schools and systems balance early intervention goals with the rights and dignity of students?
    • What ethical review processes should be mandatory before AI tools are approved for use in schools?

Data Collection

Review technical documentation and interview IT or data management staff to determine whether current student data systems include bias detection or correction protocols. Conduct co-design sessions with students and educators to explore how predictive tools can be adapted to incorporate student voice and lived experience in assessments.
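One concrete starting point when reviewing whether a system includes bias detection is a flag-rate audit: compare how often the tool flags students in each cohort against a reference cohort. The sketch below is a minimal, hypothetical illustration of that idea — the group labels, data, and thresholds are invented for demonstration, and a real audit would use the school's own records under a governance-approved methodology.

```python
# Illustrative sketch of a flag-rate bias audit for a predictive "at risk" tool.
# All group labels and records here are hypothetical toy data.
from collections import defaultdict

def flag_rates(records):
    """Compute the share of students flagged 'at risk' within each group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in records:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparate_impact(rates, reference_group):
    """Ratio of each group's flag rate to a reference group's rate.
    Ratios well above 1.0 indicate the tool flags that group
    disproportionately and warrant closer human review."""
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical records: (cohort label, was the student flagged?)
records = [
    ("cohort_a", True), ("cohort_a", False), ("cohort_a", False), ("cohort_a", False),
    ("cohort_b", True), ("cohort_b", True), ("cohort_b", True), ("cohort_b", False),
]
rates = flag_rates(records)                               # cohort_b flagged 3x as often
ratios = disparate_impact(rates, reference_group="cohort_a")
```

A disparity ratio alone does not prove the tool is biased — it is a red flag that should trigger the kind of contextual, community-informed review described above, not an automated verdict.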

Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners’ living culture and practices continue to have a unique role in the life of this region.