‘Not Meant to Exclude’: Addressing Biases and Unintended Discrimination in Educational AI Systems
ETHICAL AI
How to cite this learning scenario
Arantes, J. (2025). Not Meant to Exclude. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study investigates how well-meaning AI systems can produce discriminatory outcomes when they are designed or deployed without a critical understanding of social context, structural bias, and inclusion. Set in a culturally and linguistically diverse school, the fictionalised—but research-informed—scenario traces the rollout of an AI-powered career pathways tool. Despite aiming to promote equity, the system reinforced stereotypes about gender, race, and disability in its recommendations. The case illustrates the importance of equity audits, inclusive datasets, and collaborative design processes to reduce harm and create AI systems that genuinely support all learners.
AI doesn’t have to mean to discriminate for discrimination to occur. Equity must be designed into every stage of AI development—because intention is not the same as impact.
Not Meant to Exclude
In 2024, Sunrise District School Board introduced PathwayAI, a career exploration tool using AI to match students with potential post-school destinations based on academic history, behaviour reports, and interests. Marketed as a tool to “unlock every student’s potential,” the algorithm quickly became a central part of student wellbeing conversations and subject selection.
But troubling patterns began to emerge. Students with disabilities were frequently steered toward low-skill, manual careers—even when their academic results were strong. Girls were rarely recommended for STEM careers. Students from refugee backgrounds were advised against university, based on incomplete or misunderstood educational histories. When challenged, developers explained that the system was trained on “success patterns” from previous student cohorts—cohorts shaped by historic inequities, migration barriers, and exclusionary policies.
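The developers’ explanation points to a general mechanism: a model trained to reproduce “success patterns” from a biased historical record will reproduce that bias, even when no protected attribute is an explicit input. The sketch below is a minimal, hypothetical illustration using synthetic data; the group sizes, rates, and the proxy feature (prior elective enrolment) are invented for demonstration and are not drawn from the case.

```python
# Minimal synthetic demonstration (hypothetical numbers): a recommender
# trained on historically biased labels reproduces the bias, even though
# the two groups have identical underlying ability.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)    # 0 = historically favoured, 1 = historically excluded
ability = rng.normal(0, 1, n)    # identical ability distribution in both groups

# Historical "success" labels: past gatekeeping meant group 1 students were
# far less likely to be recorded as STEM successes at the same ability level.
p_success = 1 / (1 + np.exp(-(ability - 2.0 * group)))
historical_success = rng.random(n) < p_success

# A proxy feature the model can see: enrolment in advanced electives, which
# the same gatekeeping also suppressed for group 1.
electives = ability + 1.5 * (1 - group) + rng.normal(0, 0.5, n)

# "Train" the simplest possible recommender: recommend STEM when the proxy
# exceeds the average value observed among past successes.
threshold = electives[historical_success].mean()
recommended = electives > threshold

for g in (0, 1):
    rate = recommended[group == g].mean()
    print(f"group {g}: STEM recommended for {rate:.0%} of students")
# Despite identical ability, group 1 is recommended far less often:
# the model has learned the historical exclusion, not student potential.
```

The point of the sketch is that removing the group attribute from the inputs does not remove the bias; it travels through proxies and through the labels themselves.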
Teachers reported frustration at having to “undo” the assumptions students internalised from the tool. Parents raised concerns about algorithmic bias reinforcing disadvantage rather than challenging it. A public investigation followed, which identified discriminatory outcomes and recommended the urgent redesign of PathwayAI using inclusive design principles, intersectional audit processes, and stakeholder co-design.
Following these findings, the district adopted an equity-by-design framework for AI in education. This included partnerships with equity advisors, culturally and linguistically diverse communities, and disability advocates to review datasets and reframe system logic.
This case reinforces the need for proactive identification and remediation of bias in AI—and for educators, developers, and policymakers to centre inclusion from the outset.
Overview
This case challenges stakeholders to confront the often invisible ways AI can reinforce discrimination, and to explore what it takes to design for justice, not just convenience.

Learning Objectives
Participants will:
Understand how algorithmic bias can emerge from both design and deployment choices.
Identify examples of unintended discrimination in educational AI systems.
Explore frameworks and tools to audit and redress bias in existing AI technologies.
Develop strategies to embed equity, inclusion, and anti-discrimination principles in AI planning and implementation.

Discussion and Application

Discussion Questions
How can biases become embedded in AI tools, even when the intention is to promote equity?
What steps can institutions take to uncover and address discriminatory outcomes in AI systems?
How can educators support students who experience exclusion or stereotyping through AI-based tools?
What role can inclusive datasets and lived experience play in shaping ethical AI?
How might an “equity-by-design” approach reshape the development and use of AI in schools?
Suggested Activity
Conduct a sample equity audit of an AI or data tool currently used in your school or institution. Whose data is represented? Whose isn't?
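As a starting point for the audit, the fragment below sketches one quantitative check that can accompany the qualitative questions above: compare recommendation rates across groups and their intersections (e.g., gender × disability), and flag large disparities. The dataset, column names, and the four-fifths (80%) threshold are illustrative assumptions, not part of the case.

```python
# Sketch of a quantitative equity audit: compare how often a tool recommends
# a pathway (here, STEM) across groups and their intersections.
# The dataset and column names are hypothetical placeholders; in practice you
# would export the tool's recommendations alongside de-identified demographics.
import pandas as pd

df = pd.DataFrame({
    "gender":           ["F", "F", "F", "M", "M", "M", "F", "M"],
    "disability":       [True, False, False, False, True, False, True, False],
    "stem_recommended": [0, 1, 0, 1, 0, 1, 0, 1],
})

# 1. Representation: whose data is in the export at all?
print(df.groupby(["gender", "disability"]).size())

# 2. Recommendation rates per group and per intersection.
by_gender = df.groupby("gender")["stem_recommended"].mean()
by_intersection = df.groupby(["gender", "disability"])["stem_recommended"].mean()
print(by_gender, by_intersection, sep="\n")

# 3. A simple disparity flag: the "four-fifths rule" heuristic treats a
#    recommendation rate below 80% of the most-favoured group's rate as a
#    red flag warranting investigation (a heuristic, not a legal test).
impact_ratio = by_gender / by_gender.max()
print(impact_ratio)
print("Flagged groups:", list(impact_ratio[impact_ratio < 0.8].index))
```

In a real audit, rates for small intersectional subgroups are statistically noisy, so flags like these should prompt qualitative review and consultation with affected communities rather than being read as verdicts.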