Stakeholder Engagement & Inclusive AI Design
- Engaging students, educators, parents, and policymakers in AI governance.
- Ensuring AI tools promote diversity, inclusion, and accessibility.
- Identifying and mitigating potential biases and harms from AI use.
Whose Voice Counts?
This case study highlights the consequences of excluding key stakeholders—students, educators, families, and policymakers—from decisions about AI implementation in schools. It focuses on a fictionalized but research-informed scenario in which a government department partners with a private AI vendor to roll out a predictive learning analytics tool. Without co-design or community input, the tool is deployed across public schools. Issues quickly emerge: student data is misinterpreted, cultural and contextual knowledge is ignored, and teachers feel disempowered. The case asks what it means to “govern with” rather than “govern over,” and calls for participatory frameworks to ensure that AI systems reflect the values and needs of those most affected.
The Algorithm Didn’t See Me
This case study investigates how AI tools in education can unintentionally exclude or harm students when diversity, inclusion, and accessibility are not built into their design. Set in a multicultural urban school district, the fictionalized but research-informed narrative follows the deployment of a generative AI tool used for academic writing support. While intended to improve student outcomes, the tool failed to accommodate multilingual learners, neurodivergent students, and those with disabilities. Teachers reported bias in the AI’s feedback, with student writing penalized for non-standard English or divergent thought patterns. The case asks: who is AI really designed for—and who gets left behind?
Flagged and Forgotten
This case study explores how AI tools used in education can produce unintended but serious harms when bias is embedded in their design, data, or deployment. It centers on a predictive risk assessment platform used in secondary schools to identify students “at risk” of disengagement or failure. While intended as an early intervention tool, the system disproportionately flagged students from low-income backgrounds, First Nations communities, and those in out-of-home care. Educators began to notice patterns of systemic bias: interventions were based on flawed predictions, and students were labeled rather than supported. The case considers what ethical, technical, and governance safeguards are needed to prevent AI from reinforcing existing inequalities in education.