‘Justice Deferred’: A Case Exploring Equity and Justice in AI-Driven Educational Decision-Making
ACCOUNTABILITY & LEADERSHIP IN AI
How to cite this learning scenario
Arantes, J. (2025). Justice Deferred. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores how AI-driven decisions—while efficient and scalable—can contradict the principles of educational equity and justice if not designed and implemented with care. Set in a national education reform initiative, this fictionalised case illustrates how a centralised AI tool designed to automate school resourcing and teacher allocation inadvertently entrenched inequalities in low-income, remote, and historically underfunded communities. Despite being data-informed, the system failed to acknowledge structural disadvantage, cultural context, and Indigenous self-determination. This case highlights the need for AI systems to be accountable to justice—not just to performance metrics.
Equity isn’t just a principle—it’s a responsibility. If AI decisions reproduce structural injustice, they are not neutral. They are complicit.
Justice Deferred
As part of a 2024 national reform initiative, the Department of Education implemented Resourcely, an AI-driven system to allocate teaching staff and funding based on enrolment numbers, student performance, and historic attendance trends. The tool was pitched as a way to improve fairness and efficiency by removing human discretion and bias from decision-making.
But within the first year of deployment, several First Nations communities and rural schools reported sharp drops in support. The AI had interpreted historic underperformance and absenteeism as signs of reduced need—rather than as symptoms of entrenched disadvantage and systemic neglect. These schools were allocated fewer staff, received less learning support, and were deprioritised in digital upgrades.
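The failure mode described above can be sketched in a few lines of code. This is a hypothetical illustration, not Resourcely's actual design: the function name, metrics, and weights are all assumptions, chosen only to show how a score built directly from raw attendance and performance data reads entrenched disadvantage as reduced need.

```python
# Hypothetical sketch of a naive allocation score.
# All names and figures are illustrative assumptions, not Resourcely's design.

def naive_need_score(enrolment: int, attendance_rate: float, performance: float) -> float:
    """Score 'need' from raw metrics alone: lower attendance and lower
    performance shrink the score, so struggling schools appear to need less."""
    return enrolment * attendance_rate * performance

schools = {
    "well_resourced_metro": naive_need_score(800, 0.95, 0.85),
    "remote_community":     naive_need_score(120, 0.70, 0.55),
}

# The remote school's low attendance and results (symptoms of systemic
# neglect) are misread as reduced demand, cutting its per-student allocation.
```

Even per student, the remote community school scores well below the metro school, because the very indicators of disadvantage are the ones driving its allocation down.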
Community leaders, educators, and local principals pushed back, arguing that the system rewarded past advantage while penalising those already marginalised. The AI couldn’t read cultural context, intergenerational trauma, or the need for community-led recovery. Teachers reported burnout, students disengaged, and trust in the system deteriorated.
An inquiry found that while the algorithm had functioned as designed, it failed to centre the principles of educational justice. A new framework was introduced, requiring all AI-based education reforms to undergo an Equity Impact Review, co-designed with communities, culturally informed stakeholders, and rights-based organisations. Resourcely was updated to include contextual indicators—like remoteness, cultural load, and histories of funding exclusion—and to ensure that AI-supported decisions helped to close gaps, not widen them.
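The redesign described above can also be sketched. Again, this is an illustrative assumption rather than the actual updated system: the contextual indicators (remoteness, history of funding exclusion) and their weights are hypothetical, but the sketch shows the key inversion, where markers of disadvantage increase a school's need score instead of decreasing it.

```python
# Hypothetical sketch of an equity-adjusted allocation score.
# Indicator names and weights are illustrative assumptions only.

def equity_adjusted_score(enrolment: int, attendance_rate: float, performance: float,
                          remoteness: float = 0.0, funding_exclusion: float = 0.0) -> float:
    """Invert the naive logic: low attendance and performance signal *greater*
    need, and contextual indicators (each scaled 0-1) add weight rather than
    subtract it."""
    disadvantage = (1 - attendance_rate) + (1 - performance)
    context = remoteness + funding_exclusion
    return enrolment * (1 + disadvantage + context)

# Illustrative comparison: the remote school now scores higher per student.
remote = equity_adjusted_score(120, 0.70, 0.55, remoteness=0.9, funding_exclusion=0.8)
metro = equity_adjusted_score(800, 0.95, 0.85, remoteness=0.1, funding_exclusion=0.0)
```

The design choice here is the sign flip: the same data that previously penalised marginalised schools now directs resources toward them, which is what "helping to close gaps, not widen them" requires of the scoring logic itself.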
This case reminds us that justice must be intentionally designed into AI systems. Efficiency alone is not enough—especially in systems meant to serve the public good.
Overview
Discussion and Application
Discussion Questions
1. How can AI systems designed for efficiency unintentionally create or reinforce injustice?
2. What does it mean to embed equity and justice into the logic of an algorithm?
3. Who should be involved in designing and reviewing AI decision-making tools in education?
4. What frameworks exist to help evaluate AI systems from a social justice lens?
5. How can institutions balance data-driven governance with the need for human judgement, lived experience, and cultural wisdom?
This case challenges education leaders and technologists to consider how AI decisions can either support or undermine justice—and what it means to design with equity at the centre.
Learning Objectives
Participants will:
1. Understand how AI systems can reproduce or amplify structural inequities without deliberate intervention.
2. Identify strategies to evaluate and redesign AI decision-making processes to uphold equity and justice.
3. Explore participatory and rights-based approaches to AI governance in education.
4. Develop capacity to assess AI decisions through a social justice lens.
Activity suggestion:
Apply an “Equity Impact Review” to one AI system currently used in your context. Who benefits? Who is left behind?