
‘The System Said So’: A Case to Explore the Right to Contest and Challenge AI Outcomes in Education

ETHICAL AI

How to cite this learning scenario

Arantes, J. (2025). The System Said So. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores the ethical and procedural gaps that emerge when students and educators are given no avenue to contest or appeal AI-generated decisions. Set in a senior secondary college, the fictionalised scenario describes the rollout of an AI-powered assessment moderation tool. Despite its promise to ensure consistency and fairness, the tool began generating questionable grades and behaviour alerts—without clear pathways for students or teachers to challenge the outcomes. This case highlights the fundamental need for transparent contestability, human review, and accountability frameworks that uphold due process in education systems using AI.

AI is not infallible. When students and teachers cannot question decisions made by algorithms, we risk turning education into an automated injustice system. Contestability is not optional—it’s a right.

The System Said So

In 2024, Everton Senior College implemented an AI-based assessment moderation system that analysed student essays, marked them using predictive grading, and flagged anomalies for review. The tool was designed to ensure equity across subjects and reduce teacher bias.

Over time, however, concerns grew. Students began receiving grades that did not align with their past performance or their teachers' expectations. Appeals were dismissed on the basis that the algorithm had "detected deviation patterns". Teachers were instructed not to override the system unless a formal error could be proven, yet the system's logic was not explainable and its decisions were opaque.

When one student, Zara, received a failing mark on a major assessment despite positive feedback from her teacher, the family attempted to appeal. There was no process in place; the AI's decision was final. Frustrated and disillusioned, Zara withdrew from the subject. Her teacher later discovered that the system had flagged her essay for "high lexical similarity" with public texts. The essay was not plagiarised: it referenced published research.

The situation sparked widespread concern. A collective of staff and students demanded the establishment of a right to contest AI decisions. The school responded by creating an independent review process, introducing a transparent appeals policy, and mandating human moderation for all critical decisions. It also initiated workshops to help students and staff better understand how AI assessments work, and how to challenge them if needed.

This case underscores the need for explainability, transparency, and accessible redress in all AI systems used in education, particularly those that affect grades, wellbeing, or progression.
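The "high lexical similarity" flag that caught Zara's essay is a common heuristic in automated text moderation. The case notes the vendor's actual method was not explainable, so the following is only a minimal illustrative sketch (all names and texts are hypothetical) of one such heuristic, n-gram overlap, showing why a properly attributed quotation can still trip a naive similarity threshold:

```python
import re

def ngram_set(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a text, ignoring case and punctuation."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word trigrams between two texts (0.0 to 1.0)."""
    sa, sb = ngram_set(a, n), ngram_set(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical published source and a student essay that quotes it with attribution.
public_text = ("assessment moderation systems must provide transparent "
               "appeal pathways for students")
essay = ('As Smith (2023) argues, "assessment moderation systems must provide '
         'transparent appeal pathways for students", a point this essay extends.')

score = jaccard_similarity(essay, public_text)
# The attributed quotation still produces high trigram overlap, so a naive
# threshold (e.g. score > 0.4) would flag the essay as suspicious.
print(round(score, 2))  # prints 0.47
```

Because the heuristic sees only surface overlap, not attribution, a fixed threshold cannot distinguish quotation from plagiarism; this is precisely why the case argues for human review of every flag rather than treating the score as a verdict.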

Research Topics

  • Understand the critical importance of contestability and human oversight in AI decision-making.
  • Identify gaps in current institutional procedures related to AI transparency and redress.
  • Explore strategies to build formal, accessible, and inclusive pathways for appeal and review.
  • Consider how to communicate student and educator rights clearly within AI-integrated systems.

Research Questions

  • What risks arise when students and staff cannot challenge AI-generated outcomes?
  • How can educational institutions build fair and transparent appeal processes for AI-based decisions?
  • What role should human review play in moderating automated outputs?
  • What mechanisms can ensure students understand their rights when affected by AI tools?
  • How can schools and universities foster a culture of accountability and responsiveness in AI use?

Data Collection

  • Conduct a policy audit and staff-student interviews to assess whether clear pathways exist for challenging decisions made by digital or automated systems.
  • Facilitate a collaborative writing workshop with students, staff, and administrators to draft a "Right to Review" statement for inclusion in institutional handbooks or manuals.

