• AIGE
    • Teaching with Responsible AI Network
    • Digital Poverty and Inclusion Research
    • The Educational Research Greenhouse
    • AIGE in Action
    • Gallery: But did they actually write it?
    • Services
  • Scenarios AI Governance
    • About the Case Studies and Scenarios
    • Why Case Studies and Scenarios?
    • Case Study Template
    • Developing AI Literacy
  • Mitigating Risks of AI in Education
    • Deepfakes
    • Still Learning, Still Failing?
    • Optimised for Inequity
    • The Pilot that Went too far
    • Lessons from the NewCo Chatbot Example
    • The Los Angeles School Chatbot Debacle
  • Academic and Research Integrity
    • Mirror, Mask, or Misdirection?
    • Assessment Reform
    • did a human write this
    • it just said no
  • Leadership
    • Balancing workload and Assessment reform
    • Programmatic Possibilities
    • Automation and Abdication
    • The Global Influence of Big Tech
    • Who's in Charge here?
    • It Works, But Does It Belong?
    • Everyone and No One
  • Human Oversight
    • Hands Off Learning
    • Click to comprehend
    • Marked by the Machine
    • Just Follow the System
    • Implementing AI-Driven Recommender Engines
    • Facial Recognition Technology in Education
  • Engagement
    • Whose Voice Counts?
    • The Algorithm Didn’t See Me
    • Flagged and Forgotten
    • The library as a central hub
    • Accredited programs
  • Ethical AI
    • GenAI Hallucinates
    • The System Said So
    • Not Meant to Exclude
    • Justice Deferred
  • Compliance
    • Scan First, Act Later
    • Lost in the System
    • We Never Looked Under the Hood
    • Show Us the Proof
  • Monitoring
    • Aligning AI Tools with Educational Values
    • It wasn't ready
    • It Drifted
    • It solved the wrong problem
  • Transparency
    • It was a black box
    • we signed before we asked
    • behind closed algorithms
  • About Us

AI Supply Chain Transparency & Collaboration

  • Ensuring education institutions understand AI models and data sources.
  • Engaging with third-party vendors to manage risks in AI procurement.
  • Promoting open collaboration on AI safety and ethical considerations.
It Was a Black Box
This case study explores the consequences of adopting AI systems in education without sufficient understanding of how they work—or where the data comes from. The fictionalised but research-informed scenario follows a school system that implemented a predictive analytics tool to support student interventions. Despite widespread use, no one on staff understood how the model made its decisions, what data it used, or whether the data was ethically sourced. When harm emerged, the lack of transparency made it impossible to respond effectively. This case highlights the need for model explainability, data literacy, and procurement due diligence across all educational AI use.
We Signed Before We Asked
This case study explores the risks educational institutions face when AI tools are procured from third-party vendors without sufficient due diligence or safeguards. The fictionalised scenario follows a school district that purchased an AI learning analytics platform with limited understanding of the tool’s functionality, data handling practices, or alignment with educational values. When privacy concerns and algorithmic bias emerged, the lack of contractual protections or accountability clauses left the district exposed. This case highlights the importance of proactive vendor engagement, clear risk management strategies, and education-specific procurement protocols when working with AI providers.
Behind Closed Algorithms
This case study explores the consequences of siloed decision-making in AI adoption and the missed opportunities that arise when educational institutions fail to collaborate openly on AI safety and ethics. In this fictionalised scenario, a university deployed a suite of AI tools across its student services and academic platforms without consulting students, staff, or external experts. When problems emerged, there was no shared understanding—or shared solution. The case highlights the need for cross-sector collaboration, participatory design, and open dialogue to ensure AI systems in education are not only technically effective, but socially responsible.
Do you want to know more?
© Copyright 2024 Web.com Group, Inc. All rights reserved. All registered trademarks herein are the property of their respective owners.
Subscribe to the AIGE Newsletter
Acknowledgement of Country We acknowledge the Ancestors, Elders and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their ownership of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation and renewal, and that the Traditional Owners' living culture and practices have a unique role in the life of this region.
