AI Risk Management in Education

  • Identifying and mitigating risks associated with AI in educational settings.
  • Conducting ongoing risk and impact assessments.
  • Ensuring AI deployment does not amplify harm or reinforce systemic biases.

Establish and Implement a Risk Management Process to Identify and Mitigate Risks

Context of Use
  • Have you clearly defined what the AI system will be used for (e.g., student feedback, predicting dropout, chatbot support)?
  • Does the intended use relate to decision-making about people (e.g., assessment, support services, wellbeing)?
  • Have you considered how the tool might be used beyond its intended scope (e.g., students using a feedback tool to write assignments)?
Risk Visibility
  • What types of risks might emerge (e.g., bias, misdiagnosis, hallucinated feedback, data misuse)?
  • Are these risks likely to be visible to users, or hidden within the system (e.g., quietly misclassifying students)?
  • Have these risks been communicated clearly to stakeholders?
Stakeholders and Exposure
  • Who will be directly or indirectly affected by the system (e.g., students, teachers, researchers)?
  • Have you considered power imbalances between users (e.g., between staff and students)?
  • Could marginalised or equity groups be disproportionately impacted?
AI Characteristics
  • What data was used to train the system? Is it appropriate for your context?
  • Is the AI system explainable? Can users understand how it works and how outputs are generated?
  • Does the system allow human override or review?
Documentation and Decision-Making
  • Have you documented this use-case and any known risks for internal review?
  • Has this assessment been shared with ethics, supervisors, or institutional governance?
  • Is the AI system being piloted in a controlled way before wider rollout?
AI impact and risk management processes in educational institutions must consider how AI systems are deployed across research, teaching, and governance. Begin with a full assessment of potential harms using stakeholder-informed impact assessments. These processes must align with institutional values, research ethics protocols, and the organisation's risk appetite. Assessments must be ongoing, spanning the entire lifecycle of an AI system, to ensure mitigation strategies remain effective and responsive to evolving risks. When using or introducing an AI system (whether developed in-house or sourced externally), complete the checks above. These questions help identify use-case-specific risks and promote responsible, ethical deployment.


Using the Case Studies to Support Research into Risk Management

Research Integrity
Case Study: The Pilot That Went Too Far. This case exposes the risks of unchecked predictive analytics labelling students from marginalised backgrounds as "at-risk." It encourages students to critically assess the ethical dimensions of research design, especially when using data-driven tools.
  • Prompt critical reflection on bias, consent, and ethical review.
  • Scaffold learning about harm minimisation and cultural responsiveness in educational research.
  • Support tasks that evaluate methodological decisions using real-world risks.
Scholarship of Teaching and Learning (SoTL)
Case Study: LA School Chatbot Debacle. A chatbot designed to support wellbeing failed due to lack of testing and oversight. This case fosters critical evaluation of digital tools within teaching contexts.
  • Support analysis of how research informs classroom practice.
  • Encourage design of risk-aware teaching interventions using emerging technologies.
  • Prompt critique of how educational tools reinforce or challenge equity in practice.
Policy
Case Study: NewCo Chatbot Example. A university chatbot designed to triage student queries was later found to discriminate based on race and language due to non-representative training data.
  • Highlights the need for clear policies on AI use that ensure transparency, equity, and oversight.
  • Promotes critical analysis of digital education policies and prepares future educators to engage with and shape policy in tech-integrated learning environments.
Case studies are valuable tools for supporting AI risk management in education by offering concrete examples of where systems have failed, succeeded, or produced unintended consequences. They help identify common risk patterns, reveal gaps in oversight, and highlight the need for stakeholder engagement, transparency, and mitigation strategies. By embedding case studies into policy development, research training, and curriculum design, institutions can build a more responsive and accountable approach to AI governance. A wider range of examples is available in the drop-down menu on the website.
Do you want to know more?
Acknowledgement of Country: We acknowledge the Ancestors, Elders and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their ownership of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation and renewal, and that the Traditional Owners' living culture and practices have a unique role in the life of this region.
