AI Risk Management in Education
- Identifying and mitigating risks associated with AI in educational settings.
- Conducting ongoing risk and impact assessments.
- Ensuring AI deployment does not amplify harm or reinforce systemic biases.
Establish and Implement a Risk Management Process to Identify and Mitigate Risks
AI impact and risk management processes in educational institutions must consider how AI systems are deployed across research, teaching, and governance. Begin with a full assessment of potential harms, using stakeholder-informed impact assessments. These processes must align with institutional values, research ethics protocols, and the organisation’s risk appetite. Assessments must be ongoing, spanning the entire lifecycle of an AI system, to ensure mitigation strategies remain effective and responsive to evolving risks. When using or introducing an AI system (either developed in-house or sourced externally), complete the following checks. These questions help identify use-case-specific risks and promote responsible, ethical deployment.
Context of Use
- Have you clearly defined what the AI system will be used for (e.g., student feedback, dropout prediction, chatbot support)?
- Does the intended use relate to decision-making about people (e.g., assessment, support services, wellbeing)?
- Have you considered how the tool might be used beyond its intended scope (e.g., students using a feedback tool to write assignments)?
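One lightweight way to make the intended use explicit, and to surface out-of-scope use, is to declare the allowed tasks in the system itself. The following is a minimal, hypothetical sketch; the task names, function, and refusal message are all illustrative assumptions, not part of any real tool:

```python
# Declared scope for a hypothetical feedback tool; all names here are illustrative.
ALLOWED_TASKS = {"comment_on_draft", "suggest_revisions"}

def handle_request(task: str, text: str) -> str:
    """Serve only tasks inside the declared scope; refuse everything else."""
    if task not in ALLOWED_TASKS:
        # Out-of-scope requests (e.g. "write_assignment") are refused, and could
        # also be logged as input to the risk review this checklist describes.
        return f"Task '{task}' is outside this tool's intended scope."
    return f"(feedback on the {len(text.split())}-word draft would be generated here)"

print(handle_request("write_assignment", "Write my essay on climate policy for me."))
```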
Risk Visibility
- What types of risks might emerge (e.g., bias, misdiagnosis, hallucinated feedback, data misuse)?
- Are these risks likely to be visible to users, or hidden within the system (e.g., quietly misclassifying students)?
- Have these risks been communicated clearly to stakeholders?
Stakeholders and Exposure
- Who will be directly or indirectly affected by the system (e.g., students, teachers, researchers)?
- Have you considered power imbalances between users (e.g., between staff and students)?
- Could marginalised or equity groups be disproportionately impacted?
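One way to make that last question concrete is to compare the system's error rates across groups. Below is a minimal, hypothetical sketch of such a check; the group labels, records, and the choice of false-positive rate as the metric are illustrative assumptions, not a prescribed method:

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Per-group rate at which students are wrongly flagged 'at-risk'.

    Each record is (group, flagged_by_system, actually_needed_support);
    all labels are illustrative.
    """
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, flagged, needed_support in records:
        if not needed_support:
            counts[group]["negatives"] += 1
            counts[group]["fp"] += int(flagged)
    return {g: c["fp"] / c["negatives"] for g, c in counts.items() if c["negatives"]}

# Hypothetical audit sample: (group, flagged, actually needed support)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]
print(false_positive_rate_by_group(records))
# {'group_a': 0.5, 'group_b': 0.666...} -- a gap like this warrants investigation
```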
AI Characteristics
- What data was used to train the system? Is it appropriate for your context?
- Is the AI system explainable? Can users understand how it works and how outputs are generated?
- Does the system allow human override or review?
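The human-override question can be enforced structurally: hold AI output as a draft that takes effect only after a named reviewer approves or replaces it. A minimal sketch, with hypothetical class and field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewedOutput:
    """AI-generated text that has no effect until a human reviews it."""
    ai_draft: str
    reviewer: Optional[str] = None        # who reviewed it
    override_text: Optional[str] = None   # human replacement, if any
    approved: bool = False

    def final(self) -> str:
        """Return the text to release; raises if no human has signed off."""
        if not self.approved:
            raise RuntimeError("Not yet reviewed by a human.")
        return self.override_text if self.override_text is not None else self.ai_draft

feedback = ReviewedOutput(ai_draft="Your essay lacks a clear thesis.")
feedback.reviewer = "tutor_42"  # illustrative reviewer ID
feedback.override_text = "Promising argument, but the thesis needs to be stated earlier."
feedback.approved = True
print(feedback.final())
```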
Documentation and Decision-Making
- Have you documented this use case and any known risks for internal review?
- Has this assessment been shared with ethics, supervisors, or institutional governance?
- Is the AI system being piloted in a controlled way before wider rollout?
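The documentation questions above lend themselves to a structured record rather than free text, so use cases and known risks can be reviewed and compared consistently. A minimal sketch of such a record follows; every field and value is an illustrative assumption:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """A lightweight risk-register entry for one AI use case."""
    system_name: str
    intended_use: str
    affects_decisions_about_people: bool
    known_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    shared_with_governance: bool = False
    piloted_before_rollout: bool = False

record = AIUseCaseRecord(
    system_name="feedback-assistant",  # hypothetical system
    intended_use="formative feedback on draft essays",
    affects_decisions_about_people=True,
    known_risks=["hallucinated feedback", "off-label use to write assignments"],
    mitigations=["human review of all feedback", "scope restrictions", "usage logging"],
    shared_with_governance=True,
    piloted_before_rollout=True,
)
print(record)
```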
Using the Case Studies to Support Research into Risk Management
Research Integrity
Case Study: The Pilot That Went Too Far
This case exposes the risks of unchecked predictive analytics labelling students from marginalised backgrounds as "at-risk." It encourages students to critically assess the ethical dimensions of research design, especially when using data-driven tools.
- Prompt critical reflection on bias, consent, and ethical review.
- Scaffold learning about harm minimisation and cultural responsiveness in educational research.
- Support tasks that evaluate methodological decisions using real-world risks.
Scholarship of Teaching and Learning (SoTL)
Case Study: LA School Chatbot Debacle
A chatbot designed to support wellbeing failed due to lack of testing and oversight. This case fosters critical evaluation of digital tools within teaching contexts.
- Support analysis of how research informs classroom practice.
- Encourage design of risk-aware teaching interventions using emerging technologies.
- Prompt critique of how educational tools reinforce or challenge equity in practice.
Policy
Case Study: NewCo Chatbot Example
A university chatbot designed to triage student queries was later found to discriminate based on race and language due to non-representative training data.
- Highlights the need for clear policies on AI use that ensure transparency, equity, and oversight.
- Promotes critical analysis of digital education policies and prepares future educators to engage with and shape policy in tech-integrated learning environments.
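The root cause in this case, non-representative training data, is something institutions can screen for before deployment by comparing the group composition of the training set against the population the system will actually serve. A hypothetical sketch; the group names and all figures are invented for illustration:

```python
def representation_gaps(train_counts, population_shares):
    """Difference between each group's share of the training data and its
    share of the served population; large gaps flag non-representative data."""
    total = sum(train_counts.values())
    return {
        group: round(train_counts.get(group, 0) / total - pop_share, 3)
        for group, pop_share in population_shares.items()
    }

# Illustrative numbers only: training-set counts vs. expected population shares
train_counts = {"english_l1": 900, "english_l2": 100}
population_shares = {"english_l1": 0.60, "english_l2": 0.40}
print(representation_gaps(train_counts, population_shares))
# {'english_l1': 0.3, 'english_l2': -0.3} -- second-language speakers badly under-represented
```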
Case studies are valuable tools for supporting AI risk management in education by offering concrete examples of where systems have failed, succeeded, or produced unintended consequences. They help identify common risk patterns, reveal gaps in oversight, and highlight the need for stakeholder engagement, transparency, and mitigation strategies. By embedding case studies into policy development, research training, and curriculum design, institutions can build a more responsive and accountable approach to AI governance. A wider range of examples is available on the accompanying website.