AI Risk Management in Education
- Identifying and mitigating risks associated with AI in educational settings.
- Conducting ongoing risk and impact assessments.
- Ensuring AI deployment does not amplify harm or reinforce systemic biases.
This case study explores the failure of the Los Angeles School District's AI chatbot initiative, which was intended to support students but ultimately failed to deliver. Despite millions of dollars in public funding, the chatbot did not perform as promised and potentially exposed sensitive student data due to inadequate risk management and vendor oversight. The founder of the company providing the chatbot was later charged with fraud, highlighting critical gaps in AI governance, accountability, and transparency within educational technology initiatives. This case study is relevant for K-12 teachers, educational administrators, policymakers, and pre-service teachers, offering insights into the complexities of integrating AI in education, including the ethical, legal, and practical considerations needed to safeguard student well-being and data privacy.
Incident: Nov 2024 Case Study Construction: March 2025
The Pilot That Went Too Far
This case study explores the implementation of a generative AI chatbot by NewCo, a fast-growing B2C company, highlighting the challenges and risks associated with AI deployment without adhering to governance standards. The study contrasts two scenarios: one where NewCo does not follow the Voluntary AI Safety Standard, resulting in discrimination, privacy breaches, and reputational damage, and another where adherence to safety standards ensures a successful and ethical deployment of AI. The findings underscore the importance of risk assessments, stakeholder engagement, and continuous monitoring in AI governance. This case study is particularly relevant for educators in higher education, pre-service teachers, and policymakers interested in integrating AI responsibly within educational settings.
Incident: Feb 2024 Case Study Construction: March 2025
This case study investigates a fictionalized—but research-informed—scenario in which an AI pilot program, designed to enhance personalized learning in a cluster of schools, unintentionally exposed students and staff to significant psychosocial, privacy, and workload risks. The platform, introduced without thorough testing or consent processes, created uneven learning experiences, eroded trust between students and teachers, and led to emotional burnout among staff. This case examines how educational institutions can better identify, assess, and mitigate AI-related risks before, during, and after implementation, and why ethical foresight must be part of every innovation roadmap.
This case study examines the risks of AI deployment that, while marketed as “neutral” or “efficient,” may reinforce systemic inequalities and institutional harm. Set in a diverse school district, the fictionalized—but research-informed—scenario explores how an AI-driven admissions algorithm unintentionally disadvantaged students from minoritized and low-income backgrounds. The system’s training data encoded historic patterns of exclusion, leading to biased recommendations and enrollment decisions. This case prompts reflection on the myth of AI objectivity and calls for intersectional, justice-oriented approaches to AI design, deployment, and governance in education.
coming soon
This case study addresses the critical need for continuous monitoring, evaluation, and adaptive risk management when using AI in educational settings. It follows a scenario in which a learning management system (LMS) with embedded AI features was introduced across a university, only to produce unforeseen risks—such as algorithmic drift, privacy breaches, and exclusion of students with accessibility needs—over time. Although the platform was compliant at launch, a lack of ongoing assessment meant emerging issues went unnoticed until serious harm had occurred. The case emphasizes that responsible AI use in education requires not just a strong start, but sustained oversight.
Abstract coming soon