
The Los Angeles School Chatbot Debacle

A case to explore the governance of AI associated with student data privacy

AI RISK MANAGEMENT IN EDUCATION

How to cite this learning scenario

Arantes, J. (2025). The Los Angeles School Chatbot Debacle. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study examines the failure of the Los Angeles School District's AI chatbot initiative, which was intended to support students but fell well short of its objectives. Despite millions of dollars in public funding, the chatbot did not perform as promised and potentially exposed sensitive student data because of inadequate risk management and vendor oversight. The founder of the company that provided the chatbot was later charged with fraud, highlighting critical gaps in AI governance, accountability, and transparency within educational technology initiatives. The case is relevant for K-12 teachers, educational administrators, policymakers, and pre-service teachers, offering insights into the complexities of integrating AI in education, including the ethical, legal, and practical considerations needed to safeguard student well-being and data privacy.

In a scathing announcement, federal prosecutors said the founder deliberately misled investors, school districts and students, resulting in a chatbot that not only failed to deliver promised educational support but also potentially exposed sensitive student data to risk.

The Los Angeles School Chatbot Debacle

In early 2023, the Los Angeles School District proudly announced the deployment of an AI-powered chatbot intended to revolutionize student support services. The chatbot was promoted as a digital assistant capable of answering student questions, offering mental health support, and guiding academic inquiries. The promise of this technology was particularly appealing in a post-pandemic era where mental health concerns and learning gaps had significantly increased.

However, as students and teachers began to interact with the chatbot, cracks quickly appeared. The AI often generated irrelevant or confusing responses, failed to understand context, and, in some cases, provided inaccurate or harmful advice. Behind the scenes, it was discovered that the chatbot's learning algorithms were not adequately trained on diverse student needs or aligned with district policies. Moreover, reports emerged suggesting that student data was not securely managed, leading to fears of a potential data breach.

The situation escalated when an internal audit revealed that the AI vendor had misrepresented the chatbot's capabilities and compliance with data protection standards. Investigations found that the vendor had falsified performance reports and inflated the chatbot's success metrics. The discovery led to legal action, with the founder of the AI company being charged with fraud. The fallout from this scandal damaged the district's reputation, eroded public trust, and left educators and students without a functional support tool.

In response to the crisis, the district suspended the chatbot project and implemented emergency data protection measures. A task force was created to review the district's technology procurement processes, emphasizing the need for stringent vendor vetting, clear accountability mechanisms, and continuous monitoring of AI tools. The incident also sparked wider discussions within the education sector about the ethical implications of AI in schools, particularly around data privacy and student safety.

The Los Angeles School chatbot failure serves as a powerful reminder of the risks associated with unchecked technological innovation in education. Effective AI governance requires more than ambitious promises; it demands rigorous evaluation, transparency, and an unwavering commitment to student welfare. The case study advocates for stronger policies and practices that ensure AI tools are not only effective but also safe and ethically sound.

Overview

In 2023, the Los Angeles School District introduced an AI chatbot designed to support students academically and emotionally. The initiative was part of a broader trend of integrating AI-driven tools into K-12 education, with the promise of personalized learning experiences and efficient administrative support. However, despite substantial public investment, the chatbot failed to deliver its intended benefits. The technology not only underperformed but also exposed sensitive student data due to a lack of rigorous testing, monitoring, and compliance with data protection regulations. The scandal reached its peak when the founder of the AI company was charged with fraud, casting a shadow over AI governance practices within the education sector. This case study dissects the decisions and oversights that led to the failure, providing critical insights into the governance of AI in education, particularly in terms of risk management, human oversight, and ethical compliance.

Who might be interested in this case? This case study can be embedded in teacher education programs to illustrate the complexities of AI integration in schools, provide a foundation for research on ethical AI use, and serve as a resource for professional development in educational leadership and policy.

Discussion and Application

The LA schools chatbot case illustrates the urgent need for educational institutions to implement robust AI governance frameworks that prioritize student safety, ethical use of data, and accountability from technology vendors. It calls on educational leaders, policymakers, and practitioners to critically assess the risks and benefits of AI tools and to develop comprehensive guidelines that prevent similar failures.

Discussion Questions

1. How can educational institutions ensure vendor accountability when integrating AI technologies?
2. What are the ethical implications of using AI chatbots in K-12 settings, particularly concerning student data privacy?
3. How might a risk management framework have prevented the failures observed in this case?
4. In what ways can educators contribute to monitoring and evaluating AI tools used in their schools?

Supplementary Materials

Websites:
https://www.the74million.org/article/chatbot-los-angeles-whistleblower-allhere-ai/
https://www.latimes.com/california/story/2024-11-19/founder-of-company-that-created-lausd-chatbot-charted-with-fraud
Author: Dr Janine Arantes, Academic and Researcher at Victoria University
This case study was written by Dr Janine Arantes after reading two media articles: "L.A. Schools Probe Charges its Hyped, Now-Defunct AI Chatbot Misused Student Data" by Mark Keierleber and "Founder of Company That Created LAUSD Chatbot Charged with Fraud" by Howard Blume. The case study is therefore grounded in actual events as reported by these sources and acknowledges the source of the prompt.
Acknowledgement of Country
We acknowledge the Ancestors, Elders and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their ownership of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation and renewal, and that the Traditional Owners' living culture and practices have a unique role in the life of this region.
