‘It Works, But Does It Belong?’ A Case Exploring the Alignment of AI Governance with Educational Values and Institutional Vision
ACCOUNTABILITY & LEADERSHIP IN AI
How to cite this learning scenario
Arantes, J. (2025). It Works, But Does It Belong? Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores the risks of adopting AI in education without ensuring alignment with institutional values, pedagogical philosophy, and broader educational goals. Based on real-world observations, this fictionalised case follows a values-led school that implemented AI tools for assessment and classroom management—only to find that the technologies contradicted their commitments to holistic learning, relational pedagogy, and student empowerment. The case highlights the importance of value-driven governance, inclusive consultation, and critically reflective practice when adopting emerging technologies in education.
Just because AI tools are efficient doesn’t mean they’re right for your school. Governance must be grounded not only in risk and compliance, but also in values, ethics, and educational purpose.
It Works, But Does It Belong?
In 2024, GreenRiver College—a progressive secondary school known for its commitment to student agency, wellbeing, and inquiry-based learning—adopted an AI-enhanced classroom management system. The tool promised to track student behaviour, flag disengagement, and automate feedback based on facial recognition, biometric data, and attention scoring.
Within weeks, teachers and students reported a mismatch between the technology’s logic and the school’s ethos. Staff felt pressured to conform to the algorithm’s interpretation of “on-task” behaviour, which didn’t account for neurodivergent learning styles or the school’s flexible, project-based environment. Students described feeling “watched” and “judged” by a system that had no understanding of context or intent.
The school leadership team realised they had implemented a solution optimised for control and compliance—not one that aligned with their vision for inclusive, compassionate education. Despite functioning as designed, the system undermined student trust, teacher discretion, and the school’s own identity. A series of student-led roundtables and staff workshops revealed the urgency of developing an AI strategy grounded in GreenRiver’s core values.
The school dismantled the system and replaced it with a participatory governance process that embedded student voice, ethical review, and pedagogical alignment into every stage of AI decision-making. From procurement to classroom use, technologies were now evaluated not only for functionality, but for their fit with the school’s educational philosophy.
This case demonstrates the need for AI governance to be a values-led endeavour—one that serves learning, rather than merely managing it.
Overview
Discussion and Application
This case encourages institutions to ask not just “Can we use this AI tool?” but also “Should we?” and “Does it serve our vision of education?”
Learning Objectives
Participants will:
Understand the importance of aligning AI governance with institutional values and goals.
Evaluate the risks of adopting AI that is pedagogically misaligned or culturally incompatible.
Explore participatory methods to ensure stakeholder values are reflected in AI selection and use.
Develop frameworks for critically reviewing AI tools through the lens of educational mission and vision.
Discussion Questions
What happens when AI tools conflict with an institution’s values or pedagogical philosophy?
How can schools and universities assess the alignment between technological tools and their educational goals?
Who should be involved in defining what “fit for purpose” means in AI implementation?
What processes can help institutions surface and protect their core values during innovation?
How can AI governance go beyond risk management to support educational transformation?
Reflection Activity
Identify your institution’s top 3 educational values. Does your current use of AI reflect or challenge these values?
Develop a “values-alignment checklist” to assess new tools before they are adopted; a minimal illustrative sketch follows below.
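For teams that want to operationalise such a checklist, the short sketch below (Python 3.9+) shows one possible way a review group might record criteria and combine ratings into an alignment score. It is illustrative only: the values, questions, weights, and pass threshold are hypothetical placeholders, and each institution would substitute the criteria surfaced through its own consultation process.

# Minimal sketch of a values-alignment checklist (illustrative only).
# The criteria, weights, and pass threshold below are hypothetical examples;
# each institution should substitute its own values and review standards.

from dataclasses import dataclass

@dataclass
class Criterion:
    value: str       # the institutional value being protected
    question: str    # the question reviewers answer about the tool
    weight: int      # relative importance agreed through consultation

CHECKLIST = [
    Criterion("Student agency", "Does the tool preserve student choice and voice?", 3),
    Criterion("Relational pedagogy", "Does it support, rather than replace, teacher judgement?", 3),
    Criterion("Inclusion", "Does it accommodate neurodivergent and flexible ways of learning?", 2),
    Criterion("Privacy and trust", "Is data collection minimal, transparent, and consented to?", 2),
]

def score_tool(ratings: dict[str, int], pass_mark: float = 0.75) -> tuple[float, bool]:
    """Combine 0-2 ratings (0 = conflicts, 1 = unclear, 2 = aligns) into a weighted score."""
    earned = sum(c.weight * ratings.get(c.value, 0) for c in CHECKLIST)
    possible = sum(c.weight * 2 for c in CHECKLIST)
    alignment = earned / possible
    return alignment, alignment >= pass_mark

# Example review of a hypothetical classroom-management tool:
ratings = {"Student agency": 0, "Relational pedagogy": 1, "Inclusion": 0, "Privacy and trust": 1}
alignment, adopt = score_tool(ratings)
print(f"Alignment score: {alignment:.0%} -> {'proceed to trial' if adopt else 'do not adopt'}")

A numeric score of this kind is only a prompt for discussion. As the GreenRiver case suggests, the deliberation around each criterion, with students and staff at the table, is where the governance work actually happens.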