‘Everyone and No One’: A Case Exploring Responsibility for AI Strategy, Training, and Regulatory Compliance in Education
ACCOUNTABILITY & LEADERSHIP IN AI
How to cite this learning scenario
Arantes, J. (2025). Everyone and No One. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores what happens when responsibility for AI strategy, professional learning, and regulatory compliance is fragmented—or worse, absent. Based on patterns seen across global education systems, this fictionalised scenario tracks a university's integration of generative AI tools in teaching and assessment. Despite enthusiasm for innovation, no individual or team was clearly responsible for developing a cohesive AI strategy, guiding staff training, or ensuring compliance with emerging data, privacy, and education laws. The result: inconsistent practices, legal vulnerability, and growing mistrust. This case underscores the importance of leadership accountability and coordinated action when deploying AI in educational institutions.
When everyone assumes someone else is in charge, no one is accountable. AI in education demands clear strategy, empowered roles, and system-wide responsibility—not ambiguity.
Everyone and No One
In 2024, a mid-sized university in Australia introduced a policy encouraging the use of generative AI tools to enhance learning, streamline marking, and support student writing. The tools were widely adopted across faculties—but without a coordinated implementation plan. Different departments interpreted the policy differently. Some encouraged full integration, while others banned AI outright. Academic staff were left to navigate the risks and responsibilities on their own.
There was no clear strategy on staff training, and no central point of contact for compliance questions. Students received conflicting information about what was considered ethical or permissible. A breach occurred when one faculty used a third-party AI platform that stored student data on servers in jurisdictions that did not meet local privacy standards. When complaints reached the Office of the Information Commissioner, the university struggled to respond—it was unclear who was responsible for data governance in AI contexts, and no AI strategy document had been formally endorsed.
An internal review found that while many staff were excited about AI, few understood their obligations under privacy legislation or intellectual property law. No training had been mandated. There was no position description for AI oversight, nor a designated officer for aligning practice with national and institutional policy. The incident prompted the creation of a new cross-functional AI Taskforce, along with designated roles for AI Strategy Lead, Compliance Officer, and AI Pedagogy Coordinator. Staff training modules were developed, and annual audits of AI use became mandatory.
This case demonstrates that enthusiasm alone is not enough—institutions must assign clear responsibilities to ensure AI is used safely, legally, and equitably.
Discussion and Application
This case prompts educational leaders to reflect on what structures, roles, and responsibilities are needed to safely and successfully manage AI use across their institutions.
Learning Objectives
Participants will:
Understand the critical need for assigned roles and responsibilities in AI strategy and oversight.
Explore the risks of fragmented implementation and inconsistent training.
Identify key leadership roles and structures that support ethical, compliant, and informed AI deployment.
Reflect on their own institutional readiness and gaps in AI-related capability and governance.
Discussion Questions
Who is currently responsible for AI strategy, training, and compliance in your school, university, or education system?
What risks emerge when responsibilities are not clearly defined?
What specific roles (e.g., compliance officer, AI pedagogy lead) might support coordinated, ethical AI adoption?
How can institutions balance innovation with the need for robust training and regulatory compliance?
What frameworks or policies are needed to ensure shared understanding across teams?
Prompts for leadership teams:
Map your current organisational structure. Where is AI responsibility located—or missing?
Draft a proposal for three key roles that would strengthen your institution’s AI governance, training, and compliance framework.