
‘Everyone and No One’: A Case Exploring Responsibility for AI Strategy, Training, and Regulatory Compliance in Education

ACCOUNTABILITY & LEADERSHIP IN AI

How to cite this learning scenario

Arantes, J. (2025). Everyone and No One. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores what happens when responsibility for AI strategy, professional learning, and regulatory compliance is fragmented—or worse, absent. Based on patterns seen across global education systems, this fictionalised scenario tracks a university's integration of generative AI tools in teaching and assessment. Despite enthusiasm for innovation, no individual or team was clearly responsible for developing a cohesive AI strategy, guiding staff training, or ensuring compliance with emerging data, privacy, and education laws. The result: inconsistent practices, legal vulnerability, and growing mistrust. This case underscores the importance of leadership accountability and coordinated action when deploying AI in educational institutions.

When everyone assumes someone else is in charge, no one is accountable. AI in education demands clear strategy, empowered roles, and system-wide responsibility—not ambiguity.

Everyone and No One

In 2024, a mid-sized university in Australia introduced a policy encouraging the use of generative AI tools to enhance learning, streamline marking, and support student writing. The tools were widely adopted across faculties—but without a coordinated implementation plan. Different departments interpreted the policy differently. Some encouraged full integration, while others banned AI outright. Academic staff were left to navigate the risks and responsibilities on their own. There was no clear strategy on staff training, and no central point of contact for compliance questions. Students received conflicting information about what was considered ethical or permissible.

A breach occurred when one faculty used a third-party AI platform that stored student data on servers in jurisdictions that did not meet local privacy standards. When complaints reached the Office of the Information Commissioner, the university struggled to respond—it was unclear who was responsible for data governance in AI contexts, and no AI strategy document had been formally endorsed.

An internal review found that while many staff were excited about AI, few understood their obligations under privacy legislation or intellectual property law. No training had been mandated. There was no position description for AI oversight, nor a designated officer for aligning practice with national and institutional policy.

The incident prompted the creation of a new cross-functional AI Taskforce, along with designated roles for AI Strategy Lead, Compliance Officer, and AI Pedagogy Coordinator. Staff training modules were developed, and annual audits of AI use became mandatory. This case demonstrates that enthusiasm alone is not enough—institutions must assign clear responsibilities to ensure AI is used safely, legally, and equitably.

Research Topics

This case study invites readers to:

    • Understand the critical need for assigned roles and responsibilities in AI strategy and oversight.
    • Explore the risks of fragmented implementation and inconsistent training.
    • Identify key leadership roles and structures that support ethical, compliant, and informed AI deployment.
    • Reflect on their own institution's readiness and gaps in AI-related capability and governance.

Research questions

    • Who is currently responsible for AI strategy, training, and compliance in your school, university, or education system?
    • What risks emerge when responsibilities are not clearly defined?
    • What specific roles (e.g., compliance officer, AI pedagogy lead) might support coordinated, ethical AI adoption?
    • How can institutions balance innovation with the need for robust training and regulatory compliance?
    • What frameworks or policies are needed to ensure shared understanding across teams?

Data collection:

Map your current organisational structure. Where is AI responsibility located—or missing? Draft a proposal for three key roles that would strengthen your institution’s AI governance, training, and compliance framework, and seek feedback on it.

Acknowledgement of Country: We acknowledge the Ancestors, Elders and families of the Kulin Nation, the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their ownership of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation and renewal, and that the Traditional Owners' living culture and practices have a unique role in the life of this region.
