ACCOUNTABILITY & LEADERSHIP IN AI



Fragmented Leadership



‘Everyone and No One’: A case to explore Responsibility for AI Strategy, Training, and Regulatory Compliance in Education

How to cite this learning scenario

Arantes, J. (2025). Fragmented Leadership. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study explores what happens when responsibility for AI strategy, professional learning, and regulatory compliance is fragmented—or worse, absent. Based on patterns seen across global education systems, this fictionalised scenario tracks a university's integration of generative AI tools in teaching and assessment. Despite enthusiasm for innovation, no individual or team was clearly responsible for developing a cohesive AI strategy, guiding staff training, or ensuring compliance with emerging data, privacy, and education laws. The result: inconsistent practices, legal vulnerability, and growing mistrust. This case underscores the importance of leadership accountability and coordinated action when deploying AI in educational institutions.

When everyone assumes someone else is in charge, no one is accountable. AI in education demands clear strategy, empowered roles, and system-wide responsibility—not ambiguity.

Everyone and No One

In 2024, a mid-sized university in Australia introduced a policy encouraging the use of generative AI tools to enhance learning, streamline marking, and support student writing. The tools were widely adopted across faculties—but without a coordinated implementation plan. Different departments interpreted the policy differently. Some encouraged full integration, while others banned AI outright. Academic staff were left to navigate the risks and responsibilities on their own. There was no clear strategy on staff training, and no central point of contact for compliance questions. Students received conflicting information about what was considered ethical or permissible.

A breach occurred when one faculty used a third-party AI platform that stored student data on servers in jurisdictions that did not meet local privacy standards. When complaints reached the Office of the Information Commissioner, the university struggled to respond—it was unclear who was responsible for data governance in AI contexts, and no AI strategy document had been formally endorsed.

An internal review found that while many staff were excited about AI, few understood their obligations under privacy legislation or intellectual property law. No training had been mandated. There was no position description for AI oversight, nor a designated officer for aligning practice with national and institutional policy.

The incident prompted the creation of a new cross-functional AI Taskforce, along with designated roles for AI Strategy Lead, Compliance Officer, and AI Pedagogy Coordinator. Staff training modules were developed, and annual audits of AI use became mandatory. This case demonstrates that enthusiasm alone is not enough—institutions must assign clear responsibilities to ensure AI is used safely, legally, and equitably.

Research Topics

• Understand the critical need for assigned roles and responsibilities in AI strategy and oversight.
• Explore the risks of fragmented implementation and inconsistent training.
• Identify key leadership roles and structures that support ethical, compliant, and informed AI deployment.
• Reflect on your own institutional readiness and gaps in AI-related capability and governance.

Research questions

• Who is currently responsible for AI strategy, training, and compliance in your school, university, or education system?
• What risks emerge when responsibilities are not clearly defined?
• What specific roles (e.g., compliance officer, AI pedagogy lead) might support coordinated, ethical AI adoption?
• How can institutions balance innovation with the need for robust training and regulatory compliance?
• What frameworks or policies are needed to ensure shared understanding across teams?

Data collection:

Map your current organisational structure (7.2, 7.4). Where is AI responsibility located—or missing? Draft a proposal for three key roles that would strengthen your institution's AI governance, training, and compliance framework, and seek feedback on it.
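The mapping exercise above can be sketched as a simple responsibility check. This is a minimal illustration only: the role names and duty list below are hypothetical assumptions, not drawn from any real institution, and should be replaced with your own organisational data.

```python
# Hypothetical sketch of a responsibility map for AI governance.
# All role names and duties are illustrative placeholders.

AI_DUTIES = [
    "AI strategy",
    "Staff training",
    "Regulatory compliance",
    "Data governance",
    "Pedagogy guidance",
]

# Map each role to the duties it currently owns; edit to match your institution.
role_assignments = {
    "AI Strategy Lead": ["AI strategy"],
    "Compliance Officer": ["Regulatory compliance", "Data governance"],
    "AI Pedagogy Coordinator": ["Pedagogy guidance"],
}

def find_gaps(duties, assignments):
    """Return the duties that no role currently owns."""
    covered = {duty for owned in assignments.values() for duty in owned}
    return [duty for duty in duties if duty not in covered]

print(find_gaps(AI_DUTIES, role_assignments))  # → ['Staff training']
```

Even a toy map like this makes the case study's central problem visible: a duty such as staff training can sit unowned while every other responsibility appears assigned.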

Do you want to know more?
Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners' living culture and practices continue to have a unique role in the life of this region.