Balancing Workload and Reform: Good Governance in Practice
Embedding Sustainable Change Without Exhausting the System

ACCOUNTABILITY & LEADERSHIP IN AI

How to cite this learning scenario

Arantes, J. (2025). Workload. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This scenario explores how institutions can use good governance principles to manage the complexities of GenAI-driven assessment reform. As GenAI tools reshape traditional assessment design, marking, and student engagement, strong governance is needed to balance innovation with sustainability, protect academic integrity, and prevent hidden workload intensification. The scenario invites critical reflection on building transparent, participatory governance structures that centre staff and student trust during rapid technological change.

"Without governance rooted in trust and care, GenAI assessment reforms risk becoming just another invisible burden on those who teach and learn."

Building Reform with, not on Top of, Staff

Your institution has announced a major reform of assessment practices, integrating Generative AI (GenAI) tools to streamline the creation of rubrics, automate feedback, and support academic integrity checks. The reform is framed as future-focused and necessary to adapt to an AI-driven educational landscape. However, early feedback from staff raises critical concerns: while GenAI promises efficiency, it also risks introducing hidden labour, such as manual validation of AI outputs, redesign of tasks vulnerable to AI misuse, and increased academic misconduct investigations.

Leadership, having placed emphasis on collaboration and consultation, establishes a strong governance framework prioritising relational accountability, participatory co-design, and workload sustainability. You are invited to join the GenAI Assessment Reform Taskforce, composed equally of teaching staff, students, digital learning specialists, and academic integrity officers. The group’s first decision is to implement phased pilots, with mandatory workload impact reviews and transparent publication of results.

As pilots roll out, it becomes clear that while some efficiencies are gained, significant staff time is redirected towards rethinking assessment design principles, educating students on AI ethics, and moderating AI-influenced work. In response, governance mechanisms are adapted: additional training time is funded, GenAI moderation is explicitly recognised in workload models, and cross-disciplinary consultation groups are established to co-create new assessment standards.

You must now consider: how do you continue to govern GenAI assessment reforms so that sustainability, academic integrity, and educational equity are embedded? How can governance structures adapt dynamically as both GenAI capabilities and risks evolve?

Potential Research Topics

    • Governance models for AI-enabled assessment reform
    • Managing hidden workloads in GenAI-integrated assessment practices
    • Ethical governance frameworks for academic integrity and AI
    • Participatory governance in educational technology innovation
    • Redesigning assessment principles in the GenAI era

Potential Research Questions

    • How does participatory governance influence staff trust and workload during GenAI-driven assessment reform?
    • What governance mechanisms are most effective for mitigating academic integrity risks introduced by GenAI?
    • How can institutions embed relational accountability into GenAI assessment reforms?
    • What new professional roles or workload models are needed to support GenAI assessment moderation?
    • How do governance structures adapt as GenAI capabilities and student behaviours evolve?

Data Collection Prompts

    • Practicing teachers could collect data by keeping a reflective journal on time spent adapting assessments and validating GenAI-generated student outputs.
    • TAFE teachers could collect data by hosting feedback circles where learners and teachers collaboratively map how GenAI influences assessment perceptions and workload.
    • Higher education academics could collect data by analysing assessment redesign iterations and moderation adjustments before and after GenAI implementation.
    • Researchers could collect data through longitudinal interviews with educators, monitoring shifts in assessment labour, academic misconduct cases, and governance responsiveness.
    • Leaders could collect data by administering staff surveys linked to GenAI workload impacts, academic integrity case trends, and perceptions of governance transparency.
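For the reflective-journal prompt, time-log entries can be aggregated to make hidden workload visible. The sketch below is a minimal illustration, not part of the scenario: the entry format, task categories, and figures are all hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical reflective-journal entries: (week, task_category, minutes).
# The categories and numbers are illustrative only.
entries = [
    (1, "validating GenAI feedback", 90),
    (1, "assessment redesign", 120),
    (2, "validating GenAI feedback", 75),
    (2, "misconduct investigation", 60),
    (3, "assessment redesign", 45),
]

def workload_by_task(entries):
    """Sum minutes per task category across all journal weeks."""
    totals = defaultdict(int)
    for _week, task, minutes in entries:
        totals[task] += minutes
    return dict(totals)

# Report categories from largest to smallest time cost.
for task, minutes in sorted(workload_by_task(entries).items(),
                            key=lambda kv: -kv[1]):
    print(f"{task}: {minutes} min")
```

A tally like this could feed the workload impact reviews described above, showing which reform tasks absorb the most staff time.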
Do you want to know more?
Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners’ living culture and practices continue to have a unique role in the life of this region.