Balancing Workload and Reform: Good Governance in Practice
Embedding Sustainable Change Without Exhausting the System
ACCOUNTABILITY & LEADERSHIP IN AI
How to cite this learning scenario
Arantes, J. (2025). Balancing Workload and Reform: Good Governance in Practice. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This scenario explores how institutions can use good governance principles to manage the complexities of GenAI-driven assessment reform. As GenAI tools reshape traditional assessment design, marking, and student engagement, strong governance is needed to balance innovation with sustainability, protect academic integrity, and prevent hidden workload intensification. The scenario invites critical reflection on building transparent, participatory governance structures that centre staff and student trust during rapid technological change.
"Without governance rooted in trust and care, GenAI assessment reforms risk becoming just another invisible burden on those who teach and learn."
Building Reform with, not on Top of, Staff
Your institution has announced a major reform of assessment practices, integrating Generative AI (GenAI) tools to streamline the creation of rubrics, automate feedback, and support academic integrity checks. The reform is framed as future-focused and necessary to adapt to an AI-driven educational landscape. However, early feedback from staff raises critical concerns: while GenAI promises efficiency, it also risks introducing hidden labour — requiring manual validation of AI outputs, redesign of tasks vulnerable to AI misuse, and increased academic misconduct investigations.
Leadership, having placed emphasis on collaboration and consultation, establishes a strong governance framework prioritising relational accountability, participatory co-design, and workload sustainability. You are invited to join the GenAI Assessment Reform Taskforce, composed equally of teaching staff, students, digital learning specialists, and academic integrity officers. The group’s first decision is to implement phased pilots, with mandatory workload impact reviews and transparent publication of results.
As pilots roll out, it becomes clear that while some efficiencies are gained, significant staff time is redirected towards rethinking assessment design principles, student education on AI ethics, and moderation of AI-influenced work. In response, governance mechanisms are adapted: additional training time is funded, GenAI moderation is explicitly recognised in workload models, and cross-disciplinary consultation groups are established to co-create new assessment standards.
You must now consider: how do you continue to govern GenAI assessment reforms so that sustainability, academic integrity, and educational equity are embedded? How can governance structures adapt dynamically as both GenAI capabilities — and risks — evolve?
Potential Research Questions
How does participatory governance influence staff trust and workload during GenAI-driven assessment reform?
What governance mechanisms are most effective for mitigating academic integrity risks introduced by GenAI?
How can institutions embed relational accountability into GenAI assessment reforms?
What new professional roles or workload models are needed to support GenAI assessment moderation?
How do governance structures adapt as GenAI capabilities — and student behaviours — evolve?
Potential Research Topics
Governance models for AI-enabled assessment reform
Managing hidden workloads in GenAI-integrated assessment practices
Ethical governance frameworks for academic integrity and AI
Participatory governance in educational technology innovation
Redesigning assessment principles in the GenAI era
Data Collection Prompts
Practicing teachers could collect data by keeping a reflective journal on time spent adapting assessments and validating GenAI-generated student outputs.
TAFE teachers could collect data by hosting feedback circles where learners and teachers collaboratively map how GenAI influences assessment perceptions and workload.
Higher education academics could collect data by analysing assessment redesign iterations and moderation adjustments before and after GenAI implementation.
Researchers could collect data through longitudinal interviews with educators monitoring shifts in assessment labour, academic misconduct cases, and governance responsiveness.
Leaders could collect data by administering staff surveys linked to GenAI workload impacts, academic integrity case trends, and perceptions of governance transparency.