Case Studies and Scenarios in AI Governance for Education
Transforming education through advancing research integrity, curriculum innovation, and global collaboration.
Establishing AI governance structures, oversight, and internal capability.
Assigning responsibility for AI strategy, training, and regulatory compliance.
Aligning AI governance with institutional values and educational goals.
Ensuring education institutions understand AI models and data sources.
Engaging with third-party vendors to manage risks in AI procurement.
Promoting open collaboration on AI safety and ethical considerations.
Ensuring meaningful human control throughout AI system lifecycles.
Defining educator roles in AI decision-making and student support.
Managing AI interactions to prevent automation bias in assessments.
Developing testing protocols for AI models before deployment.
Monitoring AI performance and unintended consequences over time.
Ensuring AI tools remain aligned with educational objectives and values.
The AI Governance in Education (AIGE) initiative supports researchers, educators, and students to engage with the ethical and governance challenges of Generative AI (GenAI) in education.
It provides adaptable case studies to inform research questions, data collection methods, and learning activities. These resources support curriculum design, research training, and Research Integrity development.
How can I use these case studies?
As an HDR supervisor: To guide students in formulating research questions about AI ethics, or to discuss responsible use of AI in thesis development.
In research training: To design scenario-based activities that prompt critical reflection on bias, transparency, and data privacy.
As a researcher: To frame studies on AI implementation in education, and to analyse the implications of AI governance in policy contexts.
In SoTL work: To explore how AI impacts pedagogy, and to integrate ethical considerations into digital literacy or assessment design.
As a K–12 teacher: To help students critically explore AI in their research projects, using case studies to frame questions on ethics, bias, and social impact. Please cite the original resource when using or adapting the case studies.
Disclosing AI use in educational processes to students and stakeholders.
Ensuring AI-generated content is clearly identified.
Providing accessible information about how AI decisions are made.
Establishing processes for students and educators to challenge AI outcomes.
Addressing biases and unintended discrimination in AI systems.
Ensuring AI-driven decisions align with principles of equity and justice.
Identifying and mitigating risks associated with AI in educational settings.
Conducting ongoing risk and impact assessments.
Ensuring AI deployment does not amplify harm or reinforce systemic biases.
Engaging students, educators, parents, and policymakers in AI governance.
Ensuring AI tools promote diversity, inclusion, and accessibility.
Identifying and mitigating potential biases and harms from AI use.
Maintaining AI inventories and governance records.
Establishing protocols for internal and external AI audits.
Ensuring educational institutions can demonstrate AI compliance.
Contact Us
Contact Info
Address: Victoria University, Ballarat Road, Footscray, Australia
Email: janine.arantes@vu.edu.au