Transparency & Explainability: Academic and Research Integrity
- Disclosing AI use in educational processes to students and stakeholders.
- Ensuring AI-generated content is clearly identified.
- Providing accessible information about how AI decisions are made.
Disclosure of Clear Guidelines around AI Use to Protect Research and Academic Integrity
Transparent Use and Contestability in Academic Contexts
- Have we developed clear procedures for disclosing AI use in academic and research settings?
- Do these procedures specify how, when, and to what level of detail disclosures should be made?
- Have we tailored communication about AI use to match stakeholders’ AI literacy, accessibility needs, and disciplinary norms?
- Are AI disclosures embedded into key academic processes (e.g. ethics applications, assessment rubrics, student declarations, supervisor guides)?
- Is it clear where, how, and why AI tools were used (e.g. writing assistance, data analysis, grading)?
- Have we documented the level of human oversight applied to AI-supported decisions?
- Is there a system in place to document and retain AI interactions (e.g. prompt logs, version histories, metadata)? A minimal logging sketch is provided after this checklist.
- Can we confidently audit AI involvement throughout the full academic or research lifecycle?
- Are transparency obligations clearly defined in our contracts with third-party vendors?
- Have we ensured that tools used in learning, teaching, or research (e.g. analytics, generative AI) meet our transparency standards?
- Have we reviewed and aligned our processes with any relevant legal or safety reporting requirements, such as obligations related to AI-generated content in public-facing or youth-related contexts?
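As a companion to the checklist item on documenting and retaining AI interactions, the sketch below shows one possible shape for an audit record. It is a minimal illustration only, assuming an append-only JSON Lines file; the field names (tool_name, prompt, output_summary, human_reviewer) are hypothetical and would need to be adapted to local record-keeping, privacy, and retention requirements.

```python
# Minimal, illustrative sketch of an AI-interaction audit record.
# Assumes an append-only JSON Lines file; field names are hypothetical,
# not a prescribed institutional standard.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIInteractionRecord:
    user_id: str          # who used the tool (pseudonymised if required)
    tool_name: str        # e.g. "generative-ai-writing-assistant"
    tool_version: str     # version information supports later audits
    purpose: str          # e.g. "draft feedback", "data analysis"
    prompt: str           # what was asked of the system
    output_summary: str   # short description (or hash) of the output
    human_reviewer: str   # who checked the output before it was used
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_record(record: AIInteractionRecord,
                  path: str = "ai_audit_log.jsonl") -> None:
    """Append one interaction record to an append-only JSON Lines file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example usage (illustrative values only):
append_record(AIInteractionRecord(
    user_id="staff-0421",
    tool_name="generative-ai-writing-assistant",
    tool_version="2025.1",
    purpose="draft formative feedback",
    prompt="Summarise strengths and weaknesses of this lab report.",
    output_summary="Three-paragraph feedback draft, edited before release",
    human_reviewer="unit-coordinator",
))
```

An append-only log with timestamps and a named human reviewer is one way to support the later checklist questions about auditing AI involvement and documenting the level of human oversight applied to AI-supported decisions.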
Systemic Transparency Measures to Safeguard Integrity
- Have we evaluated the required level of transparency for each AI system used in research or education?
- Do high-stakes tools (e.g. grading systems, summary generators) meet higher thresholds for explainability and contestability?
- Have we clearly communicated when AI is involved in generating feedback, decisions, or recommendations?
- Have we considered how transparency expectations differ for undergraduate students, HDR candidates, research staff, or examiners?
- Do our materials acknowledge disciplinary and institutional power dynamics?
- Is AI transparency embedded in academic integrity training, research development programs, and learning design? Are these materials kept up to date as AI capabilities evolve?
- Have we asked third-party developers to meet the same transparency expectations?
- Do we choose AI systems that support human oversight, provide accessible explanations, and allow for informed contestability? Are we documenting this?
- Are we actively monitoring developments in audit tools and synthetic content detection?
- Have we identified what level of technical explanation is required for each stakeholder group to understand AI use?
- Are we providing appropriate examples, templates, or guidance to support comprehension?
Maintaining transparency about AI systems in academic and research settings is critical for upholding trust, contestability, and institutional integrity. Disclosing when and how AI is used—whether in research decision-making, assessment processes, or content generation—supports ethical practice and stakeholder confidence.
Using the Case Studies as Prompts for Research into Integrity
Case Study: It Just Said No
An AI-based admissions system makes unexplained decisions, leading to confusion, complaints, and loss of trust.
Relevant to: Research Integrity, Scholarship of Teaching and Learning (SoTL), Policy
- Explores the importance of explainability and transparency in AI-supported research participant selection or peer review.
- Can inform the development of audit trails and human oversight in research settings using algorithmic decision-making.
- Encourages reflection on how opaque AI tools may compromise the ethical principles of accountability and fairness in research.
Case Study: Did a Human Write This?
Students unknowingly receive AI-generated feedback and learning materials, raising concerns about authorship and trust.
- Supports research into how AI-generated content affects student engagement, learning outcomes, and perceptions of educator credibility.
- Encourages investigation into student understanding of feedback authenticity and the role of labelling in teaching practice.
- Highlights the need to embed AI literacy and critical engagement with machine-generated content into curriculum design.
Case Study: Nobody Told Us
A school implements AI in assessment and behaviour management without informing students, staff, or families.
- Demonstrates the need for institutional policies requiring timely disclosure and consent around AI deployment.
- Can inform the creation of governance frameworks that prioritise transparency and stakeholder communication.
- Reinforces the importance of aligning AI implementation with legal and ethical obligations in educational contexts.
Case studies are valuable tools for strengthening academic and research integrity in the age of AI. They provide concrete examples where integrity has been compromised, upheld, or unintentionally challenged due to AI use. These scenarios expose gaps in authorship attribution, consent, explainability, and institutional oversight. By incorporating case studies into research training, research integrity modules, and policy development, institutions can foster more ethical, transparent, and accountable practices. A wider range of examples is available in the drop-down menu on the website.
Nobody Told Us
This case study explores what happens when educational institutions fail to inform students, families, and staff about the presence and role of AI in learning environments. In this fictionalised but research-informed scenario, a secondary school quietly integrated AI into its assessment, attendance, and behavioural management processes—without notifying the school community. When issues emerged, students and parents were shocked to learn decisions had been partially automated. The case highlights the ethical and legal imperative of disclosure, informed consent, and transparent communication when AI is used in educational settings.
Did a Human Write This?
This case study explores the ethical, pedagogical, and trust-related implications of using AI-generated content in education without clearly labelling its origin. In this fictionalised scenario, a university circulated AI-authored feedback and learning materials without disclosing their source. Students and staff were misled, and confusion around authorship compromised academic integrity. The case highlights the growing need for transparent labelling of AI-generated content across all educational touchpoints, including feedback, teaching resources, assessments, and policy documents.
It Just Said No
This case study explores the risks and frustrations that arise when AI-driven decisions in education are opaque and unexplained. In this fictionalised scenario, a TAFE institution uses an AI-based admissions and placement system that issues seemingly arbitrary decisions—without providing applicants or staff with reasons. As complaints increase, trust in the system diminishes. This case highlights the need for explainability frameworks, accessible communication strategies, and a human-first approach to AI deployment in education.