Balancing Safety and Privacy:

A case to prompt debate about whether facial recognition technology should be used in educational settings.

Human Oversight & Intervention in AI

How to cite this learning scenario

Arantes, J. (2025). Facial Recognition Technology in educational contexts. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study examines the ethical and practical challenges of implementing facial recognition technology (FRT) in schools, TAFE, and higher education institutions, emphasizing the critical role of human oversight and intervention. It draws on the example of EcoRetail’s decision-making process to explore how educational institutions can apply robust governance frameworks to assess AI technologies intended to enhance safety and streamline administrative tasks. The study demonstrates how proactive stakeholder engagement, rigorous risk assessment, and a focus on ethical considerations led to the decision not to implement FRT. It emphasizes the need for specific frameworks that safeguard the rights of minors, ensure the privacy of all students, and maintain safe and supportive working conditions for teachers and staff. This narrative offers practical insights for school leaders, educators, policymakers, and IT professionals on how to integrate AI responsibly while balancing innovation with ethical and legal responsibilities in educational contexts.

Effective AI governance in education requires not only robust frameworks but also active human oversight to ensure technologies align with ethical values and safeguard student and staff well-being.


A secondary school considered implementing FRT to automate attendance and enhance security. School leaders engaged with an AI vendor, similar to FRTCo Ltd, which proposed using facial recognition to monitor who entered and exited the campus. The system promised improved safety by identifying unauthorized visitors and streamlining administrative processes. During stakeholder consultations with staff, parents, and students, concerns emerged about privacy, potential biases, and the psychological impact of surveillance on students and staff. The AI vendor reported a 99% accuracy rate overall, but this dropped to 95% for specific cultural groups, raising concerns about equity and discrimination. Additionally, the vendor could not provide transparency about how biases were managed or how the biometric data of minors would be stored and accessed, raising further ethical issues.

The school applied the Voluntary AI Safety Standard, focusing on human oversight to evaluate the technology's impact. School leadership conducted a thorough risk assessment, engaged with stakeholders to gather diverse perspectives, and tested the system under controlled conditions. They identified that the risks of misidentification, privacy breaches, and potential discrimination outweighed the intended safety benefits. The decision-making process also considered the specific needs of minors, who require additional legal and ethical protections, and the rights of teachers and staff to work in a supportive and non-intrusive environment.

Ultimately, the school chose not to implement FRT, opting instead for human-led security protocols and clear communication strategies that aligned with the school's values of safety, inclusivity, and respect for both students and staff. This case study demonstrates the importance of human oversight and intervention when deploying AI technologies in educational contexts, ensuring that innovation does not compromise ethical standards or the rights of vulnerable groups. It connects to broader systemic issues in education, such as the need for specific governance frameworks that address the dual responsibilities of protecting minors and maintaining fair and safe workplaces for educators.
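To make the vendor's accuracy figures concrete, the short sketch below translates them into expected misidentifications per school day. It is illustrative only: the enrolment of 1,000 students, the two scans per day, and the 15% share of students from the affected cultural groups are hypothetical assumptions for this example; only the 99% and 95% accuracy figures come from the case narrative.

```python
# Back-of-envelope risk estimate for the FRT case study.
# Only the 99% / 95% accuracy figures are taken from the case narrative;
# every other number here is a hypothetical assumption for illustration.

def expected_misidentifications(people: int, scans_per_day: int, accuracy: float) -> float:
    """Expected number of incorrect identifications per day."""
    return people * scans_per_day * (1 - accuracy)

school_population = 1_000      # hypothetical enrolment
scans_per_day = 2              # e.g. one entry scan and one exit scan
affected_group_share = 0.15    # hypothetical share of the affected cultural groups

overall = expected_misidentifications(school_population, scans_per_day, 0.99)
affected = expected_misidentifications(
    int(school_population * affected_group_share), scans_per_day, 0.95
)

print(f"Expected misidentifications per day, whole school at 99% accuracy: {overall:.0f}")
print(f"Expected misidentifications per day, affected groups at 95% accuracy: {affected:.0f}")
```

Under these assumptions, even the headline 99% accuracy produces roughly twenty misidentifications every school day, and the lower accuracy for specific cultural groups concentrates a comparable number of errors on a much smaller group of students, which is the equity concern raised during the stakeholder consultation.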

Research Topics

Research Questions

  • How would you respond to these concerns while aligning with your local context's transparency and data governance requirements? What would you need?
  • What steps would you take to ensure equity and inclusivity, and how would you address students' and parents' concerns about potential biases in the technology?
  • How would you apply the Voluntary AI Safety Standard to balance safety objectives with ethical considerations, particularly for minors and vulnerable groups in your context?
  • What human oversight mechanisms would you put in place to reduce the risk of misidentification and ensure a supportive and non-intrusive school environment?
  • What advice would you give institutions wanting to incorporate diverse stakeholder perspectives into their decision-making processes, and what alternative solutions might you suggest to address safety concerns without compromising students' mental health and sense of belonging?

Learning objectives

  • Understand the ethical, legal, and governance challenges of deploying AI technologies like facial recognition in educational settings involving both students and staff.
  • Analyze how human oversight and intervention can guide ethical decision-making and ensure compliance with specific frameworks that protect minors and support safe workplaces for educators.

Data collection

Collect data through document review and stakeholder interviews to identify or evaluate resources—such as governance framework templates, AI ethics lesson plans, and consultation tools—that address student safety and staff well-being in your school or initial teacher education program.
https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf
Author: Dr Janine Arantes, Academic and Researcher at Victoria University
This case study was written by Dr Janine Arantes after reading Example 2: Facial recognition technology, in Australia's Voluntary AI Safety Standard. It is therefore grounded in the events described in that source, and the original prompt for the case study is acknowledged.
Acknowledgement of Country

We acknowledge the Ancestors, Elders and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their ownership of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation and renewal, and that the Traditional Owners' living culture and practices have a unique role in the life of this region.
