Balancing Safety and Privacy:
A case to prompt debate about whether we should have facial recognition technology in educational settings
Human Oversight & Intervention in AI
How to cite this learning scenario
Arantes, J. (2025). Facial Recognition Technology in educational contexts. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study examines the ethical and practical challenges of implementing facial recognition technology (FRT) in schools, TAFE, and higher education institutions, emphasizing the critical role of human oversight and intervention. It draws on the example of EcoRetail’s decision-making process to explore how educational institutions can apply robust governance frameworks to assess AI technologies intended to enhance safety and streamline administrative tasks. The study demonstrates how proactive stakeholder engagement, rigorous risk assessment, and a focus on ethical considerations led to the decision not to implement FRT. It highlights the need for specific frameworks that safeguard the rights of minors, ensure the privacy of all students, and maintain safe and supportive working conditions for teachers and staff. This narrative offers practical insights for school leaders, educators, policymakers, and IT professionals on how to integrate AI responsibly while balancing innovation with ethical and legal responsibilities in educational contexts.
Effective AI governance in education requires not only robust frameworks but also active human oversight to ensure technologies align with ethical values and safeguard student and staff well-being.
The Global Influence of Big EdTech
A secondary school considered implementing FRT to automate attendance and enhance security. School leaders engaged with an AI vendor, similar to FRTCo Ltd, which proposed using facial recognition to monitor who entered and exited the campus. The system promised improved safety by identifying unauthorized visitors and streamlining administrative processes.
During stakeholder consultations with staff, parents, and students, concerns emerged about privacy, potential biases, and the psychological impact of surveillance on students and staff. The AI vendor reported a 99% accuracy rate overall, but this dropped to 95% for specific cultural groups, raising concerns about equity and discrimination. Additionally, the vendor could not provide transparency about how biases were managed or how the biometric data of minors would be stored and accessed, prompting further ethical questions.
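To make the reported accuracy gap concrete, the short sketch below estimates how it could translate into day-to-day misidentification events. It is illustrative only: the 99% and 95% figures come from the consultation described above, while the enrollment of 1,000 students, the 800/200 split between groups, and the two scans per day are hypothetical assumptions, not data from the case.

```python
# Illustrative sketch only. The accuracy rates are those reported by the vendor
# in the case study; the enrollment figures, group split, and scan frequency are
# hypothetical assumptions chosen for illustration.
# Simplification: the 99% overall rate is applied to the non-affected group,
# even though the vendor reported it as an overall figure.

ACCURACY_OVERALL = 0.99      # vendor-reported overall accuracy
ACCURACY_AFFECTED = 0.95     # vendor-reported accuracy for specific cultural groups

def expected_misidentifications(students: int, accuracy: float, scans_per_day: int = 2) -> float:
    """Expected number of misidentification events per school day."""
    return students * scans_per_day * (1 - accuracy)

# Hypothetical school of 1,000 students, 200 of whom belong to the groups
# for which the vendor reported lower accuracy.
general_group = 800
affected_group = 200

daily_general = expected_misidentifications(general_group, ACCURACY_OVERALL)
daily_affected = expected_misidentifications(affected_group, ACCURACY_AFFECTED)

print(f"Expected daily misidentifications, general group of {general_group}: {daily_general:.0f}")
print(f"Expected daily misidentifications, affected group of {affected_group}: {daily_affected:.0f}")
print(f"Per-student error rate is {(1 - ACCURACY_AFFECTED) / (1 - ACCURACY_OVERALL):.0f}x higher for the affected group")
```

Even under these simplified assumptions, the smaller group would account for more misidentifications than the rest of the school combined, which is precisely the equity concern stakeholders raised.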
The school applied the Voluntary AI Safety Standard, focusing on human oversight to evaluate the technology's impact. School leadership conducted a thorough risk assessment, engaged with stakeholders to gather diverse perspectives, and tested the system under controlled conditions. They identified that the risks of misidentification, privacy breaches, and potential discrimination outweighed the intended safety benefits. The decision-making process also considered the specific needs of minors, who require additional legal and ethical protections, and the rights of teachers and staff to work in a supportive and non-intrusive environment. Ultimately, the school chose not to implement FRT, opting instead for human-led security protocols and clear communication strategies that aligned with the school's values of safety, inclusivity, and respect for both students and staff.
This case study demonstrates the importance of human oversight and intervention when deploying AI technologies in educational contexts, ensuring that innovation does not compromise ethical standards or the rights of vulnerable groups. It connects to broader systemic issues in education, such as the need for specific governance frameworks that address the dual responsibilities of protecting minors and maintaining fair and safe workplaces for educators.
Research Topics
- Understand the ethical, legal, and governance challenges of deploying AI technologies like facial recognition in educational settings involving both students and staff.
- Analyze how human oversight and intervention can guide ethical decision-making and ensure compliance with specific frameworks that protect minors and support safe workplaces for educators.
Research Questions
- How would you respond to the concerns raised during stakeholder consultation while aligning with your local context's transparency and data governance requirements? What would you need?
- What steps would you take to ensure equity and inclusivity, and how would you address students' and parents' concerns about potential biases in the technology?
- How would you apply the Voluntary AI Safety Standard to balance safety objectives with ethical considerations in your context, particularly given the needs of minors and other vulnerable groups?
- What human oversight mechanisms would you put in place to reduce the risk of misidentification and ensure a supportive and non-intrusive school environment?
- What advice would you give institutions wanting to incorporate diverse stakeholder perspectives into their decision-making processes, and what alternative solutions might you suggest to address safety concerns without compromising students' mental health and sense of belonging?
Data collection
Collect data through document review and stakeholder interviews to identify or evaluate resources—such as governance framework templates, AI ethics lesson plans, and consultation tools—that address student safety and staff well-being in your school or initial teacher education program.
https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf
Author: Dr Janine Arantes, Academic and Researcher at Victoria University
This case study was written by Dr Janine Arantes after reading Example 2: Facial recognition technology in Australia's Voluntary AI Safety Standard. It is therefore grounded in the events as reported by that source, and the original prompt for the case study is acknowledged.