Balancing Safety and Privacy:
A Case to Prompt Debate: Should We Have Facial Recognition Technology in Educational Settings?
Human Oversight & Intervention in AI
How to Cite This Learning Scenario
Arantes, J. (2025). Facial Recognition Technology in educational contexts. Case Studies in AI Governance for Education. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case study examines the ethical and practical challenges of implementing facial recognition technology (FRT) in schools, TAFE, and higher education institutions, emphasizing the critical role of human oversight and intervention. It draws on the example of EcoRetail’s decision-making process to explore how educational institutions can apply robust governance frameworks to assess AI technologies intended to enhance safety and streamline administrative tasks. The study demonstrates how proactive stakeholder engagement, rigorous risk assessment, and a focus on ethical considerations led to the decision not to implement FRT. It emphasizes the need for specific frameworks that safeguard the rights of minors, ensure the privacy of all students, and maintain safe and supportive working conditions for teachers and staff. This narrative offers practical insights for school leaders, educators, policymakers, and IT professionals on how to integrate AI responsibly while balancing innovation with ethical and legal responsibilities in educational contexts.
Effective AI governance in education requires not only robust frameworks but also active human oversight to ensure technologies align with ethical values and safeguard student and staff well-being.
The Global Influence of Big EdTech
A secondary school considered implementing FRT to automate attendance and enhance security. School leaders engaged with an AI vendor, similar to FRTCo Ltd, which proposed using facial recognition to monitor who entered and exited the campus. The system promised improved safety by identifying unauthorized visitors and streamlining administrative processes.
During stakeholder consultations with staff, parents, and students, concerns emerged about privacy, potential biases, and the psychological impact of surveillance on students and staff. The AI vendor reported a 99% accuracy rate overall, but this dropped to 95% for specific cultural groups, raising concerns about equity and discrimination. Additionally, the vendor could not provide transparency about how biases were managed or how the biometric data of minors would be stored and accessed, raising further ethical issues.
The school applied the Voluntary AI Safety Standard, focusing on human oversight to evaluate the technology's impact. School leadership conducted a thorough risk assessment, engaged with stakeholders to gather diverse perspectives, and tested the system under controlled conditions. They identified that the risks of misidentification, privacy breaches, and potential discrimination outweighed the intended safety benefits. The decision-making process also considered the specific needs of minors, who require additional legal and ethical protections, and the rights of teachers and staff to work in a supportive and non-intrusive environment. Ultimately, the school chose not to implement FRT, opting instead for human-led security protocols and clear communication strategies that aligned with the school's values of safety, inclusivity, and respect for both students and staff.
This case study demonstrates the importance of human oversight and intervention when deploying AI technologies in educational contexts, ensuring that innovation does not compromise ethical standards or the rights of vulnerable groups. It connects to broader systemic issues in education, such as the need for specific governance frameworks that address the dual responsibilities of protecting minors and maintaining fair and safe workplaces for educators.
Overview

This case study highlights the need for educational institutions to apply human oversight and robust governance when integrating AI technologies such as facial recognition. It underscores the importance of aligning technological use with ethical and legal frameworks, particularly those that provide specific protections for minors and safeguard teachers' rights as workers.

Facial recognition technology (FRT) offers potential benefits in educational settings, such as enhancing campus security, monitoring attendance, and identifying potential threats. However, implementing such technology also presents significant ethical, privacy, and safety challenges, particularly concerning the rights of minors and the working conditions of educators and staff. Drawing on EcoRetail's example, this case study explores how educational institutions can use human oversight, ethical governance, and intervention to evaluate and manage the risks associated with FRT. The study aims to highlight the importance of aligning AI use with the dual priorities of protecting vulnerable students and ensuring a safe, fair workplace for educational staff.

Keywords

facial recognition technology, ethical AI, education, privacy risks, school safety, human oversight

Learning Objectives

- Understand the ethical, legal, and governance challenges of deploying AI technologies such as facial recognition in educational settings involving both students and staff.
- Analyze how human oversight and intervention can guide ethical decision-making and ensure compliance with frameworks that protect minors and support safe workplaces for educators.

Practical Applications

This case study can be used in professional development programs for educators and administrators, providing practical insights into the responsible adoption of AI technologies. It is also relevant for curriculum design in teaching AI ethics and governance, promoting a deeper understanding of how to maintain ethical standards and prioritize human oversight in educational contexts.

Discussion Questions

- Privacy and Data Management: Scenario: During a school assembly, a parent asks how the biometric data collected by the FRT system will be stored, who will have access to it, and how long it will be retained. The parent also raises concerns about data being shared with third parties. Question: How would you respond to this concern while meeting the transparency and data-governance requirements of your local context? What information or resources would you need?
- Addressing Bias and Discrimination: Scenario: During testing, the school notices that the FRT system has a higher error rate when identifying students from specific cultural backgrounds. Some students feel targeted by the system. Question: What steps would you take to ensure equity and inclusivity, and how would you address students' and parents' concerns about potential biases in the technology?
- Balancing Safety with Ethical Concerns: Scenario: The school board is split on whether the potential safety benefits of the FRT system outweigh the ethical risks of surveillance and privacy invasion. Some argue that it could prevent unauthorized access; others argue it could enhance security more broadly. Question: How would you apply the Voluntary AI Safety Standard to balance safety objectives with ethical considerations in your context, particularly with respect to minors and vulnerable groups?
- Human Oversight vs. Automated Decision-Making: Scenario: The FRT system is proposed to automatically alert security staff when a potential threat is detected. However, during testing, the system generated several false positives, causing unnecessary distress among students. Question: What human oversight mechanisms would you put in place to reduce the risk of misidentification and ensure a supportive and non-intrusive school environment?
- Stakeholder Engagement and Decision-Making: Scenario: Following stakeholder consultations, some staff express concern that FRT could undermine trust between students and teachers. They worry about the psychological impact of constant surveillance on students' well-being. Question: What advice would you give institutions seeking to incorporate diverse stakeholder perspectives into their decision-making, and what alternative solutions might you suggest to address safety concerns without compromising students' mental health and sense of belonging?
Supplementary Materials

Within the context of your own school or initial teacher education program, consider including:

Additional resources could include governance framework templates, lesson plans on AI ethics, and tools for conducting stakeholder consultations that consider both student safety and staff well-being.

https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf
Author: Dr. Janine Arantes, Academic and Researcher at Victoria University

This case study was written by Dr. Janine Arantes after reading Example 2, "Facial recognition technology," in Australia's Voluntary AI Safety Standard. It is therefore grounded in actual events as reported by that source, and the original prompt for the case study is acknowledged.