MONITORING AI IN EDUCATION
AI Myths
Neutrality
Racial and gender bias in AI hiring systems
How to cite this learning scenario
Arantes, J. (2025). AI Myths: Neutrality. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
abstract
This classroom scenario challenges the myth that AI systems are neutral. Students engage in a simulated career-matching activity that mirrors real-world research from the University of Washington, which found AI hiring systems show significant racial and gender bias. Students use fictional profiles to test how an AI ranks candidates, then critically examine the outcomes. They explore how bias enters AI through training data and design, and reflect on how this shapes opportunities. The activity builds digital literacy and awareness of intersectionality, with an emphasis on justice, fairness, and the role of human decision-making in education and employment.
AI systems don’t just reflect inequality—they automate it.
Who Gets the Job?
Your class is studying future careers, and your teacher introduces a new AI-powered tool designed to help students find career paths based on their interests and “background.” You’re given fictional student profiles with different names, hobbies, and grades. Each group feeds a profile into the tool and records which careers the AI recommends.
Quickly, strange patterns emerge.
Riley (a white-sounding name) gets recommendations for lawyer, engineer, and CEO. Aaliyah (a Black-sounding name) gets nursing aide, receptionist, and cleaner. Marcus, whose profile notes that he uses a wheelchair, is recommended only remote work. Students with male-coded names consistently receive higher-paying career suggestions than students with female-coded names.
The class discusses the results, and your teacher presents new research from the University of Washington (Wilson & Caliskan, 2024). The study shows that, across more than three million resume-job comparisons, real AI hiring systems preferred white-sounding names 85% of the time and never ranked Black male names above white male names.
You learn that these systems don’t “see” people; they process patterns from data that reflect historical discrimination. You also learn that intersectionality, the way race and gender combine, matters: the system treated Black women differently from Black men, and neither group fared well compared to their white peers.
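The study’s audit method can be made concrete with a small amount of code. The sketch below is a minimal illustration, not the study’s actual pipeline: rank_resumes is a hypothetical stand-in for whatever AI ranking tool is being audited, and the names and resume fields are invented. The idea mirrors the paired comparisons described above: submit identical resumes under different names and count how often each name is ranked first.

    from collections import Counter
    from itertools import permutations

    def rank_resumes(resumes):
        # Hypothetical stand-in for the AI tool under audit. This placeholder
        # ranks alphabetically by name, a deliberately name-dependent rule,
        # so the audit below will flag it. Swap in calls to the real system.
        return sorted(resumes, key=lambda r: r["name"])

    NAMES = ["Riley", "Aaliyah", "Marcus"]
    BASE_RESUME = {"skills": ["Python", "communication"], "years_experience": 5}

    wins = Counter()
    trials = 0
    for name_a, name_b in permutations(NAMES, 2):
        # Two identical resumes; only the name differs.
        pair = [dict(BASE_RESUME, name=name_a), dict(BASE_RESUME, name=name_b)]
        winner = rank_resumes(pair)[0]["name"]
        wins[winner] += 1
        trials += 1

    for name in NAMES:
        print(f"{name}: ranked first in {wins[name]}/{trials} trials")

A name-blind ranker would leave each name with roughly the same share of first places; a heavy skew toward some names, like the 85% figure above, is the signature of name-sensitive ranking.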
Students reflect on how AI can reinforce inequality even when it claims to be “objective.” In groups, you brainstorm ways to make the AI tool fairer: adding transparency, community oversight, and cultural understanding.
To close, your class writes a shared statement: “Bias isn’t just in the past—it’s in the code. AI needs rules, not just data.”
research questions
How do students recognise bias in AI recommendations?
What role does intersectionality play in how bias is experienced?
What accountability mechanisms do students propose for AI systems?
research topics
Algorithmic bias and discrimination
Intersectionality and digital fairness
Critical digital literacy for social justice
Equity and technology in career education
data collection
Profile Outcome Logs (1.6, 2.6, 3.6, 5.4)
Documented AI career recommendations by profile; a simple tallying sketch follows this list.
Group Debrief Notes (1.4, 2.4, 3.3, 6.3)
Small-group discussions on perceived fairness.
Student Position Statements (2.6, 4.1, 5.5, 7.1)
Written reflections or statements to policymakers.
Comparative Posters (2.4, 3.4, 4.4, 5.2)
Visual artifacts mapping equity vs. bias in AI tools.
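The Profile Outcome Logs above can be tallied with a few lines of code. This is a minimal sketch under assumed conventions: the log format, profile names, and careers are invented examples, and in practice the entries would come from the class’s recorded results.

    from collections import defaultdict

    # Each entry pairs a fictional profile with one career the AI recommended.
    log = [
        ("Riley", "lawyer"), ("Riley", "engineer"), ("Riley", "CEO"),
        ("Aaliyah", "nursing aide"), ("Aaliyah", "receptionist"),
        ("Marcus", "remote data entry"),
    ]

    # Group recommendations by profile so disparities sit side by side.
    by_profile = defaultdict(list)
    for profile, career in log:
        by_profile[profile].append(career)

    for profile, careers in sorted(by_profile.items()):
        print(f"{profile}: {', '.join(careers)}")

Printed side by side, the grouped lists make patterns such as those in the scenario above easy to spot and to carry into the debrief notes and comparative posters.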
Article: Milne, S. (2024, October 31). AI tools show biases in ranking job applicants’ names according to perceived race and gender. UW News. University of Washington. https://www.washington.edu/news/2024/10/31/ai-tools-show-biases-in-ranking-job-applicants-names/