MONITORING AI IN EDUCATION

AI Myths

Neutrality



Racial and gender bias in AI hiring systems

How to cite this learning scenario

Arantes, J. (2025). AI Myths: Neutrality. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
abstract
This classroom scenario challenges the myth that AI systems are neutral. Students engage in a simulated career-matching activity that mirrors real-world research from the University of Washington, which found AI hiring systems show significant racial and gender bias. Students use fictional profiles to test how an AI ranks candidates, then critically examine the outcomes. They explore how bias enters AI through training data and design, and reflect on how this shapes opportunities. The activity builds digital literacy and awareness of intersectionality, with an emphasis on justice, fairness, and the role of human decision-making in education and employment.

AI systems don’t just reflect inequality—they automate it.

Who Gets the Job?

Your class is studying future careers, and your teacher introduces a new AI-powered tool designed to help students find career paths based on their interests and “background.” You’re given fictional student profiles with different names, hobbies, and grades. Each group feeds a profile into the tool and records which careers the AI recommends.

Strange patterns quickly emerge. Riley (a white-sounding name) gets recommendations for lawyer, engineer, and CEO. Aaliyah (a Black-sounding name) gets nursing aide, receptionist, and cleaner. Marcus, whose profile says he uses a wheelchair, is recommended only remote work. Students with male-coded names consistently receive higher-paying career suggestions than students with female-coded names.

The class discusses the results, and your teacher presents recent research from the University of Washington (Wilson & Caliskan, 2024). It shows that real AI hiring systems preferred white-sounding names 85% of the time and never ranked Black male names above white male names across more than three million resume-job comparisons. You learn that these systems don’t "see people": they process patterns from data that reflect historical discrimination. You also learn that intersectionality, the way race and gender combine, matters. The system treated Black women differently than Black men, and neither group fared well compared to their white peers.

Students reflect on how AI can reinforce inequality even when it claims to be “objective.” In groups, you brainstorm how to make the AI tool fairer: adding transparency, community oversight, and cultural understanding. To close, your class writes a shared statement: “Bias isn’t just in the past—it’s in the code. AI needs rules, not just data.”
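For classes ready to look under the hood, the pattern the students observe can be sketched in a few lines of code. This is a hypothetical illustration, not the scenario's tool or the UW study's method: all names, groups, and numbers below are invented. It shows how a model "trained" on biased historical hiring records reproduces that bias even when two candidates have identical qualifications.

```python
# Hypothetical illustration: invented historical records as
# (name_group, qualification_score, hired?). Group "A" candidates
# were hired more often at the same qualification level.
history = [
    ("A", 7, 1), ("A", 7, 1), ("A", 5, 1), ("A", 5, 0),
    ("B", 7, 1), ("B", 7, 0), ("B", 5, 0), ("B", 5, 0),
]

def hire_rate(group: str) -> float:
    """Fraction of historical candidates in this name group who were hired."""
    outcomes = [hired for g, _, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def ai_score(group: str, qualification: int) -> float:
    """A naive 'AI' ranking: qualification weighted by the learned group pattern."""
    return qualification * hire_rate(group)

# Two candidates with identical qualifications, different name groups:
riley = ai_score("A", 7)    # 7 * 0.75 = 5.25
aaliyah = ai_score("B", 7)  # 7 * 0.25 = 1.75
print(riley > aaliyah)      # True: ranked apart on name group alone
```

Note that simply deleting the name field is not enough in practice: real systems can infer group membership from correlated features, which is why the scenario's students propose transparency and oversight rather than removing a single column.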

research topics

• Algorithmic bias and discrimination
• Intersectionality and digital fairness
• Critical digital literacy for social justice
• Equity and technology in career education

research questions

• How do students recognise bias in AI recommendations?
• What role does intersectionality play in how bias is experienced?
• What accountability mechanisms do students propose for AI systems?

data collection

• Profile Outcome Logs (1.6, 2.6, 3.6, 5.4): documented AI career recommendations by profile.
• Group Debrief Notes (1.4, 2.4, 3.3, 6.3): small-group discussions on perceived fairness.
• Student Position Statements (2.6, 4.1, 5.5, 7.1): written reflections or statements to policymakers.
• Comparative Posters (2.4, 3.4, 4.4, 5.2): visual artifacts mapping equity versus bias in AI tools.
Do you want to know more?

Milne, S. (2024, October 31). AI tools show biases in ranking job applicants’ names according to perceived race and gender. UW News, University of Washington. https://www.washington.edu/news/2024/10/31/ai-tools-show-biases-in-ranking-job-applicants-names/
Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners’ living culture and practices continue to have a unique role in the life of this region.