Doxxed at a Glance



Smart Glasses, Women’s Safety, and the Normalisation of Instant Surveillance

How to cite this learning scenario

Smart Glasses Lab (2025). Doxxed at a glance. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract

Trigger warning: doxxing, stalking, harassment
This scenario is based on NBC News (2024) and Hill (2024, The New York Times), who report on two Harvard students who demonstrated how AI-enhanced smart glasses could identify strangers in public and retrieve personal data within seconds. By combining Ray-Ban Meta glasses, facial search engines like PimEyes, and a ChatGPT-style assistant, the students showed how names, occupations, addresses, and relatives could be extracted at a glance. While the students framed the project as a warning, its implications for women's safety are profound. Stalking, harassment, and intimate partner violence already disproportionately affect women; wearable recognition devices collapse the remaining boundaries between private life and public exposure. The ease with which identity can be revealed raises psychosocial and physical risks, from doxxing and reputational harm to real-time targeting in unsafe spaces. In universities, women staff and students are especially vulnerable. A lecturer could be tracked from a classroom to her home address without consent. Female students might self-censor or avoid campus activities for fear of exposure or unwanted contact. The threat is not speculative: the viral video demonstrated how quickly everyday technologies can be reconfigured for invasive surveillance. This scenario explores how open-source surveillance tools amplify gendered vulnerabilities. It asks whether higher education institutions are prepared to confront the intersection of accessibility, privacy, and women's safety, or whether inaction will normalise a culture where surveillance and doxxing become inevitable features of academic life.

"This incident shows how easy it is to secretly surveil people with these glasses"

Doxing, surveillance, and identity exposure on campus

On a busy evening in 2026, a female postgraduate student left her university library after a late study session. As she crossed the campus courtyard, she noticed a man lingering nearby, wearing glasses that looked ordinary. What she did not know was that, like the Harvard experiment reported by NBC News and The New York Times (Hill, 2024), his glasses were connected to an AI assistant. Within 90 seconds, her name, academic program, and Instagram profile were pulled up. A moment later, her home address was displayed on his phone. The student had no idea she had been identified. Days later, she began receiving anonymous messages referencing her research, her neighbourhood, and her family.

This kind of doxxing is not new, but the integration of facial recognition into everyday eyewear lowers the barrier to entry: stalking no longer requires premeditation, only opportunism. The Harvard students behind the viral project admitted the system took just four days to code and worked roughly a third of the time. The chilling novelty lies in how accessible the tools were: any individual with cheap smart glasses could replicate it. For women, this normalises new risks. A woman speaking out in class could be instantly identified and targeted online. Female academics could be deepfaked into compromising contexts using images harvested without consent. People who have experienced domestic violence could be found despite protective orders.

The psychosocial impacts were not abstract: they were immediate. Women on campus reported altering their routines, avoiding public debates, limiting their online presence, or taking different travel routes home, simply to minimise risk. These responses echoed broader research on cyber abuse: as Arantes (2024) notes, once personal data or manipulated images go viral, "it's too late." The post or video cannot be pulled back, and the harm multiplies through algorithmic amplification.
Smart glasses collapse the distinction between private and public, making doxxing instantaneous. For women, particularly women of colour, LGBTQ+ staff, and those in visible roles like teaching, the risk is compounded. A single glance through AI-enabled eyewear can expose addresses, relatives, or prior publications, opening the door to the illusion of 'knowing the person', stalking, gender-based harassment, or reputational attacks. The link with deepfakes is especially dangerous. By 2030, without anticipatory governance, these devices became normalised. A new form of workplace hazard emerged: wearable technologies that enable harassment in real time. Women left the workforce in droves. Although legislation evolved, the constant fear meant that workplaces, educational institutions, hospitals, and outdoor spaces needed a cultural change, one that spoke back to the normalisation of smart glasses in society.

Research Questions

How do AI-enabled wearables contribute to new forms of cyber abuse, including doxxing and deepfakes, within higher education?
What governance models can universities adopt to align with national legislation (e.g., doxxing laws, the Online Safety Act) to protect women staff and students?
In what ways can trauma-informed approaches reshape digital safety policies in academia?

Research Topics

Doxxing as a Workplace Hazard in Higher Education: linking emerging legislation to campus governance.
Gendered Dimensions of AI Abuse: how smart glasses amplify stalking, harassment, and deepfake risks for women educators.
Trauma-Informed Policy and AI Governance: embedding psychosocial safety into institutional responses to wearable technologies.

Data Collection

Activity 1: Policy and Legislation Alignment
Task: Compare your institution's current policies on harassment and digital safety with national legislation (e.g., the Online Safety Act 2021, proposed anti-doxxing laws). Where do gaps remain?

Activity 2: Trauma-Informed Case Mapping
Task: Collect anonymised narratives from women staff and students who have experienced online harassment or doxxing. Analyse how institutional responses either mitigated or exacerbated harm.

Activity 3: Safety Simulation Exercise
Task: Facilitate a role-play where staff experience how quickly smart glasses could expose personal data. Debrief with participants to identify governance needs and immediate safeguards.
Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners' living culture and practices continue to have a unique role in the life of this region.