Governing Through a Transfeminist Lens
Smart Glasses, Safety, and Inclusive Futures in Higher Education
How to cite this learning scenario
Smart Glasses Lab (2025). Governing through a transfeminist lens. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This scenario is informed by transfeminist approaches to AI governance (Attard-Frost, 2025), which stress plurality, accountability, and the centring of marginalised voices in the design of technology policies. Rather than framing smart glasses solely as a risk, the scenario asks how universities might seize the current moment to implement anticipatory, inclusive governance structures before wearable AI becomes entrenched.
Smart glasses in higher education present profound contradictions: they can enable students with disabilities to participate more fully, yet they also open the door to surveillance, harassment, and identity exposure. A transfeminist governance lens suggests we must refuse either/or framings of safety and accessibility. Instead, universities can build governance that is participatory, relational, and protective of those most exposed to harm. Key opportunities explored in this scenario include embedding consent protocols in classroom practice, mandating opt-in visibility for wearable use, and establishing independent oversight bodies that include students, staff, and community advocates. Rather than retrofitting policies after harms occur, transfeminist governance emphasises proactive safeguards, transparency, and mechanisms for collective refusal: the ability for communities to say “no” to unsafe deployments. This scenario imagines a university that implements these principles to protect women, queer, and trans students and staff from doxxing, stalking, and deepfakes while also securing the accessibility gains smart glasses can bring. The goal is to show how higher education can lead not only in adopting technology, but in governing it ethically, equitably, and inclusively.
“The question is not if we will govern them, but how, and whose voices will shape that governance.”
All the tools are already here.
In 2028, a large university in Australia faced mounting pressure to respond to the rapid uptake of smart glasses on campus. Students with vision impairments, dyslexia, and hearing loss were reporting transformative benefits: real-time transcription, instant translation, and text-to-speech functions had improved learning experiences dramatically. At the same time, concerns about surveillance and abuse were escalating.
Student activists, many from queer and feminist collectives, demanded not only protections from stalking and doxxing but also governance that went beyond prohibition. Drawing on transfeminist principles (Attard-Frost, 2025), they argued that governance should not be about banning technologies outright, but about building structures of care, accountability, and refusal. The university convened a Smart Glasses Governance Council. Unlike typical top-down committees, it included women, queer and trans students, disability advocates, academic staff, and representatives from the eSafety Commissioner’s office. The council’s first act was to co-design Consent in the Classroom protocols: lecturers and students could clearly signal when recording was prohibited, and wearable devices were programmed to display a consent light, visible to others beyond the wearer, only when permission had been given.
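The consent-light protocol described above can be sketched in code. This is a minimal illustrative model, not a real device API: the names (`ConsentState`, `Classroom`, `Wearable`, `sync`) are assumptions introduced here, and the point is only the governance logic, in which recording defaults to off and the visible light is coupled to explicit permission.

```python
# Hypothetical sketch of the Consent in the Classroom protocol.
# All class and method names are illustrative, not a real wearable API.
from dataclasses import dataclass
from enum import Enum

class ConsentState(Enum):
    PROHIBITED = "prohibited"  # the default: recording disallowed
    GRANTED = "granted"        # explicit permission signalled in the room

@dataclass
class Classroom:
    # Governance default: no consent unless actively given.
    consent: ConsentState = ConsentState.PROHIBITED

@dataclass
class Wearable:
    light_on: bool = False
    recording: bool = False

    def sync(self, room: Classroom) -> None:
        # The visible consent light shows only if permission was granted,
        # and recording is hard-disabled whenever the light is off.
        self.light_on = room.consent is ConsentState.GRANTED
        self.recording = self.light_on

room = Classroom()
glasses = Wearable()
glasses.sync(room)          # default state: no light, no recording

room.consent = ConsentState.GRANTED
glasses.sync(room)          # permission given: light on, recording enabled
```

The design choice mirrors the scenario's transfeminist framing: consent is the property of the shared space (the classroom), not a setting the individual wearer controls.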
Second, the council implemented a Digital Safety Charter, mandating that any data collected through wearables could not be stored or transmitted without explicit opt-in. Students were trained in their rights under Australia’s Online Safety Act 2021 and the proposed anti-doxxing bill. Trauma-informed workshops prepared staff to respond to incidents of deepfake abuse, ensuring psychosocial safety was not sidelined. The council also established a Right of Refusal mechanism: departments could collectively decide not to permit wearable AI in sensitive contexts such as counselling, seminars on controversial topics, or examinations. Importantly, these decisions were documented and communicated transparently, creating a culture where refusal was not punitive but protective.
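The charter's opt-in rule amounts to a simple guard condition: processing is refused unless explicit consent is on record. The sketch below is a hypothetical illustration of that rule, with all names (`Capture`, `ConsentError`, `store`) invented for the example.

```python
# Hypothetical sketch of the Digital Safety Charter's opt-in rule:
# wearable data may be stored or transmitted only with explicit opt-in.
from dataclasses import dataclass

class ConsentError(Exception):
    """Raised when wearable data is processed without explicit opt-in."""

@dataclass(frozen=True)
class Capture:
    subject_id: str
    opted_in: bool  # explicit opt-in recorded at capture time

def store(capture: Capture, log: list) -> None:
    # Under the charter, storage is refused unless opt-in is explicit;
    # there is no silent default and no implied consent.
    if not capture.opted_in:
        raise ConsentError(f"no opt-in recorded for subject {capture.subject_id}")
    log.append(capture)

log: list = []
store(Capture("s-001", opted_in=True), log)   # accepted
try:
    store(Capture("s-002", opted_in=False), log)
except ConsentError:
    pass                                      # refused, nothing stored
```

Making refusal the failure mode, rather than a setting a vulnerable person must remember to enable, is what shifts the burden of safety onto the system.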
By 2030, the university had become a model of anticipatory governance. While risks remained, incidents of abuse were met with swift, coordinated responses. Crucially, marginalised students and women reported feeling safer, knowing their voices had shaped the governance structures and that incident responses were in place. This transfeminist approach did not eliminate harm, but it shifted the burden of safety from individuals to collective systems of accountability. Instead of asking vulnerable people, including women already being deepfaked and doxxed, to adapt their behaviour (avoiding debates, changing travel routes, reducing visibility), governance required the institution to adapt. The scenario illustrates how higher education can lead by embedding transfeminist ethics into technology policy, ensuring both inclusivity and protection.
Research Questions
How can transfeminist principles reshape governance of wearable AI in higher education?
What role should consent, refusal, and participatory decision-making play in regulating smart glasses use?
How can institutions ensure that marginalised groups are not disproportionately burdened with managing risk?
Research Topics
Transfeminist Governance in Practice: embedding collective care and refusal into smart glasses policy.
Balancing Accessibility and Safety: designing policies that protect against gendered harassment while supporting inclusion.
Institutional Accountability: moving from reactive responses to proactive, trauma-informed governance.
Data Collection
Activity 1: Co-Designing Safety and Access
Task: Facilitate workshops where women, queer, trans, and disabled students co-design “rules of use” for smart glasses in classrooms. Document how their proposals balance accessibility gains with protections from surveillance and harassment.
Activity 2: Mapping Consent Practices
Task: Test visible consent mechanisms for wearable AI (e.g., lights, signals, opt-in agreements). Gather reflections from staff and students on whether these practices feel empowering, tokenistic, or genuinely protective.
Activity 3: Collective Refusal Scenarios
Task: Use role-play to explore contexts (e.g., counselling sessions, viva exams, activist classrooms) where smart glasses should be refused. Analyse how transfeminist principles of refusal and accountability can be embedded into institutional governance.