Viral deepfake during a remote placement
When digital exclusion magnifies image-based abuse
How to cite this learning scenario
Arantes, J. (2025). Viral deepfake during a remote placement. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Warning: This scenario mentions sexualised deepfake abuse and image-based harm. All scenarios on AIGE are fictitious.
Abstract
A pre-service teacher on placement in a very remote community discovers a sexually explicit deepfake that uses her likeness and the school's identifiable setting. The video spreads quickly via peer-to-peer apps. Knowing that sexually explicit deepfakes make up 98 percent of the millions of deepfake videos online, and that 99 percent of the people depicted are women (NSW Parliamentary Research Service, 2025), she feels vulnerable. She also knows from her teaching course that the eSafety Commissioner has warned that explicit deepfakes have increased by up to 550 percent since 2019 and provides a reporting pathway as cases escalate (eSafety Commissioner, 2025a). On her remote placement, however, things are different: without reliable Wi-Fi she cannot lodge a report, and gaps in infrastructure and reliance on expensive mobile-only services leave her without timely support (Thomas et al., 2023). For a student on placement, this means slower takedown, limited connectivity to document and report, and fewer local referral options. The scenario invites analysis using the Transactional Model of Stress and Coping alongside psychosocial hazard codes, focusing on high job demands, low role clarity, and the emotional toll of image-based abuse. It closes by asking you to design governance, documentation, and referral pathways that are feasible in low-connectivity environments and that align with evolving guidelines.
“I could not even load the reporting page—standing in the school yard with one bar of signal while the fake kept spreading.”
When digital exclusion magnifies image-based abuse
The final week of her six-week placement had started well. The pre-service teacher (PST), a third-year Bachelor of Education student, was working with her mentor on a science unit about renewable energy. The school was small, comprising two composite classes, and sat in a community more than 300 kilometres from the nearest regional centre.
At lunch, two Year 9 students in the playground looked up from a phone and laughed uncomfortably, turning the screen away as the PST approached. That afternoon, the mentor teacher received a call from a parent who had seen an explicit video circulating on Instagram. The person in the video looked exactly like the PST, wearing the same clothes she had on that morning, in a location identical to the school staffroom doorway.
The PST’s initial reaction was disbelief, quickly replaced by shock. She had never shared intimate images. The mentor, aware of recent eSafety alerts about the rise of deepfakes in school contexts (eSafety Commissioner, 2025b), accessed the Responding to Image-Based Abuse: Educator Toolkit (eSafety Commissioner, 2025c) for immediate guidance. They began collecting evidence (screenshots, account handles, timestamps) without resharing the material.
But connectivity became a barrier. The school’s internet was unreliable, and the PST’s prepaid mobile data was nearly exhausted. Research from the Mapping the Digital Gap project shows that in many remote First Nations communities, prepaid data is the norm and poor coverage is a daily reality (Thomas et al., 2023). The principal allocated funds for the ICT officer to set up a stable connection so that reports could be lodged with eSafety and the university’s placement office.
The university’s incident reporting process was long, and because the connection kept dropping out, it took the PST two days to complete her statement, sending partial drafts whenever a connection was available. She then phoned the placement coordinator to ask that a report be logged with eSafety, and she accessed counselling services on site. The delay, however, not only heightened her anxiety but gave the deepfake time to go viral, a dynamic that can be considered within Safe Work Australia’s psychosocial hazard framework.
In thinking about how to improve its processes, the university used Lazarus and Folkman's Transactional Model of Stress and Coping to explain the situation to the leadership team. The PST’s appraisal identified the harm as both a reputational and a personal safety risk, compounded by uncertainty about roles: Should the school contact the platform? Should the PST? Should the university liaise with police? The mentor and principal clarified immediate steps: platform reporting, evidence preservation, communication (as appropriate), and incident logging.
In the debrief, the PST suggested that universities prepare placement students for image-based abuse risks, especially in rural contexts, before departure. She recommended offline-capable incident reporting templates, a list of relevant contacts, and local support networks.
By the end of the week, the clip had been removed from several platforms, though its digital footprint remained uncertain. The PST finished her placement, but described feeling a lingering loss of safety. The experience informed the university’s update to its Professional Experience Handbook, adding a section on generative AI harms and low-connectivity response plans.
Research Topics
University governance of AI-facilitated image-based abuse in placements
Digital exclusion and incident reporting in higher education fieldwork
Psychosocial safety of PSTs in rural and remote school contexts
Research Questions
How does digital exclusion affect the timeliness of AI-related harm reporting for university placement students?
What governance structures best support PSTs experiencing technology-facilitated abuse during placement?
How are universities integrating low-connectivity harm mitigation into placement preparation?
Data Collection
Policy scan and codeframe
Collect current university and placement-school policies or public webpages on AI and image-based abuse. In small groups, students code for presence/absence of key elements like consent, reporting steps, and timelines. Aggregate counts on a shared sheet to surface gaps and quick wins.
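If the class wants to automate the aggregation step, a minimal sketch is below. It assumes each group exports the shared sheet as a CSV with one row per policy and 1/0 columns for each coded element; the element names and the file name are illustrative, not prescribed by the activity.

```python
import csv
from collections import Counter

# Illustrative codeframe elements; a real codeframe would come from class discussion.
ELEMENTS = ["consent", "reporting_steps", "timelines"]

def tally(path: str) -> None:
    """Count how many coded policies contain each element (1 = present, 0 = absent)."""
    counts, total = Counter(), 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            for element in ELEMENTS:
                counts[element] += int(row.get(element, 0) or 0)
    if total == 0:
        print("No coded policies found.")
        return
    for element in ELEMENTS:
        print(f"{element}: present in {counts[element]}/{total} policies "
              f"({counts[element] / total:.0%})")

tally("policy_codes.csv")  # hypothetical export of the shared sheet
```

Elements with low counts surface immediately as the "gaps and quick wins" the activity asks groups to identify.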
Mock incident timeline audit
Give students short, fictional incident cards for deepfakes, covert recording, or hallucinated content. Each group maps a timeline from discovery to resolution using only publicly available processes. Collect durations and bottlenecks on a whiteboard to identify systemic delays and possible mitigations.
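The whiteboard durations can also be summarised programmatically. The sketch below assumes each group reports hours per stage for its incident card; the stage names and values are invented for illustration.

```python
# Hypothetical stage durations (in hours) from the group timelines.
timelines = {
    "deepfake": {"discovery_to_report": 4, "report_to_takedown": 72, "takedown_to_debrief": 24},
    "covert_recording": {"discovery_to_report": 2, "report_to_takedown": 48, "takedown_to_debrief": 12},
}

# For each incident, report the total elapsed time and the slowest stage (the bottleneck).
for incident, stages in timelines.items():
    total = sum(stages.values())
    bottleneck = max(stages, key=stages.get)
    print(f"{incident}: {total}h total, bottleneck = {bottleneck} ({stages[bottleneck]}h)")
```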
Reporting pathway usability test
Provide three public reporting routes, for example eSafety forms, platform abuse pages, and the university’s visible webpages. Students time how long it takes to find the correct form, the required evidence list, and a human contact. Record steps and clicks; tally averages to produce a class “friction index.”
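One way the class might compute the friction index is sketched below. The route names and trial numbers are invented, and the equal weighting of minutes and clicks is one possible definition, not a standard measure.

```python
from statistics import mean

# Illustrative trial data: (seconds_to_find_form, clicks) per student, per route.
trials = {
    "eSafety form": [(120, 5), (90, 4), (150, 6)],
    "platform abuse page": [(300, 12), (240, 9)],
    "university webpage": [(600, 20), (540, 18), (660, 22)],
}

for route, results in trials.items():
    avg_seconds = mean(t for t, _ in results)
    avg_clicks = mean(c for _, c in results)
    # Friction index: minutes spent plus clicks taken, equally weighted (an assumption).
    friction = avg_seconds / 60 + avg_clicks
    print(f"{route}: {avg_seconds:.0f}s avg, {avg_clicks:.1f} clicks avg, "
          f"friction index = {friction:.1f}")
```

Higher scores flag the reporting routes most likely to stall a stressed user on a slow connection.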
Resources
NSW Parliamentary Research Service. (2025). Sexually explicit deepfakes.
eSafety Commissioner. (2025a). Deepfake damage in schools.
eSafety Commissioner. (2025c). Responding to image-based abuse: Educator toolkit.
Thomas, J., Barraket, J., Wilson, C., Rennie, E., Kennedy, J., & MacDonald, T. (2023). Australian Digital Inclusion Index 2023: Summary report.