Deepfakes
It's too late - the post has gone viral already
AI RISK MANAGEMENT IN EDUCATION
How to cite this learning scenario
Arantes, J. (2025). Deepfakes. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
This case explores a hypothetical scenario in which a teacher is secretly filmed in class, the footage manipulated into a deepfake, and the video circulated online in real time. Within hours, it goes viral—provoking abusive commentary, threats, and severe emotional harm. The teacher is doxxed and placed on stress leave, while colleagues experience secondary trauma. With no clear pathway for institutional redress, the incident highlights Platform-Enabled Teacher-Targeted Adult Cyber Abuse (PETTACA). Drawing on the Online Safety Act 2021 and guidance from the Australian eSafety Commissioner, this case reframes viral abuse as a psychosocial and structural risk, algorithmically amplified by platform design. It advocates for trauma-informed research methodologies that account for both direct and vicarious harm.
“When a teacher is the subject of a viral post—potentially manipulated through deepfake technologies—the trauma is not limited to the original act. It is algorithmically amplified. Platforms that monetize engagement perpetuate and escalate the harm, making the abuse not only more visible but more enduring. This is not just about individual behaviour—it’s about platform-enabled adult cyber abuse.”
Deepfakes: It's too late - the post has gone viral already.
Ms. Taylor, a respected Year 10 English teacher at a suburban Australian high school, begins her Wednesday morning as usual—preparing for her double-period class on Macbeth. Unbeknownst to her, a student has been covertly recording snippets of her lessons over several weeks using a smartwatch. These clips are later fed into freely available generative AI tools to produce a deepfake video. In the fabricated footage, Ms. Taylor appears to mock students, make politically charged comments, and use inappropriate language—none of which she ever said. The video, set convincingly within her classroom, even includes the school’s logo and visible student work in the background, adding a layer of false authenticity. That evening, the deepfake is uploaded to a private student Discord server, then rapidly disseminated across TikTok, Snapchat, and Instagram with the caption: “Can’t believe this is our teacher. Disgusting.” By morning, the video has gone viral—shared by meme pages, local gossip accounts, and even a fringe tabloid website.
The school’s phone lines flood with complaints from outraged parents. Ms. Taylor wakes up to a barrage of abusive messages, including threats and calls for her resignation. Her personal details are exposed online. Her children’s names and photos are circulated in forums. An anonymous email is sent to the Department of Education demanding disciplinary action.
Although digital forensics quickly proves the video is a deepfake, the reputational and psychological damage is immediate and far-reaching. Ms. Taylor takes extended stress leave. Some students feel betrayed and confused. Others treat the incident as a joke. Her colleagues begin to question their own digital safety—worried they might be next.
The school’s leadership team, although well equipped to address bullying and other familiar concerns, has no framework for a crisis of this kind; the virality of the deepfake outpaces every existing protocol. They hesitate to respond publicly, fearing reputational damage, legal implications, and backlash. The deepfake continues to circulate online, amplified by algorithms that prioritise engagement. It becomes monetised content. A teacher’s distress becomes a platform’s profit.
But the harm doesn’t end with Ms. Taylor.
Teachers in her department report heightened anxiety. A colleague breaks down after seeing comments under the video that mirror abuse she once experienced. New graduates on staff begin scrubbing their social media profiles. A pre-service teacher on placement withdraws mid-week, shaken by what she has witnessed and unsure whether she wants to stay in the profession. The trauma is both direct and vicarious, rippling across the school community and reinforcing a culture of fear and silence. What eventuates is a call from government for these insights to inform Initial Teacher Education (ITE) programs: embedding critical digital literacy, trauma-informed practice, and acknowledgment of platform accountability into teacher training. By teaching future educators to anticipate the psychosocial impact of synthetic media, institutions shift the narrative from individual blame to systemic responsibility, because when the harm is viral, the response must be structural.
Research Topics
Deepfake Abuse and Teacher Wellbeing: Investigate the psychological impact of deepfake-driven online abuse on teacher mental health and workplace retention.
Platform Algorithms and Viral Harm: Examine how social media algorithms contribute to the amplification of teacher-targeted abuse through engagement-based content promotion.
Vicarious Trauma in School Staff: Explore the experiences of colleagues who witness viral abuse incidents and their susceptibility to secondary trauma and compassion fatigue.
Platform-Enabled Teacher-Targeted Adult Cyber Abuse (PETTACA) as a Framework in ITE: Evaluate how the PETTACA framework can be integrated into Initial Teacher Education to prepare pre-service teachers for digital safety and resilience.
Policy Gaps in Deepfake Regulation for Educators: Analyse existing online safety legislation to identify shortcomings in protecting educators from synthetic media manipulation.
Who might be interested in this case? This case would interest HDR students exploring digital harm, researchers examining AI and trauma, SoLT practitioners designing safer learning environments, and policymakers developing frameworks to regulate platform-enabled abuse in education.
Research Questions
What are the short- and long-term psychosocial impacts of deepfake-enabled abuse on teachers’ mental health and professional engagement?
How do algorithmic recommendation systems on social media platforms contribute to the virality and persistence of teacher-targeted abusive content?
In what ways do school staff experience vicarious trauma after witnessing or supporting colleagues affected by viral deepfake abuse?
How can the PETTACA framework be effectively embedded into Initial Teacher Education programs to enhance pre-service teachers’ preparedness for platform-enabled cyber abuse?
To what extent do current Australian online safety laws and institutional policies address the risks posed by deepfakes and platform-enabled abuse targeting educators?
Data Collection
Conduct trauma-informed interviews with teachers who have experienced viral online abuse to explore its psychosocial impact.
Facilitate reflective focus groups with educators to examine how fears of deepfake abuse influence pedagogical choices.
Collect anonymised staff incident reports and conduct a short survey on digital safety concerns to identify school-wide policy gaps.
Conduct narrative interviews with diverse educators (e.g., LGBTQI+ educators) to explore how experiences of platform-enabled abuse intersect with gender expression, sexuality, and digital visibility.
Support (blog): https://blog.aare.edu.au/cyberabuse-its-too-late-the-post-has-gone-viral-already/
Support (paper): Arantes, J. (2023). It's too late - the post has gone viral already: a novel methodological stance to explore K-12 teachers' lived experiences of adult cyber abuse. Qualitative Research Journal. https://www.emerald.com/insight/content/doi/10.1108/qrj-01-2023-0014/full/pdf?title=it039s-too-late-the-post-has-gone-viral-already-a-novel-methodological-stance-to-explore-k-12-teachers039-lived-experiences-of-adult-cyber-abuse
Support (chapter): Vicars, Mark, and Aldous Arantes. "Deepfakes and generative AI." In Elgar Encyclopedia of Queer Studies, pp. 86-88. Edward Elgar Publishing, 2025.