Mirror, Mask, or Misdirection? A Scenario-Based Inquiry into the Identification of AI-Generated Content
EXPLAINABILITY
How to cite this learning scenario
Arantes, J. (2025). Mirror, Mask, or Misdirection? A Scenario-Based Inquiry into the Identification of AI-Generated Content. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
As generative AI becomes increasingly integrated into education, journalism, marketing, and public discourse, the ability to distinguish human-generated from AI-generated content has become both a practical and ethical imperative. This scenario-based learning activity explores the psychosocial, institutional, and governance risks of failing to clearly identify AI-generated content. Drawing on Shannon Vallor’s The AI Mirror and current advocacy debates, this scenario invites participants to interrogate how blurred authorship destabilises trust, accountability, and informed decision-making. Through narrative-based inquiry, learners engage with the tensions between freedom of expression, technological innovation, and the right to know the origin of content. The scenario emphasises the necessity of regulatory compliance, particularly in high-trust environments like education and healthcare, and asks: What happens when the mirror hides its maker?
"If AI reflects not just what we think, but what we refuse to think about, then clear labelling isn't optional—it’s ethical disclosure."
Mirror, Mask, or Misdirection?
It dropped on a Monday morning, splashed across the front page of the university's internal news portal. The article was warm, reassuring, and persuasive—written in accessible language that subtly validated a new policy rolling out campus-wide surveillance through AI-powered behavioural analytics. Students were told the system would “detect early signs of distress” and “support intervention before crisis hits.” Faculty and staff were framed as enthusiastic partners. Experts were quoted. A student named “Emily,” who had experienced a panic attack on campus, was included in the story, describing how “just knowing someone was watching out for me made all the difference.”
It was shared widely in internal newsletters, posted on the student union bulletin, and picked up by a national education blog. Within hours, it sparked debate. But something felt off. First came the critiques from postgraduate media students: the tone was “too polished,” the quotes “too clean.” Journalism students cross-referenced Emily’s story—she didn’t exist. The faculty member quoted had no recollection of giving a statement. By Wednesday, the Student Representative Council (SRC) formally requested the article be retracted, claiming it was “algorithmically deceptive.”
An internal inquiry revealed that the article had been generated almost entirely by a proprietary generative AI system licensed by the university’s Communications and Engagement Office. The AI had been prompted to write a persuasive article drawing on policy documents, anonymised student wellbeing data, de-identified survey responses, and existing university promotional material. A junior staff member reviewed and lightly edited the piece before publication. There was no byline. No disclaimer. The Communications Office defended the decision. “We’re using AI to streamline and personalise content delivery,” the Director said. “The goal is to connect more meaningfully with our community.”
But scrutiny grew. It emerged that multiple AI-generated communications—ranging from mental health advice to marketing slogans—had been circulating without any indication they were machine-authored. One student had received an email offering academic counselling services based on an AI-generated prediction that they were at risk of disengagement—without ever speaking to a human.
This might have remained internal if not for a disciplinary hearing involving a third-year student in the media studies program. She had submitted an assignment partially written with ChatGPT but failed to declare it. She was formally reprimanded for “academic dishonesty and non-disclosure of AI assistance.” The hypocrisy was glaring. Outrage erupted. The SRC called for a campus-wide AI transparency policy. The media caught wind. A piece appeared in The Guardian Higher Ed, asking: “If institutions demand that students disclose the use of AI, shouldn’t they be held to the same standard?”
The university’s Vice Chancellor was summoned to explain the situation at a Senate hearing on academic integrity and AI. Staff unions raised concerns about being replaced or undermined by algorithmic authorship. Students called for consent and human oversight in all AI-mediated communications. Meanwhile, a leaked document showed the institution had piloted a “content efficiency” campaign aiming to replace 70% of its written student communications with LLM-generated content by the following semester. A final audit revealed the truth: 42% of communications across five departments—including Learning Support, Student Wellbeing, and Equity Services—had been generated or co-written by AI without attribution. The scandal didn’t begin with malicious intent. It began with convenience. And with the quiet erasure of a fundamental ethical principle: the right to know who—or what—is speaking to you.
Research Topics
Research Questions
What constitutes authorship in the age of generative AI?
Should AI-generated content always be disclosed—and if so, to what level of detail?
What are the risks of failing to identify AI-generated content in public-facing communications?
How does the failure to disclose AI authorship impact the relationship between institutions and their publics?
What regulatory or institutional mechanisms could enforce transparency without stifling innovation?
How do different disciplines (law, media, education) define and operationalise AI transparency?
Is there such a thing as "ethical deception" in persuasive content written by AI?
Learning Outcomes
Critically evaluate the ethical implications of AI-generated content in institutional and public settings.
Apply principles of transparency, compliance, and human oversight to content identification practices.
Develop risk awareness of institutional trust erosion when authorship is obscured.
Articulate arguments for and against mandatory disclosure of AI authorship in various contexts (e.g. education, media, government).
Reflect on personal and professional responsibilities when engaging with generative AI.
Data Collection
Document Dissection:
In groups, learners are given anonymised content samples (some AI-generated, some human-written, some hybrid). They must identify and justify which is which—and reflect on how identification shapes trust and interpretation.
Roleplay Ethics Panel:
Students simulate a university ethics board assessing whether the communications department breached transparency obligations. Roles include ethics chair, student activist, AI vendor, legal counsel, and communications director.
Compliance Mapping Exercise:
Participants audit their own institution’s communications or teaching materials to identify where AI is used without disclosure. They align findings with AI governance principles: transparency, explainability, and compliance.
Creative Response:
Students write a manifesto or fictional news report from a near-future where AI content is regulated like pharmaceuticals: with dosage, side effects, and manufacturer disclosed.