EXPLAINABILITY

Authorship



A Scenario-Based Inquiry into the Identification of AI-Generated Content

How to cite this learning scenario

Arantes, J. (2025). Authorship. A Scenario-Based Inquiry into the Identification of AI-Generated Content. www.AI4education.org. Licensed under a Creative Commons Attribution 4.0 International License.
Abstract
As generative AI becomes increasingly integrated into education, journalism, marketing, and public discourse, the ability to distinguish human-generated from AI-generated content has become both a practical and ethical imperative. This scenario-based learning activity explores the psychosocial, institutional, and governance risks of failing to clearly identify AI-generated content. Drawing on Shannon Vallor’s The AI Mirror and current advocacy debates, this scenario invites participants to interrogate how blurred authorship destabilizes trust, accountability, and informed decision-making. Through narrative-based inquiry, learners engage with the tensions between freedom of expression, technological innovation, and the right to know the origin of content. The scenario emphasizes the necessity of regulatory compliance, particularly in high-trust environments like education and healthcare, and asks: What happens when the mirror hides its maker?

"If AI reflects not just what we think, but what we refuse to think about, then clear labelling isn't optional—it’s ethical disclosure."

Mirror, Mask, or Misdirection?

It dropped on a Monday morning, splashed across the front page of the university's internal news portal. The article was warm, reassuring, and persuasive—written in accessible language that subtly validated a new policy rolling out campus-wide surveillance through AI-powered behavioural analytics. Students were told the system would “detect early signs of distress” and “support intervention before crisis hits.” Faculty and staff were framed as enthusiastic partners. Experts were quoted. A student named “Emily,” who had experienced a panic attack on campus, was included in the story, describing how “just knowing someone was watching out for me made all the difference.” It was shared widely in internal newsletters, posted on the student union bulletin, and picked up by a national education blog. Within hours, it sparked debate.

But something felt off. First came the critiques from postgraduate media students: the tone was “too polished,” the quotes “too clean.” Journalism students cross-referenced Emily’s story—she didn’t exist. The faculty member quoted had no recollection of giving a statement. By Wednesday, the Student Representative Council (SRC) formally requested the article be retracted, claiming it was “algorithmically deceptive.”

An internal inquiry revealed that the article had been generated almost entirely by a proprietary generative AI system licensed by the university’s Communications and Engagement Office. The AI had been prompted to write a persuasive article drawing on policy documents, anonymised student wellbeing data, de-identified survey responses, and existing university promotional material. A junior staff member reviewed and lightly edited the piece before publication. There was no byline. No disclaimer. The Communications Office defended the decision. “We’re using AI to streamline and personalise content delivery,” the Director said. “The goal is to connect more meaningfully with our community.” But scrutiny grew.
It emerged that multiple AI-generated communications—ranging from mental health advice to marketing slogans—had been circulating without any indication they were machine-authored. One student had received an email offering academic counselling services based on an AI-generated prediction that they were at risk of disengagement—without ever speaking to a human.

This might have remained internal if not for a disciplinary hearing involving a third-year student in the media studies program. She had submitted an assignment partially written with ChatGPT but failed to declare it. She was formally reprimanded for “academic dishonesty and non-disclosure of AI assistance.” The hypocrisy was glaring.

Outrage erupted. The SRC called for a campus-wide AI transparency policy. The media caught wind. A piece appeared in The Guardian Higher Ed, asking: “If institutions demand that students disclose the use of AI, shouldn’t they be held to the same standard?” The university’s Vice Chancellor was summoned to explain the situation at a Senate hearing on academic integrity and AI. Staff unions raised concerns about being replaced or undermined by algorithmic authorship. Students called for consent and human oversight in all AI-mediated communications. Meanwhile, a leaked document showed the institution had piloted a “content efficiency” campaign aiming to replace 70% of its written student communications with LLM-generated content by the following semester.

A final audit revealed the truth: 42% of communications across five departments—including Learning Support, Student Wellbeing, and Equity Services—had been generated or co-written by AI without attribution. The scandal didn’t begin with malicious intent. It began with convenience. And with the quiet erasure of a fundamental ethical principle: the right to know who—or what—is speaking to you.

Research Topics

Research Questions

• What constitutes authorship in the age of generative AI?
• Should AI-generated content always be disclosed—and if so, to what level of detail?
• What are the risks of failing to identify AI-generated content in public-facing communications?
• How does the failure to disclose AI authorship impact the relationship between institutions and their publics?
• What regulatory or institutional mechanisms could enforce transparency without stifling innovation?
• How do different disciplines (law, media, education) define and operationalize AI transparency?
• Is there such a thing as "ethical deception" in persuasive content written by AI?
Learning Outcomes

• Critically evaluate the ethical implications of AI-generated content in institutional and public settings.
• Apply principles of transparency, compliance, and human oversight to content identification practices.
• Develop risk awareness of institutional trust erosion when authorship is obscured.
• Articulate arguments for and against mandatory disclosure of AI authorship in various contexts (e.g. education, media, government).
• Reflect on personal and professional responsibilities when engaging with generative AI.

Data Collection

• Document Dissection: In groups, learners are given anonymized content samples (some AI-generated, some human-written, some hybrid). They must identify and justify which is which—and reflect on how identification shapes trust and interpretation.
• Roleplay Ethics Panel: Students simulate a university ethics board assessing whether the communications department breached transparency obligations. Roles include ethics chair, student activist, AI vendor, legal counsel, and comms director.
• Compliance Mapping Exercise: Participants audit their own institution’s communications or teaching materials to identify where AI is used without disclosure. They align findings with AI governance principles: transparency, explainability, and compliance.
• Creative Response: Students write a manifesto or fictional news report from a near-future where AI content is regulated like pharmaceuticals: with dosage, side effects, and manufacturer disclosed.

Do you want to know more?
Acknowledgement of Country

We acknowledge the Ancestors, Elders, and families of the Kulin Nation, who are the Traditional Owners of the land where this work has been predominantly completed. As we share our own knowledge practices, we pay respect to the deep knowledge embedded within the Aboriginal community and recognise their custodianship of Country. We acknowledge that the land on which we meet, learn, and share knowledge is a place of age-old ceremonies of celebration, initiation, and renewal, and that the Traditional Owners’ living culture and practices continue to have a unique role in the life of this region.