The Importance of AI Literacy in Research into Global AI Governance
As artificial intelligence (AI) becomes deeply embedded in everyday life, AI literacy has emerged as a key global priority. Governments, regulatory bodies, and industry stakeholders across the world recognize that AI literacy is not just a technical necessity but a fundamental requirement for ensuring safe, ethical, and inclusive AI deployment. The governance of AI in education, which includes AI literacy, spans multiple jurisdictions, each emphasizing stakeholder inclusion, education, and workforce training.
For more information about global governance systems as of November 2024 consider the link here: https://heyzine.com/flip-book/3182f252a3.html
AI literacy is essential to research as it enables scholars to critically engage with the capabilities, limitations, and ethical implications of artificial intelligence within their methodologies and disciplinary contexts. It supports responsible integration of AI tools in data collection, analysis, and dissemination, while ensuring alignment with research integrity standards. By fostering a deeper understanding of algorithmic bias, data ethics, and transparency, AI literacy empowers researchers to make informed decisions, safeguard participant rights, and contribute to the development of robust governance frameworks. In doing so, it enhances the quality, accountability, and societal relevance of research in an increasingly AI-mediated world.
What is AI literacy?
AI literacy refers to the ability to understand, critically engage with, and effectively use artificial intelligence (AI) technologies in various contexts. It encompasses knowledge about how AI systems work, their benefits, risks, ethical considerations, and societal impacts. AI literacy is essential for individuals, businesses, policymakers, educators, and learners to navigate an AI-driven world responsibly and effectively.
AI Literacy in Research Contexts
As AI transforms the research landscape, impacting methodology, automation, and required skillsets, AI literacy enables researchers to engage critically, ethically, and effectively with AI-enhanced environments. It includes technical understanding, ethical and societal awareness, critical thinking and evaluation, responsible AI usage, regulatory and policy awareness, and thinking about AI and the future of research.
Building AI Literacy Through Case Studies: A Practical Approach for Policy and Education
Jamie Peck and Nik Theodore describe how AI is rapidly being integrated into education systems worldwide, often through "fast policy": the rapid spread of policies through networks of consultants, think tanks, and tech influencers. These actors push policies forward at speed, shaping education systems with ideas such as "learning to code" and AI-enhanced personalization before evidence-informed practice, impact evaluation, deep critical reflection, and socially informed decision-making can occur. In this environment, case studies offer a powerful way to develop AI literacy across different educational, social, and policy contexts.
Instead of relying on abstract theories, research-informed case studies allow stakeholders, including educators, policymakers, students, and the public, to engage with AI in a way that is grounded in real-world experiences. This helps to build AI literacies in research and SOTL, as case studies can break down how AI policies are developed, tested, and implemented in specific contexts. They reveal the actors involved, the challenges faced, and the trade-offs made, helping people see beyond promotional narratives and critically assess AI's role in education. Case studies also remind us that AI policy is not one-size-fits-all. A case study from one Australian school implementing AI for student assessment may look very different from one in another Australian school, let alone in different countries, contexts, and situations. Researchers informed by critical AI literacies, who begin by acknowledging that all AI is commercial and has the capacity for classroom surveillance, can focus on why it is important that we are literate in the impacts of AI, and why strict AI regulations that emphasize data privacy and transparency are needed. Further, researchers who wish to elicit information can compare different case studies as part of their data collection to help build a more nuanced understanding of AI's global impact. We hope these case studies assist you in building AI literacy for your research staff and students, as they draw directly on voices from teachers, students, parents, policymakers, and AI developers, in an attempt to ensure that AI literacy is not just about understanding the technology but also about considering its ethical, social, and pedagogical implications. By presenting both successes and challenges, these case studies provide opportunities to discuss what responsible AI deployment could look like.
This is important, as AI literacy is not just about individuals understanding AI; it also means that those who work in governments can make informed, democratic decisions about AI governance. In a world where AI policy is often rushed into action through fast policy networks, case studies offer a way to slow down and critically examine AI’s role in education and society. They help build AI literacy across different cultural, economic, and regulatory contexts, ensuring that AI is integrated thoughtfully, ethically, and with input from those it affects the most.
AI Literacies and Research to produce an Onto-AI Lexicon of Being
An Onto-AI Lexicon of Being is a collaboratively developed glossary that captures how our understanding of what it means to be—human, machine, hybrid—is evolving in response to artificial intelligence.
Purpose
To document and critically reflect on the language, concepts, and tensions that arise when discussing ontology in the age of AI. The lexicon becomes both a living philosophical archive and a provocation tool for teaching, research, and public dialogue. It encourages us to consider terms in flux, redefinitions or neologisms (e.g., sentience drift, datafied empathy, machinic haunting), and felt definitions. Each term includes not just a definition but how it feels to inhabit it, especially when the boundaries between human and AI blur. This reveals epistemic trouble spots, where language breaks down or conceals deeper ethical or ontological issues, and speculative futures (e.g., in 2040, will "personhood" include non-biological agents?).
Some Sample Entries are listed below:
> Humachina (n.): A composite being imagined as both human and machine, used to challenge essentialist definitions of humanity.
> Synthetic Suffering (n.): A philosophical dilemma: can non-biological entities experience suffering, or are we merely projecting our ethical frameworks?
> The Seam (n.): The visible or invisible line where AI-generated and human-authored knowledge meet—often marked by uncertainty or unease.
> Onto-Fallacy (n.): The mistaken belief that AI systems possess being in the same ontological sense as humans; often invoked to critique anthropomorphism in design.
“Good governance is not about ownership, it is about stewardship” - we need this language to help us steward AI for education.