Rethinking Literacy Pedagogy at the Dawn of the Generative AI Revolution by Míchílín Ní Threasaigh


The elephant in the English classroom (generated by Copilot)

Unauthorized Generative Artificial Intelligence (GenAI) use first entered my classroom in April 2023, mere months after the launch of ChatGPT. Since then, I have been thinking deeply about how my literacy pedagogy needs to shift in response to the biggest revolution in communications technology since the Internet, and how to prepare my students for a GenAI-saturated future.

Pandora’s Box has already been opened and we cannot close it, much as most of us would like to. Our society is already saturated with GenAI, with more apps and plug-ins coming online every day. Microsoft’s 2024 Work Trend Report found that 3 out of 4 knowledge workers are already using GenAI at work, that 66% of employers wouldn’t hire someone without AI skills, and that 71% would hire a less experienced candidate with AI skills over a more experienced candidate without them. Microsoft’s 2025 Work Trend Report likened GenAI’s impact on the workplace to both the Industrial and Internet Revolutions and identified a trend toward “Frontier Firms” where every employee becomes an “agent boss” in human-agent teams. LinkedIn’s most recent Skills on the Rise ranking identified AI literacy as the most in-demand skill of 2025.

Meanwhile, our students are already alone out there in the GenAI wilderness, craving adult supervision and guidance. Common Sense Media’s 2024 study found that youth are far ahead of educators in a) recognizing the revolutionary nature of GenAI and therefore the need to adapt as quickly as possible, and b) experimenting with use cases for GenAI (spoiler alert: most of it is not for cheating). My own students elaborated on this during a Youth Participatory Action Research project I conducted with my grade 9 destreamed English classes last year. You may be surprised to learn that my students are as concerned as we are about cheating and the potential for over-reliance and unethical use to stunt their learning and skill development. But they’re also concerned with how GenAI is accelerating Truth Decay. They do not trust the accuracy of GenAI outputs and want their teachers to teach them how to navigate the AI-generated bias and disinformation flooding our media ecosystem, as well as how to protect their privacy from greedy, unethical, unregulated tech companies and from over-enthusiastic, uncritical institutional adoption of this not-yet-reliable technology.

But despite all of their worries, they’re also excited about the potential benefits of learning to harness the power of GenAI to enhance their learning. They are particularly excited by the possibility of every student having a 24/7 digital assistant/tutor available to support their organization, learning, reading, research, and writing processes. Conversation amongst forward-looking educators at my school has already homed in on this potential as an equity issue, given GenAI’s potential to level the playing field for our socio-economically disadvantaged students who cannot afford human tutors and whose parents may not have been educated in Canada or may not have the time to support their learning at home.

So I’ve begun tinkering with my critical media literacy lessons to incorporate foundational GenAI skills and foster critical thinking about this technology. I’ve added algorithmic literacy to the list of critical literacies I need to teach and my students need to practise. I have found Code.org’s growing database of AI curricula incredibly useful and engaging for teaching how GenAI works, de-anthropomorphising the technology so that my students come to view it as a probability generator. We’ve analysed outputs for bias through image generation activities. Through case studies, we’ve investigated GenAI’s potential to exacerbate social inequities when used uncritically in education, policing, sentencing, hiring, credit and housing approvals, and more. I’m working on incorporating tips for spotting deepfakes and AI-generated disinformation into our Friday Fact or Fiction viral news activities for the coming school year. I still need to create activities that prompt students to consider ethical dimensions like carbon footprint, exploitative labour practices, and intellectual and creative theft, concerns which have been disappointingly sidelined in the frenzied push to adapt to the disruption this technology is causing. And I’m still thinking about how to approach alarming trends I’ve noticed amongst my students, like dishing to ChatGPT as if it were a qualified therapist or engaging with the plethora of “toxic boyfriends” available on Character.AI.
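For teachers who want a concrete way to show students what “probability generator” means, here is a toy sketch in Python (this is not from the Code.org curriculum, and the word counts are invented for illustration). It picks each next word according to how often it followed the previous one, a vastly simplified version of what a large language model does at every step:

import random

# Toy bigram table: how often each word has followed the previous one.
# (These counts are invented for illustration.)
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "elephant": 1},
    "cat": {"sat": 4, "ran": 1},
    "sat": {"down": 2, "quietly": 1},
}

def next_word(word):
    # Sample the next word in proportion to how often it followed `word`.
    options = bigram_counts.get(word)
    if not options:
        return None  # no continuation observed, so stop generating
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a short "sentence" starting from "the".
word = "the"
sentence = [word]
while word and len(sentence) < 6:
    word = next_word(word)
    if word:
        sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat down"

Real models replace this little table with a neural network trained on vast amounts of text, but the final step is still sampling from a probability distribution over possible next words. Nothing in the loop understands anything, which is exactly the point I want students to grasp.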

And that’s the easier part, because it already fits my critical media literacy pedagogy. The harder part has been honouring my students’ request to be taught how to use GenAI as an organization/learning/reading/research/writing assistant. I took the leap toward the end of last school year and guided my students in experimenting with asking Copilot or Gemini (our Board’s two approved apps) to help them brainstorm pros and cons for common debate topics and to give them rubric-based feedback on drafts of their opinion essays during the revision process. I taught them how to be transparent about this use by citing the app and creating comprehensive MLA-style appendices that outlined 1) the app and prompts used, 2) the outputs incorporated into the final product, and 3) the complete transcript of the conversation. Even my most capable students reported feeling much more confident about the quality of the essays they produced by integrating ethical GenAI use into their writing process. And my special education students loved that seeking support from a bot that can’t judge them took away the self-consciousness of “asking stupid questions” or “asking for too much help”. It was an extremely encouraging first foray that I plan to build on in the coming school year.
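In practice, a student’s appendix entry might look something like this (the prompt and outputs below are invented for illustration, not drawn from an actual submission):

Appendix: GenAI Use Disclosure
1) App and prompts used: Microsoft Copilot. Prompt: “Using the attached rubric, give me feedback on this draft of my opinion essay.”
2) Outputs incorporated into the final product: suggested transition sentences in paragraphs 2 and 3; a counter-argument incorporated into paragraph 4.
3) Complete transcript of the conversation: attached following the works cited page.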

The most liberating part of this experiment was that, by bringing GenAI use out into the open, I freed myself from the whack-a-mole approach to policing unauthorized GenAI use that was burning me out, and had several GenAI assistants distributing the workload of providing formative feedback to improve my students’ writing. For the first year after the release of ChatGPT, I spun my wheels chasing the fantasy of AI detectors, hoping these would solve the problem of GenAI cheating. But it soon became evident that AI detectors simply do not work. At all. Worse, they disproportionately generate false positives for ELL students’ writing. And I have a feeling that if similar studies were done on the writing of Special Education students who use assistive technology, the results would be much the same. The good news is that many analogue English-teacher tricks against plagiarism are still effective: knowing your students’ writing styles, triangulating products with conversations and observations, and focussing on process. So I began by doubling down on these. My classroom policy was already that students must be able to demonstrate clear evidence of their process in a Google Doc I create for them, but clicking through a student’s revision history second by second was tedious and time-consuming. Then I was introduced to “revision tracking” tools: Chrome extensions that play a document’s “version history” as video for easy assessment of this process. After some experimentation, Revision History quickly became my favourite. But as the volume of student work showing evidence of unauthorized GenAI use grew in the second year, policing undisclosed use remained an extremely time-consuming task, especially when an investigation was high-stakes because it was oriented toward discipline.

The final puzzle pieces slipped into place for me with two further changes to my assessment and evaluation practices: not accepting assignments for evaluation until the process work had been submitted (thus eliminating the majority of investigations), and working within the Ontario Achievement Chart to revise my writing rubric so that process work is worth 30% of the assignment mark and GenAI assistance 20% (Thinking/Inquiry and Application, respectively). This simple change maintains academic integrity by ensuring that a student who skips the process work and/or uses GenAI unethically can receive at most 50%. It also relieved me of “cop duty” and made the “gotcha” conversations with students less high-stakes because they were focussed on teaching rather than discipline. Semester two’s reporting period was a lot less stressful than semester one’s. Both were far less stressful than the year before.
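To put numbers to it, here is a purely illustrative breakdown of a 100-mark essay (how the remaining 50% is split between the Knowledge/Understanding and Communication categories will vary by assignment):

- Thinking/Inquiry (documented process work): 30 marks
- Application (disclosed, ethical GenAI assistance): 20 marks
- Knowledge/Understanding and Communication (the essay itself): 50 marks

A polished essay submitted with no process work and no GenAI disclosure therefore tops out at 50/100, no detective work required.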

I haven’t got it all figured out yet, but I’ve got the basics fleshed out and feel more confident and excited to expand my experiments this year. And I’ve surrounded myself with a small but growing group of colleagues who are eager to collaborate with me on this shared journey. Will you join us?
