AI Representation – Why Names and Concepts Matter by Diana Maliszewski & Carol Arcus


On March 4, 2024, the Association for Media Literacy offered a webinar called “Using & Understanding AI in the K-12 Curriculum”. Based on work by Greta Smelko, Wade Blanchette, and Diana Maliszewski, the workshop presented many perspectives and suggestions on understanding and utilizing AI.

One of the subtopics that the presenters focused on was finding an appropriate way to conceptualize AI. Wade used the analogy of composing messages with letters cut out from magazines and newspapers. Diana and Greta selected the metaphor of a trained dog to explain how AI behaves. These examples from Wade, Diana, and Greta are atypical representations of AI. As part of the presentation, Diana mentioned asking ChatGPT to select an appropriate method of understanding AI; it suggested describing itself as “a super smart robot friend”. Illustrations (often generated by AI) that accompany news media articles are more likely to visualize AI as a humanoid robot figure.

Why is it important to be particular about the way we discuss AI? Some of the answers can be found in the Media Literacy Key Concepts, especially #1 – Media construct reality, and #5 – Media communicate value messages. These concepts form part of the inquiry framework for interrogating media environments. As educators, we have a responsibility to our learners to support their growing understanding of the media environment. As future citizens, they must develop the agency required to make decisions about, and use, media tools responsibly.

AI operates as a “black box” for most members of the general public; they are unaware of how it produces the results it does. People are inclined to personify AI, but this can be hazardous. Attributing human conditions and reactions to these processes fails to demystify what is essentially high-powered pattern recognition and data retrieval. Even when critical of AI, people use person-centered vocabulary like “hallucinations”. (Look for a future AML article on how mental health language has influenced [and pathologized] many aspects of society.) Even the term AI itself – short for artificial intelligence, coined by John McCarthy of Dartmouth College in 1955, with influence from Alan Turing and “The Imitation Game”/Turing Test of 1950 – attributes the human condition of “being smart” to inanimate objects.
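To make “pattern recognition and data retrieval” concrete for older students, here is a minimal, hypothetical sketch (not drawn from the webinar) of how a text predictor can appear “smart” by simply counting which word most often follows another in its training data. The tiny corpus and function names are illustrative assumptions, but the principle scales up to large language models: statistical prediction, not understanding.

```python
from collections import Counter, defaultdict

# "Training": count which word follows which in a tiny example corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# "cat" follows "the" twice in the corpus, vs once each for "mat" and "fish",
# so the predictor picks it -- pure frequency, no comprehension of cats.
print(predict("the"))   # -> cat
print(predict("zebra")) # -> None (nothing retrieved for an unseen word)
```

A classroom discussion might start here: the program “answers” confidently, yet it has no idea what a cat is, which is exactly why humanizing vocabulary can mislead.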

If we personify AI, we treat it differently than we would other devices or programs. We might empathize with it. We insert it into our own story rather than keeping it at arm’s length. It is akin to unrealistically ascribing emotions and characteristics to animals in order to feel empathy with them. This is a dangerous tendency when it comes to AI, as it weakens our ability to think critically about it and make effective decisions. (We are reminded of Marshall McLuhan, who said that this is part of our “Narcissus narcosis”, in which we create a model of our central nervous system outside of our bodies – thereby creating a disconnect. The narcosis is a survival tactic for adaptation to the changing technological environment. McLuhan likened any use of technology to an act of idol worship.)

Here is some provocative research from 2022 on the human connection to AI:

AI anthropomorphism and its effect on users’ self-congruence and self–AI integration: A theoretical framework and research agenda, by A. Alabed, A. Javornik, and D. Gregory-Smith (Newcastle University Business School, UK; School of Management, University of Bristol, UK).

Below are just some of the critical questions that might flow from the above discussion, based on the article. These questions can be modified for different age levels in an inquiry-based classroom setting. In parentheses are key concepts from AML’s conceptual framework:

- How might gender differences influence how people interact with and see AI? (representation and constructions of reality)

- What might happen when people customize their AI assistants to match their preferences? (audience and representation)

- What dangers might come from AI agents misusing the data they collect from users, and how might this affect how users trust and relate to these agents? (economic implications and audience)

- Might negative public views about AI impact how individuals socially connect with these agents? (ideology and audience)

- How might culture impact the way people form relationships with AI? Also, how might public opinion affect how people perceive AI as being human-like? (constructions of reality and audience)


Here are two provocative statements from the same article:

“When users consider robots as an integral part of their self-concepts, they are likely to be less critical of them and rely on them more. This would allow AI to have a strong influence on public opinion (Walz and Firth-Butterfield, 2018).”

“… a phenomenon described as ‘digital dementia’: weakened mental abilities of users when performing those tasks that have been outsourced to AI agents as a result of over-reliance on technology (Dossey, 2014). This could potentially result in a society where its members require more support to make their own daily decisions or to form opinions regarding key issues, such as climate change and economic development.”


AML’s media literacy conceptual framework remains highly useful in critical discussions about new technologies – particularly the concepts that “media construct reality” and “media communicate value messages”. These key concepts gain currency as our relationship with technology, as with AI, becomes ever more complex.

As educators, it is crucial to guide learners in navigating the media environment and developing the agency to use media tools responsibly. By reframing our understanding of AI and recognizing it as a tool with specific functions (rather than a robot ‘friend’), we can better understand its impact and ensure that individuals can make informed decisions about AI and its role in society.

