
Media Literacy (Education Outreach): Generative AI

Information for children, teens, parents, and teachers about media literacy in the age of AI.

Generative AI

Generative AI is an AI system that learns to generate text, images, video, and audio based on the data used to train it.

[Graphic depicting a neural network]

Here is a simple explanation of how generative AI works:

1. Data is fed into the system.

2. A large set of mathematical rules, called an algorithm, organizes the data along a series of nodes and pathways called a neural network.

3. Next, a human enters a question or a prompt.

4. Finally, the system uses the algorithm to pull bits of data from the neural network, reorganize them, and generate an output. (The short code sketch below walks through these same steps in miniature.)
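To make these four steps concrete, here is a tiny Python sketch. It is not a real neural network; it simply counts which words followed which in a made-up block of training text, then strings words together from a prompt. The training text, prompt, and output length are invented for illustration, but the sketch shows how a system can "generate" text without understanding any of it.

```python
import random
from collections import defaultdict

# Toy "training data": the only knowledge this sketch will ever have.
training_text = "the cat sat on the mat the cat ate the fish the dog sat on the rug"

# Steps 1-2: take the data in and organize it into a simple structure that
# records which word tends to follow which (a stand-in for a neural network).
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Step 3: a human enters a prompt.
prompt = "the"

# Step 4: the system pulls bits of data out of its structure and strings
# them together into an output, one word at a time, with no understanding.
output = [prompt]
for _ in range(6):
    choices = follows.get(output[-1])
    if not choices:
        break
    output.append(random.choice(choices))

print(" ".join(output))  # e.g. "the cat sat on the rug"
```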

Generative AI does not create new content, nor does it understand the content it is generating. It does not know the difference between good and bad, right and wrong, helpful and harmful, or true and false. It merely puts together bits of data that it has learned belong together. So, if the data it was trained on contains misinformation, disinformation, bias, or other dangerous content, the system will return generated content full of those same problems.

Large language models (LLMs) are AI systems that can recognize, summarize, translate, predict, and generate text, and many easy-to-use tools are built on them. As a result, more and more of the articles and stories found on websites, blogs, and social media are being created by AI.
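Because many services make LLMs available through simple web requests, producing article-length text takes only a few lines of code. The sketch below is hypothetical: the URL, model name, and response fields are placeholders, and real services have their own addresses, request formats, and API keys. It only illustrates the general pattern of sending a prompt and receiving machine-written text.

```python
import json
import urllib.request

# Hypothetical endpoint, model name, and response format, used only to
# illustrate the pattern; real LLM services differ and require API keys.
API_URL = "https://example.com/v1/generate"

request_body = {
    "model": "example-llm",
    "prompt": "Write a short blog post about houseplants.",
    "max_tokens": 200,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(request_body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send a prompt, get back machine-written text: that is the whole workflow,
# which is why AI-generated articles are now so easy to mass-produce.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["text"])
```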

How to Spot AI-Generated Text

Generative AI has many helpful uses, but it is important to understand what it is and when it should be used. Generative AI is typically not used by professional journalists, academics, or others reporting original news or research.

Here are some tips for identifying AI-generated content:

1. Read the article out loud. Does the writing flow, or is the language flat? Are certain words or phrases repeated often? Are obvious facts stated? A professional writer's work should be well written, polished, and varied in word choice.

2. Consider the purpose of the article. Is the article appropriate for the audience, situation, and context? A professional writer knows their audience and understands the nuances of context.

3. Are there factual errors? AI that is trained on bad data will return results that include that bad data.

4. Check the citations. When AI is unable to answer a question or to identify the source from which it pulled its "facts," it will make up content. This is known as a hallucination. Citations that lead nowhere are often a sign of AI-generated text. (The short sketch after this list shows one basic way to check whether cited links actually resolve.)

5. Use an AI detection tool or plagiarism checker. Many such tools are freely available online, but no tool is 100% accurate. These tools frequently return false positives or false negatives, so be aware that the tool itself may be wrong.
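As a companion to tip 4, here is a small Python sketch that checks whether the links in a list of citations actually resolve. The URLs are placeholders; a link that works does not prove the source says what the article claims, but a link that leads nowhere is a quick warning sign of a possible hallucination.

```python
import urllib.error
import urllib.request

# Placeholder citations pulled from an article; substitute the real links.
citations = [
    "https://www.example.org/real-study-2023",
    "https://www.example.org/made-up-source",
]

# Tip 4 in practice: a citation that leads nowhere is a warning sign.
for url in citations:
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=10) as response:
            print(f"OK ({response.status}): {url}")
    except (urllib.error.URLError, ValueError) as error:
        print(f"Dead or unreachable: {url} ({error})")
```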

Hoopla

What Cake Can Teach Us about Algorithms from CBC Kids News

LinkedIn Learning

Find a great assortment of video courses on media literacy, misinformation, deepfakes, and artificial intelligence. Below is a sampling of course and video titles to explore:

  • The State of AI and Copyright: Can We Detect when AI Has Been Used?


Additional Research Guides

Explore more technology libguides.