Digital Literacy for the Age of Deepfakes: Recognizing Misinformation in AI-Generated Media
A robot holds an artificial human brain. Photo by @julientromeur on Unsplash.
As artificial intelligence (AI) technology advances, the rise of deepfakes and synthetic media is one of the most pressing challenges in digital literacy. Deepfakes—hyper-realistic AI-generated videos and images—make it increasingly difficult to distinguish between truth and fiction. The ability to critically evaluate digital content is essential to prevent the spread of misinformation in today’s media landscape.
What Are Deepfakes and Synthetic Media?
- Deepfakes are AI-generated videos, images, or audio that manipulate or replace people’s faces, voices, and actions. These highly convincing alterations can create false representations of individuals, such as politicians or celebrities.
- Synthetic Media includes any AI-created or altered content—whether video, audio, text, or images. Deep learning technologies like Generative Adversarial Networks (GANs) are commonly used to create these media, raising concerns about their potential for misinformation.
The Threat of Deepfakes
Deepfakes pose a significant threat to society. According to a 2023 Pew Research Center survey, only 42% of Americans can recognize a deepfake image when they see it. Even pop star Katy Perry’s own mother was fooled by an AI-generated photo of her on the 2024 Met Gala red carpet!
- Misinformation and Disinformation: Deepfakes can be used to spread false information, often damaging reputations, manipulating public opinion, or influencing elections.
- Erosion of Trust: As deepfakes become more sophisticated, the ability to trust visual and auditory content diminishes. This undermines the integrity of media and hinders truth-seeking in society.
The Need for Digital Literacy
To navigate this new digital landscape, digital literacy must extend beyond basic skills. It requires the ability to critically assess content and recognize synthetic media. Key areas where digital literacy is essential include:
- Understanding AI Technologies: Knowledge of how AI works, especially in content creation, is essential for spotting deepfakes. Users should understand the technology behind synthetic media to better evaluate its authenticity.
- Critical Thinking: Digital literacy encourages users to question the authenticity of digital content. Key questions include: Who created this? Why was it created? What is the agenda behind it?
- Recognizing Deepfakes: While deepfakes are becoming harder to detect, there are still signs to look out for:
  - Inconsistent lighting and shadows: AI-generated images often struggle with realistic lighting.
  - Facial glitches: Look for odd blinking or strange facial movements.
  - Audio mismatches: Voices may not sync with lip movements.
  - Contextual errors: Anomalies in the background or setting can signal manipulation.
- Verification Tools: A growing set of tools can help detect deepfakes:
  - Reverse image search: Platforms like Google Reverse Image Search can help track the source of images.
  - Detection software: Tools like Microsoft’s Video Authenticator and Deepware Scanner aim to detect AI-manipulated content.
  - Fact-checking sites: Websites like Snopes and FactCheck.org offer reliable ways to verify suspicious claims.
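For readers comfortable with a little scripting, the reverse-image-search step above can even be automated. The sketch below builds search links for a suspicious image hosted at a URL. Note the hedges: the Google Lens and TinEye URL patterns used here are conveniences that work at the time of writing, not officially documented APIs, and the example image address is hypothetical.

```python
from urllib.parse import urlencode

def reverse_search_links(image_url: str) -> dict:
    """Build reverse-image-search links for a suspicious image.

    Illustrative sketch only: these URL patterns are informal
    conveniences and may change without notice.
    """
    query = urlencode({"url": image_url})
    return {
        # Google Lens accepts an image URL via its upload-by-URL page.
        "google_lens": "https://lens.google.com/uploadbyurl?" + query,
        # TinEye accepts an image URL as a search query parameter.
        "tineye": "https://tineye.com/search?" + query,
    }

# Hypothetical example image address, for illustration only.
links = reverse_search_links("https://example.com/suspect-photo.jpg")
for name, url in links.items():
    print(name, url)
```

Opening either link in a browser shows where else the image appears online, which often reveals whether a "news" photo is recycled, altered, or entirely synthetic.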
The Role of Education and Media Platforms
- Educational Initiatives: Schools and universities should incorporate digital literacy education into their curricula, teaching students to spot misinformation and understand the ethical implications of AI-generated content.
- Platform Responsibility: Social media platforms must implement AI-based detection systems, work with fact-checkers, and provide transparent content moderation to minimize the spread of deepfakes.
Empowering Individuals to Fight Misinformation
In the face of growing deepfake threats, individuals must take an active role in combating misinformation:
- Be Skeptical: Avoid taking content at face value. Cross-check information with trusted sources before sharing it.
- Educate Yourself and Others: Stay informed about the latest AI developments and share knowledge with others to create a more digitally literate community.
- Support Ethical Media: Advocate for responsible content creation and media integrity.
Conclusion
Digital literacy is no longer just about using devices or surfing the web—it’s about critically engaging with content in a world filled with AI-generated media. In the age of deepfakes, recognizing misinformation is essential to preserve trust in media and maintain an informed society. The rise of synthetic media underscores the need for a more informed and skeptical approach to digital consumption, ensuring that we can navigate the complexities of modern information.
---
Rebecca will be teaching the class “AI & Disinformation” at the Bertie Council on Aging at 1 p.m. on March 26, 2025. Additionally, she plans to host a series of evening classes at Bertie Cooperative Extension later this spring, dedicated to exploring all things AI: what it is, how to use it to streamline your productivity, and how to avoid being fooled by fake AI-generated images and videos. If you’re interested in participating or would like to schedule a one-on-one tutoring session, please call her at 252-794-6170 or email rjinnet@ncsu.edu.
---
References
Pew Research Center. (2023, August 17). What Americans know about AI, cybersecurity, and big tech. Pew Research Center.
National Telecommunications and Information Administration. (2022). The national artificial intelligence research and development strategic plan: 2022 update (NTIA Report No. 22-031). U.S. Department of Commerce.
North Carolina Department of Information Technology. (n.d.). Artificial intelligence. North Carolina Department of Information Technology.