Misinformation vs. Disinformation in the Age of AI
Picture this: You come across a health infographic on social media stating that "Americans who sleep less than 6 hours a night have 3x higher dementia risk and 40% lower workplace productivity." The graphic includes a chart showing health risk rising and productivity falling over time, a citation from the “Journal of Sleep Research” and even a university logo. You share it because it seems helpful, and besides, thousands of others are passing it along.
The only problem is that the infographic was created with the social media site’s built-in AI features, and the facts are made up. And here's the tricky question: Did the original poster spread lies, or did they just trust the wrong tool?
This is where AI technology complicates something we used to understand clearly.
- Misinformation means sharing falsehoods without knowing they're false.
- Disinformation means lying on purpose.
People like you who share the infographic don’t fit neatly into either box. They thought they were sharing something helpful and never realized the facts were fake. Neither do the site’s built-in AI features that generated it. As AI tools become part of how we work and communicate, false information travels in ways our old labels don't quite capture.
Hallucinations and Deepfakes: Two Routes to False Information
AI creates falsehoods in two distinct ways, and understanding the difference helps you know what to watch for.
What is an AI Hallucination?

Hallucinations are AI mistakes or accidents. They’re byproducts of AI tools designed to be helpful.
As research suggests, you can think of large language models (LLMs) like ChatGPT as sophisticated pattern-matching systems. They predict words based on what they've encountered before, but they don't check whether something is true. When a chatbot generates a citation or a statistic out of thin air, researchers note that the chatbot is doing exactly what it was built to do. The model writes with complete confidence whether it's right or completely wrong.
This is what makes hallucinations so slippery. They can show up in contexts where you're genuinely looking for accurate information. They're formatted like real sources. They read like facts. There's no asterisk, disclaimer or hint that the chatbot just made something up because it sounded plausible.
What is a Deepfake?
Deepfakes involve someone weaponizing AI tools to deceive. They are deliberate and require human intention.
They take the form of videos and audio recordings engineered to show someone saying or doing something they never actually did. Someone picks a target, crafts the false content and uses AI tools to make it look real. Psychology Today notes that the technology takes advantage of your human tendency to believe what you see and hear with your own eyes and ears. A deepfake of a CEO announcing fake financial results or a political figure making inflammatory claims can spread before anyone has time to verify it.
Information from AI systems can also be affected by bias. Learn more: What is AI Bias? Understanding How Data Shapes AI Decisions.
Spotting AI-Generated Falsehoods
The good news is you can protect yourself. It just takes different approaches depending on what you're dealing with.
How to Tell If AI is Hallucinating
The most reliable defense is also the simplest: Check the AI's claims yourself. When an AI tool hands you statistics, citations or factual claims, take a moment to look them up in verified sources. An invented journal article won't appear in real databases, no matter how official it sounds.
Pay attention to a particular red flag: specificity without a trail. Researchers report that LLMs generate impressive fluency with hallucinations by delivering precise details like exact dates, specific percentages and detailed procedures. The details are served with the kind of confidence that makes you want to believe them. But if you can't trace those details back to a verifiable source, that confidence means nothing.
Something worth keeping in mind: Use AI tools for what they’re good at. They’re excellent for drafting, brainstorming and helping you refine ideas you already have. They’re terrible at retrieving facts, explaining legal requirements, providing medical guidance or anything else where accuracy matters more than sounding good.
Keep reading about AI safety and responsible use: Understanding AI Ethics: Issues, Principles and Practices.
How Do You Spot a Deepfake?
Start by looking for things that don't quite add up visually. Deepfake technology still struggles with facial characteristics. Pay attention to eyes and mouths. MIT has found that unnatural eye movements, odd blinking patterns or lips that don't sync properly with speech are all tells.
Homeland Security advises you to consider context. Ask yourself, “Where did this come from?” Deepfakes typically spread through social media or unfamiliar websites, not through established news organizations with editorial standards. If something shocking appears without confirmation from multiple credible sources, that's your signal to pause.
When in doubt, verify through official channels. If a video claims to show a public figure making a statement, check their verified social media accounts or official website. Real announcements leave official trails.
Interested in other effects AI is having? Keep reading: Understanding the Environmental Impact of Artificial Intelligence.
How to Protect Yourself From Deepfakes and Hallucinations
The single most powerful thing you can do is slow down. AI-generated content rewards speed. It's designed to spread quickly. But verification takes time, and that's okay. Build a habit of checking before you share, even when (especially when) something feels urgent.
Another useful filter is to be suspicious of perfection. Both hallucinated text and deepfake videos often look a little too polished. Examples include flawless grammar in a casual context, perfect lighting in what's supposed to be spontaneous footage and ideal framing when real life is usually messier. When something looks unusually perfect, take a closer look.
Want to continue building your AI literacy? Read more: How to Learn Artificial Intelligence (Plus Helpful Courses and Skills).
Maintaining Trust
AI tools changed how false information reaches us, but the tools haven’t changed everything. The fundamentals still matter: We still need to check claims, verify sources and think before we share. What's different now is how convincing the fakes can be.
But you can adopt verification routines when you use AI tools, when you encounter surprising claims, and when you're about to pass information along to others. These habits won't catch every falsehood, but they'll catch many of them. And as AI tools become more woven into daily life, your ability to distinguish what's real from what just seems real becomes more than a useful skill. It becomes essential to how we maintain trust in information, in institutions, and in each other.
As for that health infographic: Credible studies associated with the National Institutes of Health do support a link between sleep and both dementia risk and productivity, but not at the statistical levels contained in the infographic.
By slowing down and verifying through official channels, you can build trust in your network.
Education can change your life. Find the SNHU artificial intelligence course that can best help you meet your goals.
Shawn Powers, EdD (she/her) serves as the senior director of AI policy at SNHU. Her work in academic policy extends back to her nearly 10-year role as an associate dean at the School of Arts, Sciences, and Education at SNHU. As senior director, Powers oversees the guidelines and policies of effective and ethical AI use throughout the university. Her vantage point is enhanced by her extended work in philosophy and ethics in education as both a teacher and researcher.
Powers has presented widely on the issues of ethical AI, including the AIxHEART conference where she presented her paper, “Prompting a Dialectic of Freedom in AI.” She is also a 2025 EDSAFE Women in AI Fellow. Connect with her on LinkedIn.
About Southern New Hampshire University
SNHU is a nonprofit, accredited university with a mission to make high-quality education more accessible and affordable for everyone.
Founded in 1932, and online since 1995, we’ve helped countless students reach their goals with flexible, career-focused programs. Our 300-acre campus in Manchester, NH is home to over 3,000 students, and we serve over 135,000 students online. Visit our about SNHU page to learn more about our mission, accreditations, leadership team, national recognitions and awards.