How to Spot Fake News in the Age of AI
December 2023
“It’s not what we don’t know that gets us into trouble. It’s what we know that isn’t true.”
Everyone “knows” Mark Twain said that, though there is no solid evidence he ever did, and our misplaced certitude illustrates a troubling reality about our world: Misinformation, disinformation and outright fakery have never been more rampant. This is made worse by the fact that most of us are far more confident in our ability to know when somebody is trying to dupe us than we have any right to be.
According to the Institute for Public Relations’ 4th Annual Disinformation in Society report, the vast majority of Americans (78%) express confidence in their ability to recognize “news or information that misrepresents reality or is false.” Issued in November, the report also found that only 18% say they are “not very confident” in their ability to do so.
Four out of five Americans say they “sometimes” go to other sources to check the veracity of information, while only 20% admit they “rarely” or “never” take the trouble. (What might constitute “other sources,” given the polarization of today’s media, raises important questions in itself.)
What’s worrisome, too, is that the world of artificial intelligence is changing daily — maybe hourly — and the bad actors determined to trick an unsuspecting public are getting better at what they do all the time.
‘Democratizing’ Deepfakes
“Rapid advances in artificial intelligence, including deepfakes, generated text and images, are making it harder to tell what’s real, true, and accurate. This is occurring during a time of deep political polarization when Americans’ trust in the mass media is at a record low, and millions are turning to alternative sources, such as social media, where the threat of misinformation is rampant, influencing advocacy and election campaigns,” says Mark Ames, director of government relations for the American Industrial Hygiene Association (AIHA). “But is there a way to use these technologies to restore trust, connect with our audiences and ethically achieve our goals faster? How should we react when others use these technologies to influence our interests?”
Ames, who will speak on these subjects at the Council’s Advocacy Conference in Austin, Texas, in late January, says that “the power of AI to easily, quickly and often inexpensively influence people’s thoughts and behaviors presents ethical dilemmas that we are called to bravely confront. Now is the time to have these discussions, educate ourselves and begin experimenting with incorporating AI tools into advocacy and communications campaigns to find the right balance between the threats posed by AI of magnifying biases and spreading disinformation and the opportunities they hold for making historic progress toward our goals.”
Becoming Smarter News Consumers
But we have to discipline our thinking, which can be difficult with a subject that is changing as rapidly as this one is. “It’s challenging, even discussing the general subject of misinformation, disinformation, deepfakes and AI, to bring the conversation down to a practical level,” says Council President Doug Pinkham. “The fact that people are overly optimistic about their own ability to detect falsehoods is itself concerning, when actually doing so has never been more difficult. We have to get a better handle on our own vulnerabilities, which means, in the workaday world, we need to become smarter consumers of information. Considering the collapse of trust in our institutions — in our democratic processes themselves — this is critical.”
Fortunately, software products are emerging that can help us detect deepfakes, something few nonspecialists can do without outside help. “These software products can detect codes and patterns that are indicators of deepfakes,” Pinkham says. “They are in the early stages of development now, but in a year or so, we will probably be able to install them on our own laptops like those that detect viruses, which we now take for granted.”
This is already happening in academia. Software products are becoming available to help college professors, for example, detect when students have used ChatGPT and other large language model systems to write their term papers.
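How might such detectors work? One widely used heuristic is statistical predictability: text that a language model finds unusually easy to predict (low “perplexity”) is more likely to be machine-generated. Below is a minimal sketch of that heuristic, assuming the Hugging Face transformers library and the small open GPT-2 model. It is an illustration only, not any particular product’s method, and no single perplexity cutoff is reliable.

```python
# A minimal sketch of perplexity-based AI-text detection. Illustrative only:
# real detectors combine many signals, and no single cutoff is reliable.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity on `text`. Lower scores mean the model
    finds the text more predictable, a weak hint it may be machine-written."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # loss = mean cross-entropy
    return torch.exp(out.loss).item()

print(f"perplexity: {perplexity('The report shows steady growth.'):.1f}")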
By now, most of us have already been told that ChatGPT and its competitors can be useful for generating first drafts of articles, issue briefs, news releases and other documents.
“But these should only be starting points,” Pinkham says. “We shouldn’t consider them finished products. They can be helpful, but any written document needs to be in our own words — and whatever these systems produce should be fact-checked, which brings us back to the importance of becoming more critical consumers of information.”
So, what can we do to better arm ourselves against misinformation and disinformation?
- Consider the source. Some supposed news sites are completely bogus, and it takes only about a minute to check. The “About” page of something called WTOE 5 News readily admits it is a “fantasy news website.” (Snopes.com keeps a list of fake news sites. PolitiFact, FactCheck.org and BBC Verify are also useful sources for double-checking questionable reports.)
- Read more than the headline. One fake story, “Obama Signs Executive Order Banning The Pledge of Allegiance in Schools Nationwide,” cited a source whose name, according to FactCheck.org, is too obscene to be quoted here.
- Check the author. Sports Illustrated’s publisher recently fired its CEO after the magazine was caught running AI-generated stories under fake biographies of nonexistent SI reporters.
- Check for facts. Real news items from reputable outlets don’t hinge on a single clickbait-worthy “fact.” Legitimate stories are more detailed and thorough than that.
- Consider other possibilities. Maybe the story is in fact satire or parody and not intended to be taken seriously. “Satirical websites are popular, and sometimes it is not always clear whether a story is just a joke or parody,” warns the cybersecurity firm Kaspersky.
- Look for motives. Why was this story written? Whose interests is it supposed to serve, and how slanted is its presentation?
- Don’t believe everything you see. Images these days are easily manipulated, or simply made up. (A simple do-it-yourself forensic check is sketched below.)
- Practice “lateral reading,” which is a term of art for checking different sources for information on the same subject.
- Finally, check your own biases. “Confirmation bias leads people to put more stock in information that confirms their beliefs and discount information that doesn’t,” FactCheck.org explains.
“It’s not just other people and new technologies that are imposing their biases on us,” Pinkham says. “We do it, too. We have our own biases from our own backgrounds and experiences. That’s how we make sense of the world — by filtering information. And we have to acknowledge that reality to be smarter news consumers.”
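On the point about manipulated images above: one classic, low-tech forensic heuristic is error level analysis (ELA), which re-saves a photo as a JPEG and amplifies the differences, since regions pasted in from elsewhere often recompress differently and stand out. The sketch below illustrates that heuristic only; it is not how commercial deepfake detectors work. It assumes the Pillow imaging library, and “photo.jpg” is a placeholder file name.

```python
# A minimal sketch of error level analysis (ELA), a classic heuristic for
# spotting edited JPEGs. Illustrative only; "photo.jpg" is a placeholder.
import io
from PIL import Image, ImageChops, ImageEnhance  # Pillow library

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as a JPEG and amplify the difference. Regions that
    were pasted in or re-edited often recompress differently and show up
    as brighter patches in the result."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress in memory
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Scale the residual so the largest difference maps to full brightness.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

error_level_analysis("photo.jpg").save("photo_ela.png")  # inspect by eye
```

Bright, blocky regions that don’t match the rest of the image are worth a closer look; a uniform result proves nothing either way.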
Learn more on this topic at The Advocacy Conference, January 28-31 in Austin, TX.