The Buzz: Did a Machine Write This?

June 2024

By Alan Pell Crawford

Eager to find ways to spot AI-generated prose, one naturally turns to ChatGPT for help, and ChatGPT is only too happy to oblige with this advice: “Detecting whether a text is written by AI or by humans can be challenging, especially with the advancements in natural language processing (NLP) technology.”

But it’s important to be able to tell the difference between machine-generated text and human prose, especially when we are deluged every day with verbiage manufactured by AI. This verbiage, after all, can include misinformation, disinformation and other forms of falsehood, much of it cooked up without regard for accuracy in the details, much less truth in its totality. Machines, of course, don’t know the difference and don’t care. It’s also important to be able to tell which is which if we are to become better communicators ourselves.

Telling the Difference

But there are some “tells,” as poker players might say. “Human writing,” ChatGPT reports, “often exhibits a certain level of inconsistency or variation in style, tone and structure while AI-generated text may appear more consistent throughout, lacking the nuances of human expression.”

And that’s a pretty good summation, as good, or close to it, as you might get from real flesh-and-blood human beings working on their laptops with Roget’s Thesaurus at their side. And ChatGPT did it instantly.

But is it good enough at a time when public affairs professionals need to know how to evaluate the reams of writing they must read, and when, in their own work, they rely increasingly on AI-generated text, at least for first drafts? And when major news organizations themselves will soon be using AI, if they aren’t already? (The Washington Post told staff in late May that it will begin using AI “everywhere in our newsroom.”)

‘Robot-Speak’

AI-generated text is almost always devoid of human juices. “Humans often infuse their writing with emotion, empathy and personal experiences, which can be challenging for AI to replicate convincingly,” ChatGPT says. “[It] may lack genuine emotional expression or may use emotion in a formulaic or predictable manner.”

While that’s true, real people have expressed the same idea much more vividly. Jess Zafarris on Ragan’s PR Daily calls it “robot-speak.” “Think of the worst, most vapid LinkedIn post you’ve ever read or the most soulless corporate statement you’ve read that are exactly like that other one,” Zafarris writes, and that’s what AI is likely to produce. That’s to be expected: large language models treat words, phrases and sentences as nothing more than data and simply try to predict the next word, phrase or sentence to follow. The result is usually “mind-numbingly drab” from start to finish.
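
For the technically curious, that next-word guessing can be watched directly. Below is a minimal sketch, assuming the freely available Hugging Face transformers library and the small GPT-2 model (my choices for illustration; neither is named above). It asks the model which tokens are most likely to follow a stock opening:

    # A minimal illustration of next-token prediction, the mechanism
    # described above. Assumes: pip install torch transformers
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "In this fast-paced world,"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # one score per vocabulary token

    # Everything the model "knows" is this probability distribution
    # over the single next token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}  p={p.item():.3f}")

Decoding strategies vary, but every sentence of AI output is assembled this way, one most-probable token at a time, which is why the prose so often settles into the statistical middle.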

Then there are the rote introductions and conclusions. “In this fast-paced world,” they might begin, and wrap up with something like “In conclusion,” filling the space in between with unsubstantiated and unattributed adjectives like “impressive,” “thought-provoking,” “pivotal,” “transformative” and “impactful.”
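
Those stock phrases are concrete enough to scan for mechanically. Here is a rough sketch in Python, using only the standard library; the phrase list is illustrative, drawn from the examples above rather than from any published checklist:

    import re

    # Illustrative cliches, taken from the examples in this article.
    STOCK_PHRASES = [
        "in this fast-paced world", "in conclusion", "impressive",
        "thought-provoking", "pivotal", "transformative", "impactful",
    ]

    def count_cliches(text):
        """Count case-insensitive occurrences of each stock phrase."""
        lowered = text.lower()
        return {p: len(re.findall(re.escape(p), lowered)) for p in STOCK_PHRASES}

    sample = ("In this fast-paced world, our transformative work is impactful. "
              "In conclusion, the results were impressive.")
    print(count_cliches(sample))

A high count proves nothing on its own, but it flags prose that deserves a second, more human, look.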

‘Nutritional Substance’

Zafarris quotes an AI-generated account reporting that one project “manifested through remarkably innovative and comprehensive communications strategies.” That doesn’t even make sense, and the prose is “as empty of nutritional substance as an AI-generated image of potato chips.”

There are now tech tools that can help: programs through which you can run the text in question, expecting a machine to tell you whether it was cranked out by a machine somewhere else. But they are not yet completely reliable. As generative AI tools improve, “the quality of the texts generated by these tools also rapidly improves,” says Dongwon Lee, a professor in the College of Information Sciences and Technology at Penn State. That is good news, but it also makes “it more and more difficult for humans to detect AI-generated text and the integrity of the information in it.” Irene Solaiman, policy director at AI startup Hugging Face, says the tools coming online “just can’t keep up. You’re playing catch-up this whole time.”
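
Trying one of these programs takes only a few lines. A minimal sketch, assuming the Hugging Face transformers library and one publicly available detector model on its hub (my example, not a tool recommended by Lee or Solaiman):

    from transformers import pipeline

    # One publicly available AI-text detector, used purely as an
    # illustration; it shares the reliability limits described above.
    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )

    text = ("This project manifested through remarkably innovative "
            "and comprehensive communications strategies.")

    # The result is a label with a confidence score: a probability,
    # not a verdict, and increasingly easy for newer models to fool.
    print(detector(text))

As Solaiman warns, detectors like this one lag the generators they chase; treat the score as one clue among many.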

And whether we are aware of it or not, in our eagerness to sound as if we too belong in this Brave New World of machine-driven communications, our own prose can begin to sound like ChatGPT, as life imitates art. But equally likely is this silver lining: as we become increasingly alert to the stock phrases and other clichés that litter AI-generated text, we become more aware of them in our own prose and, it is hoped, steer clear of them when we write.
