How Will ChatGPT Change Your Job?
In today’s fast-paced world of public affairs, staying on top of the latest news and insights is essential. From monitoring social media to conducting research and responding to inquiries from stakeholders, public affairs professionals are tasked with managing a wide range of tasks and responsibilities. That’s where ChatGPT comes in. This powerful language model, developed by OpenAI, can help public affairs professionals save time and improve their workflows by providing quick access to information, generating insights on complex issues, and automating repetitive tasks.
Guess what. The preceding paragraph was produced by ChatGPT itself, when asked to write an article on how ChatGPT can change the way public affairs professionals do their jobs.
Maybe it’s not ideal — the word choices are robotic — but as a rough draft, it’s not bad and might be better than what some communications professionals could produce themselves in a half-hour or more.
And it was produced for free, on the spot, in seconds — which is why ChatGPT and artificial intelligence (AI) in all its forms hold such promise for public affairs professionals, but also why they can be scary.
Goldman Sachs reported earlier this year that AI could “significantly disrupt” global labor markets, exposing some 300 million jobs to automation. The Wall Street Journal reports that as “with every wave of automation technologies, the latest will have a significant impact on jobs. Whereas blue-collar workers bore the brunt of earlier waves, generative AI will likely have a greater effect on white-collar professions.”
“AI will have profound implications for the entire economy and for every industry, including public affairs professionals,” says Michael O’Brien, vice president of global public affairs for BSA/The Software Alliance, who will speak at the Council’s 2023 Digital Media & Advocacy Summit (DMAS) on June 12 in Washington, D.C. “This technology will accelerate rapidly, which is why it’s difficult to say where exactly we’re headed.”
Most of us don’t even have a firm grasp of where we’ve been or where we are. “AI is already operating in ways we take for granted,” says Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology (CDT), who spoke at the Council’s Spring Executive Conference. “AI is operating in the GPS in your car, in your social media feeds and in the ordering of results when you do a Google search. There are ways AI operates that we don’t even think about, much less question.”
But it is ChatGPT, released just this past November, that people have been “sort of fixated on,” according to Chandler T. Wilson, founder of Bridge Corporate Intelligence. “They’re obsessed with the content-creation side of machine learning, which of course produces not only text but also images and sounds, and even writes code. ChatGPT amasses gigantic data sets, which include not just articles that have been published and posted online but also tweets that go viral and other forms of social media messaging.”
This can give you access to data that Wilson says “is simply beyond the capacity of human intuition to make sense of — to create structure around, linking key facts and key themes, breaking down the complexity of this data. It can turn the ‘soft’ knowledge we’re used to relying on as individuals, with all these mistaken conclusions we come to and predictions we make, into statistics-based information that can mathematically model outcomes, and do so with far greater accuracy and is ultimately far more useful. For example, you can now predict engagement with press releases with — this is no exaggeration — 93% accuracy.”
An ‘Impression of Greatness’
There are limits, too, of course. ChatGPT “is incredibly limited, but good enough at some things to create a misleading impression of greatness,” according to OpenAI’s Sam Altman, who founded the company with Elon Musk. It would be “a mistake to be relying on it for anything important right now,” Altman tweeted just after ChatGPT’s release. The advice was largely ignored: more than a million users were taking advantage of it within five days of its launch. Within weeks, it had 100 million users, making it “the fastest-growing app of all time,” ZDNET reports.
ChatGPT “is good for generating a first draft,” says Vlad Eidelman, chief technology officer at FiscalNote. “And it is very good at doing that because it’s trained on such a huge amount of data that it has learned a useful representation of a lot of different information. It can write a plausible-sounding email for you, and even be quickly adapted through prompting to use your own style to do it, but it is still limited in important ways. It can produce something that sounds plausible but might not even be accurate or true.” (One obvious shortcoming: ChatGPT — unlike a simple Google search — has access to information up to 2021 only.)
Public affairs professionals can use ChatGPT for content creation — for writing a press release, an email or an op-ed — saving a great deal of time and trouble in the process. But that’s just for starters. By asking the right question — by creating a smart and useful “prompt” — you can access vast amounts of information otherwise unavailable to you and have it analyzed and sorted. Too often, trying to develop a plan of action in response to some development in the political world, we just read articles, talk to a few experts and make educated guesses.
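To make the idea of a “smart and useful prompt” concrete, here is a minimal sketch in Python of how a team might standardize prompts for drafting a press release. The template, function name and fields are illustrative assumptions, not drawn from any specific tool; the resulting text could be pasted into ChatGPT or sent to any language-model interface.

```python
def build_press_release_prompt(organization, announcement, audience, tone="formal"):
    """Assemble a structured press-release prompt from a few named fields.

    Keeping the fields explicit makes prompts reviewable and repeatable,
    rather than improvised differently by each staff member.
    """
    return (
        f"Draft a press release for {organization}.\n"
        f"Announcement: {announcement}\n"
        f"Target audience: {audience}\n"
        f"Tone: {tone}\n"
        "Keep it under 400 words and include a headline and a boilerplate paragraph."
    )

# Hypothetical example of filling in the template.
prompt = build_press_release_prompt(
    organization="Example Advocacy Group",
    announcement="launch of a grassroots campaign on broadband access",
    audience="trade press and policymakers",
)
print(prompt)
```

The point is less the code than the discipline: a template forces the drafter to state the audience and tone up front, which is exactly the context a model needs to produce a usable first draft.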
“I can’t speak for individual companies, but I believe major companies in media and finance, for example, are already figuring out ways to take the information they have amassed internally for decades and use it as a reliable data source to allow others — for example, journalists, lobbyists and policymakers — to interact with the data through a natural language interface in ways that were not possible before,” Eidelman says. “The value down the road is the ability to interact in this new paradigm with more customized and even proprietary data sets, such as your own organization’s documents, statements, research and other forms of information. Then you can apply it to your own specific needs.”
Such information as well as the insights that go with it can be used to plan out possible scenarios and your organization’s responses to them. “You can identify stakeholders, experts and relevant regulators in ways that have not until now been possible, reducing the ‘cognitive load’ and enabling a lot of your work,” says Eidelman, who will also speak at DMAS next month. “Beyond simply summarizing information, it can personalize it for your own organization and its needs.”
“In the area of issues management, you can use it to help plan an entire program which will be waiting for you to implement, given which of a range of possibilities plays out — who wins a presidential election in another country, for example, and what that might mean for your industry and your own company here at home. You can do the same with regulatory decisions, too — with any number of developments.”
Interest in topics can be tracked “much like a stock market tracks how stocks go up and down in value, depending on all kinds of factors,” Wilson says. “You can use these machine-learning tools to track issues that you otherwise wouldn’t even know were linked, allowing you to design an entire campaign proactively.”
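As a rough illustration of the ticker-style tracking Wilson describes, the sketch below counts how often issue keywords appear across a batch of monitored items. The feed contents and keywords are hypothetical; a production system would use topic models or embeddings rather than whole-word matching, which is only the simplest stand-in.

```python
import re
from collections import Counter

def track_issue_mentions(documents, keywords):
    """Count whole-word occurrences of each issue keyword across documents.

    Word-boundary matching avoids false hits (e.g. "AI" inside "remains").
    """
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for kw in keywords:
            counts[kw] += len(re.findall(rf"\b{re.escape(kw.lower())}\b", text))
    return counts

# Hypothetical sample: three short items from a media-monitoring feed.
feed = [
    "New broadband bill advances; broadband access remains a priority.",
    "Privacy rules debated alongside broadband subsidies.",
    "Committee hearing covers AI and privacy concerns.",
]
mentions = track_issue_mentions(feed, ["broadband", "privacy", "AI"])
print(mentions.most_common())  # "broadband" leads this sample
```

Run over time-stamped batches, the same counts become a time series, which is what makes the stock-ticker analogy work: you watch issues rise and fall rather than sampling them once.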
And you can get into the game — beyond the free ChatGPT level — for as little as $35,000 to $50,000, as Wilson told a gathering at the Council’s European office in March. “You can be up and running in three months, reducing your own labor costs by 1,000 – 10,000% while getting better and faster outcomes” than you could otherwise hope to obtain. Meanwhile, rivals such as Microsoft’s AI Bing and Google’s Bard, among others, are emerging. For the creation of text, there’s Chatsonic, Jasper and YouChat.
Part of the appeal of some of the new AI tools is that they don’t “require us to be masters of computer science,” according to O’Brien. “But we do need to become savvy about how to use them. And it is a very bad idea to think you can outsource your job to these tools. AI can be applied to your grassroots program or PAC program, for instance, identifying words or phrases that resonate with your members or stakeholders that may elicit a response. It can even draft that response more quickly than you can. But there might be something in that draft — a word your boss would never use and never want to use — that needs to be fixed.”
‘Recipe for Disaster’?
These are the kinds of judgments “that require a human touch,” according to O’Brien. “The ability to know such things makes you more, not less, valuable to the organization. Deferring to an algorithm is not just a bad idea: In high-risk circumstances — in hiring, credit or pricing decisions — it can invite serious questions about bias or discrimination, which is where future regulations are likely to impose requirements for AI users and developers.”
It is essential that organizations and institutions maintain public trust, especially in a democracy with a market economy, which makes the proliferation of AI tools a matter of great public interest. O’Brien notes, “We should take care to establish guardrails for these technologies in ways that build trust and guard against inadvertently perpetuating biases or other unintended consequences.”
AI is routinely used in areas that governments are struggling to understand and address — and that worries CDT’s Givens. “Whenever these tools are used to screen for public benefits, for hiring, for access to housing, they involve the civil rights of people in a democracy,” she says. “We need a whole-of-government approach, where each of the federal agencies is looking at the areas it regulates, to address the information gaps we now face.”
Fortunately, as Givens tells members of Congress, “you don’t have to be a tech person to address these concerns. This is not a tech issue, so you shouldn’t screen yourself out of the conversation in the mistaken notion that you have to know more about the underlying technology than you do. You don’t need to understand the technology to know where employment discrimination protections are being violated, for example, and to take appropriate corrective action.
“And you don’t have to understand the technology to address the full breadth of risks — the use of AI not only for decision-making but also to produce fake texts and fake images and false statements and enable the harassment of individuals,” Givens says. “The public and the government need to address the integrity of our information ecosystem so the information is reliable and trustworthy.”
Regulators historically play catch-up with tech innovations, and they’ve never faced innovations that have come on so quickly and with such profound implications. So for now, we’re stuck — for better and for worse — with AI running ahead, with even some of AI’s pioneers expressing serious concerns. That doesn’t mean you cannot or should not use these tools as they exist today, but they must be used carefully and with a sense of their pitfalls and limitations.
Human judgment is more important now than ever, even if it might seem that machine-learning robots are about to take over our jobs, if not the world. “This new world that advances the possibilities of connectivity with our stakeholders actually takes us into a place where human interaction will be more important, not less,” Wilson says.
Signature Event: Digital Media & Advocacy Summit
Virtual Workshop: Advocacy Content Creation
Contact us at firstname.lastname@example.org.