Creating More Meaningful Conversations about Generative AI


Generative AI has captured the world’s imagination, and progress advances daily. While debate within the communications field vacillates between optimism for a collaborative human/machine future and fear of the collateral damage that may come with it, many communications leaders remain in a state of analysis paralysis.

Looking across the communications landscape, what you see is a well-worn path of topics:

  • Generative AI is going to be transformative.
  • Generative AI will have widespread impacts on many industries.
  • We need guardrails to protect people and brands.
  • We, at X agency, are continuing to watch this space.

Sound familiar?

If it does, you’re not alone. The truth is, everyone, everywhere is buzzing about generative AI, and no one really knows what’s next. And that’s okay – preparing for what may come next is the fun part. As this technology evolves, our industry must also start penning the next chapter instead of simply shouting the same superficial platitudes into the void.

Careful observation is critical. However, the nature of AI demands both observation and thoughtful action. The AI rocketship has left the planet and we must graduate from the “Generative AI 101” class and move on to more meaningful discussions.

An Action Plan

There are steps we can take today to begin formulating an action plan for engaging with generative AI. Here are a few questions you may want to consider asking yourself, colleagues, clients, and company leaders to begin exploring how best to use generative AI within your organization.

  • How comfortable are you with delivering work to key stakeholders or clients developed, in some way, with an assist from AI?
  • How much of an assist is acceptable?
  • And, importantly, do your stakeholders/clients agree?

These are important questions to ask yourself and those within your organization, but they’re impossible to answer in a vacuum. Opinions on acceptable use of AI are as vast as the potential of AI itself.

Open a dialogue with your colleagues, clients, and other key stakeholders about their views on AI and the role they feel comfortable with it playing in their programming. This exercise can help shape your organization’s own policies.


Other Considerations

Have you educated your staff about what is and isn’t acceptable content for AI chatbots to ingest? Who should make that determination within your organization?

Most people understand by now that it is not advisable to share confidential material with AI. But what constitutes “confidential” may not be so black and white. Contracts? Obviously. But what about a pitch angle? Or even just one simple sentence about a forthcoming client launch?

Talk to your teams about confidentiality and you’ll help them make more informed decisions about their AI use. It may seem obvious, but a recent study reported that 11 percent of the content employees submit to ChatGPT is confidential. Communications teams must remain vigilant and committed to not contributing to this statistic.

Have you instituted critical fact-checking measures to ensure any AI-generated material is correct? And how will you enforce fact-checking protocols across teams?

If you ask ChatGPT to produce a list of travel influencers in the Pacific Northwest, for example, it’s easy enough to manually cross-check the suggested output for quality and accuracy. Does the person exist? Do they have a following? Do they still produce travel content? Who might be missing from the list?

However, what if you asked for a rundown of the key issues preventing autonomous driving from taking off in the U.S.? If you are unfamiliar with the category, how are you approaching the fact-checking process? Or are you taking the output at face value?

It’s important to remember that just because something “looks right” doesn’t mean it is. Work with your analytics team to implement best practices and protocols that ensure you and your teams avoid inadvertently spreading misleading or inaccurate information.

How flexible is your organization? How adept is your organization at responding to changes in AI deployments as they come?

Let’s pretend for a moment that you’ve settled on best practices for using generative AI in the workplace. And let’s say it’s a Tuesday. Well, Wednesday morning comes around and maybe ChatGPT announces that you can now create a realistic AI copy of yourself using a simple photo and sound file. Is your organization prepared to quickly process this new update and consider all implications?

This new world order moves fast and demands continual monitoring. It’s important to nail down a process for observation, analysis and reaction so that your organization can adjust to the shifting tides.

Are you ready for every issue that may arise?

It’s time to dust off your crisis response playbook, because AI has created a whole new set of scenarios for which to prepare. When generative AI guardrails do roll out, know that they will not be bulletproof. AI will continue to shape-shift – such is the nature of this technology. And inevitably, many talented communicators will make mistakes. Carefully mapping out potential issues and setting up a response plan is step one. Step two is continually revisiting your plan as the world, powered by AI, changes all around us every day.

This is just a start. We are having these conversations while we continue to learn all we can about this nascent technology and its many wonders.


Chip Scarinzi is head of technology at Ruder Finn.