How Brands Can Combat Crises Sparked by AI ‘Hallucinating’


What do a picture of the pope in a puffer jacket, a song collaboration between Drake and The Weeknd and an interview with Michael Schumacher have in common? All three are completely fake, have successfully fooled many people, and were created by AI.

We’re once again abruptly entering a new era for technology. One where just about anyone can go online, plug in data, and use AI to fabricate anything from convincing stills for a nonexistent Muppets WWII movie, to a high-fashion shoot modeled by the crew of the Starship Enterprise-D, to next week’s homework assignment.

And just as with the start of the dot-com boom or the explosion of social media, we’re currently in the Wild West of AI: a volatile, ungovernable and unpredictable marketplace.


AI “Hallucinating”

AI doesn't always get things right. It has already conjured falsehoods that have fueled the spread of fake news and disinformation. In a phenomenon known as “hallucinating,” AI tools have reached some pretty absurd and inaccurate conclusions, all shared with blind confidence and a refusal to concede things as simple as the fact that it is 2023.

It has comically, yet astutely, been called an “automated mansplaining machine,” and at its current stage, AI usually doesn’t know the difference between substantive journalism and fake news. An Australian mayor has already threatened a defamation lawsuit against OpenAI, ChatGPT’s maker, after the chatbot made up false accusations about him.

AI chatbots, photo generators, audio replicators and the like are constantly pumping out content without reliability or consent. Whether it’s a troll trying to trick people into believing false information about a public figure, or someone using another artist’s work to generate AI music or images, this “ask for forgiveness later” approach to AI could have disastrous consequences for both brands and the people who engage with them.


Crisis Planning

So how do you navigate being a public figure right now when AI poses a constant risk to pretty much everyone in the public eye?

The answer is preparation. The time to work on a crisis plan is before you find yourself in a crisis. If you’re working with brands, it’s mission-critical to have a plan in place for a variety of potential crises.

In addition to many of the crises we’re used to, it’s important to prepare for the potential of being tied to false information or damaging content generated by AI.

Develop problem scenarios and decide how to move forward in each situation. Ask questions such as:

  • Are you going to publicly refute potential stories, and to what degree?
  • Is litigation appropriate?
  • Do you need to inform your company’s internal team?
  • How will your response vary by the nature of the content?

Make sure that whoever is on your crisis response team is calm and experienced at handling crisis scenarios. Once negative information starts circulating, you have only so much time before the story breaks. Oftentimes, reporters might not ask for your comment until minutes before the article goes live. Be ready no matter what.

If you are caught off guard, you or your client should swiftly come forward with the facts and find a way to remedy the situation. For instance, if you’re a vegan shoe brand and an AI chatbot spreads disinformation that your shoes are actually made of human toenails, it would make sense to have an ethics statement, a clear way to prove the claims false, and a way to ensure the quality of your shoes won’t be questioned in the future.

It’s always challenging to adapt to new technology coming onto the scene, especially one as ironically incalculable as AI. As long as you’re prepped and ready, you’ll be about a thousand steps ahead of any AI-tastrophe that comes your way.


Eric Yaverbaum is CEO of Ericho Communications and the author of PR for Dummies, as well as six other titles, including Leadership Secrets of the World’s Most Successful CEOs.