Public relations is a messy business. Even when you don’t screw up, somehow you do. That is even more true now in the emerging AI era of deepfakes, when an image of your likeness can be used against you, even if it wasn’t really you.
Take the recent example of the deepfake Taylor Swift, which had her PR team in overdrive and threatening legal action. It was clear from the swift response—no pun intended—that her PR team already had a plan in place that could be instituted as the crisis unfolded. That should become the standard for publicity teams.
In 2024, it’s evident you need a deepfake scenario as part of your crisis communications plan, along with other disinformation management plans. Otherwise, you can see your reputation tarnished by actions or events that never actually happened. That’s the reality of the world we live in today, and it’s best to meet the challenge head-on.
The Hazardous Social Media Landscape
In an era when reputations can be built or shattered with a single post on TikTok, the arrival of AI further complicates an individual’s or company’s public image. In this precarious social media landscape, even when you’re doing everything right, you might still find yourself entangled in a web of misinformation and publicity crises.
The rise of deepfakes introduces an additional challenge for PR teams, who have to mitigate the fallout of manipulated images and videos that may deceive the public. Nefarious actors can use deepfakes to create false statements or narratives, leading to reputational harm for individuals or organizations. The Taylor Swift incident serves as a reminder that even the most well-meaning individuals can find themselves at the mercy of AI-generated disinformation.
Moreover, when a genuine crisis occurs, deepfakes can worsen matters by introducing fabricated evidence or statements that exacerbate the situation. In the midst of a crisis, the last thing you want is something that adds fuel to the fire, especially when that fuel originates in the world of disinformation.
A Crisis is (Always) Coming
So, how does one prepare for a crisis that hasn't happened yet? It’s not as though you can train your colleagues or a client on what not to do—because when it comes to deepfakes, they are not the problem. What people think they’ve done is. So, I recommend first performing an AI risk assessment and then developing a crisis communications plan.
Conduct an AI Risk Assessment
For your risk assessment, it’s imperative to understand that AI deepfakes pose a genuine threat to a company’s, brand’s or individual’s reputation. Ignoring the possibility could leave you blindsided when misinformation strikes. So, ask yourself the following questions:
- What’s the worst that could happen?
- What disinformation could someone proliferate that would damage the reputation of your company or client?
Consider every medium of deepfake content, from illicit AI-generated photos to audio deepfakes. Maybe it’s a fabricated recording of a CEO discussing securities fraud or a doctored image of a celebrity cheating on their publicly beloved spouse.
Craft a Comprehensive Crisis Plan
After the risk assessment, craft a comprehensive crisis communications plan that specifically addresses the nuances of AI-induced crises. Take the following steps:
- Identify key stakeholders
- Establish communication channels
- Outline steps to swiftly counteract false narratives
- Employ AI experts on your team
Your AI experts should be able to publicly debunk fake images and break down for the public why they aren’t real. This might be someone who can explain that an image conflicts with an established timeline or contains elements that don’t exist.
Test Your Plan’s Effectiveness
After establishing a team, conduct simulations to test the effectiveness of your crisis communications plan in a controlled environment. Craft realistic and challenging scenarios that mimic potential AI-induced crises, then notify the crisis response team of the simulated exercise to trigger the plan, just as it would be activated in a real crisis. Each team member assumes the roles and responsibilities outlined in the plan.
Counter the Deepfake Narrative
If your team has adequately trained with crisis simulations, you can respond more rapidly when a real crisis occurs. That’s imperative because a quick response demonstrates accountability and a commitment to transparency. Conversely, delays in communication can contribute to speculation, a loss of credibility and more disinformation. You must counter the deepfake narrative before it takes hold.
The age of AI and deepfakes demands a reevaluation of crisis communication and public relations strategies. That means developing a PR team that is not just media literate but technologically sophisticated, because effective PR now takes multiple people working together on both offense and defense.
Gabriel De La Rosa is Principal at Intelligent Relations.