How to Manage Brand Reputation in Light of AI-Fueled Misinformation 


Generative AI—tools like DALL-E 2, Midjourney, Stable Diffusion and ChatGPT—has garnered immense attention for its potential across business and creative functions alike.

However, it's crucial to recognize the potential seismic shift these tools may cause in the misinformation and disinformation space.

New dynamics created by generative AI are set to lower the barrier to entry for bad actors while simultaneously increasing the sophistication of their efforts, and could change the game for reputation management. These shifts will require communications professionals to take a closer look at the impact of, and solutions to, heightened disinformation threats.

Upending the Disinformation Framework 

A common framework in the anti-disinformation space is ABC: actors, behaviors and content. At a recent conference on "Combatting Disinformation," Jack Stubbs, VP of Intelligence at Graphika, discussed the implications of generative AI across these three areas.

Lower Barriers for Bad Actors: Generative AI will lower the barriers to entry, making complex influence activities accessible and feasible for less sophisticated actors. These technologies can be used by individuals and groups with limited resources to create and spread disinformation, potentially increasing the number of malicious actors in the space.

Economies of Scale Facilitate Bad Behaviors: Generative AI will allow actors to create new content at scale, dramatically increasing the volume of deceptive online content at minimal additional production cost. This could lead to an inundation of disinformation, making it harder for people to distinguish between fact and fiction.

Content That Passes the Sniff Test: Historically, threat actors, particularly those operating at a state level, have struggled to create content that is convincing from cultural and linguistic perspectives. With AI tools able to generate content that reads as linguistically native, however, they are likely to overcome these challenges quickly.

Image and video creation tools also raise the specter of more convincing manipulated—or outright fake—rich media. As Sam Gregory, executive director of the human rights organization Witness, said in a recent Washington Post story on deepfakes, “there’s been a giant step forward in the ability to create fake but believable images at volume.” This was recently seen in an influence operation by pro-Chinese actors that promoted AI-generated video footage of fictitious people.

The CEO of OpenAI (developer of ChatGPT) recently acknowledged his own concerns on this front, saying in an interview with ABC News, “I'm particularly worried that these models could be used for large-scale disinformation.”

Changing the Reputation Management Game 

The critical role of digital channels—and search in particular—as an information source has led to search engine optimization becoming a central aspect of reputation management. Generative AI—specifically, ChatGPT and other similar platforms—looks set to upend that.

I was recently on a call with a company CEO who cited the portrayal of their company by ChatGPT as a measure of whether their reputation had been impacted by a crisis. While it’s important to note that ChatGPT is only trained on data up to September 2021, it still raises a question: How do you manage your company's online reputation when that reputation is shaped not by a list of search results, but by an AI trained on the entire internet?

These tools are being trained on an information landscape that is polluted by a worsening infodemic. Witness, for example, recent coverage of misinformation injected into an article on the new Navalny documentary after the author used AI to help write it.

Responding to This Landscape 

Given the significant challenges posed by generative AI in the fight against misinformation and reputation management, here are five thoughts on how organizations might respond to this landscape:

Nail the Basics: As many organizations have already found, the best way to combat disinformation is to get ahead of it. This still rings true, and the less-glamorous aspects of risk management—baking disinformation into vulnerability and risk assessments, scenario planning, preparation and training—are ever more important. So is the need to shore up those vulnerabilities through proactive communication, whether to mitigate risks before they materialize or, in some cases, to prebunk acute threats.

Increased Investment in AI Detection Tools: The positive, proactive potential of AI shouldn’t be overlooked. Embrace AI-powered tools for monitoring, analyzing and managing your company's online reputation, and invest in tools that can help identify and flag manipulated content, including deepfakes and other content created by generative AI. Adobe, for example, launched the Content Authenticity Initiative in 2019 and recently partnered with Microsoft to announce a new feature aimed at verifying the authenticity of photos and videos. Meanwhile, it didn’t take long after ChatGPT launched for tools like GPTZero to emerge with the aim of detecting content developed using the platform.
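For readers curious about how text-focused detectors approach the problem, the sketch below illustrates one common heuristic: scoring how predictable a passage is to a language model (its perplexity), on the assumption that machine-generated text tends to be unusually predictable. This is a simplified illustration built on the open-source GPT-2 model and the transformers library, not a description of GPTZero or any vendor's product, and perplexity alone is far from a reliable detector.

    # Rough illustration of a perplexity heuristic for spotting possibly AI-generated text.
    # Assumes the open-source torch and transformers libraries; the sample text is made up.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Score the text with the model's own language-modeling loss.
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return float(torch.exp(out.loss))

    sample = "Generative AI tools can produce fluent, native-sounding text at scale."
    # Lower scores mean the text is more predictable to the model (one weak signal,
    # never proof, that it may have been machine-generated).
    print(f"Approximate perplexity: {perplexity(sample):.1f}")

In practice, commercial detectors combine signals like this with many others, and organizations should treat any single score as an input to human judgment rather than a verdict.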

Strengthen Media Literacy: Media literacy has become an essential skill in the age of disinformation. Encourage critical thinking and teach people to recognize the signs of AI-generated content. Governments in countries like Finland have led the way in media literacy. However, in a world where people trust their employer more than almost any other source of information, companies can play an important role in raising media literacy through internal training programs that bolster resiliency within their workforces.

Partnerships: The fight against disinformation can’t be solved by business, government or civil society alone. It will take collaboration among these institutions to develop effective solutions at a societal level—for example, Google’s partnership with Cambridge University to test the impact of prebunking as a vehicle to counter disinformation, or Canada’s Digital Citizen Initiative, which funded 23 projects aimed at building resiliency among Canadian citizens.

Legal and Regulatory Frameworks: Finally, we need to develop regulatory frameworks that effectively govern the use of AI in order to combat disinformation. This might include regulations that promote transparency and accountability in AI development and use, as well as measures to prevent the malicious use of AI-generated content, with material penalties for those in violation.

The EU is at the forefront in this area; the proposed EU Artificial Intelligence Act, for example, includes transparency obligations around deepfakes. In the short term, communicators should be mindful that the legal landscape around AI-generated content is in flux, particularly as it pertains to evolving areas such as copyright, and be thoughtful about the risks inherent in this rapidly developing area.

The rise of generative AI tools presents a significant challenge both for the fight against disinformation and for reputation management, and threatens a step-change across multiple aspects of the disinformation landscape. Without significant focus at both the organizational and societal levels, we may be headed for a proverbial zero-trust environment in which no one can trust what they see or hear.

 

Dave Fleet is head of global digital crisis at Edelman.

[Disclosure: Adobe, Microsoft, and Google are clients of the author’s employer, DJE Holdings.]