We talk a lot about the potential benefits and challenges of AI and what it will mean for the economy and jobs. Another significant aspect of AI for brands and organizations will be reputational.
AI will touch every industry. This means brands and organizations will need to think strategically about how they communicate about AI. A new survey, Artificial Intelligence & Communications: The Fads. The Fears. The Future., asked consumers across the U.S. and the U.K. about their sentiments toward AI. It also asked a panel of 25 global AI experts to provide perspectives on key areas where AI is making an impact. A major takeaway is that much more education is needed.
The question for communicators is: what should we be thinking about, and what should we advise brands and organizations to do, as they embark on this educational journey?
First, AI needs to be unbiased. Yet humans are inherently biased, and humans teach AI through the algorithms they develop. When AI inherits that bias, it makes decisions less fairly, and it does so at the peril of the brand behind it.
In the future, we can expect brands using AI technology will have to defend the integrity of their algorithms against accusations of racism, sexism and other forms of discrimination. This means AI needs to be smart enough to perform difficult tasks, but do so without reinforcing the biases of its creators.
Second, imperfect AI algorithms and automations lead to unintended, and sometimes destructive, consequences. Training AI to learn the subtleties of human language, as well as the symbolism often contained in imagery, is difficult. Sarcasm, slang and cultural nuances evolve constantly and can be difficult for even the smartest humans to grasp.
Organizations and brands will need to understand the nuances of the AI technology used in their business and communications, and must anticipate the public impact that technology can have.
They also need to be able to defend the integrity of AI’s decision-making. In some cases, they may need to create a dialogue with the public to build trust and goodwill, and solicit input on the technology itself. What better partner to help them do so than PR?
Third, the ethical use of data will continue to be a major issue for brands and organizations, and one they’ll need PR support to manage. As we’ve seen with the rollout of GDPR in Europe, the tightening of privacy legislation and the fallout from data breaches, continued stakeholder dialogue is critical to maintaining the license to operate.
AI needs untold amounts of data to do its job. In part, it will be PR’s role to ensure our brands have permission to use that data. In addition, PR should help maintain clarity and transparency in explaining where data is obtained and how it is used.
These three themes will no doubt grow as the tectonic shift to AI and automation happens across society. But it’s clear that we as PR practitioners will be critical partners in helping organizations manage these issues and mitigate the reputational risks that come with them.
Sophie Scott is global managing director of the technology sector practice at FleishmanHillard.