How the Communications Industry Can Learn to Trust AI


Advancements in artificial intelligence are poised to transform the day-to-day for communicators and public relations professionals by automating monotonous tasks, simplifying research and surfacing data and insights. According to a survey from The Conference Board, 85% of communicators have used or experimented with AI tools for at least one application. The survey of 287 respondents also found that 60% now use AI at least “sometimes” in their daily work.

As AI transforms the wider communications landscape, agency leaders and in-house executives alike face a new responsibility: to ensure the tools their teams implement follow ethical standards and processes. When used ethically, AI tools can enhance decision-making and boost efficiency, but PR pros must also weigh the risks, such as inaccurate data, bias and security breaches. In the AI era, strong data and security protocols are non-negotiable.

Implementing Strong Data Protocols

Transparency and accuracy are two keys to successful implementation of AI tools. First, communicators should seek out tools that are transparent when it comes to their AI algorithms and clearly outline how they generate results or recommendations. This includes an understanding of how AI systems are audited for biases that could mislead or damage comms strategies. AI vendors should also be upfront about the data sources they use. Knowing where the data comes from and how it is used within AI models builds credibility. Trustworthy data is the backbone of any AI system, and without it, AI results can lead to inaccurate or misleading insights.

Additionally, any data that comms teams feed into AI systems must be clean, up-to-date, and relevant in order to produce insights that are both actionable and rooted in truth. For example, when producing coverage insights in executive reports, it's essential that the context of that coverage in relation to the company's strategies and objectives is taken into account. AI systems that utilize Retrieval-Augmented Generation (RAG) to reference a company's internal strategic documents and KPIs will produce more relevant coverage takeaways in their analysis.
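To make that idea concrete, the sketch below shows, in rough form, how a retrieval step might ground an analysis prompt in a company's internal strategy documents before a model generates coverage takeaways. The document names, the keyword-overlap scoring (a stand-in for the embedding search a production RAG system would use) and the prompt wording are illustrative assumptions, not a description of any particular vendor's system.

```python
# Minimal RAG-style sketch: retrieve the most relevant internal context,
# then fold it into the prompt used for coverage analysis.
# Document names, scoring and prompt text are hypothetical.
from collections import Counter

# Hypothetical internal strategy documents a comms team might index.
INTERNAL_DOCS = {
    "2024_messaging_framework": "Expand enterprise awareness in EMEA; position the brand as an AI leader.",
    "q3_kpis": "Target a 20% lift in tier-1 coverage mentioning product security.",
    "crisis_playbook": "Escalate any coverage referencing data breaches within 2 hours.",
}

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words; stands in for an embedding model."""
    return Counter(text.lower().split())

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by simple token overlap with the query."""
    query_tokens = tokenize(query)
    scored = sorted(
        docs.items(),
        key=lambda item: sum((tokenize(item[1]) & query_tokens).values()),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

def build_prompt(coverage_summary: str, docs: dict[str, str]) -> str:
    """Ground the analysis prompt in the most relevant internal context."""
    context = "\n".join(f"- {docs[name]}" for name in retrieve(coverage_summary, docs))
    return (
        "Summarize the strategic relevance of this coverage.\n"
        f"Internal context:\n{context}\n"
        f"Coverage: {coverage_summary}"
    )

print(build_prompt("Tier-1 outlet covers our product security launch", INTERNAL_DOCS))
```

The point of the pattern is simply that the model reasons over the team's own objectives and KPIs rather than generic context, which is what makes the resulting takeaways relevant to the company's strategy.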

Prioritizing Data Security

Trustworthy data is essential, and maintaining strong data security practices is equally important. Many AI tools access essential company, customer and client data, so robust security measures are non-negotiable.

This starts with thoroughly vetting any potential AI vendor and ensuring that its tools meet industry standards for security and data protection, including encryption, secure data storage and strict access controls. AI tools must also comply with existing data protection laws such as GDPR and CCPA to reassure clients and build trust in AI usage. Teams should look for vendors that have published policies for data security, AI ethics and governance, as well as partnerships with leading researchers and scientists in the AI and data science fields.

Futureproofing the Industry

We’re in the midst of a paradigm shift in how communicators work with AI. As AI becomes increasingly central to communications strategies, teams must treat transparent, ethical AI systems and strong data security protocols as more than a nice-to-have. The opportunities for enhanced productivity and efficiency with AI tools are undeniable, and with strong trust and security measures in place, teams can move from fearing AI to feeling confident in its ability to unlock new insights, spark creativity and boost strategic impact.

Chris Hackney is Chief Product Officer at Meltwater.