ChatGPT has ignited a wave of interest among PR teams, but according to a study by The Conference Board, as of August 2023, less than 50% of organizations have even begun to work on company-wide guidance, and only 26% have published a policy on generative AI use.
This raises the question: Is your team clear on what can and can’t be done with AI at work?
Almost daily, the reply I hear is a resounding “no.”
Developments in AI move so fast that many are left dizzy and doubtful. Others are weighing the promises of software vendors against their own legal obligations, and coming up confused.
Here’s how Highwire PR tackled the problem. We were one of the first agencies with a robust framework, policy and tools to help inform AI adoption, and we made most of it public—available for others to build on.
Focus on principles first, rather than policies
AI’s frantic pace of innovation led us to focus on frameworks rather than rules that could quickly become outdated. We trained our staff on the implications of AI, from contractual and legal obligations to copyright, ownership and ethical concerns, as well as a new appreciation for the quality control necessary when working with AI.
We paired these frameworks with tools and templates to let our teams easily assess a situation before applying AI. Our AI Risk Maps and other public content have been viewed by over 55,000 professionals so far this year.
Most importantly, we embraced open discussion of how AI is changing our profession. We dispelled feelings of secrecy and shame, and created a safe space (both in person and in a dedicated Slack channel) where teams feel free to experiment, learn and ask questions before handing work over to a robot.
Leveling-up skills and familiarity
Getting our staff AI-ready was a priority. We created an "AI Bootcamp" training to cover the basics, demystify the technology and get everyone using the same definitions and language. From there, we educated staff on specific risks and pitfalls. Establishing a shared starting point sped everything up later on.
We made this training mandatory, and every new hire takes it within a week or two of joining.
Setting clear boundaries
With a framework and shared understanding in place, we issued clear dos and don’ts and prohibited several problematic uses of AI, like impersonating a human without consent and creating content without human supervision.
We vetted vendors, created a list of approved applications and integrated it with our IT team’s acceptable use policy.
Throughout all of this, transparency is key—we disclose our approach upfront to all clients, and never swap human effort for machine work. We’re also updating contracts to clarify our usage of AI.
Coaching toward quality and clarity
My background as a journalist makes me wary of misinformation. While AI promises productivity gains, we can't forget about quality control. I remind clients not to trade speed for trust. AI-generated content still requires careful verification and fact-checking.
Whenever I’m asked to review an AI adoption plan or policy for a client, I start by asking how clear it will be to those implementing it. The temptation of AI is so great, and the applications are evolving so quickly, that there’s no room for doubt.
And each organization requires a unique solution. Highwire’s Risk Maps were built for PR teams, and specifically for agency employees. I often coach clients on their own AI guardrails, using their specific language, priorities and use cases.
With the right framework, careful implementation and thoughtful adoption, AI can make us all smarter and more focused on meaningful work. But rushing in recklessly can damage credibility and create unnecessary risk. There’s a responsible way to surf the wave.
James Holland is EVP Digital at Highwire PR.