PRSA Updates AI Ethics Guidelines for 2025: What PR Pros Need to Know

PRSA's latest AI ethics guide for PR professionals

Every PR pro has been there. It's 4:47 p.m. on a Friday, and you need to draft three versions of a press release before you leave the office. You open ChatGPT, paste in some background, and—boom—you've got a solid first draft in seconds. Crisis averted, weekend saved.

But here's the question nobody wants to ask out loud: Did you just accidentally leak confidential client information? Will that AI-generated quote come back to haunt you? And if a journalist asks who wrote this release, what exactly do you say?

For too long, PR professionals have been navigating AI adoption alone—making judgment calls in the moment and hoping they don't end up as a cautionary tale in someone else's ethics presentation.

That uncertainty ends now.

What's New in the 2025 AI Ethics Guidelines

PRSA's Board of Ethics and Professional Standards (BEPS) just released the updated "Promise & Pitfalls: The Ethical Use of AI for Public Relations Practitioners"—and this isn't a typical guidance document to bookmark and never open again. It's the practical handbook the profession desperately needs now that AI tools have moved from emerging trend to daily reality.

Here's what's new in the 2025 version:

  • Transparency gets its own section. There's now dedicated guidance on disclosure protocols for AI use in content, visuals, hiring, reporting and contracts—with specific examples of when and how to disclose.
  • Action-oriented best practices. The guide adds immediate actions you can take across AI literacy, privacy protection, responsible use and bias awareness. No more wondering "okay, but what do I actually do?"
  • Governance and training frameworks. New step-by-step guidance on vendor assessment, team training, forming cross-functional AI advisory groups and maintaining human-in-the-loop requirements.
  • Expanded regulatory analysis. Deeper focus on copyright, trademarks, FTC disclosure requirements, state-level laws like the Texas Responsible AI Governance Act and international regulations including the EU AI Act and GDPR.
  • A critical framing shift. The document moves from "AI as a risky tool" to "AI as embedded systems requiring oversight," positioning PR professionals as leaders and ethical gatekeepers.

The timing couldn't be more critical. AI agents are now entering the mix—autonomous systems that respond to stakeholders, adapt strategies, and take action without human intervention. Communicators are not just dealing with tools that help them write faster anymore. They are working alongside systems that can launch campaigns, respond to media inquiries and make strategic decisions while they sleep.

That's exciting. And terrifying. And exactly why the industry needs clear ethical guardrails.

From Theory to Practice: Governing AI in PR

What makes this updated guidance different is its shift in perspective. The document reframes PR pros as active governors of AI adoption rather than cautious users—no longer passive recipients of technology, but strategic advisors who shape how AI gets used in organizations and for clients.

Think about what that means. When a CEO asks whether to implement an AI-powered chatbot for customer service, you're not just a communications expert anymore. You're the ethical conscience in the room—the one who asks: Have we tested this for bias? How will we disclose its use? What happens when it makes a mistake?

PR professionals bring the curiosity and judgment AI lacks. That is core to the professional value we bring in an AI-powered world.

AI Risks in PR: What Can Go Wrong Without Oversight

Yes, every communicator worries about AI making factual errors or "hallucinating" content. But the updated guide goes deeper, highlighting risks that should keep every PR pro up at night.

Using AI without oversight can spread misinformation, produce biased outputs that perpetuate discrimination, cause inadvertent plagiarism or copyright violations, expose confidential or proprietary data, mask AI authorship and bypass human accountability.

For instance, imagine using AI to screen job applicants and unknowingly excluding qualified candidates based on biased data. Or accidentally creating an "astroturf" campaign where AI-generated letters deceive legislators about public opinion. Or monitoring employee sentiment without their knowledge, destroying the trust a company spent years building.

These scenarios carry real risks, and the guide addresses each with specific examples and recommendations.

When to Disclose AI Use: Transparency Guidelines

If there's one section that deserves to be printed, highlighted, and taped to every PR professional's desk, it's the guidance on transparency.

PRSA's AI Ethics Guideline states: "Be transparent about the use of AI in most public relations practices. Clearly disclose when content, decisions, or interactions are significantly influenced or generated by AI, especially when this information could impact how messages are perceived, how relationships are built and how trust is maintained."

Transparency is not a yes-or-no decision; it exists on a continuum. The guide acknowledges what we all know from experience: If AI is used to support your thinking and the final product is meaningfully shaped by human input, disclosure may not always be required. The key question is whether AI use could affect trust, transparency or audience understanding.

The guide offers practical disclosure examples, from simple statements like "This content was generated with the use of AI" to more detailed explanations that specify degree of human oversight. These examples provide suggestions to help practitioners navigate disclosure in good faith.

Common AI Ethics Questions for PR Professionals

The guide's FAQ section reads like a transcript from every PR team meeting happening right now:

  • "Is it ethical to use AI to draft a press release?" Yes—if it's accurate, reviewed by a human and aligned with the PRSA Code of Ethics.
  • "Can I use public AI tools for client work?" Only if the tool doesn't store or reuse inputs, and you're not entering confidential data.
  • "Should I tell my clients I'm using AI?" Transparency builds trust. Disclose AI use in deliverables or contracts, especially when the tool makes a meaningful contribution to the outcome.

These straightforward answers give practitioners confidence to make ethical decisions in real time.

How to Lead AI Adoption with Integrity

The revised guide includes a "How To" section, which provides practical roadmaps for leading AI adoption with integrity.

Want to vet AI vendors? The guide provides specific questions to ask, from data privacy protocols to bias testing and human oversight requirements. Designing AI training for your team? There's a framework for that, emphasizing practice over theory and cross-functional learning.

The guidance is straightforward: "If a vendor can't answer transparently about data handling and ethical safeguards, reconsider. If their platform replaces, rather than augments, professional judgment—walk away."

Perhaps most valuable is the section translating PRSA's Code of Ethics into AI-specific guidance. The revised framework introduces new case examples and a cost-benefit analysis to help practitioners evaluate AI's impact on workflow, creativity and public trust.

Future of AI in PR: Emerging Ethics Challenges

The guide doesn't pretend to have all the answers. PRSA's Board of Ethics and Professional Standards will continue providing guidance through Ethical Standard Advisories as relevant issues arise, addressing emerging topics like accountability, agentic AI, cultural misrepresentation and environmental stewardship.

The reality is that AI isn't slowing down. As the guide states, AI agents are already working across platforms, responding to headlines, coordinating messages and managing real-time interactions faster than human teams can react. A communicator's role is expanding beyond message management into system design—governing voice, curating reputation, and embedding ethics into every interaction.

Leading the Profession Forward

AI is a tool. PR and communication professionals are trusted strategic advisors who serve as the conscience of their organizations. This updated guidance gives professionals the framework to use judgment, apply ethical principles and lead with transparency as AI becomes increasingly embedded in our daily work.

Most importantly, it positions PR professionals exactly where they belong in this transformation: not as bystanders watching technology reshape communication and trust, but as ethical leaders who determine how it gets used.

The question isn't whether to use AI—that ship has sailed. The question is whether we'll use it ethically, strategically and in ways that strengthen rather than compromise our profession.

This guide ensures PR pros can answer "yes" with confidence.

Andrea Gils Monzón is a long-time PRSA Member and a current national board member. She collaborated with the Board of Ethics and Professional Standards (BEPS) committee members to update Promise & Pitfalls: The Ethical Use of AI for Public Relations Practitioners.