Navigating Trust Challenges With AI Disclosure


Since the launch of OpenAI's ChatGPT and other large language models, AI's role in business, marketing and communication has sparked questions about transparency and trust. Should companies disclose AI involvement in content creation, or could doing so harm their credibility? Research from Big Valley Marketing indicates that while transparency often builds trust, AI disclosure may actually have the opposite effect: around 80% of people report mistrust in AI, so disclosure risks widening existing trust gaps.

PRNEWS talked to Tim Marklein, Founder and CEO, Big Valley Marketing, about the role of AI in content, how communicators can bridge the trust gap and what disclosure can do for a brand, organization or public figure.

PRNEWS: Why do you think there is such a disparity in consumer trust when it comes to AI involvement in content creation?

Tim Marklein: The lack of trust is largely due to a fear of the unknown, which is typical with most new technologies. That will likely fade over time. The more lasting challenge is that people are worried about their jobs. Is AI the next great productivity breakthrough, or is this the moment machines start to take over the world—or, more specifically, YOUR job?

PRNEWS: How do you see AI disclosure policies evolving in sectors like academia and journalism compared to less-regulated industries like marketing?

Marklein: Academia and journalism have outlined the most restrictive guidelines for generative AI. That’s fueled by plagiarism and copyright concerns, which undercut the foundation of both fields. In other industries, automation and productivity are more powerful forces—especially as individuals and companies try to do more with less. That doesn’t mean original thinking and creative ideation go away. We expect both will become important differentiators as AI commoditizes other parts of the marketing mix.

PRNEWS: When should companies disclose AI involvement in content creation? Are there specific thresholds that make disclosure more critical?

Marklein: The challenge right now is that AI disclosure means different things to different people. Some would argue there’s no need for a writer to disclose they used AI for research or editing help, any more than they would disclose their use of Google search and Microsoft spellcheck. Others argue that disclosure is a must, since people are concerned about bias, plagiarism and misinformation. The assumption is that “transparency breeds trust,” which doesn’t really work when [according to our research] 80% of the general public doesn’t trust AI.

At this point, we have two specific recommendations. First, we agree with several trade groups that focus on “substantial use” as a milestone for triggering AI disclosure. It’s an established legal and business threshold that applies well to AI content creation. Second, we recommend that the disclosure be short and specific, focusing on the who and why of AI usage more so than the what and how. People will evaluate for themselves whether the use was responsible (who) and appropriate (why).

PRNEWS: How important is the terminology used in AI disclosures? Can certain words or phrases influence consumer perception?

Marklein: It’s still too early to tell. Most of the current AI disclosure guidelines are first-wave efforts that draw primarily from an organization’s existing ethics, brand or professional guidelines. There’s very little empirical research to demonstrate what actually resonates with customers and citizens. That said, since trust is the goal, we recommend people use very human language in their AI disclosures—avoid legal terminology, clichés, corporate speak and industry jargon.

PRNEWS: How can proactive labeling of AI-generated content improve or harm a brand’s reputation?

Marklein: Proactive disclosure of AI use—whether it’s part of a product, service or content—is likely a net positive in the long run, fueling both trust and adoption, but it comes with an array of short-term challenges. Brands need to be thoughtful and authentic in their AI disclosures, listen attentively to stakeholder feedback, and adapt accordingly. As always, trust is earned over time through both words and actions.

PRNEWS: Are there examples of companies successfully navigating AI disclosure, and what can we learn from their approaches?

Marklein: It’s still too early to say which companies are successfully managing AI disclosure. We found and reviewed dozens of disclosure policies from companies like Microsoft, Meta, TikTok, McKinsey, Home Depot, Sports Illustrated and Ernst & Young. Policies naturally vary based on industry, audience and company culture. The key takeaway is that every company should establish clear and informed AI policies, so they have a structured approach to guide employees and address customer questions.

PRNEWS: Do you see AI disclosures becoming a standard expectation for consumers in the near future, similar to privacy notices?

Marklein: Yes, we believe AI disclosures will become a standard expectation—for consumers, businesses, employees, citizens, etc. However, they will fail if they become like privacy notices, which are dreadfully long and overly legalistic. Marketers and communicators need to take a leadership role to ensure simple, authentic language that resonates—while actively soliciting and internalizing feedback from stakeholders.