It can be an uphill battle to detect AI-generated content. As AI models improve, it becomes increasingly difficult to know what people are really seeing online. To combat this, communicators should focus on fostering media literacy among both consumers and the industry, arming them with the necessary skills to understand the difference between fact and fiction.
For years, I’ve been helping tech companies navigate the complexities of AI in marketing, public relations, video production and content creation. I’ve learned how businesses can leverage AI while maintaining brand trust, the importance of transparency in AI-driven campaigns, and why it’s so important for individuals to take control of their own media literacy right now.
Why Is Media Literacy Critical?
AI detection tools like Pindrop, Maybe's AI Art Detector and WeVerify can help identify manipulated media. But these tools, while valuable, have limitations. They struggle to keep up with rapid advancements in AI-generated content, making it difficult to identify deepfakes and other forms of digital manipulation.
In July, Elon Musk, CEO of X (formerly known as Twitter), shared an AI-generated deepfake video impersonating Vice President Kamala Harris on X, without clarifying it as a parody. AI-generated content now goes beyond altering existing videos of public figures. Some circulating videos are nearly indistinguishable from reality, depicting events that never occurred. These include fabricated footage of public figures committing crimes and even current presidential nominees Donald Trump and Harris portrayed in fictional romantic relationships. These sophisticated fakes are often created using AI tools like Grok-2, which is readily available on Musk's X platform. As the November election approaches, such deepfakes could potentially mislead voters and impact the electoral process.
As detection becomes increasingly difficult with each advance in AI, I believe the focus should shift from relying solely on technology to fostering critical thinking and analytical skills among individuals.
Media literacy empowers users to question and analyze the content they encounter. It requires humans to be diligent—to verify the sources of everything seen online, maintain a healthy skepticism toward digital media and become more alert to misinformation.
Balancing AI Innovation with Brand Trust
For businesses, AI brings both opportunities and challenges. On one hand, AI can enhance content creation, allowing companies to stay competitive in a changing market. On the other hand, there's a risk that over-reliance on AI could lessen authenticity and trust.
Many businesses think they have to choose between utilizing AI for innovation or avoiding it to maintain trust. But that doesn't have to be the case. By viewing AI as a tool to augment human creativity rather than replace it, businesses can push boundaries while preserving the trust that cements their brand reputation. This approach requires an investment in technologies like Adobe’s Content Credentials, which provide a verifiable way for video, audio and image creators to label their work in a way that is viewable by anyone who accesses it.
Using blockchain technology to track the origin and ownership of content is another possible solution to help creators establish a transparent record of their work that’s more difficult to alter than traditional methods. This is already happening with NFTs, which have helped creators protect their digital works by proving ownership and preventing unauthorized duplication.
The Role of Transparency and Authenticity in AI-Driven Marketing
As AI-generated content becomes more prevalent, businesses must be clear about how and when AI is used. This transparency maintains trust with consumers, who are increasingly aware of and concerned about the potential for AI-driven manipulation. And whether or not you're using AI to generate content, your brand should maintain a consistent voice that's recognizable to your audience.
As the use of AI-driven content increases, earned media also provides a powerful form of social proof that is more valuable than owned media alone. Earned media's credibility makes it a more potent tool for establishing trust and authority among audiences in an increasingly crowded marketplace.
Practical Steps for Fostering Media Literacy
Still, even as businesses and online content creators work to use AI responsibly and maintain trust, AI manipulation of images, videos and audio will remain problematic.
To combat this growing threat, it's crucial to foster media literacy across all segments of society. Educational institutions, tech companies and media organizations should collaborate to create comprehensive literacy programs that equip people with the skills they need to navigate the digital world.
Public awareness campaigns and community workshops can also help bridge the literacy gap. Every individual should seek out diverse perspectives and fact-check information to avoid the pitfalls of echo chambers and misinformation that can so easily happen online.
Both businesses and governments have a role to play in promoting media literacy. Companies should support initiatives that encourage critical thinking and source verification among their audiences, while governments should allocate more funding for media literacy resources and training programs. The responsibility for developing critical thinking skills ultimately falls on individuals, but it's also a collective effort that requires support across industries.
In the age of deepfakes, media literacy is not just an option—it's a necessity. While AI detection tools can help identify false content, they are not foolproof. The real solution is empowering individuals with the skills to critically evaluate the media they consume.
Jordan Mitchell is the Founder of Growth Stack Media.