
Michael Brito, Global Head of Data + Intelligence at Zeno Group, discusses how AI's interpretation of sentiment doesn't always mirror the original source, potentially amplifying negative viewpoints. Understanding how AI weighs authority versus sentiment across various channels is crucial. This helps manage brand reputation and correct misinformation.
These tips are part of "Reputation Intelligence: Tools to Track, Measure & Predict Brand Risk."
This session was part of the PRNEWS PRO's Online Training Workshop: "Brand Reputation and Crisis Comms in the AI Era."
Full transcript:
[MICHAEL BRITO]
Let me just say, for years, analytics and intelligence teams have reported on the sentiment of articles, of social media conversations, of Reddit threads... It's a pretty standard practice, and I'm sure you've done it with your teams before.
The challenge here, though, is that AI doesn't always match the sentiment of the original source, which speaks to my point earlier. In some cases it reframes content more negatively, as you can see on the left under sustainability, and in other cases it softens things so they don't look as bad as they should.
On the right we see topics like leadership. These are just example topics, not real data about any brand, just patterns I see generally speaking. You can see things like leadership instability and greenwashing concerns all getting amplified when AI answers repeat those negative anchors. They're repeating context from an article, from Reddit, from X. It's just not a mirror; it's a filter. And unless we're measuring this, we don't know when the filter is working for us or against us.
Now, this type of analysis would require more than just access to a tool like Profound or Brandlie. It would require blending datasets: pulling data from Cision, Muck Rack, or Meltwater, or from social listening tools like Talkwalker, and integrating them to understand the source sentiment of topics like sustainability versus earnings, product launches, and exec interviews in the media.
And then, how are the AI engines interpreting that context? Because I guarantee you it's going to be very different, and we need to manage that. And it's not just ChatGPT. It's Claude, it's Perplexity, it's Google AI Mode and AI Summaries, it's you.com, it's DeepSeek, and there are dozens of other large language models that people are slowly adopting.
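The comparison described here, pulling source sentiment from monitoring exports and measuring how AI answers restate each topic, can be sketched roughly as below. All records, scores, and engine names are hypothetical; in practice the source side would come from exports out of tools like Meltwater or Talkwalker, and the AI side from scoring each engine's answers with whatever sentiment model you already use.

```python
from statistics import mean

# Hypothetical, pre-scored records; sentiment is on a [-1, 1] scale.
source_records = [
    {"topic": "sustainability", "sentiment": -0.2},
    {"topic": "sustainability", "sentiment": 0.1},
    {"topic": "leadership", "sentiment": 0.3},
]
ai_records = [
    {"topic": "sustainability", "engine": "engine_a", "sentiment": -0.6},
    {"topic": "leadership", "engine": "engine_a", "sentiment": -0.1},
]

def avg_by_topic(records):
    """Average sentiment per topic."""
    by_topic = {}
    for r in records:
        by_topic.setdefault(r["topic"], []).append(r["sentiment"])
    return {t: mean(vals) for t, vals in by_topic.items()}

def sentiment_drift(source_records, ai_records):
    """Per-topic gap between AI-answer sentiment and source sentiment.
    Negative drift means the AI reframes the topic more negatively
    than the underlying coverage."""
    src = avg_by_topic(source_records)
    ai = avg_by_topic(ai_records)
    return {t: round(ai[t] - src[t], 2) for t in src if t in ai}

print(sentiment_drift(source_records, ai_records))
```

Run per engine to see which models drift furthest from the source coverage; the "filter working against us" case is simply a large negative drift.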
The last slide here is one of my favorite types of data because you're looking at where these perception gaps appear. On the left we see how different AI engines weigh authority versus sentiment. Again, these are just example channels, very similar to the stakeholder mapping I showed you earlier.
Sometimes they put disproportionate weight on sources like Reddit threads or blogs even when sentiment is highly negative. But in this case, authority really isn't quantified.
This would be a conversation you'd have internally: how are we going to prioritize what's authoritative? Maybe this graph shows our top 50 media outlets. Maybe it shows the NGOs we care about. Maybe it shows just the social media channels. In this example I have specific channels: the Wall Street Journal, the trades, the New York Times, industry analysts, and, if you're in B2B, Gartner and Forrester, plus blogs, YouTube, et cetera. But you might want to think about the authority and the sources differently; it just depends on what you want to do.
But understanding how the model is prioritizing negative versus positive sentiment, and how many citations are pulling through (shown as the size of the bubble), gives you a better idea of how you may want to reach out to correct misinformation, or at least get on the radar of particular outlets.
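One rough way to turn that bubble-chart view (authority weight, sentiment, citation count per channel) into an outreach ranking is sketched below. The authority weights and all the numbers are hypothetical placeholders for the internal prioritization conversation described above.

```python
# Each channel carries an internally agreed authority weight (0-1),
# an average sentiment score (-1 to 1), and how many times AI
# answers cite it (the bubble size). Values are illustrative only.
channels = [
    {"name": "Wall Street Journal", "authority": 0.9, "sentiment": 0.2, "citations": 4},
    {"name": "Reddit", "authority": 0.3, "sentiment": -0.7, "citations": 11},
    {"name": "Industry analysts", "authority": 0.8, "sentiment": 0.1, "citations": 2},
]

def outreach_priority(ch):
    """Score channels the AI leans on heavily (many citations)
    despite low authority and negative sentiment; these are the
    first candidates for correcting misinformation."""
    return ch["citations"] * max(0.0, -ch["sentiment"]) * (1.0 - ch["authority"])

ranked = sorted(channels, key=outreach_priority, reverse=True)
for ch in ranked:
    print(ch["name"], round(outreach_priority(ch), 2))
```

The scoring function is just one plausible heuristic; the point is that once authority is actually quantified, the prioritization stops being a judgment call made slide by slide.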
On the right, this is looking at media coverage and comparing it to how the AI engines are covering certain topics. You can design prompts that ask, "How prevalent is my brand in innovation?" Then you can pull media coverage and start to understand the disparity between it and the AI models.
Maybe they don't think of your brand as a leader in the market; you get great coverage on it, but it's not pulling through on the AI engines. So again, it's a red flag for reputation, because obviously we care about how we're being viewed by the public and others. And again, the tools stay the same for this analysis.
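A minimal sketch of that coverage-versus-AI comparison, using entirely hypothetical counts: the share of media articles framing the brand as an innovation leader against the share of AI answers, pooled across engines, that do the same.

```python
# Hypothetical counts; in practice the media numbers would come from
# a monitoring export and the AI numbers from logged prompt answers.
media_articles = 120      # total brand articles in the period
media_innovation = 48     # articles framing the brand as an innovator

ai_answers = {            # answers per engine mentioning the brand
    "engine_a": {"total": 20, "innovation": 3},
    "engine_b": {"total": 15, "innovation": 2},
}

def prevalence_gap(media_hits, media_total, ai_answers):
    """Positive gap means media frames the brand as an innovator
    more often than the AI engines do: the red-flag case where
    strong coverage is not pulling through."""
    media_rate = media_hits / media_total
    ai_total = sum(e["total"] for e in ai_answers.values())
    ai_hits = sum(e["innovation"] for e in ai_answers.values())
    return round(media_rate - ai_hits / ai_total, 2)

print(prevalence_gap(media_innovation, media_articles, ai_answers))
```

Computed per engine rather than pooled, the same function shows which specific models undercount the brand on a topic.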
Produced by: PRNEWS