
David Bar-Aharon, Global Director, Private Sector at Cyabra, conducted an informative Q&A session following his session on "AI and Reputation: Brand Perception, Disinformation and Deepfakes." Questions from the audience included what size a company has to be to consider using tools against disinformation and what types of tools can be used for threat protection, among others.
This session was part of the PRNEWS Pro workshop "AI for PR."
Watch the Q&A segment here or watch the full session at this link.
Full transcript:
[KAYLEE HULTGREN]
So we do have a question here from Carrie, I'm not sure how to say that. She's asking, are you able to pinpoint locations of where these fake accounts are being generated from?
[DAVID BAR-AHARON]
Yeah, that's a very good question. The organizations that create these fake accounts work very hard to make sure that they're not caught. Even if we do give you a location of where a fake account is coming from, and we do that based on publicly available information, it's very hard to pinpoint who funded it or where it actually came from. What we do at Cyabra is really understand where these foreign actors and campaigns are coming from and the patterns that each country uses, including against brands. We give you educated guesses, but at the end of the day, we can't give you an exact address for where these fake accounts are coming from. It's very difficult to do that, of course, and they're very good at hiding.
[KAYLEE HULTGREN]
We have another question about tools. I'm assuming Cyabra is one of these. The question is, what tools can organizations use for proactive threat detection and real-time monitoring? Is there a group of tools? Different types of tools?
[DAVID BAR-AHARON]
Yeah, so I think the first step is obviously traditional social listening tools and staying on top of what people are saying about your brand. Many, many organizations have this in place or use a PR agency that has these tools in place. When it comes to the real threats on social media and the things that I shared today, obviously tools like Cyabra are the way to go in terms of being proactive. Many tools today only look at a narrative after it's already become a narrative. We don't believe in that approach. We want to look at even that one lone-wolf fake account that is trying to push a narrative forward, catch it before it goes viral, and make sure that the brands and the agencies know ahead of time.
[KAYLEE HULTGREN]
We have another question for you from Trevor. What size does your company need to be before this is something they should truly be concerned about? I imagine it's mostly large international brands, but should small companies be concerned also?
[DAVID BAR-AHARON]
Yeah, so this is a great question and it comes up a lot. Many small brands believe that they aren't potential targets of misinformation campaigns. Of course, the large Coca-Colas of the world and international brands are attacked by fake news on a daily basis. But we've seen it with many of our clients, from, you know, local banks to paper manufacturing companies that aren't that large at all.
They're often attacked by Greenpeace bots, by different accounts trying to spread negative sentiment toward their CEO. You never know when your company will be attacked or what the motives behind the scenes are. We recently helped a small hospital in Arizona, where we found narratives around their CEO that were completely fabricated. And this was a whole campaign around a hospital with about 100 employees. So it can hit at every level.
[KAYLEE HULTGREN]
Another question, what are the expected future counters to these kinds of crises? Will there be true verification from social account development to manage bot-based misinformation?
[DAVID BAR-AHARON]
So that is the dream for Cyabra, of course, and for, I think, any company dealing with this. I think the social media companies are comfortable right now, in the sense that the fake accounts are also supporting their business model when it comes to selling ads. It doesn't really matter if a real person saw your ad or a fake person saw your ad. So as much as we want to partner with them and work with them to find these fake accounts and disinformation campaigns and help brands and governments sift through the noise, this is something that is going to be done more in the private sector by companies like Cyabra, and hopefully it will be adopted by the social media platforms with time. That's the main goal.
[KAYLEE HULTGREN]
I just had a thought. I mean, there's less moderation rather than more moderation these days, right, with social media platforms. So it seems like that's getting pushed even further into our future, perhaps.
[DAVID BAR-AHARON]
Yes, it seems like there's less moderation. The goal of organizations and think tanks, and obviously of any company working to stop misinformation, is market education as well: showing the world that this actually exists. Whether you're scrolling through Twitter or X or Facebook or TikTok, whatever it may be, after this webinar, think twice about the actor, about who that person really is when you're reading their comments, going to their profile, and checking whether there's an agenda behind the scenes. These are the things we can do right now to help you sift through the data and understand whether something is factually real or fake. But it takes a big effort and an army of companies to push this forward.
[KAYLEE HULTGREN]
In your view, are you seeing many more deepfakes, for example, being produced by AI? I know that people were duped by some of the examples we've all seen go viral of an executive making a speech or saying something.
Are consumers a little bit more savvy or are you seeing the tools getting better? What are your thoughts on how that affects brand reputation?
[DAVID BAR-AHARON]
Yeah, so in terms of deepfake videos, I still think that humans today have a great eye for telling whether a video is real or fake; when it comes to humans on screen, some part of the video might be a little bit off and you get a sense for it. But these tools are only improving, and they're going to become indistinguishable very, very soon. Many of the more expensive ones to create are actually indistinguishable right now.
But for images, we're seeing that already happening. It really is difficult to tell whether an image is real or fake. And that's why, again, tools that exist today that can detect whether an image or a video is AI-generated are extremely important; organizations need that capability to verify.
Produced by: PRNEWS