AI Bots: Identifying and Defending Your Brand

Graphic: Mary Beth Levin, from her session on AI's role in content.

AI is transforming the way PR pros create content, monitor conversations and analyze data. In this clip, Mary Beth Levin, Manager, Social Strategy and Analytics at the United States Postal Service, explains how to identify bots on social media and in influencer partnerships, as well as how to protect your brand when the bots turn against you.

These tips are part of "The Role of AI: Content Creation, Social Listening and Data Summaries."

This session was part of the PRNEWS Pro workshop "Social Media and Influencer Marketing for PR."

Watch the full session at this link.

 

Full transcript:

[MARY BETH LEVIN]

The dark side of AI is the bots, and bots can appear in a number of different ways. We see this a lot when it comes to elections, for example: they can try to influence popular opinion.

We've also seen influencers hire bots to inflate their numbers. You can actually hire a company that will provide bots to give you a greater number of followers. They're increasingly sophisticated: they will comment, they will like. That's something to be mindful of if you're thinking about hiring someone to serve as an influencer for you.

And then the other issue is people who are pretending to be you. All of these images you see here are folks who have pretended to be our social customer response team, responding to customers on our own corporate social media sites. These are bad actors who are trying to get personally identifying information so they can steal that person's identity. And I've actually reported so many of these accounts that Facebook is now recommending some of these people as my friends. They're not my friends.

So what can you do about all of these challenges? Let's break it down. For the chatbots that are trying to influence opinion, there is special software you can buy that can tell you what percentage of the conversation is coming from these bad actors, from these chatbots. A lot of the conversation about this is about the technology and how effective the technology is. One of the missing pieces of that conversation is: what do you do once those bad actors have been identified? And I'll talk about that in just a sec.

For influencers who can hire bots, one of the things that we look at is engagement rate. But because these bots are more sophisticated (they can like, they can comment), we actually look for click-throughs, and we provide our own links. Rather than relying on the influencer to provide their metrics, we've got our own metrics that we can access at any time, and that's been our workaround for that.
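The speaker doesn't describe how those brand-owned links are built, but a common way to do it is to append UTM parameters to the destination URL so every click lands in your own analytics rather than the influencer's dashboard. A minimal sketch, assuming standard UTM conventions (the campaign and influencer names here are made-up examples):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def tag_link(url: str, campaign: str, influencer: str) -> str:
    """Append UTM parameters so click-throughs show up in our own analytics."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))  # keep any existing query parameters
    query.update({
        "utm_source": influencer,
        "utm_medium": "influencer",
        "utm_campaign": campaign,
    })
    return urlunsplit(parts._replace(query=urlencode(query)))

# Hypothetical example: one tagged link per influencer in a campaign.
print(tag_link("https://example.com/offer", "spring_launch", "creator123"))
# https://example.com/offer?utm_source=creator123&utm_medium=influencer&utm_campaign=spring_launch
```

Because each influencer gets a distinct `utm_source`, bot-inflated likes and comments don't matter: only real click-throughs register against that source.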

And then for those chatbots that pretend to be you: if you notice, a lot of them presented as real people with photographic images, and that's going to be relevant to the case study I'm going to talk about in just a sec. In terms of what does work and what doesn't work, you can report it in platform, and block and delete it. It's kind of therapeutic to go ahead and do that.

On reporting it in platform: Facebook used to tell you whether or not your reporting was effective. It no longer does, but when I was doing it, it was only about 20% effective at the time. Did it prevent additional people from coming? No, but it helped me sleep well at night knowing that I was doing what I could to protect other people, and it prevented those initial bad actors from coming back.

We then reported it within the platform to a specific email. That was not helpful. We then reached out to our customer service rep, who was very proud that all of the accounts I had reported had been taken down. The thing I brought to his attention was, "That's great. That helps me sleep at night because it helps me protect other people, but it doesn't really do anything to help me, because I've already blocked these people. I've already deleted these people. What are you doing to make my job easier? What are you doing to lessen my workload?" And he didn't have an answer for that.

So what does work: we have a pinned post that lets our customers know that the best way to reach us, and actually the only way to reach us, is by hitting that message button. That doesn't reduce the number of bad actors, but we've seen it reduce the number of our customers responding to these bad actors.

And then what's really been helpful in terms of reducing the actual workload from these bad actors is looking at their language and including the language that they use in our profanity filters. That's what's actually made my life easier.
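In practice this matching happens inside each platform's native keyword/profanity filter, where you paste in the phrases to hide. As an illustration of the underlying idea only, here is a minimal sketch of phrase-based matching; every phrase below is a hypothetical example, not from the talk:

```python
import re

# Hypothetical phrases observed in impostor replies; a real list would come
# from reviewing the bad actors' actual messages, as the speaker describes.
BLOCKED_PHRASES = [
    "dm your tracking number",
    "claim your refund here",
    "verify your account details",
]

# One case-insensitive pattern matching any blocked phrase.
_pattern = re.compile("|".join(re.escape(p) for p in BLOCKED_PHRASES), re.IGNORECASE)

def should_hide(comment: str) -> bool:
    """Return True if the comment contains language used by known bad actors."""
    return _pattern.search(comment) is not None

print(should_hide("Please DM your tracking number to resolve this"))  # True
print(should_hide("Thanks for the quick delivery!"))                  # False
```

The design point matches the talk: filtering on the bad actors' own recurring language hides their comments automatically, which is what actually reduces the team's workload.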

As I talked about before, it's great to have this technology that can help you identify these bad actors, but what do you do once you have that information? This is where relationships are important. We're seeing a transition among the platforms: they're relying more and more on community monitoring and community moderation reviews, and they've changed their standards.

So a year ago, when someone was doxxing one of our employees regarding elections, we reported it and it was taken down immediately. This most recent round of elections, we reported it, and they didn't take it down immediately. We said, "You know, this is QAnon. Here's the evidence that QAnon is the one providing this misinformation." They didn't do anything about it. We provided information showing that the employee was actually being harassed. They didn't do anything about it. So we asked them what was going on, because it was very different from our previous experience with them, and what they said was that they were relying on community notes more, and that they're not going to take something down that's misinformation unless it's from a news source. So I'm like, "Okay, how are you defining a news source? Because a lot of people on X" (and that was the platform) "identify as journalists or bloggers or podcast hosts. How are you defining a journalist?" And they didn't have an answer for that. Then the next thing I asked them was, "Well, what about chatbots?" And they didn't have an answer for that either.

So this is an evolving situation, and the best thing you can do is reach out to your customer success representatives on these platforms and find out what their current policy is and, more importantly, what the process is.

Produced by: PRNEWS