Why Facebook Needs to Fix Its Ratio of Human to A.I. Content Reviewers

Was Mark Zuckerberg the luckiest man in America last Friday, September 28? That was the day Facebook alerted the public to a data breach compromising the information of more than 50 million users of its platform, Zuckerberg among them. This time, Facebook earned props for making the breach public quickly, just three days after it occurred. It wasn’t nearly as swift to speak up about the Cambridge Analytica scandal.

Zuckerberg caught some luck in that Facebook made this latest breach public while the news cycle had barely any oxygen left to absorb it. The cable news networks’ "all Kavanaugh, all the time" coverage left little or no room for Facebook’s largest data breach yet.

Who Programmed the Spam Filter?

Out of the data breach news came another fascinating story that initially went largely unnoticed: some users were unable to post news stories about the breach because Facebook’s spam filter ruled those posts bogus. “Our security systems have detected that a lot of people are posting the same content, which could mean that it’s spam. Please try a different post,” a Facebook pop-up said when users tried to post the story.

As you can imagine, the small group on social media not following the Kavanaugh news blasted Zuckerberg and company for a faulty spam filter that just happened to block anti-Facebook stories.

The Apolitical Political Ads

Things became even more interesting today when The Washington Post took another swing at Facebook’s filtering system. The story begins with reporters looking through Facebook’s new public database of political and issue ads, a welcome example of transparency. Unfortunately for Facebook, Post reporters found a slew of blocked ads that didn’t seem political. The blocked ads had one thing in common, however: they all had a connection to LGBT themes and words, and that was enough for Facebook’s filter to block them.

Our Mistake

In response to the Post, Facebook admitted most of the ads were mistakenly blocked (good), though it refused to say whether humans or an algorithm did the blocking (bad in terms of transparency). This opens a huge can of worms, which we’ll get to below.

The Post story also features interviews with several leaders of the groups whose ads were blocked. Most seemed to find Facebook’s ad-blocking policy confusing; some were told that, had they registered with Facebook in advance, their ads would have run. In other words, Facebook treats LGBT content as a political issue.

A Facebook representative told another leader that his organization wasn't political, yet his ads were blocked anyway, with no explanation as to why. Facebook requires political advertisers to register and provide funding details. Upon being informed of that requirement, several leaders questioned why they would need to register as political advertisers if they’re not political organizations.

Such confusion makes Facebook appear to have a flimsy policy or none at all.

Filter Repair Needed

These instances make clear that the spam filter, Facebook’s most crucial tool for keeping bogus material and ads from influencing elections, needs fixing. Particularly with breaking news, a filter should be able to recognize that a lot of people will want to post the same story from a credible news outlet. Certainly a filter’s human trainers and programmers understand that.
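
To make the point concrete, here is a minimal sketch, in Python, of how a duplicate-content check could exempt links to credible outlets. The domain allowlist, the threshold, and the function names are all hypothetical illustrations, not Facebook’s actual system:

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real system would query a maintained
# reputation service rather than hard-code outlets.
TRUSTED_NEWS_DOMAINS = {"washingtonpost.com", "nytimes.com", "apnews.com"}

# Hypothetical threshold for how many identical posts per hour look "spammy."
DUPLICATE_POSTS_PER_HOUR_LIMIT = 500


def is_trusted_news_link(url: str) -> bool:
    """Return True if the URL's domain is on the allowlist."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):  # normalize "www.example.com" to "example.com"
        host = host[len("www."):]
    return host in TRUSTED_NEWS_DOMAINS


def should_block_as_spam(url: str, duplicate_posts_last_hour: int) -> bool:
    """Block only when a post is heavily duplicated AND does not link to a
    credible outlet; breaking news is expected to be reposted en masse."""
    if duplicate_posts_last_hour < DUPLICATE_POSTS_PER_HOUR_LIMIT:
        return False
    return not is_trusted_news_link(url)
```

The design idea is simple: a spike in identical posts is only suspicious when the shared link isn’t from a reputable source. For breaking news, that spike is exactly what you’d expect.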

The Proper Mix

Machines, and people, can make mistakes when it comes to blocking or approving material. Within the machine learning community, most experts agree that a human-in-the-loop system is the proper way to ensure some level of quality control over A.I. Facebook uses both humans and machines to monitor material, though it guards the precise ratio. Recent events make clear that finding the right mix of humans and machines to separate legitimate content from spam remains daunting. So does the thought of protecting data and monitoring a platform with more than 2.2 billion monthly users.
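
In practice, a human-in-the-loop setup often means the A.I. decides only the clear-cut cases and escalates everything else to people. Below is a minimal sketch of that routing logic, again in Python with hypothetical thresholds and names; nothing here reflects Facebook’s actual architecture:

```python
from dataclasses import dataclass

# Hypothetical confidence bands; real systems tune these empirically.
AUTO_APPROVE_ABOVE = 0.95  # model is confident the content is legitimate
AUTO_BLOCK_BELOW = 0.05    # model is confident the content is spam
# Everything in between is escalated to a human reviewer.


@dataclass
class ModerationDecision:
    action: str   # "approve", "block", or "human_review"
    score: float  # the model's legitimacy score, kept for auditing


def route_content(legitimacy_score: float) -> ModerationDecision:
    """Decide automatically only at the extremes; send ambiguous cases
    (the gray zone between the two thresholds) to human reviewers."""
    if legitimacy_score >= AUTO_APPROVE_ABOVE:
        return ModerationDecision("approve", legitimacy_score)
    if legitimacy_score <= AUTO_BLOCK_BELOW:
        return ModerationDecision("block", legitimacy_score)
    return ModerationDecision("human_review", legitimacy_score)
```

The humans’ rulings on escalated cases can then be fed back as training data, which is what closes the “loop” and, over time, shrinks the gray zone the model can’t handle on its own.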

Seth Arenstein is editor of PR News. Follow him: @skarenstein