PR Lessons From Facebook’s New Policies to Combat Disinformation

It's been a tumultuous couple of weeks in Menlo Park. Facebook again dominated the news cycle, first with a widely maligned public speech from Mark Zuckerberg, then with a series of new policies intended to keep hostile actors and foreign governments from weaponizing the social network to influence the 2020 U.S. elections.

Shortly after many outlets reported that Zuckerberg had secretly met with conservative pundits to discuss accusations of Facebook's liberal bias, he went on the offensive. Live-streaming a speech at Georgetown University, Zuckerberg said Facebook would not ban political ads, even those purportedly spreading disinformation, on the grounds of free speech.

“People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society,” Zuckerberg said during the Georgetown speech.

“And while I certainly worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100 percent true,” he added. “Banning political ads favors incumbents and whoever the media chooses to cover.”

Media reaction to the speech was swift and largely unfavorable. While USA Today and The Boston Globe lauded Zuckerberg for taking a stand for free speech, many decried what they heard as empty language. Critics also blasted Zuckerberg's intentional blurring of the distinction between free speech and untrue speech.

Transparency without substance is an empty gesture

"Mark Zuckerberg is on a transparency tour," wrote Recode's Roni Malla at the start of a  "Mark Zuckerberg said a lot of nothing in his big speech" story. "But all these attempts at transparency aren’t happening in a vacuum... [d]oes it matter if you’re being transparent if you aren’t really saying anything?"

This is a point that communicators can't hear enough: transparency, on its own, is useless. If audiences peer into your processes and find no substance or consistency, the transparency counts for nothing. Moreover, some suggest the speech illustrates how deluded Zuckerberg has become.

"If Zuckerberg’s relentless optimism is simply a canny P.R. strategy, then perhaps a new combination of incentives—a regulatory tweak here, a mass boycott there—would be enough to make him change course," wrote Andre Marantz in The New Yorker. "The more alarming scenario is that Zuckerberg is actually high on his own supply—that, despite everything, he remains an unreconstructed techno-utopian. If the past few years haven’t been enough to puncture his faith, it’s hard to imagine what would."

A focus on bad behavior, not bad content

If Zuckerberg's optimism is, indeed, a PR strategy, it seems even Facebook realized it was not enough. Likely recognizing that last week's Georgetown speech had, in effect, condoned the political disinformation circulating on the platform, Facebook this week announced a set of policies intended "to help protect the democratic process and [provide] an update on initiatives already underway."

The three-pronged effort focuses on fighting foreign interference, increasing transparency and reducing misinformation. In its announcement, Facebook explains that it has already discovered Russian and Iranian efforts to meddle in the 2020 elections. Because the attacks have grown more sophisticated, Facebook has decided to focus on sniffing out bad actors based on their behavior, not their content.

"In each case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared our findings with law enforcement and industry partners."

One part of the new security measures is Facebook Protect, a two-step verification program intended "to further secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries."

Transparent steps in the right direction

As part of its refreshed commitment to transparency, Facebook also promises to make Pages more transparent. "Increasingly, we’ve seen people failing to disclose the organization behind their Page as a way to make people think that a Page is run independently," the announcement reads. "To address this, we’re adding more information about who is behind a Page, including a new 'Organizations That Manage This Page' tab that will feature the Page’s 'Confirmed Page Owner,' including the organization’s legal name and verified city, phone number or website."

Initially, the verification will appear on Pages with large followings that have gone through the network's Business Verification program. Starting in January, advertisers will be required to display their Confirmed Page Owner.

This feels like a step in the right direction for Facebook. The network previously added a button that informs users why they are seeing an ad (revealing why they were micro-targeted). Whether hostile foreign actors are able to circumvent this verification process, however, will depend on how adept Facebook's security teams are at navigating the vast networks of shell companies and front businesses.

Other efforts to increase transparency include a political campaign spend tracker, along with a label for ads that Facebook considers 'State-Controlled Media.' "We developed our own definition and standards for state-controlled media organizations with input from more than 40 experts around the world specializing in media, governance, human rights and development," the announcement reads. "Those consulted represent leading academic institutions, nonprofits and international organizations in this field."

A siloed, hollow disinformation strategy

All of these efforts to increase transparency feel, for the first time in Facebook's long, slow march to repair public trust, genuinely substantial. Of course, we'll only know for sure once the cards fall and the 2020 electoral process plays out.

It's the third tier of this policy initiative, however, that threatens to undermine the other two: preventing the spread of viral misinformation. How can a platform that makes a steadfast commitment to go after bad actors based on their behavior, not their content, possibly have a handle on what constitutes "viral misinformation?"

"Over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labeled so that people can better decide for themselves what to read, trust and share," the initiative says. "The labels...will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker."

While Facebook's strategy for combating bad behavior and its new transparency tools are powerful reforms, what do they mean if the legitimacy of its content moderation remains questionable?

(Update: In a hearing today, U.S. Representative Alexandria Ocasio-Cortez prodded Zuckerberg over the fact that these new policies will flag false political content, but not remove it.)

Who watches the watchmen?

Showing fact-checkers' assessments is a fantastic new feature, but it isn't enough: how were the fact-checkers chosen? We've seen the problems with Facebook's third-party content moderators in the past, including the psychological damage they incur and the often squalid working conditions they endure. None of that is addressed here, and it's cause for concern about the process by which the company's external partners vet what's accurate and what isn't.

Facebook protects itself from playing the role of 'police' by outsourcing these elements of its new initiatives to third parties. As any PR pro who has had to rely on outside counsel in a moment of duress can tell you, some things are better kept in the family.

Ultimately, whether these policies are effective depends on how the wit and ingenuity of those working at Facebook, and just outside it, match up against those who have weaponized the platform's algorithms to manipulate and deceive users with phony narratives.

For communicators, this saga has proved to be many things: a lens into the distrust that engulfs marketing communications, a lesson on the power of self-regulation, and a reminder of what transparency does (and doesn't) look like in action. Whatever space you're working in, the effectiveness of these new initiatives will set the precedent for Facebook's future. They should factor into your team's relationship with the network moving forward.