The Pandemic’s ‘Infodemic’ and Facebook’s Face-Saving Response

Leilani Carver-Madalon, Associate Professor, Maryville University

In the midst of a global pandemic, we are simultaneously experiencing a massive global infodemic. The World Health Organization (WHO) defines this as “an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it.”

On social media, misinformation often spreads at accelerated rates. In a study conducted in early April by the Reuters Institute for the Study of Journalism at the University of Oxford, approximately one-third of social media users in the United States, Argentina, Germany, South Korea, Spain and the United Kingdom reported seeing misinformation about COVID-19 on social media.

Social media plays a crucial role in this infodemic, Facebook especially. It is still the largest social media platform in the world, with approximately 2.6 billion monthly active users. Facebook has long wielded enormous control over what content, including news, its users see and share. Unlike traditional media, though, it often took little responsibility for that content and turned a blind eye to misinformation.

COVID-19 Changes

Uncharacteristically, Facebook has been proactively targeting COVID-19 misinformation and has enacted a number of crisis communication strategies:

  • Launched a COVID-19 Information Center, which is featured at the top of the Facebook News Feed (it includes real-time updates from health organizations and vetted world authorities, such as the WHO)
  • Gave the WHO free ad space and granted ad credits to other health organizations
  • Removed COVID-19-related misinformation that could contribute to imminent physical harm (see sidebar)
  • Employed fact-checking partners in 45 languages to address claims that don’t directly result in physical harm, like conspiracy theories about the origin of the virus
  • Partnered with the International Fact-Checking Network (IFCN) to launch a $1 million grant program

Many people define misinformation as a message with false content that has the underlying intent to deceive. Yet intention is incredibly difficult to measure, especially on social media.

In a public health context, the stakes of misinformation are extreme: it can translate into life-or-death situations.

Facebook has been seriously contemplating effective, evidence-based strategies to curtail misinformation. One concern has been that notifying users they have shared misinformation may lead to further spread of the false message, what communication theory calls a boomerang effect.

An example came when US voters were notified that Russians had tried to influence their election choices. No one likes to feel duped. Facebook knew it needed a new, better strategy.

A Face-Saving Strategy

In light of the boomerang effect, Facebook enacted a face-saving strategy drawn from Professor Stella Ting-Toomey’s Face-Negotiation Theory. A face-saving strategy is one that protects a person’s (or organization’s) public image, allowing a change of course without embarrassment.

In other words, users will not be directly informed that they have liked, shared or commented on misinformation, nor will they be corrected on the specific piece of misinformation (an experience some users find off-putting).

Instead, users will be sent a notification from WHO’s list of common myths about the virus. They will be encouraged to share it to “help friends and family avoid false information.”

Facebook says third-party fact-checkers will also monitor advertisers. The approach seems to be working. The platform said April 16 it had directed more than 2 billion people to resources from the WHO and other health authorities, both through its COVID-19 Information Center and its pop-ups on Facebook and Instagram; more than 350 million people have clicked through to learn more. Facebook has also committed $20 million in matching funds to fight COVID-19.

CONTACT: [email protected]

SIDEBAR

Facebook and the President: A Delicate Dance

As Professor Leilani Carver-Madalon notes in her article, Facebook has been uncharacteristically proactive. Like other platforms, it historically has been slow to remove posts. Facebook’s removal of COVID-19 misinformation reverses earlier policies that for years made it a worry-free vehicle for posters.

Several years ago, when it was apparent that Facebook would need to change its laissez-faire editorial stance, it did so carefully. Facebook said it would remove posts that advocated violence or other dangerous behavior.

To this day, posts from political leaders remain a sticking point. Twitter and Facebook have refused to pull such posts, arguing they’re important for political discourse. Facebook has come under particular scrutiny because US conservatives believe it’s biased against President Donald Trump.

Moving to the present, in a March 2020 NY Times interview with Ben Smith, Facebook founder Mark Zuckerberg said it’s easier to monitor health content than other posts. COVID-19 material is relatively black and white compared with political content, which can be nuanced, he said. Facebook, he added, is monitoring coronavirus “misinformation that has imminent risk of danger, telling people if they have certain symptoms….” On the other hand, Zuckerberg said, “Things like ‘you can cure this by drinking bleach.’ I mean, that’s just in a different class.”

About one month later, on April 23, the president tested Zuckerberg’s policy. During a televised White House briefing, Trump suggested it would be worth testing the merits of disinfectant and ultraviolet light as ways to combat coronavirus in humans. Most of the numerous posts about the president’s remarks remain on Facebook; in the week after Trump’s comments, the Times found 5,000+ posts, videos and comments promoting disinfectants as a virus cure on Facebook, Instagram, Twitter and YouTube.

The Times found 700+ posts about UV treatments had collected some 50,000 comments and likes. US regulators have said little about this recently. In Britain, though, members of Parliament blasted Facebook for its hands-off treatment of posts where world leaders share false medical information. They cited Trump’s April 23 remarks.

Facebook said it will continue to remove “definitive claims about false cures…including ones related to disinfectant and ultraviolet light.” During its most recent investor call, April 29, Zuckerberg reiterated he’d pull outrageous COVID-19 claims, such as using water to fight the virus.

-- by PRNEWS