Facebook Cracks Down on ‘Deepfakes,’ Mirroring a Trend in User Preferences

As fake news and misinformation continue to plague online discourse, Facebook says it has taken a significant step to deter the spread of doctored videos and photos. In a blog post, the company's vice president of global policy management, Monika Bickert, outlined Facebook's latest plans for detecting and enforcing against manipulated content.

For several years, marketers and communicators have feared the damage done to the brands and public figures they represent by the spread of "deepfakes," videos and photos that use machine learning to replace faces and objects with false, but realistic, images. In her statement, Bickert assured users that Facebook has recruited a global network of organizations (Cornell Tech, the University of California Berkeley, MIT, Microsoft, the BBC and Amazon Web Services, to name a few) to build machine learning solutions capable of detecting and flagging deepfakes.

Bickert noted that any deepfakes detected by Facebook's third-party fact checkers that violate the platform's community standards will be removed. However, those that do not violate community standards will remain live, albeit demoted in the News Feed and prohibited from running as ads.

Facebook's refusal to remove some deepfakes is likely to set off alarm bells for digital strategists. Luckily, any content flagged by fact-checkers as doctored will also carry warnings alerting users who see it, try to share it or have already shared it. This plays into a new training and education effort by the platform to help users—particularly news organizations—avoid further spread of false visuals. The thinking seems to be that if users are exposed to labeled deepfakes, they can inoculate themselves against future deception.

"If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context," Bickert explained. (Social marketers can take a free deepfake detection course, a collaboration between Reuters and Facebook, here.)

Facebook's move mirrors a related trend in audience behavior on Instagram. Social Media Today reports that a new study from Rowan University, Wayne State University, the University of Missouri and the University of Illinois found that "using photo filters in selfies results in fewer likes received, on average, than selfies posted without filters." Researchers studied 2,000 Instagram images and their engagement numbers, controlling for follower count and users' average engagement rates. The findings could signal that just as concern around doctored content (especially in a news context) has grown, social media audiences have become less receptive to visuals that are anything less than true-to-life.

Researchers concluded that "excessive self-presentation in selfies negatively influence[s] other users' evaluation toward selfie takers." In response, marketers should take a hard look at their influencer and employee-generated content to ensure that at least some of that content reflects brand ambassadors, customers and clients as they truly are, rather than the way they wish to be perceived.

Perhaps it's finally time to send that puppy mask filter to doggie heaven.

This post originally appeared on our sister brand, The Social Shake-Up.

Follow Sophie: @SophieMaerowitz