Social media platforms have a lot of work to do to gain the trust of the American public.
“Three-quarters of U.S. adults say technology companies have a responsibility to prevent the misuse of their platforms to influence the 2020 presidential election, but only around a quarter say they are very or somewhat confident in these firms to do so,” according to a recent survey from Pew Research Center.
Sounds like Twitter, Facebook, and others have a reputation problem.
Eric Yaverbaum, CEO of Ericho Communications and author of “PR for Dummies,” agrees.
“While it’s 'nice' that Facebook and Twitter have talked about, and introduced, new policies aimed at reducing misinformation, where they continue to fall short is enforcement,” Yaverbaum said. “The companies must work to implement effective enforcement strategies that work every time and not just some of the time.”
Actions speak louder than words, and as Yaverbaum noted, the companies are introducing strategies almost daily to fight misinformation, which can be tough to control in its many forms.
For example, last week, Twitter announced policies surrounding false claims and the election.
"Starting next week, we will label or remove false or misleading information intended to undermine public confidence in an election or other civic process. This includes but is not limited to:
- False or misleading information that causes confusion about the laws and regulations of a civic process, or officials and institutions executing those civic processes.
- Disputed claims that could undermine faith in the process itself, e.g. unverified information about election rigging, ballot tampering, vote tallying, or certification of election results.
- Misleading claims about the results or outcome of a civic process which calls for or could lead to interference with the implementation of the results of the process, e.g. claiming victory before election results have been certified, inciting unlawful conduct to prevent a peaceful transfer of power or orderly succession."
According to The Wall Street Journal, Google said “it would screen more auto-complete suggestions to avoid voters being misled.” A company blog post revealed that this includes removing search predictions “that could be interpreted as claims for or against any candidate or political party,” as well as auto-complete statements about voting methods and requirements, such as “you can vote by text” or “you can’t vote by text.” Users can still search for such topics individually, but auto-complete will not oblige.
Facebook, which continues to try to overcome missteps over misinformation that surfaced on the platform during the 2016 election, announced a ban on political advertising for the week leading up to the 2020 U.S. presidential election. In addition, Facebook said it is working to monitor and root out posts that attempt to dissuade voting. The platform also already has plans to deal with post-election claims of false victories.
But are policies enough to sway the American user base? Yaverbaum said the proof will be in the actions the companies take to enforce their policies.
“Whether this means hiring more fact-checkers or building a smarter algorithm more versed at identifying and removing misinformation, consistent enforcement will be the key to regaining the trust of its user base,” Yaverbaum said.
Nicole Schuman is a reporter for PRNEWS. Follow her @buffalogal