Facebook Might Be Making Progress Against Fake News

Facebook has had a reckoning of sorts this year. After the fiasco that was the Cambridge Analytica scandal, and after coming to terms with its outsized influence on election processes, media, and news distribution worldwide, and with the spread of misleading and dangerous misinformation that has led to lynchings and violence in India, Myanmar, and elsewhere, the company has been hard at work to regain user trust, having finally realised the unintended side effects of its growth-at-all-costs mindset and its unabated hunger for ever more personal data from its 2.2 billion-plus user base.

Image: The Next Web

And if a recent collaborative study between Stanford University and New York University is to be believed, it may be working. Titled "Trends in the Diffusion of Misinformation on Social Media," the study analysed the performance and engagement of stories posted by 570 fake news websites from January 2015 to July 2018, finding that "Interactions with these sites on both Facebook and Twitter rose steadily through the end of 2016. Interactions then fell sharply on Facebook while they continued to rise on Twitter, with the ratio of Facebook engagements to Twitter shares falling by approximately 60 percent."
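
To make the study's headline metric concrete, here is a minimal sketch in Python of how an engagement ratio and its decline would be computed. The figures below are invented purely for illustration (only the ~60 percent decline comes from the study; the study's actual data is not reproduced here):

```python
# Hypothetical illustration of the study's headline metric: the ratio of
# Facebook engagements to Twitter shares for stories from fake news sites.
# All input numbers are made up for demonstration purposes.

def engagement_ratio(fb_engagements: float, tw_shares: float) -> float:
    """Ratio of Facebook engagements to Twitter shares for a given period."""
    return fb_engagements / tw_shares

# Made-up monthly totals (in millions) before and after Facebook's interventions.
ratio_late_2016 = engagement_ratio(fb_engagements=200, tw_shares=4)    # -> 50
ratio_mid_2018 = engagement_ratio(fb_engagements=70, tw_shares=3.5)    # -> 20

decline = 1 - ratio_mid_2018 / ratio_late_2016
print(f"Ratio fell from {ratio_late_2016:.0f} to {ratio_mid_2018:.0f} "
      f"({decline:.0%} drop)")  # roughly matches the study's ~60% figure
```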

Rather than outright banning pages that promote fake news, Facebook's response has been to "demote individual posts etc. that are reported by FB users and rated as false by fact checkers" and to demote "pages and domains that repeatedly share false news," a penalty Facebook says costs such posts around 80 percent of their future views, as well as to demonetise offending publishers by cutting them off from its advertising tools. Among other steps, Facebook has also gotten rid of its algorithmically driven Trending Topics section, deprioritised news articles in favour of posts from friends and family, hired close to 3,000 content moderators, acquired the UK-based startup Bloomsbury AI, and deployed AI tools to fight fake news and curb hate speech.
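
As a purely hypothetical illustration of how such a demotion policy might slot into a feed-ranking pipeline, consider the sketch below. None of these names, thresholds, or structures come from Facebook; the only detail taken from the source is the roughly 80 percent cut in distribution:

```python
# Hypothetical sketch of a fact-check-based demotion rule in a feed ranker.
# Only the ~80% distribution cut is from the article; everything else
# (names, thresholds, structure) is invented for illustration.
from dataclasses import dataclass

DEMOTION_FACTOR = 0.2          # rated-false posts keep ~20% of their reach
REPEAT_OFFENDER_THRESHOLD = 3  # strikes before a page/domain is demoted

@dataclass
class Post:
    base_score: float          # score from the usual ranking model
    rated_false: bool          # flagged by users, confirmed by fact checkers
    publisher_strikes: int     # times this page/domain shared false news

def ranking_score(post: Post) -> float:
    """Apply fact-check demotions on top of the base ranking score."""
    score = post.base_score
    if post.rated_false:
        score *= DEMOTION_FACTOR   # per-post demotion (~80% reach cut)
    if post.publisher_strikes >= REPEAT_OFFENDER_THRESHOLD:
        score *= DEMOTION_FACTOR   # page/domain-level demotion
    return score

print(ranking_score(Post(base_score=1.0, rated_false=True, publisher_strikes=0)))
# -> 0.2, i.e. roughly the 80% loss in future views the article mentions
```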


That's not all. Facebook is also set to open a "War Room" in the coming days to monitor election interference ahead of the U.S. midterms this November, according to a New York Times story published last week. "More than 300 people across the company are working on the initiative, but the War Room will house a team of about 20 focused on rooting out disinformation, monitoring false news and deleting fake accounts that may be trying to influence voters before elections in the United States, Brazil and other countries," says the report.

Ultimately, though, I still stand by what I said two years ago, right around the 2016 U.S. presidential election. It's convenient to blame tech for most problems (although it's true that tech firms are often outsmarted by the software they create), but fake news is spreading, and will continue to spread, irrespective of the platform or the medium. If the diffusion of misinformation is faster today, it's not because of Facebook (granted, it facilitates and exacerbates the phenomenon), but because more people use smartphones, which makes Facebook and WhatsApp the perfect conduits for circulating propaganda, fake news and whatnot. The problem, therefore, extends beyond the communication technologies currently in place, and so must the solutions.

Read more about the study below:
  • Facebook's attempts to fight fake news seem to be working. (Twitter's? Not so much.) - NiemanLab
  • Facebook's Crackdown on Misinformation Might Actually Be Working - Slate
