Facebook removes 20 million pieces of COVID-19 misinformation

Facebook and Instagram have removed 20 million pieces of content worldwide for violating policies on COVID-19 related misinformation. According to Facebook's Community Standards Enforcement Report, over 3,000 accounts, pages, and groups were also removed for violating the rules against spreading vaccine misinformation.

The tech giant also displayed warnings on more than 190 million pieces of COVID-19-related content on Facebook that its third-party fact-checking partners rated as false, partly false, altered, or missing context. When a piece of content receives one of these ratings, Facebook adds a prominent label warning people before they share it and shows it lower in people's feeds.

The report also highlighted progress on hate speech and child safety. The prevalence of hate speech on Facebook decreased for the third consecutive quarter, to about five views per 10,000 content views in Q2, down from five to six views per 10,000 in Q1.

Facebook also removed 31.5 million pieces of hate speech content from its platform, up from 25.2 million in Q1, and 9.8 million from Instagram, up from 6.3 million in Q1. The company attributed this to continued improvements in its proactive detection, including its investments in AI.

To ensure child safety, Facebook also created two new reporting categories under the topic of child endangerment: child nudity and physical abuse, and sexual exploitation. The company previously only reported on one metric: child nudity and sexual exploitation of children. This expanded metric seeks to provide a more detailed, transparent overview of its efforts in this space.

The tech giant has since removed 2.3 million pieces of content related to child nudity and physical abuse on Facebook and 458,000 on Instagram. It also removed 25.7 million pieces of child sexual exploitation content on Facebook and 1.4 million on Instagram.

On top of this, 16.8 million pieces of suicide and self-injury content were removed from Facebook, compared to 5.1 million pieces in Q1 2021. This resulted from a technical fix which allowed the team to go back and catch violating content it had previously missed. Meanwhile, Facebook also removed 34.1 million pieces of violent and graphic content, compared to 30.1 million pieces in Q1. On Instagram, 367,000 pieces of organised hate content and three million pieces of suicide and self-injury content were deleted in Q2 2021.

