Facebook removed more than 25 million pieces of hate speech content from its platform between January and March 2021. The social media giant said in a blog post that nearly 97% of that content was removed before it was reported by users. Instagram also removed more than six million pieces of content, 93% of it before anyone reported it.
In the same post, Facebook also addressed the “abhorrent racist abuse” faced by some members of England’s football team on its platform after the Euro 2020 final. The social media platforms were criticised for failing to curb the racist emojis directed at England’s Black players after the team’s defeat by Italy in the final on 11 July, reported Bloomberg. According to Bloomberg, the Black England players received racist and abusive messages that included banana and monkey emojis. While both Facebook and Twitter have reportedly removed posts and suspended users since the game, their efforts at policing emojis have been deemed insufficient, Bloomberg reported.
Meanwhile, Facebook also updated its content moderation tools for group admins earlier in June. In the updated Admin Assist, group admins are able to set up certain criteria to automatically moderate both posts and comments. Facebook’s AI would also detect and notify admins when there may be contentious or unhealthy conversations in their group through a new type of moderation alert called conflict alerts.
Similarly, Twitter removed an estimated 3.8 million Tweets from July to December last year. The company said in an update on its Transparency Centre that the removed Tweets had “violated the Twitter Rules”. More than 70% of the Tweets removed had fewer than 100 impressions, with only 6% having more than 1,000 impressions at the time of removal, Twitter said.
The reports from the two social media giants follow closely on the heels of Facebook’s announcement yesterday that it will invest more than US$1 billion in content creators across Facebook and Instagram. The company said this will include bonus programmes that pay creators who hit certain milestones on its apps, including Instagram, as well as funding for users to produce content.
At the same time, Twitter also announced yesterday that it is axing its Fleets function, citing low uptake as the reason for removing the expiring-Tweets feature with effect from 3 August. Fleets occupied the space at the top of the timeline on the social media interface and expired within 24 hours of posting. The removal comes a mere eight months after the feature’s global rollout, and just a month after Twitter experimented with Fleet ads for brands.
Separately, Google has also stepped up its content regulation. In March this year, it wrote in a blog post that it had blocked or removed 3.1 billion ads last year for violating its policies and restricted an additional 6.4 billion ads. It also removed 981 million ads containing sexual content and 168 million ads with dangerous or derogatory content. At the same time, Google invested in automated detection technology to scan the web for publisher policy compliance at scale.