Facebook has apologised after its AI mistakenly identified African American men as primates in a video, according to multiple media reports including the Associated Press, The New York Times, and South China Morning Post. This came after British tabloid the Daily Mail published a video showing altercations between African American men and police officers, media outlets said. Facebook users who watched the Daily Mail video were subsequently asked if they would like to "keep seeing videos about primates".
In response to MARKETING-INTERACTIVE, a Facebook spokesperson said that it has since disabled the entire topic recommendation feature so it could investigate the cause and prevent this from happening again. "This was an algorithmic error on Facebook and did not reflect the content of the Daily Mail’s post. We apologise to anyone who may have seen these offensive recommendations and to the Daily Mail for its content being subject to it," the spokesperson added.
In 2018, NYT reported that facial recognition technology correctly classifies an individual's gender 99% of the time when a white man is pictured. Quoting research from the M.I.T. Media Lab, NYT said errors increased as skin colour became darker, with the error rate rising to nearly 35% for images of darker-skinned women.
Facebook is not the only tech company to have experienced a racial blunder caused by automated systems. In 2015, multiple media outlets including NYT and BBC reported that Google Photos had mistakenly labelled a photo of two African Americans as "gorillas". Google apologised, but The Guardian later reported that its fix was simply to block the terms "gorilla", "chimp", "chimpanzee" and "monkey" from Google Photos searches.
Twitter and Microsoft have also made headlines over the years for similar incidents. Last year, Twitter users found that its image-cropping algorithm automatically focused on white faces over Black ones, prompting Twitter to apologise, The Guardian reported. Microsoft shut down its AI chatbot Tay in 2016 after it began spewing lewd and racist tweets, CBS News said.