In response to the latest evidence that Facebook’s advertising tools allowed ads to be targeted at users posting racist comments or hate speech in their profiles, Facebook said it will change how ads can be targeted.
Last week, Facebook temporarily disabled some of its ads tools following a report from ProPublica that slurs and other offensive language could be used as targeting criteria for advertising. For example, if someone self-identified as a “Jew-hater” or said they studied “how to burn Jews” in their profile, those terms would also show up as potential targeting options for advertisers.
The latest report prompted Sheryl Sandberg, Facebook’s chief operating officer, to address the issue directly in public. On Wednesday, Sandberg wrote in a post on Facebook that she was disgusted by the offensive sentiments and language used, as well as “disappointed that [Facebook’s] systems allowed this.”
“Hate has no place on Facebook – and as a Jew, as a mother, and as a human being, I know the damage that can come from hate. The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part. We removed them and when that was not totally effective, we disabled that targeting section in our ad systems,” she said.
She then said these tools have in the past been particularly powerful for small businesses. To allow businesses, especially small ones, to find customers who might be interested in their specific products or services, Facebook offered them the ability to “target profile field categories like education and employer.”
“People wrote these deeply offensive terms into the education and employer write-in fields and because these terms were used so infrequently, we did not discover this until ProPublica brought it to our attention. We never intended or anticipated this functionality being used this way – and that is on us. And we did not find it ourselves – and that is also on us,” she added.
Following this incident, Sandberg also announced that the social media giant is working to strengthen its ad targeting policies and tools.
First, Facebook will clarify its advertising policies and tighten its enforcement processes, to ensure that content that goes against its community standards cannot be used to target ads. This includes anything that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or disabilities or diseases.
“Such targeting has always been in violation of our policies and we are taking more steps to enforce that now,” she said.
Second, it will add more human review and oversight to its automated processes. After manually reviewing existing targeting options, Facebook said it is reinstating the roughly 5,000 most commonly used targeting terms, such as “nurse,” “teacher” or “dentistry,” which it has verified meet its community standards. From now on, Facebook said, new ad targeting options will undergo more manual review to help prevent offensive terms from appearing.
The company also said it would expand the team dedicated to reviewing and removing content that violates its community guidelines from 3,000 to 4,500 members. Third, Facebook is creating a program that encourages users to report potential abuses of its ads system directly to the company.
“We have had success with such programs for our technical systems and we believe we can do something similar with ads,” Sandberg said, adding that she hopes these changes will prevent similar abuses going forward. She also said that if Facebook discovers unintended consequences in the future, it will be unrelenting in identifying and fixing them as quickly as possible. Sandberg added:
“We have long had a firm policy against hate on Facebook. Our community deserves to have us enforce this policy with deep caution and care.”
This is not the first time the social giant has encountered sensitive issues around race and its targeting tools. Last year, Facebook was reportedly allowing advertisers to exclude users based on their ethnicity when placing ads. The news came after ProPublica, a non-profit investigative journalism group, reported that its test of the system showed advertisers could not only target users by their interests or background, but also exclude a specific group labeled “Ethnic Affinity” from housing and employment ads. Groups listed under “Ethnic Affinity” included African Americans, Asian Americans and Hispanic people.
In response to the allegation, a Facebook spokesman earlier told A+M that the company believes multicultural advertising should be a tool for empowerment and that it takes a strong stand against advertisers misusing its platform. Facebook has since renamed the classification “Multicultural affinity” and no longer allows it to be used in ads for credit, housing or employment.
While the company is still working to mend its relationship with big advertisers after revealing flaws in the accuracy of its measurement tools, online ad revenue remains a critical income segment for Facebook. The majority of the company’s annual revenue now comes from online ads, a figure that has more than tripled in the past four years to US$27.6 billion in 2016, with net profit at US$10.2 billion.