Facebook, YouTube and Twitter, in collaboration with marketers and agencies, have agreed to adopt a common set of definitions for hate speech and other harmful content and to collaborate with a view to monitoring industry efforts to improve in this critical area. This will be done through the Global Alliance for Responsible Media (GARM), a cross-industry initiative founded by the World Federation of Advertisers (WFA). The agreement comes after 15 months of talks within GARM between major advertisers, agencies and key global platforms, with the first changes to be introduced this month.
The four key areas for agreement are:
- Adoption of GARM common definitions for harmful content;
- Development of GARM reporting standards on harmful content;
- Commitment to have independent oversight on operations, integrations and reporting;
- Commitment to develop and deploy tools to better manage advertising adjacency.
These areas of action are designed to boost consumer and advertiser safety, with agreed individual timelines for each platform to implement changes across the different areas.
The new agreement will establish a set of common definitions that creates a shared baseline on harmful content. According to WFA, all platforms will now enforce these standards consistently as part of their advertising content standards, and will consistently label content against the common definitions. The definitions, which GARM has been developing since November, aim to add depth and breadth on specific types of harm such as hate speech, acts of aggression and bullying. The move comes as WFA finds that varying definitions of harmful content across platforms can make it hard for brand owners to make informed decisions about where their ads are placed, and to hold platforms to account.
The agreement will also bring harmonised reporting designed to drive better behaviours. All parties have now agreed to harmonised metrics on consumer safety, advertiser safety, and platform effectiveness in addressing harmful content. Work to harmonise metrics and reporting formats will continue between September and November, with the system to launch in the second half of 2021.
Additionally, the agreement provides for independent oversight of major platforms, intended to drive better implementation and build trust among brands, agencies, and platforms. The goal is to have all major platforms fully audited, or in the process of being audited, by the end of 2020. "With the stakes so high, brands, agencies, and platforms need an independent view on how individual participants are categorising, eliminating, and reporting harmful content. A third-party verification mechanism is critical to driving trust among all stakeholders," WFA said in a press release.
Facebook, YouTube, and Twitter will also need to develop advertising adjacency solutions to help advertisers have visibility and control so that their advertising does not appear adjacent to harmful or unsuitable content. Under the agreement, platforms will provide a solution through their own systems, via third party providers or a combination thereof. Those that have not implemented an adjacency solution will need to provide a development roadmap in Q4 of 2020. In addition to Facebook, YouTube, and Twitter, WFA said there are "firm commitments" from TikTok, Pinterest and Snap to provide development plans for similar controls by year end.
WFA believes that these standards should apply to all media, not just digital platforms, given the increased polarisation of content regardless of channel. It also encourages its members to apply the same adjacency criteria to all media spend decisions, irrespective of the medium.
"The issue of harmful content online has become one of the challenges of our generation. As funders of the online ecosystem, advertisers have a critical role to play in driving positive change and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements. A safer social media environment will provide huge benefits not just for advertisers and society but also to the platforms themselves," said Stephan Loerke, WFA CEO.
Raja Rajamannar, CMO at Mastercard and WFA president, said: "We are delighted that GARM has made such significant progress in such a short period of time. I know these discussions have not been easy, but these solutions, when implemented, will offer more choice and control for advertisers and their agencies by supporting content that aligns with their values."
However, this agreement is not a declaration of victory, according to Jacqui Stephenson, global responsible marketing officer of Mars. "There is much work to be done and we rely on all of our platform partners to follow through on their commitments with the pace and urgency these issues demand. Nevertheless, this is an important step in making social media a safer place for society and it's important to recognise the progress and build further momentum as a result," she said, adding that this is a "meaningful milestone" that will help make social media an experience that is safer for everyone, consumers and brands alike.
Separately last week, WFA unveiled an advertiser-centric framework for cross-media measurement, accompanied by a proposed solution designed to give advertisers a much greater understanding of the reach and frequency of their advertising efforts. The proposal leverages a virtual ID and differential privacy methods to preserve privacy while preventing double-counting of impressions across media. It was developed in partnership with digital platforms, including Facebook and Google, and will now be tested in the UK and US, with ISBA and the Association of National Advertisers respectively leading local efforts.