The Global Alliance for Responsible Media (GARM) has created a new strategy as it looks to eliminate harmful online content and create a more sustainable and responsible digital environment that protects consumers, the media industry and society. The alliance, which was first launched in June 2019 by the World Federation of Advertisers (WFA) in partnership with the Association of National Advertisers (ANA), has created a three-pronged action plan. This includes:
- Shared definitions - The alliance has developed, and aims to adopt, common definitions to ensure that the advertising industry is categorising harmful content in the same way. The 11 key definitions, covering areas such as explicit content, drugs, spam and terrorism, look to give platforms, agencies and advertisers a shared understanding of what constitutes harmful content and how to protect vulnerable audiences, such as children. According to WFA, establishing these standards marks the first step needed to stop harmful content from being monetised through advertising.
- Common tools and systems - The alliance will develop and adopt common tools that create better links between advertiser controls, media agencies' tools and platforms' efforts to categorise content. The aim is for these linkages to improve transparency and accuracy in how media investments are steered towards safer consumer experiences, particularly in images, videos and editorial comments.
- Independent oversight - The alliance will also establish shared measurement standards to allow the industry and platforms to fairly assess their ability to block, demonetise and take down harmful content. Transparency via common measures and methodologies for advertisers, agencies and platforms is named by WFA as key to guiding actions that enhance safety for consumers. According to the alliance, adopting key measures and agreeing to independent verification will also be crucial to driving improvement for all parties. A special GARM working group has been formed to activate this strategy in April 2020.
Through this new strategy, the alliance aims to accelerate and integrate efforts on improving safety across the media supply chain. The long-term vision will also be to drive growth and connectivity for society on ad-supported media platforms, which foster and enable civil dialogue.
This comes on the back of an estimated 620 million pieces of harmful content found and removed by YouTube, Facebook and Instagram between July and September 2019.
However, approximately 9.2 million pieces of harmful content still reached consumers during that three-month period, equating to roughly one piece of harmful content viewed per second, WFA said.
Stephan Loerke, WFA CEO, said advertisers can play a unique role in improving the digital ecosystem. "Given that brands fund many of the platforms and content providers, we can ensure society gets the benefits of connectivity without the downsides that have sadly also emerged. These first steps by the GARM are a significant move in the right direction, which will benefit consumers, society and brands," he added.
Social media platforms such as Instagram and YouTube have continuously bolstered their systems to detect and remove harmful content. Instagram introduced an AI-powered feature that notifies users when their comment may be considered offensive before it is posted, allowing them to reflect on and undo the comment and sparing the recipient a notification about it.
Meanwhile, YouTube concluded in August 2019 that users are seeing less borderline content and harmful misinformation due to its investment in policies and resources to combat hate speech and violent content. The Google-owned platform has also partnered with lawmakers and civil society around the globe to limit the spread of violent extremist content online.