Keeping deepfakes real: Could they be used for good?

Recently, a Malaysian minister denied his involvement in a viral gay sex video, prompting speculation that deepfake technology was at work. On one end of the spectrum, companies worldwide use machine learning to make informed decisions faster; on the other end sits the rise of artificially manufactured content – much of which we now refer to as deepfakes.

While deepfakes have been an issue for several years now, one recent incident that caught the attention of media outlets such as The Washington Post and The Guardian was social media giant Facebook's decision in May not to remove a doctored video of US House Speaker Nancy Pelosi that made her appear drunk.

Speaking to Marketing on the matter, a Facebook spokesperson said the platform had nonetheless “dramatically reduced” its distribution immediately after its fact-checkers marked it as false. Additionally, users sharing the video were quickly alerted to its falsehood.

She explained that while Facebook currently removes content that violates its community standards, it does not have a policy stipulating the information posted on the platform must be true. “We work hard to find the right balance between encouraging free expression and promoting a safe and authentic community, and we believe that reducing the distribution of inauthentic content strikes that balance,” said the spokesperson.

Walking the talk, the same policy was applied a few weeks later when a video of Facebook CEO Mark Zuckerberg himself surfaced on Instagram, in which he appears in a fake CBS News broadcast talking about the power Facebook wields, the spokesperson explained. To limit the searchability of the fake video, it was filtered from hashtags and Instagram’s “Explore” page. These steps were all in tandem with how other content violating policies is treated on Instagram, she assured Marketing.

Mobile platforms, in the meantime, have also seen their fair share of false content. FakeApp, an application that applies AI-generated facial reconstructions to videos, gained worldwide attention when a user created a fake video of Barack Obama insulting US president Donald Trump. The developer of DeepNude, meanwhile, publicly apologised for inadequate safety measures and shut the application down after fake nude pictures of women worldwide were uploaded to pornographic sites.

Clearly, deepfake content is not about to go away anytime soon.

A silver lining?

But while the damage of such deepfake content on an individual or a company’s reputation is clear, Anthony Baker, executive technology director at R/GA Tokyo, a part of the IPG network, said the technology, when used in the right way, can also help to elevate culturally relevant messages, and provide new video-based services and experiences.

He explained: “Artificially generated film can be leveraged for good, and to create value both for brands and consumers. This includes empowering the creation of localised and translated content, and light personalisation without the high cost of filming multiple variations.”

In April, the agency’s London branch conceptualised the “Malaria Must Die” campaign, which rides on the upward trend of voice technology in everyday life. The short film, which uses the AI video synthesis technology often behind deepfakes, saw David Beckham appearing to speak nine languages; the voices are in fact those of malaria survivors and doctors. The video invites others to add their voices, forming a piece of audio art known as a sound sculpture.

Baker said that artificially generated videos could potentially be a “great tool” to expand the reach and relevancy of messaging, content and brand relationships across diverse ethnicities and cultures.

However, there are pitfalls to avoid. “Marketers must be hyper-aware of the implications of content capture, usage and storage, especially with video featuring faces. Data must remain secure and not vulnerable to hacking,” he explained.

Besides getting the necessary consent, he added that marketers have to communicate in a clear, understandable and relatable way to people engaging with deepfake content about how their data might be used and stored, and how they can exercise control over it. He cautioned: “Marketers should also be aware of the risk to brand equity – leveraging an obscure tech provider could end up seriously damaging perception and positioning.”


Meanwhile, Samsung AI Center has also opened up on the possible educational value of deepfakes. A research paper published in May described how its system can create a “realistic neural talking head” from just one image. In a video, a researcher shared living, talking portraits of famous figures, including the Mona Lisa, Albert Einstein and Marilyn Monroe.

Consumers’ reception

For Rod Strother, former director of Lenovo’s digital and social centre of excellence, an experiment with deep learning did not fare as well as he had hoped when his team trialled a content intelligence engine in 2013.

“Our own curated content resonated better with our audience compared to those pieces by the engine. The expectation was that over time, it would learn and improve, but the results were consistent – the team just knew the audience better,” said Strother, who was most recently the VP of StarHub.

However, he acknowledged that today, with more developed technologies, surpassing human intelligence may now be a possibility. He added: “I’m not saying there’s no place for machine learning, but I’ve not seen a presentation yet where I was convinced it could replace what could be learned through the community managers, social listening and good old-fashioned talking to your audience face to face.”

Elena Mirandola, managing director for Southeast Asia and Australia at digital marketing agency Artefact, pointed out that today’s consumers have grown accustomed to viewing ads “where people’s images are widely enhanced by computer”.

Similar to the emergence of digital image manipulation software a few decades ago, she expects deepfake technology to become “a new tool in the content creation palette” and be commonly used within a few years’ time. People’s inherent wish to create stories and imaginary situations with others, according to Mirandola, will be the primary driver of deepfake content creation and consumption.

“We believe creative and advertising agencies will play a big role in defining the boundaries of good or bad, preserving the values of authenticity, yet innovating using the new technologies. This is an important responsibility given that every creation is now technically possible,” she explained.

Meanwhile, she is of the view that tech companies now need to evolve to take on an educational role and raise awareness about the associated risks so consumers will “not blindly believe in all that they see”. And while solutions could come in different shapes and forms, including certified ads or browser plug-ins, it is human judgment that ultimately acts as the most important ad-blocker.

Demand for adtech

According to Stefano Vegnaduzzo, senior vice-president of data science at Integral Ad Science (IAS), deepfake content is “very likely here to stay and equally likely to rapidly advance in terms of its sophistication”.

With modern computer processing power, deep learning algorithms can run on relatively standard laptop or desktop computers. Coupled with the broad availability of software, people without specialist knowledge can also wield deep learning algorithms. These factors have led to an explosion of deepfakes, said Vegnaduzzo.
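To illustrate how low the barrier has become, the crudest ancestor of face replacement, simply copying pixel regions between frames, takes only a few lines of Python with the widely available NumPy library. This is a toy sketch, not a real deepfake pipeline: genuine deepfakes learn the face mapping with neural networks, and the bounding-box coordinates below are invented for illustration, where in practice a face detector would supply them.

```python
import numpy as np

# Two stand-in "video frames": flat colour blocks instead of real faces,
# purely for illustration.
frame_a = np.full((100, 100, 3), 50, dtype=np.uint8)   # dark frame
frame_b = np.full((100, 100, 3), 200, dtype=np.uint8)  # light frame

# Hypothetical face bounding box; a real pipeline would get these
# coordinates from a face detector.
y0, y1, x0, x1 = 20, 80, 25, 75

# Naive swap: paste each frame's "face" region into the other.
face_a = frame_a[y0:y1, x0:x1].copy()
frame_a[y0:y1, x0:x1] = frame_b[y0:y1, x0:x1]
frame_b[y0:y1, x0:x1] = face_a
```

Everything a genuine deepfake adds on top of this, such as face detection, alignment and a learned generative model, is available today as open-source software, which is precisely the accessibility Vegnaduzzo describes.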

The underpinning technology could benefit content creators, enabling advertisers to revive iconic people from yesteryear or “seasonalise” attire in their ads, he said. However, as the use of such applications grows, so will the need for transparency into whether a video has been altered by deepfake technology.

“If an ad is adjacent to a negative piece of deepfake content, there could be brand safety risks if the fake content is controversial and believable in nature. Additionally, deepfake content could target the brand directly, with malicious actors creating fake content purporting to come from company spokespeople or executives,” he explained.

He expects the deepfake “threat-scape” to be similar to ad fraud, with multiple subtypes of deepfakes each requiring different techniques for detection and protection.

On whether deepfakes will open a whole new market for adtech companies, he said it was difficult to estimate at this nascent stage. “When we are talking about ad fraud, the presence of a financial incentive for fraudsters to ‘game the system’ is a solid indicator that the market will require protection,” he said.

As for the threat of deepfakes, he added that IAS is still monitoring how big a threat they pose and the immediate and future market need for related products.

The article first appeared in the August 2019 issue of Marketing Magazine.
