Interview: Comms director Ananya Mukherjee on what PR professionals need to look out for as AI takes over
Earlier this year, a survey by Capgemini Research Institute found that 70% of Singaporean consumers, compared with 73% of consumers globally, trust content created by generative AI. This spans many aspects of life, from financial planning and medical diagnosis to relationship advice.
The survey also found that 70% of consumers globally use generative AI tools to seek recommendations for new products and services, and that 65% of Singaporean consumers, compared with 64% globally, are open to making purchases based on these recommendations.
Don't miss: 73% of consumers trust content by generative AI: Here's why they shouldn't
While this is not surprising given that AI has been the buzzword of 2023, there has also been a rise lately in conversations about the safety of these tools as machines get smarter. In fact, not too long ago, Sam Altman, CEO of OpenAI, the company behind ChatGPT, testified before US lawmakers about the risks artificial intelligence (AI) poses and why heavier regulation is needed amidst ethical, legal and national security concerns.
Altman's concerns come as more marketers bring AI into the fold, learning to work alongside it and even getting it to flesh out full campaigns. However, as yet there are no set standards or best-practice guidelines on crediting content created through generative AI, despite the fact that it often involves ethical, practical and legal considerations, according to Ananya Mukherjee, global communications director, narrative and content at Cargill.
"Hence, it is difficult to put it all under a blanket policy. If a human uses AI simply as a tool to assist in creating a campaign, you could credit the human. Let’s think of it as a loaned drawing board, but then the human’s creativity takes charge," she explained. "On the other hand, if the system generates the content and it is presented in exactly the same shape and form with no creative inputs or decisions from the human, it’s important to be transparent and acknowledge AI’s contribution."
A good example is Pizza Hut Singapore's first AI-based value campaign, launched earlier this year. Titled Takeaway Pocket Pleasers, the campaign used the AI programme Midjourney to create a visual showcasing the unmistakable DNA of the Pizza Hut brand. It featured arresting AI-generated visuals that emphasised the brand's commitment to remaining accessible to everyone as it cuts advertising and creative costs and passes the savings on to its customers. The brand was also very vocal about the fact that it had used AI to create the campaign.
As the allure and novelty of using AI to create campaigns wears off, though, marketers and brands may decide to stop disclosing and being transparent about their use of AI in their work. The consequences could be manifold and may cause significant damage to an organisation's reputation, said Mukherjee. She explained:
It could range from ethical concerns, legal risks and mitigation to putting a question mark on the authenticity of the campaign.
Mukherjee added that, for example, there could be a copyright violation if you’re using AI to generate content that resembles someone else’s licensed work, or you could run into legal risks if your campaign uses proprietary information.
The role of communication in AI usage
This is why it is important that PR and communications professionals get ahead of the troubles that AI might cause. According to Mukherjee, PR and communications leaders play a pivotal role in addressing and managing the challenges of amplified disinformation and false narratives derived from generative AI.
"This is why they, of all people, need to be vigilant. We need to be prepared to roll up our sleeves to proactively counter the potential risks to the reputation, credibility and public trust of an organisation under threat," she said.
For a start, this can be done through public education and spreading internal awareness of the existence and implications of content created through generative AI, explained Mukherjee.
"It’s critical that PR and comms teams are aware of the risks associated with synthetic media, and give information verification and fact-checking utmost importance, besides continuously monitoring media to detect false information and false narratives," she said.
That said, the answer is not simply to stay away from AI completely to avoid the potential pitfalls. In fact, it’s important to realise that we can no longer stay aloof from an AI-driven world as it continues to evolve, according to Mukherjee.
"There are a number of ways in which we can tap into its strengths and collaborate with AI, that will allow PR and communications teams to work more efficiently, enhance data analysis, create personalised content for target audiences, manage and curate content, monitor media and manage crises, amongst others," she said.
Communicators must simply be careful not to become robots by letting AI take ownership. She said:
It’s important that we retain the human element because that’s what connects us to people and is the defining competency in our trade.
Check out more of what Ananya Mukherjee has to say at #PRAsia #Singapore on 2 November at Paradox Singapore Merchant Court, where you will be able to connect with 100+ PR and communication leaders worldwide to share ideas, forge partnerships, and unlock endless possibilities.
Related articles:
Google looks to generative AI ads as it amps capabilities
Microsoft partners with universities to allow the use of generative AI in school
The power of generative AI: Unlocking creativity in ad campaigns