A recent survey by the Capgemini Research Institute found that 70% of Singaporean consumers, compared with 73% of consumers globally, trust content created by generative AI. This trust spans many aspects of life, from financial planning and medical diagnoses to relationship advice.
It was also found that 70% of consumers globally use generative AI tools to seek recommendations for new products and services and that 65% of Singaporean consumers, compared to 64% globally, are open to making purchases based on these recommendations.
The survey found that consumers who use generative AI frequently are most satisfied with chatbots, gaming, and search use cases. However, generative AI platforms are also being used for personal, day-to-day activities, and consumers seem to trust AI for these activities as well.
Over half of the respondents (53%) trust generative AI to assist with financial planning, while globally, 67% of consumers noted that they could benefit from receiving medical diagnoses and advice from generative AI. Meanwhile, 63% indicated that they are excited by the prospect of generative AI aiding more accurate and efficient drug discovery.
Additionally, two-thirds (66%) of consumers surveyed revealed that they would be willing to seek advice from generative AI for personal relationships or life and career plans, with Baby Boomers the most likely (70%) age group to use it for this purpose.
While this is not surprising considering that AI has been the buzzword of 2023, there has also been a rise lately in conversations about the safety of these tools as machines get smarter. In fact, Sam Altman, CEO of ChatGPT creator OpenAI, recently took the stand to speak directly to US lawmakers about the risks artificial intelligence (AI) poses and why heavier regulations are needed amidst ethical, legal and national security concerns.
Altman's concerns come as more businesses and agencies bring AI into their work, from AI-powered chatbots on e-commerce sites such as Lazada to Malaysia's first AI DJ.
Considering these risks, should individuals and businesses really be as trusting as they are of these systems, and is there really that much to be concerned about as more brands implement them in their operations and marketing efforts?
What is the real threat level of AI on a business level?
For a start, it is important to note that while it is possible for generative AI to be used for malicious purposes, there are challenges in making it functional and convincing, according to Doug McKee, principal engineer and director of vulnerability research at cyber threat intelligence company Trellix.
"In our experience, our researchers attempted to force ChatGPT into writing a template that may be used for malware, and this resulted in a basic server-client architecture. So, while generative AI can be used to create malware, it still requires human input and guidance to be effective," he said.
Furthermore, many security measures are being developed to detect and counteract malicious software, such as advanced threat detection and machine learning algorithms, said McKee. "Today, language models such as ChatGPT have severely limited, if not outright blocked, the system's ability to create malware. Additionally, AI-generated malware may not be able to adapt to changing environments or defences as quickly as human-written malware. This means that it may be easier for security professionals to detect and mitigate AI-generated malware."
However, it is not entirely accurate to say that the fears surrounding generative AI and malware are unfounded, McKee clarified. For instance, advanced threat actors implement new techniques at a rapid pace, and defenders race to understand them.
One of the immediate concerns with ChatGPT in the enterprise is that while it is incredibly powerful for increasing productivity, there are significant privacy concerns.
As such, organisations need to ensure that they have proper controls in place to take advantage of AI platforms such as ChatGPT while guaranteeing that sensitive data is not leaking into the AI through prompts, McKee said.
This applies directly to individuals using AI for financial, medical and relationship queries, or who use AI-powered chatbots on sites such as Lazada, which recently released LazzieChat, an eCommerce AI chatbot powered by OpenAI's ChatGPT technology on Microsoft's Azure OpenAI Service, to provide a more personalised shopping experience to its customers.
The issue, it would seem, with incorporating AI into consumer-facing platforms is that AI programmes are not yet ready to address the needs of the heavyweight industries, according to Leonard Lee, the president, APAC of Beyond Limits. "At this point in its development, the technology mainly benefits the everyday consumer and helps to boost everyday efficiency for the average employee or student," he said.
"Professionals in the energy and industrial fields though cannot depend on current generative AI software yet as the risks are too significant to rely on results generated by a machine that does not provide transparency on how it arrived at its outcomes," he noted.
However, Ranganathan Somanathan, the co-founder of RSquared Global Ventures, disagreed, saying he foresees more of a process risk than a people risk. "What I mean by that is that people will be able to do their jobs better if they proactively embrace GenAI. The risks lie in how the various data and content that are stored in different containers are referenced in creating the response. Are these transactions following the data privacy laws? Are they tapping into proprietary vs public information to generate the response?"
Data protection and privacy are still in an early state of evolution, and GenAI will put pressure on accelerating their maturity curve.
Why are people so trusting of AI considering the risks?
With all the fears surrounding misinformation and data leaks, why exactly are so many people willing to trust these platforms so completely? Kat Warboys, the marketing director, APAC at HubSpot, noted that this could be because the technology has been consistently delivering at scale and outperforming human capabilities in specific domains.
"At a time when budgets are under greater scrutiny, businesses in particular are adapting and responding to the environment around them by exploring new ways of doing things, particularly when there is a promise of efficiency and productivity outcomes," she said, before noting that this is what HubSpot is seeing with generative AI as well.
"These AI systems are developed by experts in their respective fields and apply deep learning to output. We see this play out across sales and marketing functions where generative AI has demonstrated impressive capabilities in handling multiple tasks — from content development to task automation, campaign personalisation, and data analysis," she said.
Agreeing with her, Don Anderson, CEO of Kaddadle noted that given the "sheer impact and power" that AI has over so many aspects of industry and our lives, and how quickly it has scaled in the last six to eight months, as a society we’re all moving between our own stages of processing which may be the reason why we are starting to trust it more.
Citing the Capgemini Research Institute survey above, Anderson said that media coverage is slowly capturing all five stages of society's acceptance of AI. "The report may suggest we're moving between the denial, bargaining and acceptance stages of this moment. No question that we have anger and depression in the mix, as demonstrated by the current Hollywood Writers Guild strike over the AI threat to creative livelihoods, protests by workers in other parts of the world in recent months, or the very recent example of a Malaysian radio station using an AI DJ," he said.
"Expect more of this as the realities of the effect and impact of these technologies become clearer among populations. We're in a bit of a starry-eyed 'honeymoon phase' with AI, as always occurs with the bright, shiny toy of new technologies. We're more captivated by the positive possibilities of its use than the negatives, which might explain in part the survey results," Anderson said.
He noted that if consumers become too trusting of AI, we risk the potential of letting it guide all facets of our life, including what sources we’re relying on to inform our buying decisions.
How businesses can navigate the threats posed by AI
Ethical and responsible AI has thus become a necessary expectation as more businesses tap on innovative technologies for business measurement.
"To tackle potential issues of data privacy and security, businesses need to have an enterprise data strategy in place to ensure accessibility and accountability," said Lee.
This entails devising a comprehensive framework that outlines the various roles and responsibilities within the organisation, establishing policies and standards, and ensuring adherence to relevant regulations, he said. "They should also classify and inventory all data assets, as well as identify sensitive data and determine who has access to it. Technical and administrative measures should also be put in place to safeguard the data, including access controls, encryption, data masking, and data anonymisation."
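The data masking control Lee describes can be illustrated with a small sketch. This is a hypothetical example, not any vendor's actual tooling: it redacts common sensitive patterns (email addresses and card-like numbers) from a prompt before it would be sent to an external AI service. Real deployments would rely on proper data loss prevention tools; the pattern names and function are illustrative only.

```python
import re

# Hypothetical patterns for sensitive data. A production system would use
# dedicated DLP tooling with far more robust detection than these regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens before the
    prompt leaves the organisation's boundary."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# Example: the email address and card number are stripped, the rest survives.
masked = mask_prompt("Refund order for jane.doe@example.com, card 4111 1111 1111 1111")
print(masked)
```

A gateway like this, sitting between employees and a service such as ChatGPT, is one simple way to honour the "sensitive data should not leak into the AI through prompts" principle mentioned earlier, without blocking the productivity benefits outright.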
Humans and AI should also collaborate to ensure full transparency and visibility, said Lee. "This is where Cognitive AI steps in to address this challenge. By providing explainable AI solutions with a clear audit trail that humans can more easily understand and adopt in the long run, it helps users trust its recommendations, compared to the black box of traditional AI models."
He added that this also means that highly regulated industries, which operate within the most demanding environments and work with highly sensitive data, will be able to trust Cognitive AI to optimise their current operations at scale.
"There are always going to be gatecrashers with advancing technologies, those who will always want to take advantage of consumers," noted Anderson. "Marketers must demonstrate and communicate a sense of responsibility in using this tech from the get-go and maintain transparency with audiences. They need to position themselves as the 'authenticators' – the ones in front of the AI, equipped with the foresight to decide what's in the best interests of the brand and consumers, who will likely expect the two to work in tandem to deliver more trustworthy messaging, act in their best interests and eliminate bias," he said.