



Why AI needs an identity: Gary Liu explains the first step in data security
Agentic AI has rapidly emerged as the latest marketing phenomenon worldwide, with its market size projected to reach US$197 billion by 2034, according to a report by Hong Kong-based technology firm Terminal 3. In Hong Kong alone, 71% of organisations plan to deploy AI agents to work alongside employees within the next year, as revealed by a recent Cisco survey.
The growth trajectory for agentic AI is nothing short of remarkable, said Terminal 3's CEO and co-founder Gary Liu at MARKETING-INTERACTIVE's Digital Marketing Asia Hong Kong. By 2032, the market is expected to surpass US$100 billion, and just two years later, it’s set to nearly double, he said.
"This is extraordinary growth of a sub sector of artificial intelligence. I'm just talking about AI agents, autonomous bots that now have the permissions to make decisions and make transactions on behalf of human operators. By 2034 the agentic AI market is likely to be at least about a fifth, if not about a third of the all overall AI marketplace," he added.
Potential data threats of agentic AI
However, the exponential growth of AI is rapidly outpacing marketers' ability to govern the safety and security of its data use. According to Liu, while agentic AI unlocks powerful capabilities, it also introduces significant risks, particularly in the realm of data privacy and security.
"Regulators are the ones who are supposed to care most about data privacy and data security,” he noted. “But they’re still learning about this industry—and it’s evolving so quickly that they simply haven’t been able to keep up. Right now, there is no comprehensive data governance structure anywhere in the world that can adequately address the existential risks posed by the rise of AI agents.”
On the marketing front, Liu pointed out that regulators and tech giants such as Alphabet and Meta currently dominate the conversation around data governance and privacy. “We talk about it constantly, yet we’re still just following their lead on what they decide to implement,” he said.
But in the age of agentic AI, the rules are about to change, Liu said. “We are now writing software that allows autonomous systems to make up their own minds. No matter how much regulation we put in place, if that regulation looks anything like traditional data regulation, we're not going to be able to control what AI agents actually do the moment we feed data into an agent and it ends up in what is called memory.”
Liu further cautioned that once data becomes part of an agent’s memory, it’s effectively irretrievable. “It's gone forever, as it's out of our control,” he said. “AI agents will then use that data to train the next version of their own baseline models, what we call foundation models. But now what we're seeing is something even more pervasive, and you can consider it nefarious if you don't trust the actual AI models.”
In essence, autonomous agents have quickly realised that data is their most valuable currency. “They’re now exchanging information with one another—trading data for data. And if this continues unchecked, we’re on track to completely lose control over governance and traditional regulatory mechanisms.”
How can marketers better prepare for this?
This growing control by AI-powered agents might be one reason some marketers have hesitated to adopt agentic AI tools, he said.
But make no mistake, that’s exactly what they want.
And yet, he warned, this is just the beginning. The current suite of AI tools only scratches the surface of what marketers will be using in the coming years. “Right now, generative AI is mostly being used for relatively mundane tasks—basic campaign optimisation,” Liu explained. “But very soon, it will be responsible for creating the vast majority of your creative content.”
That shift will directly impact marketers in creative roles—copywriters, editors, designers. “AI is going to permeate your day-to-day work because of the sheer productivity gains it delivers,” Liu said.
Beyond content creation, the next phase involves AI negotiating on one's behalf—across platforms—in real time. “It won’t just optimise spend within a single platform,” he said, “but will dynamically shift budgets and adjust creatives across multiple platforms based on the permissions you’ve granted—not by fixed rules, but through autonomous decision-making.”
Eventually, Liu predicted, AI agents will take over much of what today’s advertising stacks—both human-led and rules-based systems—currently do. “These agents will launch hyper-targeted ad campaigns to individual users on the fly, with little to no human input.”
This raises a critical question: "Do we have the guardrails in place so that these AI agents don't start running completely afoul with this incredibly private, sensitive and important data, start trading it with one another, or start mimicking either yourselves, your companies, or your customers and your users? How do we control this very possible runaway?"
Training AI identities
For Liu, the answer lies in establishing robust digital identities for AI agents, especially as they manage critical but overlooked areas like procurement—a domain representing billions in B2B transactions.
“No one likes to think about procurement because it’s deathly boring,” Liu said. “I’m talking about large corporate offices ordering toilet paper, or manufacturing companies buying screws and nails. These everyday transactions represent billions of dollars in capital flow across B2B organisations, and we rarely see any of it.”
Today, many of these processes are already being handled by AI agents operating on behalf of humans. “These systems can negotiate terms, send purchase orders, receive invoices, and even authorise payments,” Liu explained. “And when you’re dealing with hundreds of billions of dollars in transactions, the implications are massive.”
He outlined four major issues that must be addressed as AI agents take on more responsibility: identity impersonation, permission verification, data security, and auditability and transparency.
To address these issues, Liu proposed a four-part full-stack identity solution. "First, we have to assign a unique, verifiable identity to every single agent and human operator. That solves the first problem: knowing your AI agent. Second, we have to use digital credentials to provide real-time, verifiable permissions to AI agents, so we can actually verify that they're allowed to do what they say they can do."
"The third is that we have to start using third party data, passports or networks to hold on to private data. Never let the AI agent touch it, but have this data delivered directly and securely to transaction platforms complete transactions or purchase or whatever. And then finally, we have to use some kind of system. I would submit that it's blockchain, so that every single action, regardless of how fast it happens that an AI agent takes is immutably tracked so that we can audit it, that we can regulate it," he added.