TikTok will roll out a new tool that aims to help creators label the AI-generated content they produce, in order to curb disinformation. The platform said the tool is part of its continued investment in media literacy and transparency, empowering both creativity and viewer discretion.
The tool aims to help creators easily comply with the company’s existing AI policy. The policy requires all manipulated content that shows realistic scenes to be labelled in a way that lets viewers know that the scenes are fake or have been altered.
To help creators get the hang of the new tool, the app will also release educational videos and resources over the coming weeks. The goal is for creators and viewers to be able to share and contextualise content, much as verified account badges and branded content labels do today.
"To drive more clarity around AI-powered TikTok products, we are also renaming TikTok AI effects to explicitly include 'AI' in their name and a corresponding effects label and updated our guidelines to do the same,” the platform said in a release.
TikTok stated that the tool was developed in consultation with its Safety Advisory Councils, as well as industry experts such as MIT's David G. Rand, who studies how viewers perceive different types of AI labels. Rand's research informed the design of the AI-generated content labels.
“We continue partnering closely with peers, experts and civil society across our industry, knowing that AI raises complicated questions that no one platform can solve alone,” TikTok said, pointing to its participation in the “Responsible Practices for Synthetic Media” code. In February this year, TikTok joined industry heavyweights such as Google, Meta and Adobe, among others, to create a framework on how to responsibly develop, create and share synthetic media.
In August this year, it partnered with the non-profit Digital Moment to host roundtables where young community members shared their perspectives on the advancement of AI online.
“AI-generated content is an exciting opportunity, and as it evolves, our approach will too. We will continue to iterate as we evaluate the impact of these updates and work with our community to safely navigate the recent advancements of AI-generated content together,” it concluded.
The need for such labels has become all the more pressing. A recent study by Salesforce Singapore found that 74% of respondents are concerned about the unethical use of AI, while 63% think that generative AI will lead to unintended consequences for society.
According to the study, the chief worries around generative AI centre on its implications for data security, ethics and bias. To address this, companies can focus on transparently communicating how they use AI and making it clear that their employees – not the technology – are in the driver’s seat, it said. “For instance, a mere 37% of customers trust AI’s outputs to be as accurate as those of an employee. Accordingly, 81% want a human to be in the loop, reviewing and validating those outputs,” the study added.