Meta expands Instagram safety features for teens amid regulatory pressure

Meta has introduced new teen safety features on Instagram, as regulatory scrutiny intensifies in Australia ahead of a proposed under-16 social media ban - a move that could reshape platform access, audience targeting and brand engagement.

The updates include expanded protections in direct messages, enhanced nudity filters and new measures for adult-run accounts that primarily feature children. The announcement comes as the Australian government prepares to enforce tougher age restrictions for social media platforms, including a likely under-16 ban and mandatory age verification by the end of 2025.

Among the changes, teen Instagram users will now see clearer safety prompts when opening new DMs, including options to block, report, or view when the contact’s account was created - giving young people more context before engaging in conversations. A new combined ‘Block and Report’ feature also simplifies the process of removing and flagging inappropriate users.

Meta is also extending some of its teen-specific protections to adult-managed accounts that prominently feature children - such as those run by parents, publicists or talent managers. These accounts will now default to the strictest message settings, have offensive comments automatically filtered via Hidden Words and will be shielded from contact with adults the platform flags as potentially suspicious.

The company shared new data showing its existing safety tools are gaining traction.

In June alone, teens blocked 1 million accounts and reported another 1 million after seeing in-app safety notices. Nudity protection features, which blur suspected explicit images in DMs, have been left on by 99% of users, with more than 40% of blurred images going unopened. A separate warning that encourages users to think twice before forwarding such content led to a 45% drop in sharing in May.

Meta says it has already removed 135,000 Instagram accounts for sexualising child-focused content, along with an additional 500,000 linked accounts across Instagram and Facebook. The company is also sharing data with other platforms via the Tech Coalition’s Lantern program, aimed at identifying and removing exploitative content and users across the internet.

The new product features coincide with a wider public relations effort by Meta, TikTok and YouTube to push back against the growing regulatory tide.

Promoting positivity

In late June, TikTok ran a full-page ad in the Australian Financial Review celebrating its role in youth education and culture. Earlier this week, Google took out a similar ad in The Australian, declaring YouTube “proudly in a category of one” and highlighting that 82% of Australian teachers believe the platform supports positive student learning.

Both campaigns appear to be strategic efforts to reshape the narrative as the federal government prepares to tighten online safety laws.

While Meta’s latest update focuses on platform safety, it also functions as a pre-emptive defence. With Australia’s under-16 ban not yet in place, but policy drafting and consultation well under way, this is a reputational play as much as a product one.
