Instagram doubles down on teen safety as Australian scrutiny looms

Meta said its newest wave of teen safety tools on Instagram is the result of years of development - not a reaction to Australia’s proposed ban on under-16s using social media.

Speaking in Sydney at the company’s first Instagram Safety Camp event, attended by parent creators, Meta’s global head of child safety Ravi Sinha stressed that the tech giant is “deeply invested” in long-term safeguards for younger users and is working closely with parents, not just engineers, to ensure young people have safer, more positive online experiences.

“We’ve been working on teen accounts for a very long time,” said Sinha, a former US Department of Justice prosecutor who now leads global child safety efforts for Meta. “It’s not really responsive to the social media ban. What we treat it as is a much more practical way for teens to have good experiences online - and for parents to supervise it.”

Australia’s proposed regulation, due to come into effect later this year, would restrict social media access for under-16s, forcing platforms to verify users’ ages. The event is one of several moves by major tech platforms - including TikTok and YouTube - to push back on the federal government’s proposed under-16s social media ban.

YouTube has taken out full-page ads in publications such as The Australian, while TikTok has run full-page ads in The Australian Financial Review, promoting the positive role platforms can play in young people’s lives and challenging what they see as alarmist messaging around online harm.
Default protections, parental visibility

At the event, Instagram walked through new default settings for teen accounts, including aggressive restrictions on who can message, follow or interact with users aged under 18. These restrictions are automatically applied, with extra protections in place for users aged 13 to 15, who cannot opt out unless a parent approves.

“If you’re under 16, you’re going to stay in those protections unless your parent says otherwise,” said Sinha. “We know this may reduce time spent and social connections for teens - but we’re comfortable with those trade-offs.”

Instagram also limits the type of content teens see, opting them into the platform’s strongest anti-bullying filters and hiding sensitive content such as alcohol or tobacco posts. Teens are nudged after 60 minutes of daily use, while a ‘sleep mode’ disables notifications between 10pm and 7am.

“We want to make sure that people - and most specifically teens - are having positive, additive, age-appropriate, safe experiences online,” Sinha told attendees. “That is what we’re building for.”

Citing global data, Meta said 97% of users aged 13 to 15 remain in the default protection mode after enrolment, while 94% of parents say the safety features make them feel more confident managing their teen’s experience.

Tackling age verification (with a grain of salt)

Meta’s tools also lean heavily on AI-powered age prediction models - part of a broader effort to address one of the core challenges of online child safety: teens lying about their age.

“We take the age that somebody tells us with a grain of salt,” Sinha told former Network 10 presenter Sarah Harris during a Q&A session.

“We really work extremely hard using sophisticated AI technology to predict whether a user is actually a teen or an adult.”

Instagram now requires users who change their age to undergo a verification process, and is exploring centralised, privacy-safe methods of sharing a user's age across multiple apps - something Sinha said could reduce confusion for parents and avoid “40 different verification processes.”

“We think when a parent sets up a new phone for their teen, they should be able to enter an age once and have that shared, in a privacy-preserving way, with apps they approve,” he said.

A ‘reset button’ for teens

Two new features rolled out globally this year also reflect Instagram’s focus on teen mental health and personalised experiences.
The first, an ‘interest picker’, allows teens to tailor their feed and search results to specific themes such as sport, music or animals, aligning with what Sinha called “the positive, additive stuff they’re actually on Instagram for.”

The second, a ‘recommendations reset’, offers a full content slate refresh, which Sinha said was particularly helpful for teens “as their interests shift year to year.”
“It gives them a fresh start. They can wipe the slate clean and re-teach the system what they want to see. We think that’s an important control,” he said.

While the tools pre-date the proposed social media bans, Meta isn’t ignoring the policy debate. Sinha acknowledged Australia’s situation is “quite unique” but maintained the company’s view that “graduated supervision” is more effective than blanket bans.

“What we worry about is pushing teens into places that are less safe,” he said. “We think our approach strikes the right balance. Teens want to be online. Parents want to be involved. This is a way to meet both those needs.”

Although the event was light on government attendance, Meta’s messaging was clear: it wants to position itself as part of the solution, not the problem.

As Sinha put it: “The overwhelming majority of people who use our products do so because they’ve had positive experiences. The goal is not to eliminate online time - it’s to make sure that time is safe, constructive, and connected.”
