Australia Banned Social Media for Kids — One Month Later, Teens Found a Way Around It

Australia • Technology • Youth Safety

By Swikriti • Updated: Jan 2026

Australia’s world-first social media ban for kids under 16 is one month old. But teenagers haven’t disappeared — they’ve adapted.

A month after Australia began enforcing its rule blocking children under 16 from holding social media accounts, the early picture is messy: some families report quieter evenings and fewer scrolling battles, while schools and youth workers say the conversation has shifted from “how much time online” to “how are they getting online at all”.

The policy is often described as a “ban”, but regulators frame it as a delay: under-16s aren’t targeted with penalties, and neither are parents. The legal burden sits with platforms, which must take “reasonable steps” to prevent Australians under 16 from creating or keeping accounts. In practice, that’s turned a simple idea—“kids shouldn’t be on social media”—into a fast-moving test of enforcement, privacy, and tech reality.

What changed: From 10 December 2025, age-restricted social media platforms must actively prevent under-16s from creating accounts and remove existing underage accounts where identified, backed by significant penalties if they fail to comply.

How the rule works (and who it targets)

Australia’s minimum-age framework applies to “age-restricted social media platforms” that meet the government’s criteria. The official list has included major services where feeds, recommendations and social graphs drive engagement—think large mainstream platforms rather than simple messaging tools.

Regulators have repeatedly emphasised a key point: the goal isn’t to police every household, but to force platform-level accountability. That means companies need systems that can detect likely underage users, stop new sign-ups, and respond quickly when under-16 accounts are flagged—without turning the entire internet into a passport checkpoint.

For families trying to understand the practical impact, Australia’s online safety regulator has published plain-language guidance on what “age restrictions” mean and what parents can do next via the eSafety Commissioner’s social media age restrictions hub.

Why teens are still online: the workaround problem

Within weeks, a predictable pattern emerged: determined teens didn’t “quit the internet”—they changed tactics. The most common workarounds described by parents, teachers and digital safety researchers fall into a few buckets:

  • Borrowed identities: using a parent’s or older sibling’s account on shared devices, or signing up with an adult’s details.
  • Multiple accounts: keeping a “clean” visible profile while operating a second private account under a different email or handle.
  • VPNs and region-switching: masking location to avoid Australian-specific checks or prompts.
  • Age-check loopholes: slipping through light-touch verification such as self-declared ages, basic prompts, or imperfect “visual” checks.
  • Migration: shifting social life into group chats, gaming servers, and private communities that feel less visible to adults.

That last point—migration—may become the biggest story of the year. When teens move away from large platforms into smaller, more private spaces, harmful content can become harder for schools and parents to spot. It can also reduce the chance of stumbling across public “help” resources or trusted reporting tools on bigger platforms. In other words: less mainstream social media doesn’t automatically mean safer digital life.

“Symbolic” vs “successful”: what the first month is really measuring

Critics argue the law risks becoming symbolic if platforms can claim compliance while underage users remain widely present. Supporters counter that early weeks were never going to be a clean switch-off; the first month is about building friction—making it harder, slower and less automatic for kids to end up in algorithmic feeds and endless recommendations.

That friction can still matter. Public health researchers often note that small barriers—extra steps to access a product, added time between desire and action—can reduce overall consumption, especially for younger users. Even a partial reduction in exposure to bullying, harassment or sexual content could be meaningful, supporters say, if it’s paired with better digital literacy and clearer pathways to age-appropriate online communities.

The privacy dilemma: age checks without “ID for everyone”

The most politically sensitive question is how platforms prove someone is over 16 without collecting excessive data. Australia’s enforcement approach leans on “reasonable steps” rather than a single mandated method, and officials have warned against blanket verification for all users.

Some companies have explored tools such as video selfies, third-party age estimation, and targeted checks for accounts that look underage. But each method triggers its own concerns: biometrics, data retention, bias in facial age estimation, and whether children could be pushed into handing over sensitive documents just to chat with friends.

The Australian government’s own explainer page outlines the minimum-age framework and what platforms are expected to do: social media minimum age information.

What to watch next

The next phase is likely to be defined by enforcement signals and platform reports: how many underage accounts are removed, how quickly companies respond to complaints, and whether regulators pursue penalties against firms that repeatedly fall short. Globally, other governments are watching closely—both for a blueprint and for warning signs.

For parents, the immediate takeaway is simpler than the political debate: rules can slow access, but they can’t replace family conversations. Teens who feel locked out may look for darker corners of the internet; teens who feel supported may be more willing to stay in safer spaces, ask for help, and report harmful experiences.

Read more on Swikblog: Technology & culture updates