
Meta has renewed its call for Australia to rethink its nationwide ban on social media use for children under 16, warning that the policy is pushing teenagers toward less regulated platforms and ultimately making them less safe online. The appeal comes after Meta removed more than half a million suspected underage accounts across its platforms in the space of a single week to comply with the new law.
Australia’s Online Safety Amendment (Social Media Minimum Age) Act 2024, which came into force on December 10, 2025, bars under-16s from holding accounts on 10 designated digital services: Facebook, Instagram, Threads, TikTok, Snapchat, YouTube, X, Reddit, Kick, and Twitch. The law is one of the most aggressive government interventions anywhere aimed at limiting young people’s exposure to social media.
In a recent blog post, Meta disclosed that it removed close to 550,000 accounts believed to belong to users under the age of 16 between December 4 and December 11 alone. Instagram accounted for the largest share, with roughly 330,000 accounts taken down. Facebook saw approximately 173,500 removals, while nearly 40,000 accounts were deleted on Threads.
Meta said the scale of the removals demonstrates its compliance with the legislation but also exposes what it sees as flaws in the current approach. According to the company, large-scale removals do not eliminate teen demand for social connection online but instead redirect it elsewhere.
Meta argues that a universal ban is less effective than a coordinated industry-wide effort focused on age-appropriate experiences and stronger safety standards. The company has urged Australian regulators to work more closely with technology firms to develop systems that protect minors without fully cutting them off from digital communities.
The company reiterated that it supports regulation but believes incentives for safer product design, privacy-preserving age verification, and parental controls would be more sustainable than outright prohibitions.
As part of its safety strategy, Meta has partnered with the OpenAge Initiative, a non-profit organization, to develop age verification tools known as Age Keys. These tools allow users to confirm their age through multiple methods, including government-issued identification, financial credentials, facial age estimation, or national digital identity wallets.
However, Meta stressed that such measures are limited in impact unless they are applied consistently across app stores. The company noted that teenagers use more than 40 apps per week on average, many of which are not covered by Australia’s current law and do not implement robust age verification or safety controls.
Meta warned that without app store-level enforcement, regulators risk a “whack-a-mole” effect, where teens simply migrate to newer or lesser-known platforms that fall outside the ban.
Despite the restrictions, many Australian teenagers have already found ways to bypass the ban. Media reports and interviews indicate that under-16s are increasingly turning to alternative platforms that are not explicitly included in the legislation, such as Yope, Lemon8, and Discord.
Others have reported using virtual private networks or accessing social media through their parents’ accounts. Meta has repeatedly cautioned that these workarounds expose teens to platforms without the safeguards, moderation systems, and parental oversight features offered by major providers.
Meta is not alone in questioning the effectiveness of the ban. Reddit has launched a legal challenge against the Australian government, arguing that the law limits political discussion and isolates young people from age-appropriate community engagement. The company has said that young people’s views play a role in shaping broader political discourse, influencing parents, educators, and future voters.
According to Reddit, restricting access entirely may undermine civic engagement rather than protect it.
Australian Prime Minister Anthony Albanese has defended the ban as a necessary step to rebalance power between families and large technology companies. In public remarks, he said the policy is designed to give parents greater control and allow children to grow up with fewer digital pressures.
Australia’s eSafety Commissioner has also supported the move, stating that the ban reduces the likelihood of exposure to harmful or distressing content and shifts responsibility for youth safety from parents to technology platforms.
Australia’s approach is being closely watched by policymakers worldwide as governments grapple with growing evidence linking social media use to mental health challenges among young people. In 2023, then-U.S. Surgeon General Vivek Murthy warned that excessive social media use contributes to higher rates of anxiety, depression, body image issues, and disordered eating among teenagers.
These concerns have fueled parent-led movements across multiple countries advocating delayed smartphone use and restricted access to social platforms. Prominent voices, including NYU professor Jonathan Haidt, have argued that children should avoid smartphones before age 14 and social media before age 16.
Early feedback on Australia’s ban has been mixed. While some teens report healthier habits and reduced screen time, others say they feel more isolated or disconnected, reinforcing concerns that restrictions without viable alternatives may create unintended consequences.
As Meta and other platforms continue to engage regulators, the debate highlights a central tension facing policymakers globally: how to protect young users online without pushing them into riskier digital environments beyond the reach of established safeguards.