Australia has officially passed legislation banning children under 16 from major social media platforms, a global first in social media regulation. The law, passed by the Senate on November 28, 2024, requires platforms such as TikTok, Facebook, Instagram, and X (formerly Twitter) to implement robust measures to verify users’ ages. Failure to comply could result in fines of up to 50 million AUD, reflecting the government’s commitment to addressing the perceived risks of social media use among young people.
The law is slated to take effect at the end of 2025.
The enforcement of this legislation hinges on the adoption of effective age verification technologies. The Australian government has outlined several approaches under consideration, including biometric age estimation using facial analysis, document-based verification, third-party services, and behavioral analysis. Trials managed by the Department of Communications are expected to guide the selection and implementation of these tools, underscoring the technical and ethical complexities of age verification.
Biometric age estimation, a key candidate for enforcement, relies on analyzing facial features to estimate a user’s age. The technology offers the advantage of seamless verification, minimizing friction for users while maintaining compliance. Document-based verification, another potential method, involves uploading government-issued IDs, offering high accuracy but raising privacy concerns.
Behavioral analysis and device fingerprinting could provide supplementary insights, though their reliability in determining age is less established.
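How a platform might weigh these signals against the legal threshold can be illustrated with a minimal sketch. This is purely hypothetical: the legislation does not prescribe any combination logic, and all names, margins, and thresholds below are illustrative assumptions, not part of the law or any vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16  # threshold set by the Australian legislation

@dataclass
class AgeSignals:
    """Hypothetical signals a platform might gather; all fields are illustrative."""
    estimated_age: Optional[float] = None        # biometric facial-age estimate
    estimate_margin: float = 2.0                 # assumed error margin, in years
    document_age: Optional[int] = None           # age from a verified government ID
    behavioral_age_hint: Optional[float] = None  # weak signal from usage patterns

def is_verified_adult(signals: AgeSignals) -> bool:
    """Combine signals conservatively: a verified document is decisive; a
    biometric estimate must clear the threshold by its error margin; a
    behavioral hint alone is never sufficient."""
    if signals.document_age is not None:
        return signals.document_age >= MIN_AGE
    if signals.estimated_age is not None:
        return signals.estimated_age - signals.estimate_margin >= MIN_AGE
    # Behavioral analysis is too unreliable to grant access on its own.
    return False
```

Under this sketch, a facial estimate of 17 with a two-year margin would still fail the check, while a verified ID showing 17 would pass, reflecting the accuracy gap between the two methods described above.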
Despite the innovative scope of the legislation, it has sparked significant debate. Proponents argue that the ban will shield children from the harmful effects of social media, such as cyberbullying, exposure to inappropriate content, and excessive screen time. However, critics, including digital rights advocates and technology experts, warn of unintended consequences. Concerns include privacy risks associated with biometric data, potential exclusion of vulnerable youth from support networks, and the likelihood of driving children toward unregulated platforms or using circumvention tools like VPNs.
The absence of exemptions for parental consent or existing underage accounts also complicates the law’s implementation. Critics argue that enforcing these measures retroactively is technically challenging and risks alienating users.
Internationally, Australia’s move is being closely watched as governments grapple with the dual pressures of protecting young users and respecting privacy. The UK has introduced age verification requirements for adult content, and several U.S. states have enacted or are considering social media restrictions for minors. Australia’s approach could serve as a blueprint for these initiatives, but its success will depend on the robustness and transparency of, and public trust in, the age verification systems adopted.
Companies specializing in facial recognition, document verification, and behavioral analysis are likely to see increased demand for their solutions. However, the industry must address privacy concerns and develop tools that are both accurate and minimally invasive. The role of Australia’s eSafety Commissioner in overseeing these efforts will be critical in ensuring that the chosen technologies meet the dual demands of effectiveness and ethical compliance.
Source: Reuters
November 28, 2024 – by Cass Kennedy