Australia’s eSafety Commissioner has outlined plans for implementing the country’s new law banning children under 16 from social media, which takes effect in December 2025. The law, passed in late 2024, requires social media companies to verify user ages and remove accounts of users under 16, marking a global first in social media regulation.
Julie Inman Grant, who heads the eSafety Commission established in 2015 as Australia’s independent online safety regulator, detailed several potential age verification methods under consideration. While government-issued IDs were ruled out over privacy concerns, remaining options include facial age estimation technology from digital identity providers such as Yoti, which has recently reported improvements in its facial matching accuracy. Inman Grant noted that accuracy and bias in these systems must be carefully evaluated. She also referenced an AI system that analyzes hand movements, whose developers claim 99 percent accuracy in age verification.
“There are really only three ways you can verify someone’s age online, and that’s through ID, through behavioral signals or through biometrics. And all have privacy implications,” said Inman Grant. Her assessment aligns with Australia’s ongoing trials of age verification technologies, which are being evaluated by independent experts.
The implementation process will be gradual rather than immediate. Social media companies must first identify and remove existing under-16 users from their platforms. Current data indicates that 84 percent of children aged 8 to 12 already hold social media accounts, often with parental knowledge and assistance. That high penetration rate presents a significant challenge for platforms working to achieve compliance.
The eSafety Commission maintains working relationships with social media platforms to facilitate compliance. “We would not have the success that we do through our complaint schemes if we didn’t have proactive relationships with the companies,” said Grant. “It’s always a little bit of a dance.” The relationships have been built through years of collaboration since the Commission’s establishment and the implementation of Australia’s Online Safety Act.
The Commission’s broader mandate includes research, prevention, education, and complaint handling for issues like cyberbullying and image-based abuse. “Part of our function is to provide research, prevention, education — and then we’ve got complaint schemes for kids who are being cyberbullied. For all Australians who’ve experienced image-based abuse with the non-consensual sharing of deepfakes and intimate images,” Grant explained. The comprehensive approach reflects Australia’s position as a global leader in online safety regulation, particularly in protecting young users from digital harm.
Sources: KUOW, Identosphere, CFR, eSafety.gov.au
—
December 26, 2024 – by the ID Tech Editorial Team