Australia’s Parliament has enacted a new law prohibiting children under the age of 16 from accessing social media platforms such as Facebook, Instagram, TikTok, Snapchat, Reddit, and X. However, the implementation of this ban is still a considerable distance away. The Australian Senate approved the legislation on Friday, November 29, with a vote tally of 34-19, following the House of Representatives’ endorsement of the unprecedented bill by a margin of 102 votes to 13 the previous day.
According to the new law, social media companies could face fines of up to 50 million Australian dollars (approximately Rs 275 crore) if they do not effectively prevent minors under 16 from creating accounts on their platforms. “Platforms now have a social responsibility to ensure the safety of our kids is a priority for them,” stated Australian Prime Minister Anthony Albanese on Friday. This social media restriction aims to address the physical and mental health challenges that children may encounter due to excessive social media usage. It has also been introduced in response to the prevalence of misogynistic content and negative body image portrayals that affect teenagers on these platforms.
Nonetheless, the journey toward enforcing the social media ban for children under 16 is fraught with challenges.
Aparajita Bharti, co-founder of the tech policy organization The Quantum Hub (TQH), remarked, “Despite the passage of the law, there remains considerable discussion in Australia regarding whether a ban on social media is the most effective approach to mitigate these risks.”
To begin, what does the legislation entail?
The recently enacted law is known as the Online Safety Amendment (Social Media Minimum Age) Act 2024. This legislation prohibits individuals under the age of 16 from maintaining a social media account; however, they are permitted to access these platforms in a “logged out state.” For instance, they may view a business or service’s Facebook page without needing to log in.
Responsibility for preventing age-restricted users from accessing social media lies with the platforms themselves. Consequently, there are no penalties imposed on users or their guardians. The law mandates that platforms undertake “reasonable steps” to ensure compliance with the minimum age requirement, yet it does not delineate what these measures should entail. The legislation states, “Whether an age assurance methodology meets the ‘reasonable steps’ test is to be determined objectively, considering the range of available methods, their relative effectiveness, implementation costs, and the implications for user data and privacy, among other factors.”
The law delineates that the minimum age requirement applies solely to “age-restricted social media platforms,” which is defined based on the established interpretation of “social media service” within the framework of the country’s Online Safety Act.
Exemptions from this regulation include messaging applications, online gaming services, and platforms primarily dedicated to health and educational purposes. Notably, the legislation does not provide exceptions for age-restricted users who possess parental or guardian consent. To safeguard user privacy, the law mandates that platforms utilize the information gathered exclusively for age verification purposes, unless explicit and informed consent is obtained from the user for other uses. Furthermore, platforms are obligated to erase any data collected once a user’s age has been verified. Additionally, it is prohibited for platforms to request government-issued identification from Australians for age verification.
What are the shortcomings of Australia’s social media ban?
With the enactment of the law, focus has shifted to the difficulties associated with its enforcement. Bharti noted that children are skilled at finding methods to bypass restrictions on internet and social media access, stating, “I’ve heard of kids using Google Docs to do things they want to do on social media because they want to go around their parents. In that sense, it might put children at more risk as you wouldn’t know what platform they are using and whom to regulate.”
Moreover, Australia’s social media ban may disproportionately affect youth from marginalized communities who depend on social media for connection and information. Pallavi Bedi, a senior policy researcher at the Centre for Internet and Society (CIS), remarked, “In cases of rape, abuse or trauma, support for victims may come from social media and it could be detrimental to them if you cut them off from these platforms.”
Bharti further underscored the importance of formulating policies aimed at safeguarding children from online dangers, emphasizing that such policies must take into account the socio-economic realities of the country and the complexities of technology usage, including practices like device sharing.
In the context of India, she referenced a TQH survey conducted among 5,000 children from low-income families in Delhi, Rajasthan, and Jharkhand. The findings revealed that over 70 percent of these children reported using their parents’ phones to access social media, while 80 percent indicated that they assist their parents in navigating online platforms.
How practical are age-verification techniques?
Various methods are available for verifying a user’s age; however, most of them present significant challenges. Self-disclosure is easily manipulated, as it merely requires users to tick a box confirming they meet the minimum age requirement. ID-based verification, meanwhile, raises privacy concerns and could undermine the online anonymity that is a hallmark of the internet.
In addition, Instagram’s ‘Teen Accounts’ feature necessitates ‘social vouching,’ whereby three followers must confirm that the user is above the minimum age. This approach is only viable if users already possess accounts on the platform. Similarly, estimating age based on user behavior encounters the same limitations.
Due to these challenges, there has been a noticeable transition from age verification to age estimation and age assurance. Rather than confirming a user’s exact age, platforms can utilize tools that estimate age to determine whether the user is above or below a specified threshold. Age assurance methods serve to indicate whether a user falls within a restricted age category.
Highlighting an innovative tool developed by a French company that estimates a person’s age based on their hand movements, Bharti remarked, “There is significant innovation occurring in the age-verification technology sector, driven by the recognition that ID-based verification is not the most effective solution.”
The Australian government has also commissioned a trial of age-assurance technologies. The objective of the trial is to ascertain how the social media ban can be enforced in the country. Nevertheless, the government has been subjected to criticism for swiftly advancing the new legislation through parliament before the trial’s findings are disclosed.