Tech companies Meta Platforms (owner of Facebook and Instagram), Snapchat, and TikTok are calling on Australia to reconsider its decision to exclude Alphabet's YouTube from a new law banning social media access for children under 16.

The groundbreaking legislation, which imposes some of the strictest social media restrictions in the world, was passed by Australia’s parliament in November. It requires social media platforms to prevent users under 16 from logging in or face fines of up to A$49.5 million (about US$31 million).

However, YouTube is set to be exempt from the ban, which is scheduled to take effect by the end of the year. The government deems the platform an essential educational tool, and it will be the only service children can use under a family account with parental supervision.

Meta has argued that YouTube shares the very characteristics the Australian government cited to justify the ban: algorithm-driven content recommendations and social interaction tools, along with exposure to potentially harmful material.

In a blog post on Wednesday, Meta stated, “YouTube’s exemption contradicts the stated purpose of the law. We urge the government to ensure the law is applied equally across all social media platforms.”

TikTok has also expressed concerns, warning that exempting YouTube from the age restriction could lead to a law that is “illogical, anticompetitive, and short-sighted.” In a submission to the government, TikTok emphasized the need for consistent rules for all social media services.

Similarly, Snapchat’s parent company, Snap Inc., has opposed preferential treatment for any specific platform. In a submission on Friday, the company stated, “Exemptions must be applied fairly and impartially, and all platforms should be held to the same standards.”

Mental health and extremism experts have also raised concerns, telling Reuters that YouTube exposes children to addictive and harmful content, much like other social media platforms.

In response, YouTube has said publicly that it is stepping up its content moderation efforts and broadening the categories of harmful content its automated detection systems flag.