Australian regulators are poised to require app stores to block AI services lacking age verification to protect younger users from mature content. This move comes ahead of a March 9 deadline, with potential fines for non-compliant AI companies. Only a fraction of leading AI chat services in the region have implemented such measures.
Australia's eSafety Commissioner is signaling a firm approach to safeguarding children online, particularly regarding AI chatbots. Regulators may mandate that app storefronts block access to AI services that fail to verify users' ages before serving mature content, ahead of a March 9 deadline. A representative for the commissioner stated, "eSafety will use the full range of our powers where there is non-compliance." That could mean action against gatekeeper services, including search engines and app stores that serve as entry points to these technologies.
A Reuters review of 50 prominent text-based AI chat services in the region revealed limited preparation. Only nine have introduced, or announced plans for, age assurance mechanisms. Another eleven have applied blanket content filters or plan to block Australian users entirely. Many of the rest have disclosed no public steps, heightening the risk of enforcement with just a week before the cutoff.
Non-compliance could result in fines reaching A$49.5 million ($35 million) for AI firms. This initiative aligns with Australia's broader child protection efforts, including last year's ban on social media and certain interactive digital platforms for those under 16.
Globally, debates continue over who bears responsibility for shielding minors from harmful content. In the United States, for example, Apple and Google advocate shifting this duty to the platforms themselves rather than app store operators. Australian authorities' focus on app stores remains tentative, but it underscores their preference for stringent digital safeguards.