
Efforts Are Just Beginning in Australia
In late 2024, the Australian Parliament passed the Online Safety Amendment (Social Media Minimum Age) Act 2024 — meaning that from 10 December 2025, certain major social-media platforms will be required to prevent Australians under 16 from having accounts. (Prime Minister of Australia) Under the law, platforms — not parents or children — carry the responsibility to take “reasonable steps” to prevent under-16s from creating or keeping accounts, or face civil penalties of up to A$49.5 million. (Australian Parliament House)
The intention behind this legislation is to protect children from online harms: the addictive design of social media, exposure to harmful or misleading content, and the pressures that come with social media use during formative years. (eSafety Commissioner)
At the same time, society must reckon with trade-offs. Social media can — and often does — offer young people access to connection, peer interaction, creative expression, and information. Cutting off or heavily restricting access risks removing these potential benefits.
Moreover, the reality of online influence and misinformation, particularly given the increasing role of AI in generating and distributing content, adds another dimension to the challenge. As the Australian Communications and Media Authority (ACMA) notes, Australians report more concern about online misinformation than people in many other countries, and gaps in media literacy make it difficult for many to sort fact from fiction. (ACMA)
Given all this — and given that simply banning or restricting access feels like a blunt tool — what do people think would be appropriate and effective responses? Some of the questions worth asking:
- Should there be an emphasis on media literacy and critical-thinking education (in schools and at home), so children learn to evaluate content — rather than relying on age bans only?
- Should regulatory efforts focus more on holding platforms and content-producers accountable: demanding transparency, forcing moderation or stricter rules around content (especially AI-generated content, paid propaganda, or manipulative algorithms)?
- How can we balance protection with opportunity — ensuring children and young people still have access to positive, creative, or supportive online spaces while shielding them from distortion, manipulation, or harmful influence?
- What roles should parents, schools, communities, and governments play — separately or together — to help guide children’s online experiences in a way that respects their autonomy but also mitigates risk?
- If age-based bans are in place (as they now are), how can we make sure they are enforced fairly and effectively, without causing undue exclusion or creating new problems (such as pushing kids onto unregulated platforms)?
We at UF are deeply worried about children being misinformed, or even subtly influenced to accept the ideologies of those who pay for AI-powered media content. It is easy to point out the problem; it is much harder, and more important, to thoughtfully consider what concrete steps we should take.
What do you think would be good, balanced ways to deal with the problem — to protect children and young people from undue influence, while preserving the positive potentials of online communication and community?