File photo — A photographer walks through the room ahead of the Liberal leadership announcement, Sunday, March 9, 2025 in Ottawa. THE CANADIAN PRESS/Adrian Wyld

Canada's Liberal Party is set to formally debate whether to impose age restrictions on social media platforms and AI chatbots, adding fuel to a growing global conversation about how governments should protect children from digital harms.

The debate will take place at the party's national policy convention in Montreal, April 9 through 11, 2026. Among 24 policy resolutions on the agenda, at least one addresses minimum age requirements for accessing social media and AI chatbot platforms. These resolutions come from Liberal grassroots members and are not binding on Prime Minister Mark Carney's government.

Why AI Chatbots Are Now Part of the Conversation

The inclusion of AI chatbots alongside social media in the same policy debate signals a meaningful shift. Until recently, the public conversation focused almost entirely on platforms like Instagram and TikTok.

Emily Laidlaw, a Canada Research Chair in cybersecurity law at the University of Calgary, has been direct: chatbots pose a specific and current risk to children that companies need to be managing. Taylor Owen, founding director of the Centre for Media, Technology and Democracy at McGill University, agreed, arguing that a credible AI adoption strategy cannot exist without taking consumer safety seriously. Both experts advised the Liberals on online harms legislation and pushed to have AI chatbots explicitly included.

Canada's Unfinished Business on Online Harms

In 2024, the Liberals introduced the Online Harms Act - a bill that would have required social media companies to disclose risk management plans, established a duty of care for protecting children, and created enforcement bodies. It died before becoming law when the 2025 election was called.

Under Carney, the government opted for narrower legislation rather than reviving the full bill. Canada currently has no legislated social media ban for minors. Platforms set their own minimum age requirements, typically 13, but enforcement is widely acknowledged as weak.

Where Global Momentum Is Heading

Australia has already passed legislation banning children under 16 from social media - one of the most aggressive moves by any government on this issue. Reports from earlier in 2026 indicated Canada had drafted a plan for a ban on children under 14, though no legislation has been tabled.

For AI developers and businesses deploying consumer-facing chatbots, the direction of travel is clear. Age verification, parental consent frameworks, and product-level safety standards are moving from best practices to likely regulatory requirements.

What This Means

The explicit mention of AI chatbots in the Liberal convention agenda is significant. It reflects growing awareness that generative AI tools are not categorically different from social media when it comes to child safety. Companies building on models from OpenAI, Anthropic, and others are now being discussed in the same policy conversations as Meta and TikTok.

The April convention will signal whether Liberal grassroots members support age restrictions as formal party policy. That won't force government action, but combined with existing legislative momentum on online harms, the regulatory window in Canada is clearly open. For anyone building or deploying AI products in 2026, this is a story worth watching.