Canada’s Proposed Online Harms Act Could Shape Regulations Globally
Canada recently unveiled plans for a new Online Harms Act that could have significant implications for how artificial intelligence is regulated around the world. The proposed law places obligations on large tech platforms to minimize harmful content amplified by their algorithms and AI systems. It aims to curb the spread of harmful material such as deepfakes and content that incites hatred or violence.
A key element of the Online Harms Act targets the distribution of AI-generated intimate images, such as deepfakes, requiring platforms to remove such content within 24 hours. While creating this content may not itself be illegal, its distribution through social media networks poses real societal risks. By focusing on platforms rather than broad, undefined AI risks, the new law takes a pragmatic approach to governing emerging harms.
How Could This Shape Global AI Governance?
By requiring companies to mitigate the risks of AI amplification, the Online Harms Act provides a model for regulating advanced technologies through established entities rather than vague concepts. If adopted, it may influence international standards that adapt older rules on issues like privacy, consent and cyberbullying to the new challenges of artificial intelligence. The legislation offers a balanced, harm-focused path for addressing concerns around rapidly developing technology.
Experts argue Canada’s new Online Harms Act illustrates how policymakers can start grappling with complex AI issues by addressing tangible harms through established internet governance frameworks. As algorithms and generative models continue to advance, outcomes-based frameworks like this one may prove pivotal in building public trust and accountability while avoiding overreach. The proposed law’s approach holds valuable lessons as governments worldwide seek to ensure the responsible development of artificial intelligence.