
AI in Canada

A legal guide to developing and using artificial intelligence
September 10, 2025

Application and compliance with foreign laws

Things to know

  • AI is increasingly regulated across jurisdictions. Cross-border compliance is critical for companies operating globally, especially where AI systems may be classified differently (e.g., “high-risk” in the E.U. but not in Canada).
  • The E.U.’s Artificial Intelligence Act is the world’s first comprehensive AI regulation. It introduces a risk-based framework: some AI systems and practices are prohibited, specific rules apply to identified “high-risk” AI systems and general-purpose AI models, and transparency requirements apply to certain AI systems considered to be low risk. The rules are being phased in between February 2025 and August 2026. On July 10, 2025, the European Commission published the General-Purpose AI Code of Practice, a voluntary tool to assist developers of general-purpose AI models in complying with the Act.
  • U.S. AI policy remains fragmented. Some sector-specific rules exist, but there is no broadly applicable federal AI law. Several states have enacted AI laws, while Congress is considering a 10-year moratorium on state-level AI legislation.
  • Canada has been an active participant in various international initiatives in respect of AI, including the creation of standards emphasizing human rights, accountability, and interoperability. These initiatives include:
    • the OECD’s Recommendation of the Council on Artificial Intelligence (the first intergovernmental standard on AI, intended to guide members in shaping policies and creating an AI risk framework across jurisdictions)
    • the UNESCO Recommendation on the Ethics of Artificial Intelligence (which addresses ethical issues)
  • Canada is a founding signatory of the Council of Europe Framework Convention on AI, the first legally binding international treaty on AI, focused on human rights, democracy, and the rule of law.

Things to do

  • Identify where your AI systems are developed, deployed, or sold, and determine if you are subject to foreign regulatory regimes such as the E.U. AI Act, U.S. sectoral laws, or treaty-based obligations.
  • When contracting with third-party AI developers or vendors, consider adopting AI procurement standards and referencing model clauses (e.g., E.U. model clauses) to mitigate legal, operational, and reputational risks.
  • Assess AI system classification under applicable foreign AI regulations:
    • Determine if your system falls under “prohibited,” “high-risk,” or “limited-risk” categories pursuant to the E.U.’s AI Act.
    • Prepare to implement technical documentation, conformity assessments, and human oversight for high-risk systems.
  • Consider aligning your AI practices with international standards to demonstrate proactive governance and reduce compliance friction across jurisdictions.
  • Implement internal compliance programs that integrate international standards (such as ISO/IEC 42001 and NIST AI RMF) to create a globally compatible AI governance framework.
  • Monitor international developments in AI regulation and coordinate legal strategies across jurisdictions.

Useful resources

National and international legal frameworks:

Multilateral and treaty-based initiatives:
