AI standards
Things to know
- In the absence of legislative guidance, standards are critically important to guide the responsible development, deployment and use of AI.
- Standards setting organizations are rapidly developing a variety of standards applicable to AI. The most prominent include:
- ISO/IEC 42001 – Information technology — Artificial intelligence — Management system, the first international standard for AI management systems.
- ISO/IEC 42001 provides a series of controls for embedding responsible AI practices across an organization.
- ISO/IEC 23894 – Information technology — Artificial intelligence — Guidance on risk management, which supports lifecycle-based risk assessments and risk communication strategies.
- NIST AI RMF 1.0 – the Artificial Intelligence Risk Management Framework, published by the U.S. National Institute of Standards and Technology (NIST), which provides a framework for managing risks across the AI lifecycle.
- The NIST AI RMF Playbook describes actions for achieving the outcomes outlined in the Risk Management Framework.
- The Government of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, introduced in October 2023 and expanded in 2025, encourages organizations to commit to key principles such as safety, transparency, accountability, fairness, and human oversight.
- The Voluntary Code of Conduct distinguishes between organizations that develop advanced generative AI systems and those that manage or deploy them, with different expectations and commitments for each.
- In March 2025, the federal government published the Guide for Managers of AI Systems, a practical tool for implementing the Voluntary Code.
Things to do
- Understand the content of these standards and identify which parts apply to your particular development and use of AI.
- Consider the role that compliance with leading international AI standards can play in policy making and contracting for effective and responsible AI development and deployment.
- When contracting and procuring AI tools, consider utilizing these standards to create obligations around the effectiveness and responsible deployment of AI.
- Consider whether it is to your advantage to follow, adopt or sign on to the Government of Canada’s Voluntary Code of Conduct on AI.
- Build flexibility into policies, contracts and procurement requirements to take into account evolving standards.
Useful resources
International standards:
- ISO/IEC 42001 – AI Management System
- ISO/IEC 23894 – AI Risk Management
- NIST AI Risk Management Framework (AI RMF 1.0)
Canadian frameworks and guidance:
- Ethical Design and Use of Artificial Intelligence by Small and Medium Organizations, CIO Strategy Council, CAN/DGSI 101:2025
- Voluntary Code of Conduct for Generative AI
- Guide for Managers of AI Systems (2025)
Commentary and guidance:
- “The role of ISO/IEC 42001 in AI governance,” Osler, July 10, 2024