Michael Fekete, Sam Ip
June 6, 2019
On April 1, 2019, the Government of Canada’s new Directive on Automated Decision-Making (the Directive) took effect, with compliance required by no later than April 1, 2020.
The Directive sets out minimum requirements for federal government departments that wish to use an Automated Decision System – essentially technology that either assists or replaces the judgement of human decision-makers.
The objective is to ensure that such technology is deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent and interpretable decisions.
The core tenets of the Directive can be summarized as follows:
- Scope of Application: The Directive applies to any system, tool or statistical model in production that is “used to recommend or make an administrative decision about a client.” The scope is broad and, on its surface, includes not only purpose-built software applications, but also tools such as Excel models and mathematical formulas.
- Algorithmic Impact Assessment: An algorithmic impact assessment must be conducted before any Automated Decision System enters production, in order to assess the risks of the system. The assessment must be updated whenever the functionality or scope of the Automated Decision System changes. The prescribed tool for making this assessment comprises approximately 60 questions and results in an impact categorization level between I (presenting the least risk) and IV (presenting the greatest risk). Based on the impact level (which will be published), the Directive may impose additional requirements. The most current version of the Algorithmic Impact Assessment can be found here.
- Transparency: Prominent notices must be provided to impacted individuals regarding decisions made, in whole or in part, by the Automated Decision System. These notices must include meaningful, plain-language explanations of how and why the decision was made.
- Software: Government departments are encouraged to use open source software. If proprietary software must be used, the service provider must deliver a copy of the software to the department and provide the department the right to access and test the Automated Decision System, including the right to authorize external parties to perform an audit. All custom source code owned by the Government of Canada must be released, subject to prescribed exemptions.
- Quality Assurance Requirements: Processes must be put in place to ensure that Automated Decision Systems are tested on an ongoing basis for unintended data biases and other factors that may unfairly impact outcomes. In addition, impacted individuals must be provided the option to challenge the automated decisions. Other quality assurance requirements may apply depending on the impact level determined by the Algorithmic Impact Assessment, including the following:
- peer reviews performed by qualified experts;
- the involvement of humans in the decision-making processes;
- additional training and documentation on the design and functionality of the Automated Decision System;
- contingency plans and backup systems in the event that the Automated Decision System becomes unavailable; and
- approval by the Deputy Head or the Treasury Board.
Implications of the Directive on service providers to government departments
While the onus to comply with the Directive falls on the government department making use of the Automated Decision System, there remain significant potential implications for organizations that provide products or services to such departments. These implications include the following:
- service providers may need to evaluate whether existing products or services fall under the scope of the Directive and evaluate steps to support compliance with the Directive by April 1, 2020, as the Directive does not contain exemptions for systems used in production prior to this date;
- service providers will need to consider the risk that a potential competitor may be engaged by the government to act as an expert to perform peer reviews;
- service providers will need to consider if they have the necessary licence rights (or, in the case of a SaaS provider, the operational ability) to provide the government with a self-contained software package necessary to satisfy the “software” requirements; and
- service providers will need to consider if they are able to satisfy the transparency and explainability requirements, particularly in respect of machine learning algorithms and other AI techniques.
The Directive represents a first step by the government in framing how it manages the risks of deploying Automated Decision Systems within its operations. As April 1, 2020 approaches, we expect both the Directive and the Algorithmic Impact Assessment to be refined to reflect the results of public workshops and consultations. We also anticipate that the private sector will look to the Directive as a potential model for how to develop, procure and implement Automated Decision Systems.