Authors
Partner, Employment and Labour, Toronto
Partner, Technology, Toronto
Associate, Employment and Labour, Toronto
Articling Student, Toronto
Artificial intelligence (AI) tools are changing how employers make decisions and manage their workforces. As AI becomes increasingly integrated into the day-to-day operations of the modern workplace (e.g., recruitment, performance management and workplace investigations), employers in Ontario should be aware of their legal obligations, best practices and the potential risks.
In the employment context, what makes AI unique is that it generates probabilistic outputs without revealing how those conclusions were reached. This opacity is giving rise to familiar issues in unfamiliar forms, including with respect to bias, privacy, fairness and accountability.
Against this backdrop, Ontario has begun to take incremental steps towards regulating how AI is used in the workplace.
AI under the Ontario ESA
Unlike federally regulated employers and provincially regulated employers in British Columbia, Alberta and Québec, Ontario does not currently have private sector privacy legislation. However, the Ontario Employment Standards Act, 2000 (the ESA) includes requirements that relate or potentially relate to the use of AI in the workplace:
- Employers with 25 or more employees are required to maintain a written policy with respect to the electronic monitoring of employees. If an employer uses AI to monitor employees (which could include, among other things, monitoring productivity, performance, attendance, communications or internet activity), the policy must describe how and under what circumstances the employer conducts such monitoring and the purposes for which the information collected may be used by the employer. (See the November 2022 Osler Update, “Ontario’s electronic monitoring policy now in effect,” for more information.)
- Effective January 1, 2026, employers who use AI to screen, assess or select job applicants must disclose the use of AI in publicly advertised job postings. (See the December 2024 blog, “Working for Workers Four: ‘artificial intelligence’ disclosure requirement,” for more information.)
Together, these requirements signal that Ontario’s employment law framework is beginning to recognize and regulate practical ways in which AI intersects with the workplace, even in the absence of (or in addition to) dedicated AI or privacy legislation.
Accuracy and fairness matter: human oversight is recommended
At common law, employers owe employees an implied duty of good faith, which includes making employment decisions based on accurate and complete information. AI does not change this duty itself, but it changes how the duty can be compromised. As AI becomes embedded in tools used by employers, there is a growing potential for those systems to distort, omit or misrepresent the information upon which human decision-makers rely. Employers should therefore be alert to potential inaccuracies or omissions in AI outputs to ensure the tool is being used fairly in the workplace.
For example, an employer may use an AI tool to assist with recording, summarizing and analyzing an interview with an employee during the course of a workplace investigation. The AI output may contain errors based on an incorrect interpretation of slang, tone or idiomatic expressions, resulting in summaries that are inaccurate or misleading. Further, the AI output could fail to capture relevant non-verbal information, such as the employee’s tone, facial expressions or general demeanour, all of which may be critical in an assessment of the employee’s remorse for misconduct or the employee’s credibility. These errors could also undermine fairness or reduce the evidentiary value of such outputs in the event they are introduced as evidence in potential litigation. For these and other reasons, AI outputs should be reviewed by an informed human decision-maker (e.g., in the prior example, by the investigator who conducted the interview) before they are used to inform any employment decision or record.
Compliance with internal policies and contractual obligations
Employers should ensure their use of AI is consistent with existing internal policies, employment contracts and collective agreements (if applicable). If it is not, the use of AI could be subject to challenge, even if it otherwise aligns with legal best practices.
Before implementing AI systems, employers should review these documents to confirm that the introduction or use of AI does not conflict with existing rights, obligations or processes, particularly where AI may influence monitoring or evaluation. Employers should consult with legal counsel to discuss any risks before changing existing policy, procedures or contractual terms.
In particular, employers with unionized employees may be subject to grievances over whether the use of AI in the workplace is reasonable. For example, if AI tools are introduced to monitor productivity or evaluate performance, unions may argue these AI systems amount to increased surveillance, which could be viewed as infringing privacy rights, particularly given that arbitral jurisprudence in the unionized context has recognized employees’ right to privacy. If AI is intended to be used in a way that could implicate employees’ privacy, employers should proactively consult with legal counsel before deployment.
AI and human rights
Employers must not use AI in a way that contravenes the Ontario Human Rights Code, whether directly or indirectly.
AI systems are only as reliable as the datasets on which they are trained, which means bias or discrimination can arise at many points in the training process, including where datasets are incomplete or themselves contain bias. For example, a resume screening tool might inadvertently favour candidates listing traditionally gender-specific or demographic-specific work experiences, or an application sorting program may inadvertently rank candidates lower due to language patterns associated with a prohibited ground of discrimination.
In November 2024, the Ontario Human Rights Commission (the Commission) published the Human Rights AI Impact Assessment (HRIA) [PDF]. The HRIA is a practical, question-based framework to help organizations identify, assess and mitigate human rights risks across the AI lifecycle. This framework emphasizes that consideration of possible discrimination and bias in AI systems should be integrated into every stage of AI design and implementation. The HRIA further emphasizes the importance of reviewing AI systems for discriminatory effects, even where a system is seemingly neutral on its face. Employers should exercise caution when relying on AI-generated information and be alert to the possibility of inherent bias in the AI system. Incorporating tools like the HRIA into procurement and governance processes also helps build trust with employees and the public that AI is used responsibly.
Other best practices and takeaways
The risks of deploying AI in the workplace are real, but manageable. Employers can meaningfully reduce legal and reputational exposure by embedding responsible AI practices into procurement, governance and day-to-day operations, including by:
- Beginning with procurement: Understand the AI tools you are acquiring, including what data they use, how they generate outputs, and what steps vendors have taken to test for bias, accuracy and security. Procurement is the best stage to establish accountability and set expectations for responsible AI use.
- Enhancing AI governance: Oversight of AI should not rest solely with IT or legal teams. HR plays a central role in understanding how AI systems affect employees, privacy and fairness. HR should be involved in both governance and procurement decisions to ensure tools align with organizational values and employment obligations.
- Adopting controls: Update existing policies on monitoring, privacy and data use to explicitly address AI. Maintain an internal inventory of AI systems, assign clear accountability for oversight, and regularly review AI outputs for accuracy, bias and fairness.
- Training: Even the best policies, controls and processes are only as effective as the people applying them. Provide ongoing training for HR professionals and teams on how AI tools work, their limitations, and best practices.
Together, these practices signal to employees, unions and regulators that the organization is approaching AI adoption with accountability.
Conclusion
AI systems should be used as a complement to human judgment, rather than as a substitute. While AI can improve efficiency, it lacks the contextual understanding and judgment that human decision-makers bring to the workplace.
Employers should treat AI as one component within a broader decision-making toolkit and ensure that important employment decisions, particularly those involving decisions that are or could be adverse to employees, involve meaningful human oversight. HR professionals, who tend to be the internal stakeholders most familiar with internal policies, workplace culture and compliance requirements, should have a “seat at the table” when it comes to procurement and implementation of AI systems that could affect the employer’s potential risks and liabilities.