Elizabeth Sale, Adam Kardash, Wendy Gross, Simon Hodgett, Sam Ip
May 28, 2021
The increasing use of AI facial recognition technology for identification purposes is raising concerns about discriminatory bias and about risks to privacy, human rights and reputation. For organizations planning to use this technology, the best way to guard against these risks is to conduct expanded due diligence and to have a governance process and risk mitigation plan in place. This was the consensus of Osler’s multidisciplinary team of experts who presented during the “Legal, regulatory and commercial considerations in the use of AI facial technology” webinar held May 27.
The presenters were Elizabeth Sale, partner, Banking and Financial Services, who also served as the webinar’s moderator; Adam Kardash, partner, Privacy and Data Management; Wendy Gross, partner, Technology; Simon Hodgett, partner, Technology; Josh Fineblit, associate, Employment and Labour; and Sam Ip, associate, Technology.
In almost all circumstances, Canadian privacy regulatory authorities view biometric data, including facial biometric data, as sensitive personal information. The collection, use and processing of this type of information is seen as a potentially privacy-invasive activity, and facial recognition processes have also been the subject of significant scrutiny by privacy regulatory authorities.
When contracting for the use of facial recognition technology, typical due diligence questions will not address the unique risks this technology presents. Possible contractual and governance measures for managing AI facial technology risks include:
- representations and warranties regarding accuracy
- reporting requirements
- transparency and explainability requirements
- commitments to continuous improvements
- terms relating to data quality, sources and standards
- data use, rights and restrictions
- compliance with laws
- governance, including processes to address customer issues
- service levels
- risk allocation provisions
- insurance coverage