Canadian Arbitration Blog

Using artificial intelligence in arbitration: the Silicon Valley Arbitration and Mediation Center’s guidelines

February 5, 2025

It has become apparent over the past few years that a wide array of industries are utilizing artificial intelligence (AI), the legal industry included. Indeed, the legal profession is increasingly embracing AI as a transformative force, and many legal technology companies are integrating AI into their products to make them faster and more precise. According to Thomson Reuters’ 2024 Future of Professionals Report, AI is transforming the legal profession by automating routine tasks and boosting lawyer productivity.

However, AI’s use in the adjudicative context continues to mature and remains somewhat controversial. To address some of these concerns and standardize the approach taken with respect to AI tools, the Silicon Valley Arbitration and Mediation Center published the Guidelines on the Use of Artificial Intelligence in Arbitration [PDF] (the Guidelines) in 2024, which introduce a principle-based framework for the use of AI tools in arbitration.

Guidelines on the use of AI in arbitration

The Guidelines can be used in domestic or international arbitrations and are meant to serve as a point of reference for arbitral institutions, arbitrators, parties and their representatives (including counsel), experts and, where relevant, other participants in the arbitral process. They are intended to guide rather than dictate and may be adopted, in whole or in part, in the arbitration agreement or by the parties and/or the tribunal at any time.

The Guidelines are organized into three chapters and comprise seven guidelines. The three chapters distinguish among

  1. guidelines that generally apply to all participants in the arbitration process
  2. guidelines that address specific uses of AI by parties and party representatives (including counsel)
  3. guidelines that address particular considerations that may arise when arbitrators use AI

Guidelines that generally apply to all participants in the arbitration process

Guideline 1 encourages all participants in the arbitration process to understand the uses, limitations and risks of AI applications, including by becoming familiar with an AI tool’s intended uses and by making reasonable efforts to understand each AI tool’s relevant limitations, biases and risks.

Guideline 2 focuses on safeguarding confidentiality, and states that

  1. all participants in international arbitration are responsible for ensuring that their use of AI tools is consistent with their legal obligations to safeguard confidential information
  2. only AI tools that adequately safeguard confidentiality should be used with confidential information
  3. where appropriate, participants should redact or anonymize materials submitted to an AI tool

Guideline 3 notes that while disclosure of the use of AI tools in connection with an arbitration is not necessary as a general matter, decisions regarding such disclosure should be made on a case-by-case basis.

Guidelines that address specific uses of AI by parties and party representatives (including counsel)

Guideline 4 discusses the duty of competence and diligence in the use of AI and requires party representatives to observe applicable ethical rules and professional standards of competence and diligence when using AI tools. Guideline 4 also requires parties to review the output of any AI tool they use in order to verify its accuracy. Parties and party representatives are responsible for any errors or inaccuracies in any output produced by an AI tool.

Guideline 5 encourages respect for the integrity of the proceedings and the evidence, requiring parties, party representatives and experts

  1. not to use AI in ways that affect the integrity of the arbitration or disrupt the conduct of the proceedings
  2. not to use AI to falsify evidence, compromise the authenticity of evidence or otherwise mislead the arbitral tribunal or the opposing party

Guidelines that address particular considerations that may arise when arbitrators use AI

Guideline 6 prohibits the delegation of decision-making responsibilities by an arbitrator to any AI tool. The use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law and the evidence.

Guideline 7 discusses respect for due process, and states that an arbitrator shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand. Further, where an AI tool cannot cite sources that can be independently verified, an arbitrator shall not assume that such sources exist or are characterized accurately by the AI tool.

The Guidelines also provide a model clause that can be incorporated into procedural orders to make the Guidelines applicable to all participants in a particular arbitration proceeding, which states:

The Tribunal and the parties agree that the Silicon Valley Arbitration & Mediation Center Guidelines on the Use of Artificial Intelligence in Arbitration (SVAMC AI Guidelines) shall apply as guiding principles to all participants in this arbitration proceeding.

Conclusion

The adoption of uniform principles on the use of AI in arbitral proceedings is a welcome development, particularly as arbitration commonly involves parties and laws from different jurisdictions. The non-binding nature of the Guidelines is also well suited to the arbitration context, as it allows arbitrators and parties to flexibly adapt the Guidelines’ application to the needs of the case (a hallmark of arbitration generally). It remains to be seen how Canadian law will evolve to address the use of AI in other adjudicative contexts, including in the court system.