M&A transactions involving AI companies: due diligence considerations

Jul 26, 2024

Artificial intelligence (AI) has become a strategic imperative for organizations seeking a competitive edge, leading to a surge in AI-related mergers and acquisitions (M&A). In 2023, AI companies accounted for 15% of all venture capital financings, underscoring significant investment interest in this sector. This rapid growth and investment in AI technologies have drawn the attention of regulators striving to keep pace.

The Canadian government has taken steps to regulate AI governance with the proposal of the Artificial Intelligence and Data Act (AIDA) as Part 3 of Bill C-27. This bill aims to establish a robust national framework for AI regulation, emphasizing innovation and the protection of individual rights. The Standing Committee on Industry and Technology has paused its study of Bill C-27 and intends to continue its clause-by-clause review starting September 16, 2024. To bridge the gap until the AIDA comes into force, Innovation, Science and Economic Development Canada has implemented a Voluntary Code of Conduct for the Responsible Development of Advanced Generative AI Systems.

In the United States, there has been similar interest in regulating AI at the federal level. The White House demonstrated such interest by releasing a non-binding Blueprint for an AI Bill of Rights in October 2022. Additionally, in May 2024, the Council of the European Union advanced the bloc's regulatory efforts by approving the Artificial Intelligence Act. This act builds on the existing General Data Protection Regulation and introduces a comprehensive, risk-based approach to AI governance within the economic bloc. Beyond these regional and continental efforts, international standards such as ISO/IEC 42001 provide guidelines for standardized AI management systems, ensuring consistency and coherence in AI governance globally.

These varied and rapid advancements in regulation, financing and technology underscore the increasing complexity of AI-related M&A transactions, a topic recently explored in depth by Osler partners Sam Ip (Technology) and Sophie Amyot (Corporate) in their webinar on key issues in M&A for AI companies.

The role of AI in the company

An effective AI due diligence process begins with accurately assessing the extent to which AI plays a role in the target company. AI companies generally use machine learning as a foundational element of their products and services, whereas traditional software companies tend to develop software based on established programming paradigms without the primary use of AI technologies. The difference between the two lies in their core technology and architecture.

Machine learning, a subset of AI, involves self-learning algorithms and techniques enabling computers to improve performance through experience. These algorithms learn from data, identify patterns, and make decisions with minimal human intervention. This iterative process enhances efficiency and cost-effectiveness across various applications. However, this process also introduces unique risks requiring careful consideration, such as biases present in training data leading to unfair outcomes, or the accuracy of AI models being affected by the quality of data used in training.
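To make the data-quality and bias point concrete, the short sketch below (not drawn from the article or the webinar; every name, figure and dataset in it is hypothetical) shows how a simple self-learning model trained on historically skewed data can reproduce that skew in its outputs, which is precisely the kind of risk a diligence review of training data is meant to surface.

```python
# Minimal illustrative sketch (hypothetical data): a model trained on biased
# historical decisions learns to reproduce the bias, even when the underlying
# applicant populations are identical.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two applicant groups, A (0) and B (1), with the same true "score" distribution.
group = rng.integers(0, 2, size=n)
score = rng.normal(0.0, 1.0, size=n)

# Historical labels used for training are skewed: group B was approved less
# often at the same score. This stands in for poor-quality or biased training data.
label = (score + rng.normal(0.0, 0.3, n) - 0.8 * group > 0).astype(float)

# Train a simple logistic regression by gradient descent on (score, group, bias term).
X = np.column_stack([score, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - label) / n  # gradient step on the log-loss

# The trained model assigns different approval probabilities to identical applicants.
test_score = 0.5
p_a = 1.0 / (1.0 + np.exp(-(w @ np.array([test_score, 0.0, 1.0]))))
p_b = 1.0 / (1.0 + np.exp(-(w @ np.array([test_score, 1.0, 1.0]))))
print(f"Approval probability at the same score -- group A: {p_a:.2f}, group B: {p_b:.2f}")
```

The sketch is deliberately simplified, but it illustrates why diligence on an AI company looks at the provenance and quality of training data, not just at the code that consumes it.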

These distinctive features, coupled with a rapidly evolving regulatory landscape, render traditional software M&A evaluation methods inadequate for assessing the full spectrum of value, risks and opportunities inherent in companies that utilize AI technologies. A tailored approach to due diligence is necessary to account for the specific challenges and benefits associated with AI integration.

Due diligence scoping: key differences in AI M&A

When acquiring a technology company, the due diligence process differs depending on whether the target is a traditional software company or an AI company. For traditional software companies, due diligence typically centers on evaluating the source code, with a strong emphasis on considerations such as functionality, performance, and user experience. In contrast, due diligence for AI companies requires a parallel and comprehensive examination of back-end processes, including data sourcing, manipulation, training and usage. AI systems are often the digital crown jewels of an AI company, and the assessment of such systems can necessitate a different approach to diligence focused on:

  • data quality and compliance. Assess data used to train, fine-tune, and test AI models and algorithms, ensuring it is of high quality, legally obtained, and in compliance with regulations, including privacy regulations.
  • core AI models and related technology. Evaluate the AI models underpinning the company’s core products and services, including the related technology used to deliver such models effectively.
  • AI and compute infrastructure. Examine the target company’s AI and compute infrastructure, including its capability to support the scaling of the business.
  • AI governance framework. Review the target company’s AI governance framework throughout the AI lifecycle, including its processes for the responsible development, deployment and use of AI and for mitigating related risks, such as those relating to accuracy, bias and transparency.
  • development approach. Review the target company’s approach to development of AI-related products and services, including the practices and methodologies employed.
  • key AI personnel. Assess the expertise and retention strategies of the target’s key AI personnel, particularly those in critical roles, including any relationships such personnel may have with other institutions and third-party organizations (e.g., universities or AI research institutions).

Common areas of risk with AI companies

Identifying common risks associated with AI technologies is essential during due diligence. While additional risks may arise based on the specific context of the business, some common areas of risk include:

  • ownership uncertainty of models. There may be uncertainty regarding the target company’s exclusive ownership of AI models, algorithms, and related technology.
  • data rights and authority. The target company might lack the necessary rights or lawful authority over the data used to train, test, develop or deploy its AI models, often because it lacks the necessary agreements to use such data.
  • foundational model reliance. The target company may rely on foundational models without having the necessary rights to use them in the context of its business or products (e.g., if using the LLaMa model, exceeding usage limits under the community license).
  • open-source risks. The target company’s use of open-source AI software, open data or open models may be subject to viral licenses that encumber the company’s ownership of its AI models, algorithms and related technology.
  • high-risk AI applications. The target company may use AI in higher-risk areas without robust validation processes, including human oversight, to verify the accuracy of AI outputs.
  • inadequate development practices. The target company may have inadequate AI development practices, including the absence of accountability frameworks, risk mitigation approaches, and explainability measures.
  • regulatory non-compliance. The target company may not be compliant with, or prepared to comply with, forthcoming AI regulations.

By focusing on these key risk areas, acquirers can better mitigate potential issues and gain a clearer understanding of the true value and challenges of their investment.

Conclusion

Given the unique characteristics and inherent risks of machine learning, a specialized approach to M&A due diligence is essential, extending beyond traditional software evaluation methods. Acquirers must focus on critical areas such as data integrity, model robustness, ethical design processes, and the expertise of specialized personnel to better assess the value and potential risks of AI companies. Sellers, on the other hand, must be prepared to demonstrate the robustness and ethical soundness of their AI systems to attract and secure potential buyers.

As the regulatory landscape continues to evolve, it is crucial to consider how the distinctive aspects of AI companies may impact transaction structures and contractual protections. This encompasses not just due diligence, but also key transaction elements such as representations and warranties, indemnification clauses, closing conditions, and post-closing commitments. By taking these factors into account, acquirers can safeguard themselves effectively and capitalize on the advantages offered by AI-driven acquisitions.