Canada's artificial intelligence legislation is here

June 28, 2022


On June 16, 2022, the Canadian federal government introduced Bill C-27, also known as the Digital Charter Implementation Act, 2022. If passed, this package of laws will:

  • Implement Canada’s first artificial intelligence (AI) legislation, the Artificial Intelligence and Data Act (AIDA).
  • Reform Canadian privacy law, replacing the Personal Information Protection and Electronic Documents Act with the Consumer Privacy Protection Act.
  • Establish a tribunal specific to privacy and data protection.

The AIDA establishes Canada-wide requirements for the design, development, use and provision of AI systems. The AIDA may have extra-territorial application where components of global AI systems are used, developed, designed or managed in Canada. As outlined in our previous post, the European Union's recently proposed Artificial Intelligence Act would also have some extra-territorial application. Multi-national companies should therefore consider developing a co-ordinated global compliance program.

The AIDA requirements are designed to protect Canadians from the harms and biased outputs AI systems are capable of generating. Significantly, the AIDA mandates impact assessments. If an AI system is assessed as “high-impact”, there are further requirements, including public disclosure. The AIDA authorises the Minister of Innovation, Science and Industry (the Minister) to order the production of records related to AI systems and to publish information about contraveners (in other words, naming and shaming).

Under the AIDA, certain conduct in relation to AI systems can result in administrative monetary penalties or even criminal offences. The AIDA regulates activities carried out in the course of international or inter-provincial trade and commerce and does not apply to government institutions. There is a separate directive in effect focused on the federal government’s use of AI in the context of automated decision-making systems.

AIDA requirements

The AIDA requires individuals and legal entities who are “responsible for” an AI system (meaning those who, in the course of international or inter-provincial trade and commerce, design, develop or make available for use the AI system or manage its operation) to:

  • Establish measures to manage anonymised data.
  • Conduct an impact assessment to determine if the AI system is “high-impact” (a threshold that will eventually be defined by regulations).
  • Maintain general records.

If an AI system is assessed as “high-impact,” persons responsible for the AI system must:

  • Develop a risk mitigation plan.
  • Monitor those risk mitigation measures.
  • To the extent the system is being used or being made available for use, publish a plain-language description of the system on a website, including the system’s intended use, types of content it generates, and mitigation measures in place.
  • To the extent the use of the system results or is likely to result in “material harm,” notify the Minister.
The Minister’s authority

Under the AIDA, the Minister may, by order, require:

  • The provision of information and records about an AI system.
  • An audit.
  • The adoption of measures to address anything referred to in an audit report.
  • The cessation of an AI system’s operation, should the Minister have “reasonable grounds” to believe it gives rise to a “serious risk of imminent harm”.
  • The publication of information about contraventions of the requirements in order to encourage compliance (this does not extend to “confidential business information”).
  • The sharing of information with other regulators and enforcers, such as the Privacy Commissioner or the Canadian Human Rights Commission, as appropriate.

The Minister may also designate an AI and Data Commissioner, whose role would be to assist and support the Minister in ensuring compliance with the AIDA's requirements.

Penalties and offences

The administrative monetary penalties regime has been largely left to the regulations to define. Notably, the stated purpose of these administrative penalties is to “promote compliance” and “not to punish.”

The AIDA establishes that it is a criminal offence to contravene any AIDA requirements or to obstruct or provide false or misleading information during an audit or investigation. Criminal offences can also arise from: (1) possessing or using personal information that was unlawfully obtained; and (2) making an AI system available for use, knowing or being reckless as to whether its use is likely to cause serious harm, where that harm in fact results.

Reflections and takeaways

Bill C-27 introduces Canada's first AI legislation and a novel AI regulatory framework. All businesses designing, developing, operating, licensing or selling AI systems in Canada in the context of international or inter-provincial trade and commerce will be expected to comply, and should assess their current activities against the proposed AIDA requirements. This is particularly important for those using or providing "high-impact" systems.

Some questions remain. While the AIDA is directed to “high-impact” systems and has a notification requirement triggered if there is a likelihood of “material harm,” these and other key terms are not yet defined. Further, the quantum of administrative penalties has been left to regulations.

Moreover, the AIDA sets out publication requirements, but it is unclear whether there will be a public register of high-impact AI systems and what level of technical detail about those systems will be available to the public. More clarity is expected to come through Bill C-27's second and third readings in the House of Commons and, if the bill is passed, through subsequent regulations.