A range of crucial points is becoming unavoidable in discussions around artificial intelligence legislation. Most glaring among them are the need to develop governance frameworks for the creation and deployment of AI, and the need to train a future workforce tasked with implementing those frameworks at each stage of an AI model's life cycle.

Addressing the House of Commons of Canada's Standing Committee on Industry and Technology 7 Dec., IAPP AI Governance Center Director Ashley Casovan said the proposed Artificial Intelligence and Data Act is drafted in a way that would require AI developers to keep pace with evolving global standards, helping prevent downstream harms as more AI technologies are introduced in the public and private sectors.

But most critical, Casovan said, will be filling the professional ranks with people who have the knowledge and expertise to implement the rules of the road for AI.

"The reality is that the general adoption of AI is still new, and these technologies are being used in diverse and innovative ways in almost every sector," Casovan said. "Creating perfect legislation that will address all the potential impacts of AI in one bill is difficult. Perhaps more importantly, we need professionals to put those rules into practice."

AIDA is one of three pieces of legislation contained within the omnibus Bill C-27, the Digital Charter Implementation Act. The other two are the Consumer Privacy Protection Act, which aims to modernize Canada's private sector privacy laws, and the Personal Information and Data Protection Tribunal Act, which would establish an appeals tribunal for decisions of the Office of the Privacy Commissioner.

In October, Minister of Innovation, Science and Industry François-Philippe Champagne introduced several amendments to the AIDA, including defining high-impact AI systems and evaluating potential harms caused by a high-impact system in the context of specific uses.

Bill C-27 has yet to be voted out of committee, and stakeholders are skeptical the bill will pass, as drafted, before the next federal election, which must be held by 2025.

Alexandre Shee, incoming co-chair of the Working Group on the Future of Work at the Global Partnership on Artificial Intelligence, said he supported passing the AIDA. However, he does not consider the legislation to be complete.

According to Shee, the AIDA does not account for the manual front-end work done by humans to compile and label data and to engineer the infrastructure underpinning an AI model. He cited a recent Wired investigation that found minors in Pakistan were manually uploading and labeling data to train an AI model for meager compensation.

He recommended the inclusion of a disclosure requirement on all data fed into a given AI model to eventually create ethical AI supply chains.

"The AIDA fails to address (three) key portions of the AI supply chain: data collection, annotation and engineering, which represents 80% of the work done in AI," Shee said. "No disclosure mechanism is put in place to ensure that Canadians are able to make informed decisions on the AI systems they choose to ensure that they're fair, high quality and respecting their rights."

Remaining witnesses were generally more critical of the AIDA as currently constituted.

McGill University AI Governance Professor of Practice Ana Brandusescu called for the AIDA to be stripped out of the C-27 package and overhauled. Her recommendations included adding language that would increase accountability for AI-related harms caused to individuals by both public and private sector uses of AI systems, and enshrining stronger worker protections.

Brandusescu said the AIDA would simply promote an AI-powered economic ecosystem that entrenches already established Big Tech giants.

"The AIDA is a missed opportunity for shared prosperity," Brandusescu said. "We need meaningful public participation. A strong legislative framework demands meaningful public participation, because participation will actually drive innovation, not slow it down, and the public will tell us what's right for Canada."

Digital Public Partner Bianca Wylie was the most vocal critic of the AIDA, advocating for scrapping the legislation and starting over from an "adaptive perspective." Such a foundation would rely on existing sectoral regulators to establish the laws and permitted uses under which AI systems can legally operate in each industrial silo.

Wylie said one aspect on which the AIDA especially misses the mark is the attempt to define "high-risk systems," because "harm is always contextual" and the law lacks the specificity to adequately remedy harms in any number of use contexts.

"From an adaptive perspective, we don't reinvent the world in the name of artificial intelligence," said Wylie, who specializes in public interest digital governance advocacy. "It's disrespectful to the existing status of a government, of a democracy and of accountability, so you at least start there (to reform AIDA)."

Should the AIDA become law in some form, Casovan said sector-specific laws for AI also need to be developed eventually to better account for harms in specific use contexts.

The core tenets of any AI regulation, according to Casovan, must include "one point of accountability" for AI developers and deployers, and a robust pre-market auditing standard that high-risk systems must meet before they can be sold commercially.

"What the AIDA does is it provides a framework that then is dependent on other types of sector-specific regulation," Casovan said. "The requirement of harmonization across different ministries, is really important. I would also flag the requirements of harmonization both within Canada, so (harmonized) provincially, as well as provincial to national government (harmonization) and with local government as well."