Like other aspects of the modern economy, privacy stands to be fundamentally transformed by the rapid development of generative artificial intelligence and algorithmic decision-making systems.

Across numerous sessions at the IAPP Canada Privacy Symposium 2023 last week, attendees focused on how organizations can build comprehensive AI governance frameworks, while keeping an eye on how the technology could be regulated in Canada, potentially through the proposed Artificial Intelligence and Data Act contained within the omnibus Bill C-27.

Transparency key to getting AI development right

Microsoft Canada Privacy, Risk and Compliance Director Jason Bero said AI first achieved "human parity" with its ability to recognize human speech in 2017. Just six years later, he said, the rapid evolution of creative generative AI models such as ChatGPT and image generator DALL-E is indicative of the seemingly limitless potential of the technology. 

Bero said Microsoft views AI as having the capacity to dramatically benefit broad sectors of society, such as in health care, where company officials said it could facilitate the development of a cure for Alzheimer’s Disease within five years. He said such advancements will not come without some initial drawbacks and will require technology companies to improve transparency.

"Transparency is probably the area that I think — not just us at Microsoft, but OpenAI, Google — we could probably be better on the transparency side," Bero said, adding that he expected global regulatory investigations to focus attention on transparency.

Microsoft's search engine Bing recently integrated OpenAI's ChatGPT to enhance search results. Bero said Bing's algorithm learns only from an individual user's queries to tailor results for that user, rather than training on the collective queries of all Bing users.

Governance a top area of concern

For companies beginning to deploy automated decision-making algorithms, ensuring those systems handle personal information in keeping with the broader organizational data governance structure is paramount.

Pentavare Chief Privacy Officer Robin Gould-Soil, CIPP/C, and CIBC AI Governance and Standards Senior Manager Tiffany Wong, CIPP/E, said organizations should implement protocols to ensure the algorithm does not run afoul of jurisdictional regulations and organizational values.

Wong said the top concern for organizations working to operationalize trustworthy AI-powered decision-making systems is accountability, in addition to transparency and fairness.

Wong said that while accountability is "quite familiar to privacy professionals," the difference with AI is that determining who is accountable is more complicated.

"Is it the data scientist that wrote one line of code?" Wong said. "Is it the corporate executives who signed off on its deployment, but didn't understand how it worked technically or how to do appropriate due diligence?"

Gould-Soil said privacy pros whose companies are deploying decision-making systems should have a solid grasp of the API their company utilizes, which will better help IT teams operationalize an automated decision-making system that instills the company's values into its output. Without a working knowledge of the company's API, she said, it would be difficult for a privacy pro to adequately evaluate how a decision-making algorithm processes personal data on the back end.

"You're going to have to be able to describe what you want to come through (your organizational) principles," Gould-Soil said.

Microsoft's Bero said, while utilizing generative AI may produce efficiency within organizations, privacy pros will still be tasked with ensuring the system protects personal information, even if they are relying on open-source AI systems like OpenAI. He said turning organizational data governance over to AI will eventually become a necessity due to the sheer volume of data generated across a company's departments, and privacy risks could be compounded if the company doesn't have organizational controls for the model they deploy. 

"Because (AI) is data, and data is often translated to records or data governance in that context, we have databases that have billions of records. We have files that are generated by bots so we can generate a lot more documents and do more with less," Bero said. "When you're using open-source AI to help process that data, there's still a responsibility of the organization that's processing it, (and there are) privacy risks that privacy professionals need to be aware of."

ARI Principal and Founder Serena Nath said it is critical that organizations not be intimidated by alarmist concerns over what could go wrong when implementing AI systems. She said stakeholders within organizations should all get on the same page before choosing a specific AI system.

"I've heard many people talk about AI as some monolithic technology, and it's not," Nath said. "To assess the technology in terms of individual tools is necessary because that is the foundation you can start to really assess the risks and the benefits of each individual tool."

Canadian AI regulation lags behind EU

With regard to Canada's first attempt to regulate AI, the Artificial Intelligence and Data Act within the proposed omnibus C-27 legislation, the general consensus among CPS presenters was that the law's proposed language leaves more questions than answers at this point. Compared with the EU's proposed Artificial Intelligence Act, several CPS presenters felt the AIDA does not contain nearly as many hard-law regulations.

The legislation passed its second reading in Parliament in April and is now before the House of Commons' Standing Committee on Industry and Technology.

Borden Ladner Gervais Partner and National Artificial Intelligence Leader Francois Joli-Coeur, CIPP/C, CIPP/E, CIPP/US, said the AIDA represents a "horizontal approach" to regulating AI, in the same vein as the EU AI Act, rather than a "vertical approach" in which each industry's use of AI is regulated by industry-tailored laws.

The AIDA regulates "essentially two types of activities," Joli-Coeur said. "The first category is with respect to the data that's actually used for the design, development and use of an AI system. The second category is about the organization involved in the AI system's lifecycle."

While the AIDA was drafted in the spirit of the EU AI Act, the general feeling at the conference was the AIDA is nowhere near as comprehensive and could leave Canadian businesses to fend for themselves without more definitive regulations. 

University of Ottawa Canada Information Law and Policy Research Chair Teresa Scassa said the AIDA was flawed from the start because it did not offer an opportunity for consultation from a large swath of stakeholders. If passed, Scassa said, the law creates an environment akin to the fill-in-the-blank game "Mad Libs" because the government in power would create the final regulations under the law without input from Parliament.

Scassa said Canada would be better served by a bill with more precise language proscribing certain types of dangerous activities AI systems could enable. She said there will still be regulatory lag in developing the various regulations under the AIDA if it is passed, and referenced the government's own proposed two-year implementation timeline. She also called for the government to produce "a set of principles" to help shape future AI regulations.

"The lack of consultation is a democratic problem, but it's also a problem with the substance of the law," Scassa said. "It's certainly the case that, with a technology that's quickly evolving, you don't want to bake in legislative principles or legislative rules that are not sufficiently flexible to adapt with the changing and evolving technology. There's nothing particularly agile about leaving things to regulation. Regulations take time to enact."

The Privacy Pro President and Principal Consultant Lauren Reid, CIPP/E, CIPP/US, CIPM, FIP, said responsible companies are finding ways to be proactive in "the absence of clarity" ahead of the pending AIDA legislation.

"You take the more risk-averse approach and try to think about everything that might happen," Reid said. "In the absence of regulation, that is resulting in better business outcomes."