Nearly five years after the implementation of the EU General Data Protection Regulation, Europe is immersed in a digital market strategy that is giving rise to a host of new, interconnected regulations. Among this complexity resides the proposed Artificial Intelligence Act. Originally presented by the European Commission April 21, 2021, the AI Act is now in the hands of the Council of the European Union and the European Parliament.

The AI Act would be the first horizontal regulation of AI in the world. The proposal takes a risk-based approach across both the public and private sectors, whereby only high-risk systems would be regulated.

For Dragoș Tudorache, MEP and co-rapporteur for the AI Act, that means 80-90% of AI activities would not be regulated by the act. “To be honest,” Tudorache explained in an extensive interview in his office at the European Parliament, “that’s the only reasonable entry point by which to regulate AI. We see AI to be the driver of the new industrial revolution, so it makes sense that we only regulate the use of technology that brings about high risks.”

Topping the high-risk tier are systems that affect employment, credit and health care, among other areas. If an AI application is deemed high risk, organizations would be obligated to meet specific requirements, including conducting an impact assessment, keeping records and complying with transparency obligations. A list of banned uses — including biometric surveillance and predictive policing — is also on the table. More details on the substance of the proposed AI Act can be found in an analysis from the IAPP’s Jetty Tielemans.

AI Act state-of-play

The proposed AI Act is a bit of a moving target at the moment. The Council of the European Union, under the current Czech presidency, released its “draft general approach” to the AI Act Nov. 11 and plans to present it to the Telecommunications Council Dec. 6.

The European Parliament still has more work to do, but it aims to vote on the text sometime in the first quarter of 2023. The parliamentary discussions are led by two committees and their co-rapporteurs: the Committee on Internal Market and Consumer Protection, led by Italian MEP Brando Benifei, and the Committee on Civil Liberties, Justice and Home Affairs, led by Romanian MEP Dragoș Tudorache.

During a panel session at the IAPP Europe Data Protection Congress in Brussels, Belgium, Benifei said it’s rare in parliamentary procedure to have co-rapporteurs, but “that’s because AI is so complex.” He said they are “keen on keeping a balance” among the concerns of a wide range of stakeholders in business, government, academia and civil society. Benifei also said having co-rapporteurs is helpful because their respective political groups are the second- and third-largest in Parliament. “We’re almost a majority when we’re together,” he said.

Points of contention

The European Parliament is still working through a slew of amendments — it received more than 3,300, a few hundred more than the GDPR drew during its drafting — and some areas clearly remain unsettled.

Near the top of that list resides the very definition of AI. For Kai Zenner, head of office for German MEP Axel Voss, a narrow definition of AI is a red line. He said that if the definition becomes too broad, it will increase overlap with existing regulation, and not just in the data protection space.

On the other end of the spectrum, Access Now EU Policy Analyst Caterina Rodelli said a narrow definition of AI “is problematic and would lead to legal uncertainty.” The group argues that a narrow definition excludes too many uses of AI that could be harmful to humans, effectively making it a “paper tiger.”

In considering the scope and definition of AI, however, Tudorache also said he wants to avoid a regulation that captures too much. He said “there’s a good majority behind this in parliament,” and that it is important the definition not become so broad that it covers virtually all software. “There are many complex softwares out there that we believe are AI but really are not AI. We all agree that we should not use too generic a label on AI.”

Benifei said MEPs are also working on the details of the proposal’s scope — for example, what is excluded. He said they are deciding how much research and development would be in and out of scope, and defining high-risk AI is another area they’re working to clarify.

Tudorache explained that MEPs are also hammering out the definition and measurement of the AI Act’s underlying risk pyramid. He said lawmakers on the left side of the political spectrum are calling for things like fundamental rights impact assessments, while lawmakers on the right find relevance in listing out the benefits of an AI solution when conducting risk assessments. “Though they’re coming at this from different ideological standpoints,” he said, “they still see it as risk-based.”

“We are at the start of this conversation,” Tudorache said, “and I anticipate we will spend some time on this.”

Zenner said he and Voss are fine with the construct of the risk pyramid, but problems start when product safety is involved. The European Commission merged fundamental rights and consumer protection obligations in its AI Act proposal. Zenner points out that, for certain product uses deemed high risk, the resulting obligations, such as human oversight, could themselves be dangerous. For example, he cites advanced eye surgery that leverages AI to precisely guide an operation. This use would fall under the high-risk category, but, Zenner argues, human oversight in this case — inserting a doctor to oversee the AI — could put the patient’s health at risk, as the AI system performs something more precise than a doctor could.

DIGITALEUROPE Director for Infrastructure, Privacy and Security Policy Alberto Di Felice, CIPP/E, expressed similar concerns with the merging of product safety and fundamental rights. He noted that elevators, for example, are classified as high-risk systems, though they pose no high risk to fundamental rights.

Di Felice also expressed concerns about the lack of a generally recognized global standard for AI. “There are no standards available anywhere,” he said. “They’re being developed while we’re writing the law.” He said this is scary for industry, as it will have only two years to comply with the AI Act after it becomes law. “In two years, there’s going to be zero harmonized standards,” he warned, saying this will put industry in a difficult position, especially as other regulations, like the Digital Markets Act, Data Governance Act, Digital Services Act and Data Act, all come into effect.

Will there be an EU AI board?

The enforcement mechanism under the proposed AI Act is another area currently under negotiation. Though Benifei said he could not go into details about the powers of an AI board or agency, he said, “We are going toward a stronger institution,” one that “will be a stronger entity than the one conceived in the original text from the European Commission. I’m convinced this will stay in the end and that the Council will agree to a stronger board,” though he doubted the Parliament would support a “full-fledged agency.”

Benifei also noted the European Commission wants to clarify its own role here. He said it would likely be granted powers in cases of “widespread infringements and cross-border infringements where there is inaction by national authorities, similar to the approach seen in the DSA.”

Tudorache acknowledged existing regulatory authorities are already resource-strapped, and a new AI Act will also require a deep layer of expertise. “This will be a huge challenge,” he said. That’s why he believes there should be a centralized EU-level place to oversee the governance of the AI Act. He said it will require more than an EU AI board or Commission-level board. Rather, he calls for a “more reinforced board with a secretariat at the EU level.”

Sandboxes

A key attribute for Tudorache in the governance and enforcement of the AI Act is the use of regulatory sandboxes. He believes each member state should have at least one sandbox, but he would also like to see sandboxes as close to the innovators as possible, meaning they could be useful at the EU, regional, member state and municipal levels, depending on the system being tested.

Tudorache has also paid attention to enforcement of the GDPR. He has “peered over the fence” to look at what the European Data Protection Supervisor and European Data Protection Board have been doing, as they’ve been resource-strapped as well. To help mitigate any bottlenecks, the EDPB rolled out an enhanced enforcement strategy earlier this year, aimed at gaining more expertise so it can share best practice more widely. Tudorache believes that best practice and expertise — extracted from various sandboxes — could be instrumental in helping the broader governance of the AI Act.

“In my mind, the governance model would combine the attributes of an EDPB and EDPS,” Tudorache said. The AI framework, he added, should have a structure that shares expertise and allows for interaction with stakeholders. “Whenever the board wants to find the impact of a potential decision, they can go to the stakeholders and ask,” he said.

Benifei also stressed his support for embedding more stakeholder consultation within the proposed AI Act. “One thing we are trying to do is to strengthen the stakeholders’ involvement, from business, academia and civil society. We are entrenching their presence and consultation in various steps for updating the legislation, for establishing the governance, and for enforcement.”

DIGITALEUROPE’s Di Felice said sandboxes “have been a relatively neglected part of the proposal. There are questions about whether there is any value here.” He said industry wants more protection from liability when using a sandbox. “That needs to be clear. If I put myself in the hands of regulators and I’m exposing myself, at least be kind to me.”

Sandboxes may also have an interesting overlap with the GDPR. Though Tudorache said there are no attempts to “chip away at or add to the GDPR,” one area under discussion is whether rules could be “bent” when in a sandbox. So, for example, could personal data be processed in an AI system under the protections of a sandbox? The answer to that is still under negotiation.

Biometric surveillance and national security — there’s a chasm

One clear area of divergence between the European Parliament and the Council involves surveillance in public spaces. Civil society organizations, like Access Now and EDRi, are pushing to have real-time biometric surveillance of individuals in public spaces banned outright.

At the moment, the European Parliament is on board with civil society. Benifei was clear that Parliament will strengthen the ban on surveillance through cameras and so-called predictive policing, noting the “Council is going in the opposite direction. These will be tough negotiations, for sure,” he said.

For Access Now’s Rodelli, the ban on remote biometric identification schemes must have “no exceptions.”

She also highlighted the importance of fundamental rights impact assessments, not only for developers of AI systems, but for users (or deployers) as well. “It would be unfair to ask providers to take care of all the compliance aspects,” she said. A user would conduct a FRIA prior to deployment to assess the system’s intended purpose, geographic scope and impact on people, particularly those with disabilities.

Next steps

For Zenner, the timeline ahead will be tight. Parliament clearly still has work to do, but Zenner expects the committees to vote on the proposal by early-to-mid March and the full plenary to vote by the end of that month. That would mean the trilogue process with the Council would commence in April. “We would need to finish by the end of 2023,” he said. That’s because the next parliamentary election takes place in May 2024. If the AI Act isn’t on the books by then, it could face a similar purgatorial fate to that of the proposed ePrivacy Regulation.

Reflecting on the AI Act, Zenner said it’s exciting “because it’s the first time we are trying to regulate this area. So, it’s a shot in the dark.” He gives it a 50-50 chance at the moment, saying “we could either create something that is terrible or something that espouses a lighthouse effect. If we create a system that has a fair share of the burden between big and small players, we could fix things with the AI Act and put a nice standard approach to it around the globe.”

But for Zenner, recognizing other global standards and forging strong cooperation with international partners will be key. If the EU creates “a regulation that features strange approaches for other countries to implement a similar law,” Zenner said, it won’t work.

Tudorache also recognizes the importance of forging international cooperation and standards. “I want to make sure we aim for convergence on trans-Atlantic rules,” he said. “Politically, we all understand the sense of urgency and the geopolitics of AI these days. This can be an enabler for our trans-Atlantic relationship to grow. We will likely have different norms, but as long as we use the same principles and work on standards, we will see more alignment.”