With the EU Artificial Intelligence Act now officially in force, developers and deployers face a six-month timeline to ensure none of their AI uses falls into the act's unacceptable risk category.
Six months may seem like a long time, especially considering the regulation was formally adopted nearly two and a half months before the 1 Aug. entry into force. And for the majority of AI uses, the lead-in is even longer. General-purpose AI rules come online in August 2025, while those who use “high-risk” AI — the area the act focuses on most — have another two years to come into compliance.
The European Commission estimates the majority of AI use in the EU falls into the limited or minimal risk categories, meaning those uses face little or no additional legal requirements. But the reliance on forthcoming guidance around some types of AI, as well as the daunting task of educating institutions about which rules apply to which types of AI, has some stakeholders nervous about reaching compliance.
Pressure points
While deployers and developers can take some concrete steps now to prepare for each subsequent deadline, some industry leaders say there is uncertainty around how the AI Act fits into the EU's regulatory framework and are calling for governing bodies to work together and move quickly to nail down the details well ahead of the compliance deadlines.
"There has been an intensification of the conversation on where the AI Act fits into this puzzle between the practices that companies are already undertaking and also the obligations that companies are undertaking under different legal frameworks," said Marco Leto Barone, the senior manager of policy for the Information Technology Industry Council, which counts Microsoft, IBM and Anthropic among its members.
Barone said the areas where members are most focused include how the act interacts with the EU General Data Protection Regulation; the extent to which AI falls under the GDPR has already been challenged in some high-profile cases. How new rules set by the Cyber Resilience Act apply to AI is also an area the ITIC is pushing regulators to examine closely, he said.
ITIC argues in its EU AI policy priorities that member states and the Commission need to coordinate on implementation requirements and curtail any regulatory overlap to give the AI industry a sense of predictability. It also says standing up governing bodies like the AI Office, nailing down definitions of what constitutes AI and writing guidance that relies on international standards need to be done as quickly as possible.
And while the act allows member states some discretion around who should oversee implementing its rules, Barone said there needs to be uniformity in how those rules are interpreted to create a "single market" for AI operators.
"If something is allowed in Spain, it should be allowed in Sweden," he said.
Implementation oversight
The European Data Protection Board opined that data protection authorities are best suited to the task of implementing the act. But who within those departments should take the lead is another story.
Spain, for instance, approved the creation of the Spanish Artificial Intelligence Supervisory Agency as part of its AI strategy. Italy proposed an AI law which would set up two new authorities to handle innovation and cybersecurity, although its local DPA, the Garante, has said it would be able to handle the task.
Others are adapting existing authorities. Denmark said in April the Danish Agency for Digitalisation would be its coordinating body. In Germany, the federal accreditation body, Deutsche Akkreditierungsstelle, will serve as the notifying authority and the federal network agency, Bundesnetzagentur, will serve as the market surveillance authority. France has yet to designate an authority, but its data protection regulator, the Commission nationale de l'informatique et des libertés, has published a detailed breakdown of how the AI Act will work.
European Data Protection Supervisor Secretary-General Leonardo Cervera Navas said he plans to call a meeting in September to bring together member states' current AI handlers to talk about the best path forward to ensure those with the right knowledge oversee regulation.
"Some people think it should be a (data protection officer), others think it's their heads of IT, others think it’s their departments of digital innovation," he said. "So we are trying to clarify how those networks should work."
Navas' office is tasked with overseeing the implementation of the law in EU institutions and can administer fines under certain circumstances. His office faces a tough task: he anticipates working with a network of about 80 people from various member states who work with AI at the ground level, in addition to those who are tasked with enforcing the law at the EU level.
Adding to the anxiety is uncertainty about how much manpower Navas will receive to do his job. He said he is working with the European Commission to secure "minimal" additional resources to start implementation, but it will not be until November, when the European Parliament adopts its 2025 budget, that Navas will know how many additional people he can hire to implement the act.
"We will recycle some colleagues from the data protection pool, because it's very difficult to find out of the blue AI experts," he said. "But we also need completely new colleagues with completely different expertise."
That means people with market surveillance and product safety backgrounds, as well as those versed in running AI testing grounds and engineering AI systems, Navas said. Those working on AI will be kept separate from the EDPS' data protection divisions to draw clear lines between when the authority is acting as a DPA versus as an AI regulator, he said.
Navas added that more substantive guidance will likely come as the first deadlines approach, noting the EU AI Board — one of the stakeholders charged with setting standards for the AI Act at the EU level — will not meet until September. For now, he urged organizations to get familiar with the act's timeline, figure out who will oversee compliance and start recruiting now if that person is not immediately clear.
Compliance starting point
For some businesses, understanding how to comply with the act begins with knowing which types of AI they employ.
Multinational biotechnology firm Roche has a varied AI portfolio that raises questions around AI Act applicability. Its offerings include high-risk medical device software, AI used in research that is exempt under the regulation's development clauses, and minimal risk AI used in lifestyle applications.
"As you can imagine, there are also shades of grey between these categories," Roche Global Head of Digital Health and Innovation Policy Johan Ordish said. "The challenge being to comply but not paint all AI systems with the same brush."
Ordish indicated the company will be looking at all its AI uses to determine where they fall under the act and paying closer attention to where use cases interact with sectoral regulations.
Roche already follows a series of international standards, including ISO 13485 for quality management systems, ISO 14971 for risk management and IEC 62304 for software life cycle management. While these will be helpful in preparing for compliance, Ordish said perhaps the biggest task the company faces for now is getting everyone on board with the AI Act's literacy requirements.
"It's worth paying attention to this provision early on and developing a strategy to meet it," he said, noting future streamlined adoption may be easier down the road through this early focus.
Caitlin Andrews is a staff writer for the IAPP.