European Commission President Ursula von der Leyen announced the new College of Commissioners that will form her team during her second term. EU observers are now looking ahead to the week of 4 Nov., when members of the European Parliament will hold a series of hearings to grill the Commissioners-designate.

The Parliament's role in the process is not simple rubber-stamping. During previous transitions, several Commissioners-designate were not confirmed, either because they were not convincing enough on the issues at hand or for grimmer reasons, such as a lack of decency or political games.

Hearings are also a good way to take stock of the mission letters each commissioner and executive vice-president received. Mission letters describe the overall approach von der Leyen wants to take, as well as the initiatives commissioners will be asked to launch and lead.

When it comes to digital governance, the list is long. Despite the many new pieces of legislation that have entered into force over the past five years, the Commission still sees opportunities to do more in the areas of artificial intelligence, cybersecurity, data sharing, industrial development supporting the digital economy and new technologies, online consumer protection, and enforcement.

In addition to individual initiatives, it is clear the European Commission will need to look at the interplay and compatibility of all these instruments — for instance, the future commissioner in charge of the justice portfolio will be tasked with ensuring the EU General Data Protection Regulation "remains in line with the digital transformation and responds to law enforcement and commercial needs."

It is also abundantly clear that the European Commission, and the broader EU apparatus, will continue to promote and export its regulatory model beyond its borders.

Elsewhere, September was a busy month for AI-related news in Brussels.

The Commission announced 4 Sept. that over 100 companies had signed the AI Pact's voluntary pledges. The AI Pact was launched in 2023, while the AI Act was still in the making, as a way for organizations to start their compliance journeys ahead of the regulation's entry into force. Signatories are asked to make three types of voluntary commitments: setting up an AI governance strategy to foster the uptake of AI in the organization and to work toward future compliance with the AI Act; identifying AI systems likely to be categorized as high-risk under the AI Act; and promoting AI literacy and awareness among staff, ensuring ethical and responsible AI development.
Over half of the signatories also committed to additional pledges, including "ensuring human oversight, mitigating risks, and transparently labelling certain types of AI-generated content, such as deepfakes."

In addition, the European Parliament explored an alternative AI liability framework by requesting a complementary impact assessment of the Commission's original proposal for an AI Liability Directive (AILD), presented in 2022 and stuck in limbo ever since.
The complementary impact assessment identifies key shortcomings in the initial impact assessment, not least an incomplete exploration of regulatory policy options and an abridged cost-benefit analysis, in particular of the strict liability regime.
The study proposes that the AILD extend its scope to include general-purpose and other high-impact AI systems, as well as software. It also discusses a mixed liability framework that balances fault-based and strict liability.
Notably, the study recommends transitioning from an AI-focused directive to a software liability regulation, to prevent market fragmentation and enhance clarity across the EU.  

Isabelle Roccia, CIPP/E, is the managing director, Europe, for the IAPP.