By Kai Zenner, Head of Office, MEP Axel Voss; Cornelia Kutterer, Managing Director, Considerati
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.
The (vice) chairs’ expertise and vision will be crucial in guiding GPAI rules for the future and ensuring that the European path to trustworthy AI endures, Kai Zenner and Cornelia Kutterer write.
Within the next three weeks, the AI Office will likely appoint an important group of external experts who will shape the implementation of a key part of the EU AI Act: the chairs and vice-chairs of the Code of Practice for General-Purpose AI (GPAI) models.
To provide some background: the rise of generative AI, including popular applications such as OpenAI’s ChatGPT, was not only economically disruptive; it also created a political nail-biter at the end of the AI Act trilogue negotiations.
Member states such as France, Germany and Italy were concerned that a regulatory intervention at the foundation of the AI stack was premature and would curb EU start-ups such as Mistral or Aleph Alpha, although — one may recall — France managed to flip-flop a few times before landing on this stance.
The opposite was true for the European Parliament. Concerned about market concentration and potential fundamental rights violations, it proposed a comprehensive legal framework for generative AI or, as it was baptised in the final law, GPAI models.
Faced with such opposing views, the EU co-legislators opted for a third way: a co-regulatory approach that specifies the obligations of GPAI model providers in codes of practice and technical standards.
It was particularly Commissioner Thierry Breton who suggested using this instrument, borrowing a page from the 2022 Code of Practice on Disinformation.
Beneath the governance approach lie core similarities that make flexible codes particularly appropriate for AI safety: the fast-evolving technology, socio-technical values, and the complexities of content policies and moderation decisions.
Not everyone agrees, however, that codes are the appropriate regulatory instrument. Critics point to the risk of companies committing merely to the minimum and doing too little, too late.
That, at least, was the impression many auditors had of the initial 2018 version of the EU’s Code of Practice on Disinformation.
After a stern review, the Commission’s disinformation team pushed companies to do better, brought civil society to the table, and strong-armed participants into appointing an independent academic to chair the process.
Technically feasible and innovation-friendly
The good news, coming back to the upcoming appointment of (vice) chairs in mid-September, is that the AI Office has used that specific experience as a blueprint for its co-regulatory approach to GPAI.
On 30 June, it proposed a sound governance system for drafting the GPAI Code of Practice by means of four working groups.
All interested stakeholders are thereby given multiple opportunities to contribute to and shape the final text, in particular via a public consultation and three plenary sessions. GPAI companies will still dominate the drafting process, as they are invited to additional workshops.
Nor are they required to adhere to the final outcome, as the codes are voluntary.
Looking back at the Code on Disinformation, it is therefore fair to say that the independence criteria for the (vice) chairs will be crucial for safeguarding the credibility and proper balance of the drafting process.
The appointed individuals will have a lot of influence, as they are the de facto pen holders responsible for drafting texts and chairing the four working groups.
An additional ninth chair could even take on a coordinating role. Together, they should aim to strike the right balance between rules ambitious enough to address systemic risks and obligations that remain technically feasible and innovation-friendly.
Their goal should be a GPAI Code that reflects a pragmatic interpretation of the state of the art. To ensure the highest quality, the AI Office should select the (vice) chairs on merit: strong technical, socio-technical, or governance expertise on GPAI models, combined with practical experience in running committee work at the European or international level.
A choice of paramount importance
The selection process will be challenging. AI safety is a nascent and evolving research field marked by trial and error.
The AI Office must navigate a diverse array of professional backgrounds, balance a vast number of vested interests, and adhere to the EU’s typical considerations of country and gender diversity, all while acknowledging that many leading AI safety experts are based outside the EU.
Naturally, the GPAI Code should focus on EU values, and it is important to ensure strong EU representation among the (vice) chairs. However, the code has global significance, the AI Act requires international approaches to be taken into account, and numerous esteemed international experts have expressed interest in these roles.
It would therefore also be a win for the EU to appoint a significant number of internationally renowned experts as chairs or vice-chairs. Such a step would make a successful outcome more likely, strengthen the legitimacy of the code throughout the process, and make it easier for non-EU companies to align with it.
In conclusion, the selection of the (vice) chairs for the GPAI Code is of paramount importance at this stage.
Their leadership will set the tone for how the co-regulatory exercise evolves over time, especially as it navigates complex socio-technical challenges and sensitive policy areas such as intellectual property rights, child sexual abuse material (CSAM), and the critical thresholds that determine which obligations the respective GPAI models will face.
The (vice) chairs’ expertise and vision will be crucial in guiding GPAI rules for the future and ensuring that the European path to trustworthy AI endures.
Kai Zenner is Head of Office and Digital Policy Adviser for MEP Axel Voss (Germany, EPP) and was involved in the AI Act negotiations at the technical level. Cornelia Kutterer is Managing Director of Considerati, Adviser to SaferAI, and a researcher at the Multidisciplinary Institute in Artificial Intelligence (MIAI) at the University of Grenoble Alpes (UGA).