EU’s AI Act negotiations hit the brakes over foundation models

The three biggest EU countries are pushing for foundation models to be covered by codes of conduct, initially without a sanctions regime, rather than by prescriptive obligations in the AI rulebook, according to a non-paper seen by Euractiv.

The AI Act is a flagship piece of EU legislation intended to regulate Artificial Intelligence based on its capacity to cause harm. The file is currently in the last phase of the legislative process, where the EU Commission, Council, and Parliament gather in ‘trilogues’ to hash out the law’s final provisions.

The negotiations on the world’s first comprehensive AI law have been disrupted by the rise of ChatGPT, a versatile type of AI system known as General Purpose AI, which is built on OpenAI’s powerful foundation model GPT-4.

On 10 November, Euractiv reported that the entire legislation was at risk following mounting opposition from France, which gained support from Germany and Italy in its push against any regulation on foundation models.

The EU heavyweights – France, Germany, and Italy – asked the Spanish presidency of the EU Council, which negotiates on behalf of member states, to retreat from the tiered approach on which there seemed to be a consensus at the last political trilogue in mid-October.

In response, European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable. In recent weeks, the Spanish presidency attempted to mediate a solution between the EU parliamentarians and the most reluctant European governments.

However, the three countries circulated a non-paper on Sunday (19 November) that leaves little room for compromise, arguing that horizontal rules on foundation models would run counter to the AI Act’s technology-neutral and risk-based approach, which is meant to preserve innovation and safety at the same time.

“The inherent risks lie in the application of AI systems rather than in the technology itself. European standards can support this approach following the new legislative framework,” the document said, adding that the signatories are “opposed to a two-tier approach for foundation models”.

“When it comes to foundation models, we oppose instoring un-tested norms and suggest to build in the meantime on mandatory self-regulation through codes of conduct,” the non-paper further said, noting that these would follow the principles defined under the G7 Hiroshima process.

Instead, the three countries argue that regulating General Purpose AI systems that are made available for specific applications, rather than the underlying foundation models, would be more in line with the risk-based approach.

To implement this approach, Paris, Berlin, and Rome propose that foundation model developers would have to define model cards: technical documentation that summarises information about a trained model for a broad audience.

“Defining model cards and making them available for each foundation model constitutes the mandatory element of this self-regulation,” the non-paper noted, stressing that these cards will have to include the relevant information on the model’s capabilities and limits and be based on best practices within the developer community.

The examples provided include the number of parameters, intended uses, potential limitations, results of studies on biases, and red-teaming for security assessment.
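
For illustration only (the non-paper does not prescribe a particular format), a model card covering these fields might look roughly like the following Python sketch; every name in it is hypothetical rather than taken from the document.

    from dataclasses import dataclass

    @dataclass
    class ModelCard:
        # Hypothetical fields mirroring the examples listed above; names are illustrative.
        model_name: str
        parameter_count: int          # number of parameters
        intended_uses: list[str]      # intended uses
        known_limitations: list[str]  # potential limitations
        bias_study_results: str       # results of studies on biases
        red_teaming_summary: str      # red-teaming for security assessment

    # How a developer might fill in such a card for a fictional model.
    card = ModelCard(
        model_name="example-foundation-model",
        parameter_count=7_000_000_000,
        intended_uses=["text generation", "summarisation"],
        known_limitations=["may produce inaccurate or biased output"],
        bias_study_results="Summary of internal bias evaluations",
        red_teaming_summary="Findings from pre-release red-teaming exercises",
    )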

The non-paper proposed that an AI governance body could help develop guidelines and check the application of model cards, providing an easy way to report any infringement of the code of conduct.

“Any suspected violation in the interest of transparency should be made public by the authority,” the document continued.

The three countries also do not want sanctions to apply at the beginning. According to them, a sanction regime would only be set up following systematic infringements of the codes of conduct and a ‘proper’ analysis and impact assessment of the identified failures.

For the three countries, European standards could also be an important tool to create the adaptive capacity needed to take future developments into account.

The approach to foundation models will be at the centre of a discussion of the Telecom Working Party, a technical Council body, on Tuesday (21 November). On the same day, MEPs will hold an internal meeting on the matter, followed by a dinner with the Council presidency and the Commission.

“This is a declaration of war,” a parliament official told Euractiv on condition of anonymity.

Source: Euractiv
