Strong debate and several reports in the European Parliament on the draft AI Act
Two new reports from the European Parliament kickstart the Parliament’s public debate on the AI Act. The reports come from Axel Voss (JURI) and Eva Maydell (ITRE), MEPs belonging to the European Parliament’s centre-right political group, the European People’s Party (EPP), in the legal affairs (JURI) and industry (ITRE) committees respectively.
The legislative process on the AI Act in the European Parliament is complex: after months of legislative and political battles, four committees are in charge, with two lead committees (IMCO and LIBE) and two opinion committees (JURI and ITRE), the latter nevertheless having full responsibility for some parts of the AI Act.
IMCO & LIBE views: The centre-left negotiated a deal under which MEPs Brando Benifei (S&D, Italy) in the internal market (IMCO) committee and Dragoş Tudorache (Renew, Romania) in the civil liberties (LIBE) committee are jointly in charge of amending the AI Act. Benifei shared some key points he is pushing for:
- Definition of AI: “We have pressures to try to reduce the scope of the definition. But I don’t think we will move very much from where it is now.”
- List of high-risk AI: Benifei thinks the list of AI uses relating to democracy and democratic processes, finance and telecommunications might differ quite a bit from what EU countries are proposing.
- Self-assessments: Benifei said he was not happy that most AI applications would comply through self-assessments. He said he will try to add third-party checks for AI uses ranging from “health, life, family issues to citizenship status.”
- Responsibility: Benifei wants to introduce more checks to ensure products comply with EU rules before they are rolled out in Europe. “We know that if we go too much in this direction, we create a lot of burden for the developers, so we need to be very careful,” he said. “We want to avoid that big businesses can be freed from responsibility and leave it all to the users of medium and small sized enterprises,” he added.
JURI opinion: MEP Axel Voss (EPP, Germany) and his team unveiled their draft report on the AI Act for the legal affairs committee. The report is 149 pages long with a whopping 309 amendments. It was debated in JURI on March 15. JURI has exclusive competences on articles 13 (transparency and provision of information to users), 14 (human oversight), 52 (transparency obligations for certain AI systems) and 69 (codes of conduct). His text tweaks these only slightly.
- Defining ‘trustworthy AI:’ Voss calls for a clearer definition of “trustworthy AI” that requires providers of AI systems to “acknowledge the EU Charter of Fundamental Rights and ensure that the AI system is lawful, ethical and robust,” and asks European standardization organizations to take this into account.
- High-risk AI: Voss thinks the AI Act’s definition of “high risk” AI systems is far too broad and vague. Instead of designating entire sectors, he proposes labelling as high risk only those AI systems that “fulfil clear and transparent criteria,” an approach the Commission proposed in a white paper in 2020.
- Enforcement: Voss suggests beefing up the enforcement of the AI Act by improving on the GDPR. He suggests that authorities across Europe should be able to bring forward cases on their own if the national authority that is supposed to be in charge fails to act.
ITRE opinion: MEP Eva Maydell (EPP, Bulgaria) presents the first concrete amendments to the AI Act in her draft report. ITRE has an exclusive say on articles 15 (accuracy, robustness and cybersecurity) and 55 (measures for small-scale providers and users).
- AI definition: Both the Maydell and Voss reports call for aligning the EU’s definition of AI with the OECD’s definition. “We want to make sure that we’re using the same definition of AI as our international partners,” Maydell said.
- Exclude general purpose AI: Both Voss and Maydell want to exclude “general purpose AI,” or AI that can be used for multiple purposes such as speech recognition, from the regulation. They say that this is because the AI Act focuses on particular use cases, and an AI system without an “intended purpose” cannot fulfil all requirements. “This clarification is essential to allowing European businesses to compete and innovate — rather than stifling off an industry with regulation that has not even fully matured yet,” Maydell’s report argues.
- European benchmarks: Maydell suggests creating a new European Benchmarking Institute, which could operate under the new AI board, that is tasked with creating European benchmarks and metrics for the accuracy of AI.
- Start-ups in the room where it happens: Maydell’s big focus is involving tech start-ups and SMEs in the European AI debate. She suggests creating an EU AI Regulatory Sandboxing Programme, including start-ups in the AI Act’s standardization process and calls on the European Commission to lower compliance costs for smaller companies.
Question of AI definition
What is AI? The AI Act contains numerous technical mistakes that, as they stand, risk excluding most machine learning techniques from the scope of the bill, argues Kris Shrishak, a computer scientist working at the Irish Council for Civil Liberties. The bill’s requirements for AI systems apply to supervised learning techniques, but not to unsupervised or reinforcement learning. Supervised learning trains a model on a labelled dataset, with guidance; in unsupervised learning, models learn from unlabelled data without any guidance; reinforcement learning rewards the AI model when it behaves in a desired way. The AI Act refers to “validation and testing data sets,” which unsupervised and reinforcement learning techniques do not rely on. The text also calls for AI systems to be “accurate.” Shrishak said accuracy is not the best metric, as unsupervised and reinforcement learning use other metrics; reinforcement learning, for example, uses “reliability.” He suggests referring to “performance” generally: “Using the wrong performance metric can be dangerous and an additional risk to health, safety and fundamental rights.”
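The three paradigms Shrishak distinguishes can be sketched in a few lines of Python (the data and “models” below are invented purely for illustration and are not part of the Act or his analysis): only the supervised case has the labelled validation/test data and accuracy metric that the AI Act’s text presumes.

```python
# Toy sketch (hypothetical data throughout) of why "validation and testing
# data sets" and "accuracy" fit supervised learning but not the other two.

# --- Supervised learning: labelled data, train/test split, accuracy ---
labelled = [(0.1, "low"), (0.2, "low"), (0.8, "high"), (0.9, "high")]
train, test = labelled[:3], labelled[3:]

lows = [x for x, y in train if y == "low"]
highs = [x for x, y in train if y == "high"]
# "Training": place a decision threshold midway between the class means.
threshold = (sum(lows) / len(lows) + sum(highs) / len(highs)) / 2

def classify(x):
    return "high" if x >= threshold else "low"

correct = sum(1 for x, label in test if classify(x) == label)
accuracy = correct / len(test)  # well defined: labels exist to compare against

# --- Unsupervised learning: unlabelled data, so no accuracy to compute ---
unlabelled = [0.1, 0.2, 0.8, 0.9]
clusters = {x: int(x >= 0.5) for x in unlabelled}  # naive two-way split
# Quality here is judged by e.g. cluster cohesion, not agreement with labels.

# --- Reinforcement learning: behaviour is scored by cumulative reward ---
def reward(state, action):
    # A reward signal, not a label: 1.0 when the action suits the state.
    return 1.0 if action == ("right" if state > 0 else "left") else 0.0

episode = [(-1, "left"), (2, "right"), (3, "left")]
total_reward = sum(reward(s, a) for s, a in episode)
```

Requirements written around the first paradigm’s vocabulary (labelled test sets, accuracy) simply have nothing to attach to in the other two, which is the drafting gap Shrishak points out.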
Next legislative steps: The deadline for the IMCO-LIBE report on the AI Act has been pushed back to April 11, and the committees will discuss it on May 11. The committees will vote on the report a month later than planned, on November 26-27.
FEBIS Regulatory Committee is working on a statement outlining the key issues of the AI definition (and its implications for machine learning and reinforcement learning) and of the scope of high-risk AI.