Council of the EU: Trilogue agreement adopted on AI Act

Today the Council of the European Union adopted the trilogue agreement on the proposed regulation laying down common rules on artificial intelligence (AI Act). The regulation follows a 'risk-based' approach: the higher the risk of harm to society, the stricter the rules. It is the first law of its kind in the world and could set a global standard for AI regulation.


The regulation aims to promote the development and adoption of safe and reliable AI systems in the EU single market by public and private actors. At the same time, it aims to ensure respect for the fundamental rights of EU citizens and to stimulate investment and innovation on artificial intelligence in Europe. The AI Act applies only to areas covered by EU law and provides exemptions, e.g. for systems used exclusively for military and defence purposes and for research.


Classification of AI systems as high-risk and prohibited AI practices

  • The regulation classifies AI systems according to risk. AI systems presenting only a limited risk would be subject to very light transparency requirements, while high-risk AI systems would be authorised but subject to a set of requirements and obligations in order to access the EU market. AI systems used, for example, for cognitive behavioural manipulation or social scoring would be banned in the EU because their risk is deemed unacceptable. The regulation also bans the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorise people according to specific characteristics such as race, religion or sexual orientation.
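The tiering described above can be sketched as a small decision rule. The tier names and mappings below are an illustrative simplification for readers, not the Act's legal taxonomy:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"            # e.g. social scoring, manipulative systems
    HIGH = "authorised with obligations"   # requirements before EU market access
    LIMITED = "light transparency duties"
    MINIMAL = "no additional obligations"

def market_access(tier: RiskTier) -> bool:
    """A system may access the EU market unless its risk is deemed unacceptable."""
    return tier is not RiskTier.UNACCEPTABLE

# Illustrative (non-legal) checks drawn from the practices named above:
assert not market_access(RiskTier.UNACCEPTABLE)   # e.g. social scoring: banned
assert market_access(RiskTier.HIGH)               # allowed, with obligations
```

The point of the sketch is only that market access is gated on the assigned tier; the actual classification criteria are set out in the regulation itself.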


General-purpose AI models

  • The regulation also addresses general-purpose AI (GPAI) models.
  • GPAI models that do not pose systemic risks will be subject to some limited requirements, e.g. regarding transparency, but those that do pose systemic risks will have to comply with stricter rules.

 A new governance architecture

  • To ensure proper implementation, several governance bodies are set up:
  • an AI Office within the Commission to enforce the common rules across the EU
  • a scientific panel of independent experts to support enforcement activities
  • an AI Board with member state representatives to advise and assist the Commission and the member states in the consistent and effective application of the AI Act
  • an advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission.
 Sanctions

  • Fines for infringements of the AI Act are set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher. SMEs and start-ups are subject to proportionate administrative fines.
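The 'whichever is higher' rule amounts to a one-line maximum. The percentage and fixed amount below are placeholder figures for illustration only, since the article does not quote the Act's actual thresholds:

```python
def administrative_fine(turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Fine is the higher of a share of prior-year global turnover and a fixed amount."""
    return max(pct * turnover_eur, fixed_eur)

# Hypothetical figures, for illustration only (not the Act's actual thresholds):
large = administrative_fine(turnover_eur=2_000_000_000, pct=0.03, fixed_eur=15_000_000)
small = administrative_fine(turnover_eur=100_000_000, pct=0.03, fixed_eur=15_000_000)
print(large, small)  # 60000000.0 15000000
```

For a large company the turnover-based amount dominates; for a small one the fixed floor applies, which is why the Act provides proportionate caps for SMEs and start-ups.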

 Transparency and protection of fundamental rights

  • Before a high-risk AI system is deployed by certain entities providing public services, its impact on fundamental rights must be assessed. The regulation also provides for greater transparency in the development and use of high-risk AI systems. High-risk AI systems, as well as certain users of high-risk AI systems that are public entities, will have to be registered in the EU database for high-risk AI systems. In addition, users of an emotion recognition system will have to inform natural persons when they are exposed to such a system.

 Measures to support innovation

  • The regulation provides for an innovation-friendly legal framework and aims to promote evidence-based regulatory learning. The new law stipulates that AI regulatory sandboxes, which allow for a controlled environment for the development, testing and validation of innovative AI systems, must also allow for the testing of innovative AI systems under real-world conditions.

To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems (such as simple scoring software), the compromise agreement aligns the definition with the approach proposed by the OECD. Art. 3(1) EU AI Act defines an AI system as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."


Furthermore, the European Central Bank has stated that creditworthiness assessment models that rely on the standalone use of traditional, relatively simple statistical techniques, such as linear or logistic regression or decision trees under human supervision, do not fall under the definition of an AI system.
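To make the distinction concrete, here is a minimal sketch of the kind of 'traditional, relatively simple' technique the ECB refers to: a plain logistic-regression credit score with hand-set coefficients. The features and weights are hypothetical, chosen purely for illustration:

```python
import math

# Hypothetical coefficients a human analyst might have fitted and reviewed.
INTERCEPT = -2.0
WEIGHTS = {"income_ratio": 1.5, "years_employed": 0.2, "prior_default": -3.0}

def repayment_probability(features: dict) -> float:
    """Plain logistic regression: the sigmoid of a fixed linear score."""
    z = INTERCEPT + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = repayment_probability({"income_ratio": 1.0, "years_employed": 5, "prior_default": 0})
print(round(p, 3))  # 0.622
```

Because the scoring rule here is a fixed, human-specified formula rather than a system that itself infers how to generate outputs, it sits on the 'simpler software' side of the Art. 3(1) definition quoted above.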


As for the next steps, the regulation will now be published in the Official Journal of the EU. It will enter into force on the twentieth day after its publication in the OJEU and will apply 24 months after its entry into force.



Sources: The text is available here.
