On 21 April 2021, the EU released its long-awaited overarching plan on Artificial Intelligence. The proposed regulation, which aims to be a horizontal framework, combines the first-ever legal framework on AI with a new Coordinated Plan with Member States. Together, these are intended to guarantee the safety and fundamental rights of people and businesses while strengthening AI uptake, investment and innovation across the EU.
The proposal aims to foster a European approach to trustworthy AI. The new rules will apply directly and in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach, organised into four levels:
- Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g., toys using voice assistance to encourage dangerous behaviour in minors) and systems that allow “social scoring” by governments.
- High-risk: AI systems identified as high-risk include AI technology used in a number of sectors. Among them is AI used for essential private and public services; here the Commission specifically identifies credit scoring that denies citizens the opportunity to obtain a loan as high-risk AI.
- Limited risk, i.e., AI systems with specific transparency obligations: When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
- Minimal risk: The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens' rights or safety.
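Purely as an illustration, the four risk tiers above and the proposal's treatment of each can be sketched as a simple mapping. The tier names follow the list above; the treatment strings are paraphrases of this summary, not legal text:

```python
# Illustrative sketch only: the proposal's four risk tiers mapped to their
# regulatory treatment, paraphrased from the summary above. These are
# simplifications for orientation, not legal definitions.
RISK_TIERS = {
    "unacceptable": "banned (e.g. behaviour-manipulating systems, government social scoring)",
    "high": "permitted only after strict pre-market obligations are met",
    "limited": "permitted with transparency obligations (e.g. chatbots must disclose they are machines)",
    "minimal": "free use, no new obligations (e.g. AI-enabled video games, spam filters)",
}

def treatment(tier: str) -> str:
    """Return the paraphrased regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier]
```

The point of the tiered structure is that obligations scale with risk: most systems (the minimal-risk tier) see no new rules at all, while the small high-risk category carries the bulk of the compliance burden.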
High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems.
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes.
- Logging of activity to ensure traceability of results.
- Detailed documentation providing all necessary information on the system and its purpose, so that authorities can assess its compliance.
- Clear and adequate information to the user.
- Appropriate human oversight measures to minimise risk.
- High level of robustness, security, and accuracy.
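The obligations above can be read as a pre-market compliance checklist: a high-risk system may only be placed on the market once every item is satisfied. A minimal sketch of that idea follows; the item names are our own shorthand for the bullets above, not terms from the proposal:

```python
# Hypothetical pre-market checklist for a high-risk AI system, paraphrasing
# the obligations listed above. Item keys are our own shorthand labels.
HIGH_RISK_OBLIGATIONS = [
    "risk_assessment_and_mitigation",
    "high_quality_datasets",
    "activity_logging",
    "detailed_documentation",
    "clear_user_information",
    "human_oversight",
    "robustness_security_accuracy",
]

def unmet_obligations(completed: set) -> list:
    """Return the obligations not yet satisfied, in checklist order."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in completed]

def ready_for_market(completed: set) -> bool:
    """Under the proposal, a high-risk system may be marketed only
    when all listed obligations are met."""
    return not unmet_obligations(completed)
```

For example, a provider that has only implemented logging would still have six open items, and `ready_for_market` would return `False` until the full checklist is cleared.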
The FEBIS Regulatory Committee will conduct a deep-dive analysis of the draft regulation and assess what impact it will have on members’ activities. As credit scoring has been identified as one of the key sectors falling under high-risk AI, a thorough assessment of the new rules and the compliance obligations enshrined in the proposal is crucial.