EU member states see three main issues in the interplay between the AI Act and the General Data Protection Regulation (GDPR): potentially conflicting legal requirements, national governance approaches to ensure regulatory consistency, and the need for clear legal advice to minimize the compliance burden. European governments have pointed out that the two laws' diverging regulatory approaches might lead to conflicting outcomes, which should be avoided through systematic cooperation between the responsible authorities.
Representatives of EU countries discussed this interplay on March 14, with a view to identifying compliance challenges from the perspectives of both regulated entities and regulators.
Based on this exchange of views, Poland, the current chair of the EU diplomatic discussions, compiled a summary document (obtained by MLex), which outlined the key takeaways on the main issues identified. The summary will be discussed at a meeting of national representatives tomorrow.
— Differing regulatory approaches —
Most European governments stressed the differing regulatory approaches of the two laws. The GDPR protects personal data from a fundamental rights perspective, while the AI Act is product safety legislation with targeted requirements based on the level of risk.
For some EU countries, this divergence in underlying logic might lead to contradictory regulatory outcomes, whereby an AI system is deemed compliant with the AI Act but not with the GDPR, or vice versa.
“On the other hand, some member states referred to a risk that an entity deploying an AI system would infringe both the GDPR and the AI Act, and thus be subject to sanctions under both regulatory frameworks for the same action,” the summary continues.
To avoid these scenarios, the consensus seems to be that the two laws need to be interpreted and enforced coherently, and the respective authorities should cooperate closely and systematically.
— National governance —
Several European governments stressed the importance of establishing a proper national governance structure that promotes cooperation between the responsible authorities, particularly concerning AI systems processing personal data.
These cooperation mechanisms could take the form of joint task forces, technical working groups, coordination bodies, or networks. For EU countries, the goal of this cooperation should be to avoid contradictory decisions. But they also acknowledged that it demands significant technical expertise and funding.
The development of best practices, guidelines, codes of practice and a single auditing framework were also proposed. In particular, the development of joint guidelines is seen as a way to harmonize interpretations and supervision practices.
At the EU level, the European Commission announced the upcoming establishment of an administrative cooperation group for market surveillance authorities under the AI Act. Member states have until Aug. 2 to complete their appointments.
Some EU countries suggested that such coherence would be best achieved if data protection authorities were also tasked with enforcing the AI Act, although only a couple have taken that path so far.
An integrated supervision model was suggested for the establishment of regulatory sandboxes, including the involvement of privacy regulators in the early stages to provide regulatory guidance.
— Minimizing the administrative burden —
The document notes that the AI Act’s complexity poses several challenges in terms of legal interpretation, especially when combined with other laws, such as the GDPR.
To address this, EU countries called for clear guidelines on how the two landmark legal acts should work together, explaining key concepts and the scope and complementarity of overlapping obligations, such as risk assessments or redress mechanisms.
The guidelines should focus on the interplay between AI compliance requirements and GDPR principles, be cross-sectoral, and cover several legal areas.
The commission and the European Data Protection Board are developing guidelines on the GDPR-AI Act interplay, seeking to ensure consistency and legal certainty.
“The guidelines could also help minimize the administrative burden, for example by elaborating on how to reuse or share documentation regarding risk assessment and impact assessments relating to both the GDPR and the AI Act,” the document continues. “The call for coordination is also important considering the likelihood of cross-border cases under the GDPR that might also concern AI systems.”
Standardized templates, clear legal advice and streamlined reporting are also seen as ways to minimize the compliance burden imposed by the two landmark laws.
— Fundamental rights impact assessment —
Some member states pointed out that the AI Act’s requirement to carry out a fundamental rights impact assessment might require balancing several rights that potentially conflict with one another.
“A suggestion was made to evaluate to which extent the methodology of the data protection impact assessment is suitable to evaluate all other types of impacts, or if providers and deployers of AI will resort to regularly favor data protection interests because they are better known and more strongly emphasized,” the document adds.
To avoid duplicating efforts, it was suggested to develop standardized templates and model cases. The commission has informed EU countries that it's working on a template to comply with this requirement in the form of a questionnaire.
Source: MLex