Why the EU’s AI Act Must Be a Framework for Learning

28.8.2025

The EU’s AI Act represents a significant step toward regulating AI technologies across diverse sectors, with the dual aims of ensuring safety and safeguarding fundamental rights. Yet its success depends not only on compliance but on the capacity of regulators, standardisation bodies, and stakeholders to learn systematically from real-world implementation. Given the speed of AI innovation, regulatory learning must be rapid, iterative, and transparent.

The AI Act builds on existing EU mechanisms for regulating the health and safety of products, but extends them to protect fundamental rights and to address AI as a horizontal technology cutting across multiple application sectors. In their submission to the EU’s Apply AI Strategy, FORSEE consortium members Dave Lewis, Marta Lasek-Markey and Delaram Golpayegani, in collaboration with their colleagues Harshvardhan J. Pandit and Julio Hernandez from the ADAPT Centre at Trinity College Dublin, argue that this creates multiple uncertainties in the enforcement of the AI Act, which, coupled with the fast-changing nature of AI technology, will require a strong emphasis on comprehensive and rapid regulatory learning under the Act.

Structural barriers

So what is hindering this learning process? The authors identify several structural barriers to effective regulatory learning:

Fragmented data flows. National authorities, conformity assessment bodies, and providers all generate relevant compliance and risk data, but these datasets are siloed. There is no shared infrastructure for systematically comparing how risk mitigation measures perform in practice.

Limited transparency. Much of the information about compliance processes, especially when handled by private actors, remains confidential. While some confidentiality is necessary, excessive opacity undermines trust and makes cross-learning nearly impossible.

Inconsistent methodologies. Risk assessments and Fundamental Rights Impact Assessments (FRIAs) are conducted differently across sectors and countries. Without shared standards, evaluating whether one method is more effective than another is difficult.

Slow adaptation cycles. Traditional regulatory processes are not designed for fast iteration. When lessons are incorporated into updated guidance, the technology and its risks may already have shifted.

From compliance to learning

To address these complexities and uncertainties surrounding the AI Act’s implementation, the authors recommend interpreting the Act as a framework for learning, advocating collaborative knowledge development among economic actors, affected groups, and regulators. They envision this learning process as a structured framework involving diverse actors, AI system classifications, and regulatory learning activities, with a strong emphasis on open data, standardised information exchange, and semantic web technologies to ensure efficient, transparent, and legitimate implementation and ongoing adaptation of the Act. By institutionalising regulatory learning, the AI Act could become a global benchmark for responsible AI governance, combining legal compliance with social legitimacy.

Read the submission in detail: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14625-Apply-AI-Strategy-strengthening-the-AI-continent/F3563742_en


Points of contact

Project lead
Dr Elizabeth Farries
Director of the UCD Centre for Digital Policy
elizabeth.farries@ucd.ie 

Lead of Communication and Impact
Johannes Mikkonen
Demos Helsinki 
johannes.mikkonen@demoshelsinki.fi

Project Manager
Evangelos Papadamakis
UCD Centre for Digital Policy
vangelis.papadamakis@ucd.ie

Follow us

LinkedIn Bluesky

Newsletter

FORSEE is a Horizon Europe-funded Research and Innovation Action consisting of eight partners: the ADAPT Centre at the School of Computer Science and Statistics, Trinity College Dublin; the European Digital SME Alliance; Demos Helsinki; TASC; the Tilburg Institute for Law, Technology and Society; the UCD Centre for Digital Policy; the University of Toulouse; and WZB – Wissenschaftszentrum Berlin für Sozialforschung.