This week, ADAPT and the EC Joint Research Centre, Italy, will host a workshop at CPDP.ai in Brussels, Belgium.
In this interactive workshop, early-stage researchers (ESRs) take the stage to present their cutting-edge work on AI and data governance—ranging from regulatory divergence and risk-based oversight to algorithmic transparency and ethical concerns. Further, renowned experts will explore the challenges and opportunities for ESRs to engage in the global digital regulatory space. Bringing together voices from research, policy, and practice, the session invites participants to actively engage in discussions that bridge theory and implementation. With experts from academia, industry, civil society, and EU institutions in the room, this workshop offers a unique space for dialogue on the future of responsible and evidence-based digital governance in the EU and beyond.

Details:
Workshop: Friday 23rd May, from 10:30 – 13:05
Location: Music Room
Agenda
First slot – 10:30-11:45
10:30-10:40 – Welcome & Opening Remarks
10:40-11:10 – Pitches by Early-Career Researchers
- César Augusto Fontanillo López (KU Leuven), Risk-based Approach to Data Regulation
- Ludovica Robustelli (CNRS, Nantes University (DCS Lab)), The AI Act’s “Brussels Effect” Undermined by the Trump Effect?
- Tytti Rintamäki (ADAPT Centre, Dublin City University), An Impact Assessment Tool for AI: Combining the GDPR and AI Act requirements using Semantic Web Technologies
- Shruti Kakade (Hertie School of Governance), Operationalizing AI Governance: Systematic Methodologies for Evidence-Based, Participatory, and Cross-Sector Approaches
- Wout Slabbinck (Ghent University – imec), Interoperable Interpretation and Evaluation of Usage Control Policies
- Naira López Cañellas (Technological University of Dublin), Does the EU legislative framework sufficiently address the effects of digital technologies in the manufacturing workplace?
11:10-11:40 – Table Discussions (hosted by Early-Career Researchers and contributing experts):
11:10-11:15 – Introduction
11:15-11:30 – Discussions related to the researchers’ presentations
11:30-11:40 – Discussions on career path
11:40-11:45 – Report-back & Reflections
Second slot – 11:55-13:05
11:55-12:00 – Introduction
12:00-13:00 – Interactive Panel on Career Development in digital policy, moderated by Olivia Waters (ADAPT Centre at Trinity College Dublin)
- Panel Members
- Prof. David Hickton (University of Pittsburgh)
- Alex Moltzau (EU AI Office)
- Dr. Lucie-Aimée Kaffee (Hugging Face)
- Karolina Iwańska (European Center for Not-for-Profit Law (ECNL))
- Sophie Tomlinson (Datasphere Initiative)
- Prof. Dave Lewis (ADAPT Centre at Trinity College Dublin)
13:00-13:05 – Concluding remarks
Hosts
Dave Lewis (ADAPT Centre, Trinity College Dublin)

Professor Dave Lewis is a Professor at the School of Computer Science and Statistics at Trinity College Dublin and the head of its Artificial Intelligence Discipline. He is Principal Investigator at the ADAPT Research Ireland Centre for Digital Content Technology. His research focuses on the use of open semantic models for Trustworthy AI and Data Governance, including open models for Data Protection and AI Ethics. He has led the development of international standards in AI-based linguistic processing of digital content at the W3C and OASIS. He is currently active in international standardisation of Trustworthy AI at ISO/IEC JTC1/SC42 and CEN/CENELEC JTC21.
Eimear Farrell (EC Joint Research Centre, Italy)

Eimear Farrell is a Scientific Project Officer at the EC Joint Research Centre Digital Economy Unit, where she conducts research on AI and data ecosystems, with a particular focus on digital transformation of the public sector and responsible data governance. Prior to this position, she led development of Ireland’s national AI strategy and has 20 years of experience working across government, international organisations, business and civil society, including a decade shaping digital law and policy at the UN, World Economic Forum and Amnesty Tech.
Olivia Waters (ADAPT Centre, Trinity College Dublin)

As Head of Impact and Growth Strategy at the ADAPT Centre, Olivia leads strategic communications and marketing, working closely with university partners and influential connections in industry, academia, research, civil society, and government. Her work has led to collaborations with world-renowned organisations, successfully highlighting ADAPT’s research impact, building its reputation, and leading the public conversation on AI and digital content technology.
Presentations by Early-Career Researchers
César Augusto Fontanillo López (KU Leuven, Belgium)

César is a lawyer and a PhD candidate at KU Leuven, currently serving as a La Caixa Postgraduate Fellow. He previously held the position of Marie Skłodowska-Curie Doctoral Fellow at KU Leuven. César’s PhD thesis focuses on the risk-based approach, attempting to measure the appropriateness of this regulatory model. César holds two bachelor’s degrees in social sciences and law (ULPGC), a master’s degree in Lawyering (Camilo José Cela) and a master’s degree in European Law and Economic Analysis (ELEA) (College of Europe). His first master’s thesis was awarded first prize in the Spanish National Research Council’s national thesis competition 2021, and his second master’s thesis was shortlisted for this year’s Young Scholars Award. César’s academic journey has included research collaborations and stays at institutions such as the Alexander von Humboldt Institute for Internet and Society, the European University Institute, Cambridge University, and Harvard University.
Risk-based approach to data regulation
My PhD focuses on the risk-based approach to data regulation. This regulatory approach originated in areas where impacts are measurable (like the environment or health sectors, where one can measure the effects of toxic substances on living organisms), but has recently expanded to areas that are harder to measure, such as fundamental rights and freedoms, particularly in new digital laws. Essentially, the risk-based approach involves using risk assessment tools to evaluate how modern technologies affect individual rights and freedoms, such as those protected by the AIA, DSA, and GDPR. Applying the risk-based approach to areas defined by qualitative constructs is not straightforward, so I want to test this approach by asking two questions: 1) Is the risk-based approach appropriate on its own? and 2) Is it appropriate with respect to other approaches, like the harm-based, process-based, or rights-based models?
To help with this, I compare risk assessment frameworks to rulers. Just like a ruler measures the size of an object, these frameworks are used to measure the impact of technologies on rights and freedoms. When analyzing whether the risk-based approach is appropriate, I ask two sub-questions: one about reliability and the other about justifiability. For reliability, a ruler is considered reliable if it always gives the same measurement, no matter who uses it. In the same way, I test if risk assessment frameworks are consistent in measuring risks to rights. For justifiability, I explore whether it is ethically right to use these tools to measure something as important as fundamental rights, looking at ethical theories like consequentialism and deontology. Finally, regarding my second main question—whether the risk-based approach is appropriate compared to other methods—I evaluate these approaches against key benchmarks to highlight the strengths and weaknesses of the risk-based approach. By answering these questions, I aim to develop a comprehensive stress test to evaluate the appropriateness of the risk-based approach.
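To make the reliability question concrete, the minimal sketch below (my own illustrative example, not part of César’s methodology; the assessor names, scenarios, and 1-to-5 risk scale are hypothetical) shows one way to quantify whether a framework behaves like a reliable ruler: several assessors apply the same framework to the same scenarios, and we measure how often their ratings agree.

```python
# Illustrative only: a toy consistency check for a risk assessment framework.
# Assessor names, scenarios, and the 1-5 risk scale are hypothetical.
from itertools import combinations

# Risk ratings (1 = negligible ... 5 = severe) given by three assessors
# applying the same framework to the same four scenarios.
ratings = {
    "assessor_a": [3, 4, 2, 5],
    "assessor_b": [3, 4, 3, 5],
    "assessor_c": [2, 4, 2, 5],
}

def pairwise_agreement(scores: dict[str, list[int]]) -> float:
    """Fraction of scenario ratings on which each pair of assessors agrees exactly."""
    pairs = list(combinations(scores.values(), 2))
    agreements = [sum(a == b for a, b in zip(x, y)) / len(x) for x, y in pairs]
    return sum(agreements) / len(agreements)

print(f"Mean pairwise agreement: {pairwise_agreement(ratings):.2f}")
# A framework behaving like a reliable "ruler" would push this value towards 1.0.
```

A framework whose ratings diverge widely between assessors would score low on such a measure, which is the kind of inconsistency the reliability sub-question probes.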
Ludovica Robustelli (CNRS, Nantes University (DCS Lab), France)

Ludovica Robustelli is a post-doctoral researcher working at the University of Nantes. She holds a PhD in EU law and defended a thesis on the EU right to informational self-determination. Her research focuses on personal data protection, the territorial limits of personal data protection, and AI regulation.
The AI Act’s “Brussels Effect” Undermined by the Trump Effect?
The Brussels Effect is a legal theory introduced by Anu Bradford, a scholar from Columbia Law School, which asserts that businesses comply with EU standards even when selling products in more permissive jurisdictions. More broadly, this phenomenon refers to the influence that the EU exercises on lawmaking activities elsewhere. For example, the Brussels Effect facilitated a convergence between the USA and the EU on personal data regulation when the GDPR came into force. Indeed, American legislation has a more innovation-focused approach to personal data protection and a more limited scope of privacy, which contrasts with the EU’s risk-based approach, aiming to strike the right balance between innovation and fundamental rights.
A similar approach has been adopted by the EU to regulate the AI sector since the adoption of the AI Act. This comprehensive legal framework aims to ensure safe and trustworthy AI that respects fundamental rights, without stifling innovation. However, there are concerns about this balance. According to the Draghi report published in September 2024, the regulation should be “lightened” to enhance the EU’s technological development and enable it to catch up with the USA. In fact, most major tech companies, such as OpenAI, Meta, and Alphabet, are based in the USA, where AI regulation is less stringent. Therefore, lobbying activities from the USA could undermine the diffusion of EU standards related to the AI Act, particularly given that the Trump administration has already criticized the EU for overregulating the AI sector. This paper explores the potential impact of the Brussels Effect in the context of the AI Act, considering the contrasting regulatory approaches of the USA and Europe. It also examines possible future scenarios concerning AI regulation in these two regions.
Tytti Rintamäki (ADAPT Centre, Dublin City University, Ireland)

With experience as a data analyst and background in emerging technology regulation from the European University Institute, Tytti is exploring the intersection of data protection and AI regulation in the European Union. She is specifically examining how “high-risk” is defined and operationalised under both the General Data Protection Regulation and the EU Artificial Intelligence Act, and where there is overlap and divergence. Her research involves creating an automated impact assessment tool using semantic web technologies to harmonise the requirements of the Data Protection Impact Assessment (GDPR) and Fundamental Rights Impact Assessment (AI Act).
An Impact Assessment Tool for AI: Combining the GDPR and AI Act requirements using Semantic Web Technologies
The regulatory landscape of AI governance presents unprecedented challenges for organisations navigating the dual requirements of the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act. My research addresses the critical intersection of these frameworks, specifically examining how their respective impact assessments, Data Protection Impact Assessments (DPIA) and Fundamental Rights Impact Assessments (FRIA), overlap and diverge. Central to my investigation is Article 27(4) of the AI Act, which explicitly allows for reusing DPIA elements when conducting a FRIA. This overlap presents opportunities for streamlining compliance, yet my research reveals concerning inconsistencies in DPIA requirements across EU and EEA member states, leading to regulatory uncertainty and potential barriers to innovation. The labour-intensive, document-based approaches currently employed lack technological support, placing disproportionate burdens on organisations.

In response, I propose using semantic web technologies to transform regulatory requirements into machine-readable formats through the creation of an ontology and taxonomy that build upon existing resources, such as the Data Privacy Vocabulary (DPV) and the AI Risk Ontology (AIRO), to capture the information requirements and high-risk use cases of AI and personal data processing. These high-risk use cases will then be mapped to the fundamental rights they may impact, drawing from legal sources, technical standards, risk repositories, and case law to build a robust knowledge base capable of inferring risks and their impacts on rights. This knowledge base will inform the creation of a user-friendly impact assessment web tool that guides organisations through their regulatory obligations, highlights risks and proposes mitigation strategies, in order to reduce inconsistencies and lower compliance burdens. My approach will enable organisations to reduce redundancy in compliance processes while strengthening protections for individual rights and promoting regulatory interoperability through a universal language for AI governance that bridges technical and legal domains, contributing to a more harmonised EU approach to responsible AI development.
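As a rough illustration of what machine-readable regulatory requirements could look like, the sketch below uses Python’s rdflib to describe a hypothetical high-risk AI use case and link it to a potentially impacted right. It is my own simplified example, not the project’s actual ontology: the specific DPV and AIRO terms and the example relation mayImpact are assumptions that should be checked against the published vocabularies.

```python
# Illustrative sketch only: modelling a high-risk AI use case as RDF triples.
# The exact DPV/AIRO terms used here are assumptions; consult the published
# vocabularies (https://w3id.org/dpv, https://w3id.org/airo) for real terms.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

DPV = Namespace("https://w3id.org/dpv#")
AIRO = Namespace("https://w3id.org/airo#")
EX = Namespace("https://example.org/assessment#")  # hypothetical namespace

g = Graph()
g.bind("dpv", DPV)
g.bind("airo", AIRO)
g.bind("ex", EX)

use_case = EX["cv-screening"]
g.add((use_case, RDF.type, AIRO.AISystem))                    # the system under assessment
g.add((use_case, RDFS.label, Literal("CV screening for recruitment")))
g.add((use_case, DPV.hasPurpose, EX["candidate-ranking"]))    # processing purpose
g.add((use_case, DPV.hasPersonalData, EX["employment-history"]))
# Link the use case to a fundamental right it may impact (simplified, made-up relation).
g.add((use_case, EX.mayImpact, EX["non-discrimination"]))

print(g.serialize(format="turtle"))
```

Expressing requirements and use cases as triples of this kind is what would let the proposed assessment tool query a knowledge base and infer which rights a given deployment may affect.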
Shruti Kakade (Hertie School of Governance, Germany)

Shruti Kakade is a Master of Data Science student at the Hertie School. With a background in computer engineering, she is interested in AI ethics, digital governance, responsible AI, and related regulations.
Operationalizing AI Governance: Systematic Methodologies for Evidence-Based, Participatory, and Cross-Sector Approaches
The rapid advancement of Artificial Intelligence (AI) necessitates robust governance frameworks that can address ethical challenges, ensure transparency, and foster trust across global systems. This paper presents a comprehensive methodology for operationalizing three complementary approaches to AI governance: evidence-based policymaking, participatory governance, and cross-sector collaboration. Building on Newman and Mintrom’s (2023) framework for evidence-based policy analysis, the paper introduces structured methodologies such as risk vector taxonomy, marginal risk assessment protocols, and policy intervention blueprints. These tools enable policymakers to systematically gather and validate evidence through defined criteria for source identification and quality evaluation. For instance, Victoria State Government’s syndromic surveillance program showcases how AI-driven predictive models can inform public health interventions while embedding accountability mechanisms to address algorithmic bias and transparency.
Participatory governance methodologies are designed to integrate diverse stakeholder perspectives throughout the AI lifecycle—from design to deployment. This paper proposes stakeholder mapping protocols that categorize participants based on impact, expertise, and representation, alongside deliberative engagement techniques aimed at equalizing power dynamics. Structured mechanisms such as weighted consensus models and iterative feedback loops transform stakeholder engagement into actionable governance outcomes. Examples like citizen juries on algorithmic decision-making illustrate the practical application of these methodologies in aligning technological advancements with societal values. Furthermore, cross-sector collaboration frameworks are introduced to facilitate knowledge transfer across domains through interdisciplinary translation protocols and sectoral expertise integration models. Regulatory sandboxes such as Singapore’s AI Governance Testing Framework operationalize these methods by balancing competing priorities among governments, industry, academia, and civil society. By providing globally applicable methodologies and evaluation metrics for evidence quality, participation diversity, and cross-sector integration, this paper offers a pathway for harmonizing AI governance systems that balance innovation with ethical accountability across diverse regulatory contexts.
Wout Slabbinck (Ghent University – imec, Belgium)

I am a PhD Researcher in the KNowledge On Web Scale (KNoWS) Research Team [1], part of IDLab [2]. My research focuses on the Semantic Web and Linked Data, with a particular emphasis on Interoperability and Usage Control Enforcement in relation to the Solid Protocol [3] and Dataspaces. Specifically, I explore how the Open Digital Rights Language (ODRL) [4] can be leveraged to model and enforce Usage Control Policies. For more details, visit my website: https://woutslabbinck.com/
[1] KNowledge On Web Scale (KNoWS): https://knows.idlab.ugent.be/
[2] Internet technology and Data science Lab, a collaboration between Ghent University and imec: https://idlab.ugent.be/
[3] The Solid Protocol: https://solidproject.org/TR/protocol
[4] The Open Digital Rights Language (ODRL): https://www.w3.org/TR/odrl-model/
Interoperable Interpretation and Evaluation of Usage Control Policies
On the Web, consent banners (cookies) are the prevailing response to legislation such as the GDPR for handling protected data. These banners are meant to inform users about how their personal data will be managed by services and third parties. Unfortunately, there is no room for negotiation over these privacy preferences; users either accept the terms to use the Web services or deny and likely end up with limited or no access. For negotiation to arise, parties must share a common language able to describe rights, duties and constraints over digital assets. Given the technical, societal and legal requirements for dealing with privacy preferences, this language must be sufficiently expressive in order to guarantee accuracy. The Open Digital Rights Language (ODRL) standard meets such requirements. However, the lack of a formalism regarding enforcement hinders its adoption. To this end, we propose a systematic approach to interpret and evaluate ODRL policies, facilitating the creation of interoperable policy engines for enforcing expressive policies. In this paper, we introduce i) the Compliance Report Model to denote the result of an ODRL evaluation in an interoperable manner, ii) test cases comprising policies, context and the aforementioned model to ensure the correctness of policy engines, and iii) the ODRL Evaluator, an implementation that systematically evaluates ODRL policies. We show the expressiveness of our model and the effectiveness of our implementation through an evaluation over the test suite. Addressing the lack of formalisation of ODRL paves the way for negotiation over privacy preferences and establishes the foundations for interoperable policy exchange and evaluation over the Web. Future work includes the further formalisation of the context of policy evaluation and the need for inter-policy strategies for conflict resolution.
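To give a flavour of what an ODRL policy engine has to decide, here is a deliberately simplified sketch in Python. It is not the ODRL Evaluator or the Compliance Report Model described above; the dictionary structure and the “activation” states are my own toy stand-ins, checking a single permission with one temporal constraint against an access request.

```python
# Simplified illustration only: this is NOT the paper's ODRL Evaluator or its
# Compliance Report Model, just a toy check of one permission against a request.
ODRL = "http://www.w3.org/ns/odrl/2/"  # ODRL 2.2 vocabulary namespace

policy = {
    "uid": "https://example.org/policy/1",            # hypothetical policy
    "permission": [{
        "target": "https://example.org/data/health-record",
        "action": ODRL + "read",
        "constraint": [{                               # only valid until a given date
            "leftOperand": ODRL + "dateTime",
            "operator": ODRL + "lteq",
            "rightOperand": "2025-12-31",
        }],
    }],
}

def evaluate(policy: dict, target: str, action: str, context: dict) -> dict:
    """Return a minimal, made-up 'report' for one access request."""
    for rule in policy.get("permission", []):
        if rule["target"] != target or rule["action"] != action:
            continue
        constraints_ok = all(
            context.get("dateTime", "") <= c["rightOperand"]
            for c in rule.get("constraint", [])
            if c["leftOperand"] == ODRL + "dateTime" and c["operator"] == ODRL + "lteq"
        )
        return {"policy": policy["uid"], "activation": "Active" if constraints_ok else "Inactive"}
    return {"policy": policy["uid"], "activation": "NotApplicable"}

report = evaluate(policy,
                  target="https://example.org/data/health-record",
                  action=ODRL + "read",
                  context={"dateTime": "2025-06-01"})
print(report)  # {'policy': 'https://example.org/policy/1', 'activation': 'Active'}
```

The interoperability problem the paper addresses is precisely that different engines currently make such decisions in ad hoc, incompatible ways, so their results cannot be reliably compared or exchanged.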
Naira López Cañellas (Technological University of Dublin, Ireland)

PhD Candidate at the Technological University of Dublin, researching the Labour-Related Implications of the Use of AI-powered Digital Technologies in the Workplace. She is part of a team of Early Stage Researchers working on a Horizon 2020-funded Marie Skłodowska-Curie project called Collaborative Intelligence for Safety Critical Systems (CISC). While pursuing her PhD, she also worked at the European DIGITAL SME Alliance (Brussels), supporting the Focus Group on AI and the Working Group on Intellectual Property Rights, as well as other policy work on upcoming European legislation in the digital and tech realm.
Does the EU legislative framework sufficiently address the effects of digital technologies in the manufacturing workplace?
The EU’s recent legislative efforts to regulate technology have arguably failed to integrate the protection of workers’ rights in highly digitally automated workplaces (Altenried, 2020; Oosthuizen, 2022). The present research addresses this issue through the following hypothesis: the EU’s legislative framework insufficiently addresses how digital technologies influence workers’ agency and working conditions within current manufacturing workplaces. Twenty-five semi-structured interviews were conducted with two stakeholder groups: representatives of (a) employees and (b) managers of workplaces that deploy digital technologies. The interviews are analysed through the lens of Labor Process Theory (LPT) (Braverman, 1974).
Current policy initiatives to upskill workers, increase algorithmic transparency and limit the presence of AI in the workplace (Rafner et al., 2021; Buchbinder et al., 2022; Selwyn et al., 2023) highlight the timeliness of the author’s analysis. Thus, LPT helps in identifying the influence of current digital technologies, which subsequently points to the relevant areas of improvement in the policy realm. The interviewees, in alignment with Braverman’s concerns, highlighted the impact of AI-powered technologies on decision-making, task allocation, autonomy, workplace relations, cognitive overload, stress, managerial control, and bargaining power. They also brought up the limitations associated with self-regulation (seen as burdensome, insufficient, unrealistic) and the need to update the legal framework, health and safety regulations, and training requirements. Overall, the interviewees’ recommendations fall into four main categories, starting with the need for revisions to (i) soft/hard law and (ii) related instruments (e.g., standards, enforcement bodies, certification schemes, and design protocols). They also demanded changes to (iii) digital technologies (focusing on local infrastructure, intermediaries, greening policies, and non-profit-driven solutions) and (iv) how employees interact with them (addressing skills, agency, privacy, and job satisfaction). Ultimately, the research participants pointed out that, at present, both the policy and practice of deploying digital technologies fail to secure long-term trust between stakeholders inside and outside the workplace. My policy recommendations will therefore be tailored to help foster that trust.
Expert Collaborators
Prof. David J. Hickton

Founding Director of the University of Pittsburgh Institute for Cyber Law, Policy, and Security.
David J. Hickton is the founding director of the University of Pittsburgh Institute for Cyber Law, Policy, and Security, where he leads interdisciplinary research and policy initiatives on cybersecurity, digital governance, and AI regulation. A former United States Attorney in the Obama administration, he has made groundbreaking contributions to cybercrime enforcement and digital security. Beyond cybersecurity, Professor Hickton has been a champion of ethical AI governance and digital justice, advocating for policy frameworks that balance technological innovation with public interest protections.
Alex Moltzau

Policy Officer at the European AI Office in the European Commission
Alex Moltzau joined the European AI Office in the European Commission on the day it went live, as a Policy Officer and Seconded National Expert sent from the Norwegian Ministry of Digitalisation and Governance to DG CNECT A2 Artificial Intelligence Regulation and Compliance. In addition to his role at the Commission, he is a Visiting Policy Fellow at the University of Cambridge and was recently a guest speaker at Harvard Law School. He works mainly on AI regulatory sandboxes and coordinates the corresponding subgroup under the AI Board. He also recently worked on the AI Pact, including gathering signatories to the commitments for proactive implementation of the AI Act.
Dr. Lucie-Aimée Kaffee

EU Policy Lead & Applied Researcher at Hugging Face
Dr. Lucie-Aimée Kaffee is the EU Policy Lead and Applied Researcher at Hugging Face, working at the intersection of policy and AI technology. She holds a PhD in Computer Science, and her research focuses on harnessing AI to support online communities, such as Wikipedia. Driven by a commitment to community-driven decision-making, she integrates her expertise in machine learning, AI ethics, and policy to address societal challenges. She has previously worked at the University of Copenhagen, the University of Southampton, and HPI.
Karolina Iwanska

Digital Civic Space Advisor at the European Center for Not-for-Profit Law (ECNL)
Karolina Iwańska is a lawyer working at the intersection of human rights and technology. She is a Digital Civic Space Advisor at the European Center for Not-for-Profit Law, a non-governmental organization based in The Hague, Netherlands, that works to empower civil society by creating enabling legal and policy frameworks. Previously, she worked at the Warsaw-based Panoptykon Foundation, where she was involved in GDPR litigation related to online advertising and in advocacy on EU digital policy files, including the ePrivacy Regulation, the Digital Services Act and the AI Act. In 2019/20 she was a Mozilla EU Tech Policy Fellow working on regulatory frameworks for privacy-friendly advertising. She holds a master’s in law from the Jagiellonian University in Krakow, Poland.
Sophie Tomlinson

Director of Programs at the Datasphere Initiative
Sophie Tomlinson is the Director of Programs at the Datasphere Initiative. She is an experienced policy and communications professional with a background in technology policy, international trade and development.
Sophie joined the Internet & Jurisdiction Policy Network in 2019 to oversee communications and outreach. She previously worked as Deputy Director, Global Policy at the International Chamber of Commerce (ICC), where she led work on sustainable development, digital transformation and stakeholder engagement. Managing ICC’s digital economy policy work between 2015 and 2018, she developed policy and communication strategies to advance global business perspectives on issues ranging from Internet and telecoms to privacy, data protection, cybersecurity, emerging technology and digital trade.
Prior to joining ICC, Sophie worked in Westminster at the Airport Operators Association. She holds a Master of Social Science in International Public Policy from University College London, United Kingdom, and a Bachelor’s degree in English Literature from the University of Sussex.