Despite the central role of AI in EU policy, there is no shared understanding of what success in its development and deployment should look like.
This mid-term policy brief synthesises insights from FORSEE’s first 11 research publications, examining how different stakeholders across the AI lifecycle define success, and what this means for European AI governance.
FORSEE’s findings reveal divergences among stakeholders. Views differ on what constitutes successful AI, the role of digital sovereignty, and how to address bias and inequality. At the same time, there is notable convergence: stakeholders broadly support stronger, binding regulation to manage risks, guide innovation, protect jobs, and address the challenges posed by generative AI.
Importantly, the research challenges dominant narratives around overregulation and fragmentation. Instead, stakeholders widely welcome the EU’s shift from voluntary ethics principles to enforceable, risk-based regulation, as exemplified by the AI Act. Differences lie primarily in how these rules should be implemented and improved.
One striking gap across stakeholder perspectives is the limited attention given to sustainability, which remains underexplored in current AI debates.
The policy brief outlines eight key recommendations to foster a more comprehensive understanding of AI success, one that combines technical excellence with societal and environmental goals. These include strengthening Europe’s strategic autonomy in AI, expanding multistakeholder engagement beyond industry actors, and embedding objectives such as sustainability, fairness, and civil society empowerment into funding and governance frameworks.
This publication can be downloaded from Zenodo. As FORSEE’s research reports remain under review until mid-2026, the policy brief may require revision pending the outcomes of that review.
Authors: Elizabeth Farries, Alexandros Minotakis, Loredana Bucseneanu, Dave Lewis, and Nikos Smyrnaios