This publication analyses how “successful AI” is defined by supranational institutions. It examines 13 key policies, guidelines, and regulations issued by supranational bodies, including the EU, OECD, UN, and Council of Europe, between 2018 and 2025.
Using a mixed-methods approach that combines qualitative thematic analysis with unsupervised topic modelling (BERTopic), the study maps how social expectations of AI success are articulated across four recurring meta-themes: AI Technical Issues, AI Uses, AI Risks and Harms, and AI Governance.
The findings reveal a clear evolution from early, principle-based, and aspirational visions of AI success, centred on ethics, trustworthiness, and human-centric values, towards more operational, risk-based, and governance-oriented approaches. In particular, later documents increasingly emphasise implementation mechanisms, compliance structures, and institutional responsibility, reflecting the growing regulatory role of supranational bodies, most notably in the context of the EU AI Act.
Conversely, documents produced by non-EU institutions place greater emphasis on human rights, democratic values, and the Sustainable Development Goals, framing AI success more explicitly in terms of societal and global public-interest outcomes.
Overall, the publication highlights a shared expectation of "innovation with protections", in which AI success depends on a multi-level governance structure that balances economic and technological benefits with the mitigation of societal risks. This evolution illustrates how AI governance is moving from vision-setting toward enforceable, coordinated frameworks, while continuing to balance innovation, risk management, and societal impact.
This publication can be downloaded from Zenodo. The publication has not yet been reviewed and approved by the European Commission.
Authors: Delaram Golpayegani, Marta Lasek-Markey, Arjumand Younus, Aphra Kerr, Dave Lewis, and Alexandros Minotakis