This publication explores how success criteria for artificial intelligence (AI) are articulated in 22 policies and guidelines issued by legal, healthcare, and engineering professional associations in the EU between 2018 and 2025.
Employing a mixed-method approach that combines manual thematic analysis with unsupervised topic modelling (BERTopic), the study maps social expectations across sector-specific themes, including human oversight, professional accountability, and risk management.
Across sectors, the findings show that AI success is framed as conditional rather than absolute. Professional associations consistently emphasise human oversight, ethical responsibility, and institutional accountability, positioning AI as a supportive technology that must remain subordinate to professional judgement. While legal and healthcare bodies prioritise rights protection, patient safety, and trust, engineering organisations display more diverse approaches, ranging from precautionary, ethics-oriented governance to performance- and efficiency-driven adoption.
Ultimately, the publication highlights how professional associations function as key intermediaries in AI governance, defining AI success through a micro-level lens of professional responsibility and institutional readiness to manage uncertainty and maintain trust.
This publication can be downloaded from Zenodo. It has not yet been reviewed and approved by the European Commission.
Authors: Linnet Taylor, Merve Öner Kabadayi, Princy Marimuthu, Delaram Golpayegani, Marta Lasek-Markey, Arjumand Younus, Aphra Kerr, and Dave Lewis.