This publication examines how “successful AI” is defined by leading ICT professional bodies, focusing on the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). It analyses 11 key policies, codes of conduct, guidelines, and standards issued between 2018 and 2025, exploring how professional expectations shape responsible AI development and practice.
Using a mixed-methods approach that combines manual thematic analysis with unsupervised topic modelling (BERTopic), the study identifies how AI success criteria are articulated across four shared meta-themes: AI technical issues, AI uses, risks and harms, and AI governance.
The findings show that ACM and IEEE documents place a strong and persistent emphasis on ethics, human well-being, professional responsibility, and individual competence. Unlike supranational policy frameworks, these professional bodies primarily frame AI success at the level of individual practitioners and organisations, rather than through enforceable regulatory or policy mechanisms. Ethical AI is largely operationalised through voluntary codes of conduct, professional standards, training, and certification.
The publication highlights the enduring centrality of ethics and human well-being in professional guidance, in contrast to the growing shift toward risk-based regulation observed in supranational governance. It shows how ICT professional bodies project expectations of AI success onto everyday professional and organisational practice, thereby shaping norms and responsibilities that complement, but do not replace, regulatory and institutional approaches to defining successful AI.
This publication can be downloaded from Zenodo. It has not yet been reviewed and approved by the European Commission.
Authors: Victoria Wiegand, Delaram Golpayegani, Marta Lasek-Markey, Arjumand Younus, Monique Munarini, Yuening Li, Aphra Kerr, and Dave Lewis