This publication examines how “successful AI” is defined in international and national technical standards. It analyses 36 AI-related standards and reports issued by ISO/IEC JTC 1/SC 42 and the German Institute for Standardization (DIN) between 2018 and 2025, tracing how social expectations around AI performance, risk, and governance are formally articulated.
Using a mixed-methods approach that combines thematic analysis with unsupervised topic modelling (BERTopic), the study identifies four overarching meta-themes shaping these standards: AI technical issues, AI uses, risks and harms, and AI governance.
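For readers unfamiliar with BERTopic, the minimal sketch below illustrates what such a topic-modelling step might look like in Python. The corpus snippets, the min_topic_size value, and the preprocessing shown here are illustrative assumptions; the study's actual pipeline and configuration are described in the full report, not in this summary.

```python
# A minimal, assumption-laden sketch of an unsupervised topic-modelling step
# with BERTopic. The document segments and parameters are hypothetical.
from bertopic import BERTopic

# Hypothetical corpus: one text segment per standards document or clause.
documents = [
    "The organization shall establish an AI risk management process ...",
    "Documentation of training data provenance supports transparency ...",
    # ... remaining segments drawn from the analysed standards and reports
]

# BERTopic's default pipeline: sentence embeddings, UMAP dimensionality
# reduction, HDBSCAN clustering, and class-based TF-IDF topic descriptions.
topic_model = BERTopic(language="english", min_topic_size=5)  # assumed value
topics, probabilities = topic_model.fit_transform(documents)

# Inspect the discovered topics; in a mixed-methods design these clusters
# would then be grouped into higher-level meta-themes via thematic analysis.
print(topic_model.get_topic_info())
```

Because BERTopic infers the number of topics from the data rather than requiring it in advance, it is a common choice for discovering recurring themes in a relatively small, heterogeneous document corpus such as a set of standards.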
The findings show that standards primarily frame AI success at the organisational (“micro”) level, emphasising operational consistency, risk management, documentation, and process control. By contrast, ethical, societal, and environmental concerns are more often addressed in non-normative technical reports or deferred to regional and national policy frameworks.
Overall, the report highlights a multi-scalar model of AI governance: technical standards equip organisations with tools for compliance and failure management, while responsibility for defining normative values, acceptable risks, and societal priorities is shifted to institutions operating at meso and macro levels.
This publication can be downloaded from Zenodo. It has not yet been reviewed and approved by the European Commission.
Authors: Marta Lasek-Markey, Delaram Golpayegani, Manushresth Mahesh, Victoria Wiegand, Arjumand Younus, Yuening Li, Aphra Kerr, and Dave Lewis