Artificial intelligence is big news. Its influence spans from workplaces and schools to policymaking and media discussions, shaping not only how we live our lives but also the foundations of our societies. Yet, we tend to approach AI solely in terms of innovation, business and market growth. It is not uncommon to see calls for “AI success” go hand in hand with calls for deregulation and competitiveness, with society taking a backseat in discussions.
This leads us to a vital question: what are the right measures of success? Are there metrics beyond technological or economic efficiency that can capture the success of artificial intelligence for society as a whole?
While AI is advancing rapidly, societies continue to lack a shared understanding of what defines “successful” AI. Instead, different groups develop diverse and possibly even conflicting ideas on what constitutes success. A core belief of the FORSEE research project is that the success or failure of an AI application depends not only on its technical merits but also on its alignment with existing social norms, cultural practices, and institutional structures.
Our key argument is that AI applications, like other technologies, do not emerge in a vacuum. Instead, they are conceived, developed, deployed and received in a social context. They both reflect and reinforce existing social (mis)conceptions, biases and faults, as well as aspirations and hopes. In this sense, technology is inherently socially constructed, an idea that applies to AI in both its development and deployment.
By adopting this framework and closely examining how different stakeholders define success, we believe that FORSEE can develop a nuanced and enriched notion of success for society as a whole, eventually helping to guide future AI applications and policy efforts.
Towards Successful AI Applications
The FORSEE research project analyses existing successful AI applications in order to strengthen the capacity of the AI industry, policymakers and the public to address the future risks and opportunities of AI. We believe that this will broaden our understanding of success, emphasising conflict resolution, stakeholder empowerment, and alignment with fundamental rights and sustainable development.
The overall goal of FORSEE is to develop a novel approach to AI governance that can effectively steer AI development towards more successful outcomes for all, based on a new evaluative framework for assessing AI applications. Furthermore, FORSEE will develop a new prototype for registering risks and negative impacts. The project carries out this work through eight interconnected work packages.
None of this work would be possible without our diverse and international consortium. The consortium brings together expertise in various disciplines, including legal studies, political economy, computational social science, information and communication studies, media studies, and platform studies. This multidisciplinary approach ensures that insights from each field inform the others.
Looking Ahead
Our work towards successful AI begins now. As we progress with our research, we invite policymakers, SMEs, the public and various other stakeholders to engage with our findings and contribute to this vital discussion. The future of AI isn’t just about how technically sophisticated a model we can build: it’s about the choices we make in building it, why we make them, and how they fit into our societies.