
Who gets to shape artificial intelligence and who is left to adapt to it?
Across Europe, the actors most affected by AI often have the least capacity to influence its development. Smaller actors, such as small and medium-sized enterprises (SMEs) and civil society organisations (CSOs), lack the resources to shape technological development and remain dependent on infrastructures controlled by large technology companies.
Three recent FORSEE publications explore what successful artificial intelligence looks like from the ground up. Researchers from the UCD Centre for Digital Policy, European Digital SME Alliance, WZB Wissenschaftszentrum Berlin für Sozialforschung, TASC and the University of Toulouse analysed how SMEs, CSOs, and labour actors understand AI success and where their expectations, concerns, and constraints diverge. What emerges is not a single definition of success, but a set of tensions that reveal who benefits from AI today, and who struggles to shape it.
SMEs welcome regulation, but face structural limits in realising AI opportunities
Small and medium-sized enterprises represent the vast majority of European businesses and face intense pressure within the market-driven landscape. They adopt artificial intelligence to boost productivity and reduce costs. AI success for them is tied to human empowerment, productivity, and efficiency.
However, structural limitations restrict SMEs’ capacity to act. Reliance on external cloud infrastructures and limited funding constrain their ability to scale and innovate. Unclear regulatory guidance and disproportionate compliance burdens, coupled with a lack of resources, add to these companies’ concerns.
Despite what the current political debate suggests, SMEs don’t view regulation primarily as an obstacle. The European Union’s AI Act, competition law, and common principles for technology development help level the playing field, allowing small businesses to operate on a more equal footing with big players.
“If you look at the EU, you’ll see that a lot of new European companies have emerged because they address regulatory needs. Those regulations created entire business models. … So if we reduce regulations, we’re essentially helping big tech and large enterprises consolidate,” said one interviewee.
Civil society is worried about the erosion of democracy and other AI-related risks
Where SMEs often see artificial intelligence as an opportunity for growth, civil society organisations view the technology from a fundamentally different vantage point. CSOs are deeply worried about the risks of AI, including the erosion of democracy and intensified surveillance, but find themselves unable to act effectively due to severe resource constraints.
AI gender bias illustrates this gap clearly. The European Policy Centre recently called for mainstreaming gender in European AI policy, noting that, for example, the proliferation of AI-enabled deepfake pornography and ‘nudifier’ apps poses a severe threat, disproportionately targeting women.
Civil society groups understand algorithmic biases and harmful AI-generated content not as mere technical glitches, but as systemic issues that reinforce existing social prejudices. However, these organisations struggle to intervene effectively. Shrinking funding and algorithmic content moderation limit their online visibility, leaving them at a disadvantage compared to large technology firms.
As one CSO participant explained: “There are capacity issues. We have our daily work, so how can we go out and learn about everything on top of that? Especially if funding is shrinking, CSO workers are having to do more and more.” Within a context of austerity and rising uncertainty, AI gender bias risks being treated as of secondary importance.
SMEs approach gender bias differently: they tend to treat the problem as a technical defect rather than a systemic issue, attempting to clean their data and adjust their algorithms.
As one SME founder noted: “A critical thinking approach to data is the most important step before starting any machine learning model. … we can control a good percentage of all issues that can emerge in this kind of project.”
AI governance: from regulation to participation
While optimism about AI, particularly among SMEs, drives experimentation and innovation, structural vulnerabilities limit smaller actors’ ability to meaningfully shape AI development. These dynamics are further shaped by broader power asymmetries in the AI ecosystem, where smaller actors remain dependent on infrastructures and resources controlled by large technology companies.
Strengthening technology governance requires moving beyond a purely regulatory focus towards creating conditions for participation. This includes more targeted funding for SMEs and CSOs, improved access to data and infrastructure, and meaningful inclusion in policy discussions and decision-making processes.
Addressing AI gender bias and broader AI risks is not only a technical challenge. It’s also about empowering the actors who develop, use, and are impacted by these systems under real-world constraints. Without that shift, “successful AI” will continue to reflect the priorities of those with the most resources, rather than those most affected by it.
For detailed analysis and methodological notes, read the full FORSEE reports:
Narrative framework on success among SME representatives, by Alexandros Minotakis, Elizabeth Farries, Loredana Bucseneanu, Sandra Sieron, Molly Newell, Eugenia Siapera, and Aphra Kerr
Gendered perspectives among SME representatives, by Alexandros Minotakis, Elizabeth Farries, Loredana Bucseneanu, Sandra Sieron, and Molly Newell
Analysis of Civil Society Organisations’ perspectives on AI impact on gender imbalance, by Elizabeth Farries, Alexandros Minotakis, Pierre Ratinaud, Loredana Bucseneanu, Sandra Sieron, and Julia Pohle