This publication maps the emerging landscape of legal challenges surrounding artificial intelligence in the European Union through a comparative analysis of 20 landmark cases decided between 2022 and 2025. While substantial academic attention has focused on the use of AI within legal proceedings or on theoretical liability frameworks, empirical analysis of litigation about AI systems themselves remains scarce. This research addresses that gap by examining how courts are adapting existing legal doctrines to resolve disputes over algorithmic systems before the AI Act is fully implemented.
The findings reveal a “stage-based” judicial logic. Courts frequently adopt a laissez-faire posture regarding the upstream phases of the AI value chain, such as data collection and model training. In these instances, judicial reasoning often insulates early-stage development to protect innovation and economic competitiveness. By contrast, courts apply a significantly stricter, rights-centered approach to downstream applications, intervening robustly when AI systems directly impact individual decision-making or fundamental rights.
The analysis also identifies a blind spot in the current litigation landscape: a marked absence of case law concerning AI-related bias or discrimination. Despite widespread societal concern about algorithmic fairness, these issues have rarely reached the courtroom.
This publication can be downloaded from Zenodo. It has not yet been reviewed and approved by the European Commission.
Authors: Linnet Taylor, Merve Öner Kabadayi, and Pradyun Krishnan