
Researchers are increasingly required to evaluate how the use of AI in conducting, communicating, and evaluating research aligns with the core values of reliability, honesty, respect, and accountability. In their submission to the European Strategy for AI in Science, FORSEE consortium members Dave Lewis, Marta Lasek-Markey, and Delaram Golpayegani, together with their colleague Beyza Yaman from the ADAPT Centre at Trinity College Dublin, argue that there is an urgent need to accelerate the development of policy and best practice for the responsible use of AI in academic writing, editing, data analysis, and literature reviews. They propose three recommendations designed to advance an evaluation framework that is open, legally compliant, and aligned with the AI Act’s Fundamental Rights Impact Assessment (FRIA).
Advance research in understanding requirements for maintaining research integrity and protecting fundamental rights
Building a responsible AI ecosystem for research requires clear, co-created guidelines to protect research integrity and fundamental rights as AI becomes embedded across the research process. This means investing in research to define risk-assessment requirements, building on existing policy work such as the ERIP and OI4RRA working groups of CoARA (the Coalition for Advancing Research Assessment), and aligning with the FRIA provisions of the AI Act. Informed by the GenAI guidelines of the European Research Area Forum and UNESCO, these tools should help researchers anticipate and communicate AI risks to various stakeholders, ensuring that privacy, academic freedom, and other fundamental rights remain central in an AI-driven research landscape.
Develop an open vocabulary and a self-assessment tool for AI risk management
Advancing responsible AI in research requires shared open tools for transparency and accountability. Developing an open semantic vocabulary aligned with international standards and formalised using the Resource Description Framework (RDF) would enable researchers and institutions to signal AI use and assess related risks in a consistent and interoperable way. Building on frameworks like the CoARA-ERIP self-assessment checklist, a self-assessment tool could help research teams evaluate the impact of AI on fundamental rights, from privacy to academic freedom, while producing human- and machine-readable attestations. Such openly available tools would guide responsible practice and provide evidence for shaping future research assessment policies across Europe and beyond.
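To make the idea of a machine-readable attestation concrete, the sketch below serialises a hypothetical AI-use declaration as JSON-LD, one of the standard RDF serialisations. All vocabulary terms and the namespace URL are illustrative placeholders invented for this example; they are not terms from any published CoARA, FORSEE, or AI Act schema.

```python
import json

# A minimal sketch of a machine-readable AI-use attestation in JSON-LD.
# Every term in the "ex:" namespace below is a hypothetical placeholder,
# not part of any published vocabulary.
attestation = {
    "@context": {"ex": "https://example.org/ai-research-vocab#"},
    "@id": "ex:attestation-001",
    "@type": "ex:AIUseAttestation",
    "ex:tool": "large language model (unspecified vendor)",
    "ex:activity": "literature review summarisation",
    "ex:risksAssessed": ["ex:PrivacyRisk", "ex:AcademicFreedomRisk"],
    "ex:humanOversight": True,
}

# Serialise: the same record is readable by humans and by RDF-aware tooling.
print(json.dumps(attestation, indent=2))
```

Because JSON-LD is both ordinary JSON and valid RDF, a research team could publish such a record alongside a paper, and institutions or funders could aggregate and query the declarations with standard semantic-web tools.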
Support the trialling of self-assessment tools and revision of the vocabulary
Developing open semantic models and risk-assessment tools is a crucial first step. However, ongoing support is needed to test and refine their usability through co-creation. This can involve real-world trials with research organisations and expert groups to ensure the tools effectively assess AI risks and adapt to rapid technological and regulatory changes under the AI Act. Encouraging participation from AI tool vendors, especially European SMEs, will strengthen this process. The outcomes, including semantic models and self-assessment tools, should be openly shared in standard formats to support transparency, align with Open Science principles, and inform policy discussions across research organisations and international fora.
These recommendations aim to ensure that AI complements rather than replaces human expertise. They will benefit the academic community and contribute to our understanding of how networks of collaborating stakeholders, such as those producing, reviewing, publishing, and citing publicly funded research, can adapt to maintain quality and integrity while using AI systems responsibly.
Read the submission in detail: