AI has become an integral part of how research is shared and improved, supporting everything from editorial workflows to personalized discovery.
Such integration broadens access to knowledge and streamlines the dissemination of trustworthy research. (Source: AI Ethics in Scholarly Communication.)
The role of AI in peer review is at the center of an active and rapidly evolving debate. Perspectives are shifting, and the following resources offer valuable insights to help navigate this critical conversation.
Our members have long pioneered the responsible use of AI, applying it to content creation, workflow optimization, and the development of innovative tools.
To fulfill its promise, AI must be used responsibly and in alignment with the ethos of science. STM's recommendations are set out in AI Ethics in Scholarly Communication and in Recommendations for a Classification of AI Use in Academic Manuscript Preparation.
Academic publishers are developing guidelines for their authors on the correct and transparent use of AI in preparing manuscripts for publication. Reference: Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit (Springer, February 2025).
Despite all the potential gains, AI could also harm knowledge production and dissemination in the research ecosystem if not handled carefully, exacerbating an already pervasive blurring of fact and fiction.
AI’s capabilities can amplify misinformation, especially when used without proper guardrails. The “hallucination” problem—where AI generates false or misleading content—poses a threat to scientific credibility and public trust.
Much like the spread of fake news through social media, scientific misinformation could erode societal trust in research and decision-making. Responsible stewardship is vital to counter these risks.
Society and policy-makers need to be able to trust scientific information to make evidence-based decisions, and researchers need to be able to drive the innovation and discoveries that are central to competitiveness and other societal benefits.
Recognizing the dual nature of AI, STM is leading initiatives that use AI to protect research integrity. The STM Integrity Hub exemplifies this approach.
This balanced approach ensures AI supports—not replaces—the essential work of human reviewers and editors.
On 9–10 December 2025, STM’s annual Innovation & Integrity Days brought together publishers, startups, funders, researchers and infrastructure providers for two days of focused, cross-sector collaboration in London. Now in its third year (building on the legacy of STM Week), this year’s Innovation & Integrity Days reflected a noticeable shift: more dialogue across traditional boundaries, more…
In early November, STM CEO Caroline Sutton spent several days in Tokyo meeting with funders, government leaders, research agencies, and publishing groups — alongside delegates from STM’s Japan Chapter. As in last year’s visit, the conversations were productive, wide-ranging, and grounded in strong local partnerships. And while open science dominated the agenda in 2024, this…
Meet the 14 startups and companies selected to present at this year’s STM Innovator Fair – a cornerstone of the upcoming STM Innovation & Integrity Days in London, 9-10 December. Selected from a record-breaking number of submissions this year, these innovators showcase some of the most promising technologies and ideas shaping the future of trusted…
Article 50 of the AI Act establishes obligations for the transparent labelling of AI-operated systems and AI-generated content. Publishers' use of AI in the publishing process is very likely to fall under exemptions from these obligations, but AI-assisted translation remains a grey area, and good practice is encouraged.