AI has become an integral part of how research is shared and improved, supporting everything from editorial workflows to personalized discovery.
Such integration enables better access to knowledge and streamlines the dissemination of trustworthy research. (Source: AI Ethics in Scholarly Communication).
Our members have long pioneered the responsible use of AI, applying it to content creation, workflow optimization, and the development of innovative tools.
To fulfill its promise, AI must be used responsibly and in alignment with the ethos of science, a principle STM actively advocates. (Sources: AI Ethics in Scholarly Communication; Recommendations for a Classification of AI Use in Academic Manuscript Preparation).
Despite these potential gains, AI could also harm knowledge production and dissemination in the research ecosystem if not handled carefully, exacerbating an already pervasive blurring of fact and fiction.
AI’s capabilities can amplify misinformation, especially when used without proper guardrails. The “hallucination” problem—where AI generates false or misleading content—poses a threat to scientific credibility and public trust.
Much like the spread of fake news through social media, scientific misinformation could erode societal trust in research and decision-making. Responsible stewardship is vital to counter these risks.
Society and policy-makers need to be able to trust scientific information to make evidence-based decisions, and researchers need trustworthy information to drive the innovation and discoveries that are central to competitiveness and other societal benefits.
Recognizing the dual nature of AI, STM is leading initiatives that use AI to protect research integrity; the STM Integrity Hub exemplifies this approach.
This balanced approach ensures AI supports—not replaces—the essential work of human reviewers and editors.
AI & Trusted Research How AI shapes — and is shaped by — the academic record From accelerating drug development to enabling green technologies, AI holds extraordinary promise for science — but it also introduces real risks, from AI-generated misinformation to large-scale manipulation of the academic record. As the pace of change accelerates, publishers and…
Cloudflare, the CDN provider, has recently announced new tools and services for the era of AI. The two latest are its Content Signals Policy, which makes robots.txt indications more specific, and a set of responsible AI bot principles. These may help bridge the gap between current practices and evolving legislation by providing effective tools to manage AI bots.
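For illustration, a minimal robots.txt sketch of what such signals might look like; this assumes the three signal names Cloudflare has published (search, ai-input, ai-train), and the rules shown are placeholders rather than any site's actual policy:

User-Agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /

Under this sketch, a compliant crawler may fetch and index pages for search and use them as live AI inputs, but should not use them for model training. The signals are advisory, so their effect still depends on bot operators honoring them.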
We’re expecting a busy autumn: after conducting consultations over the summer, the EU Commission will release a strategy for AI in science on 7 October and a Data Union Strategy at the end of October. A consultation was just launched on a Digital Omnibus, a package to review and streamline digital legislation to alleviate compliance burdens…
The Danish Ministry for Culture tasked a dedicated working group with exploring questions about the interplay between AI and copyright. The group delivered a strong set of recommendations, among them recommendations on effective transparency in training data. EU countries will continue debating this topic into the autumn, exploring potential solutions to favour AI licensing.