AI in Science

Advancing Science with AI — and Advancing AI with Science

AI has become an integral part of how research is shared and improved, supporting everything from editorial workflows to personalized discovery. Its applications include:

  • Article recommendations and dynamic classification
  • Enhanced search and browse functionality
  • Image validation and annotation
  • Authorship and editorial matching
  • Content enrichment and structuring

Such integration enables better access to knowledge and streamlines the dissemination of trustworthy research (Source: AI Ethics in Scholarly Communication).

Our members have long pioneered the responsible use of AI, applying it to content creation, workflow optimization, and the development of innovative tools.

To fulfill its promise, AI must be used responsibly and in alignment with the ethos of science. STM advocates for:

  • Accuracy and reliability: AI should operate on the final Version of Record (VoR) to ensure that the most thoroughly vetted, up-to-date research is used.
  • Transparency and provenance: Systems must disclose sources and training data and provide traceable references to maintain scholarly integrity. Many scientists are already concerned about downstream reuse of their work, fearing misrepresentation or misuse of their data for political gain.
  • Human oversight: Despite AI’s capabilities, human expertise remains essential to uphold the quality, trust, and accountability of scientific publishing.

Source: AI Ethics in Scholarly Communication

See also: Recommendations for a Classification of AI Use in Academic Manuscript Preparation

Despite all the potential gains, AI could also negatively affect knowledge production and dissemination in the research ecosystem if not handled carefully, exacerbating the already pervasive blurring of fact and fiction.

AI’s capabilities can amplify misinformation, especially when used without proper guardrails. The “hallucination” problem—where AI generates false or misleading content—poses a threat to scientific credibility and public trust.

Much like the spread of fake news through social media, scientific misinformation could erode societal trust in research and decision-making. Responsible stewardship is vital to counter these risks.

Society and policy-makers need to be able to trust scientific information to make evidence-based decisions, and researchers need to be able to drive the innovation and discoveries that are central to competitiveness and other societal benefits.

Recognizing the dual nature of AI, STM is leading initiatives that use AI to protect research integrity. The STM Integrity Hub exemplifies this approach, offering:

  • A shared, cloud-based infrastructure for detecting integrity issues
  • Integration with trusted tools like Springer Nature’s AI-powered text detection system
  • A human-in-the-loop model to ensure editorial discretion and accountability

This balanced approach ensures AI supports—not replaces—the essential work of human reviewers and editors.


Explore the STM Integrity Hub | A unified approach to safeguarding research integrity


The latest AI news from STM


Announcing the selected presenters for the upcoming STM Innovator Fair

Meet the 14 startups and companies selected to present at this year’s STM Innovator Fair – a cornerstone of the upcoming STM Innovation & Integrity Days in London, 9-10 December. Selected from a record-breaking number of submissions this year, these innovators showcase some of the most promising technologies and ideas shaping the future of trusted…


STM responds to AI labelling consultation

Article 50 of the EU AI Act establishes obligations for transparent labelling of AI-operated systems and AI-generated content. Publishers’ use of AI in the publishing process is very likely to fall under exemptions from such obligations, but translations remain a grey area and good practices are encouraged.


EU adopts strategy for AI in science

The EU will establish a Resource for AI Science in Europe (RAISE), a virtual institute/network that coordinates key elements of the strategy: talent, compute/infrastructure, data, and funding. This will be complemented by the Data Union Strategy, expected by October/November, which aims to “ensure the availability of high-quality, large-scale datasets essential for training AI models.”


White House seeks input on AI regulation

The U.S. Office of Science and Technology Policy (OSTP) has issued a Request for Information (RFI) seeking public input on Federal statutes and regulations that may impede the responsible development and adoption of artificial intelligence technologies in the US. STM is preparing a submission as part of our ongoing advocacy on AI. Deadline October 27th…
