Context and problem statement
When LLM and GenAI technologies process scientific literature to generate information and insights, values upheld in scholarly publishing, such as attribution, verifiability, accuracy, transparency, and consistency, are critical if the resulting outputs are to be as trustworthy as the underlying content. A collective effort among all stakeholders, including STM publishers and technology providers, is needed to develop guidelines and recommendations that ensure LLMs and GenAI tools treat scholarly content responsibly, supporting central tenets of scholarly communication such as attribution, verifiability, replicability, transparency, and trust.
If these risks are not addressed responsibly, the potential of such tools to misrepresent, distort, or omit critical elements threatens both the integrity of the science and public trust in scientific knowledge, with real-world implications, e.g., for patient care.