Authors are expected to report findings accurately, acknowledge others' work, and disclose contributions and conflicts of interest.
Reviewers offer fair, thoughtful feedback while respecting confidentiality.
Editors and publishers apply consistent ethical policies, investigate concerns, and manage the process from submission to publication.
Institutions and funders shape environments that support ethical research and respond when standards are not met.
While AI tools may support research and writing, they cannot be held accountable for the work they generate. That is why emerging guidelines, such as those from COPE and the ICMJE, emphasize that AI must not be credited as an author. Researchers are instead encouraged to disclose clearly when and how AI tools were used in the research or manuscript process.
The same tools that raise these concerns are also being deployed to safeguard the quality of the published record. Publishers increasingly use AI systems to detect image manipulation, flag potential plagiarism, and spot patterns associated with paper mills or fabricated submissions, augmenting rather than replacing editorial review.
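To make the idea of automated text-overlap screening concrete, here is a minimal, purely illustrative sketch. It compares two passages by the share of overlapping word n-grams ("shingles") they have in common, a highly simplified version of the approach behind similarity checkers. The shingle size and flagging threshold are arbitrary assumptions for demonstration; production services are far more sophisticated, and in all cases the tool only surfaces candidates for a human editor to judge.

```python
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping n-word sequences ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity between the shingle sets of two passages."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)


if __name__ == "__main__":
    submitted = ("We observed a significant increase in expression "
                 "levels across all treated samples.")
    prior = ("A significant increase in expression levels was "
             "observed across all treated samples.")
    score = jaccard_similarity(submitted, prior)
    # The 0.25 threshold is an assumed cutoff: the script flags the pair
    # for editorial review; a person makes the integrity judgment.
    verdict = "flag for editor" if score > 0.25 else "no flag"
    print(f"similarity={score:.2f} -> {verdict}")
```

The point of the sketch is the division of labor it encodes: the algorithm produces a similarity score, while the decision about misconduct remains with editors, which is exactly the "augmenting, not replacing" role described above.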
AI can support integrity efforts, but it is not a substitute for ethical standards, transparency, and accountability. Publishers, authors, and reviewers alike must approach AI tools critically and responsibly—ensuring that technological advancement strengthens, rather than undermines, trust in scholarly communication.