Responsible use of large language models in manuscript authorship, peer review, and editorial processes: a Delphi consensus among editors-in-chief of anaesthesia and pain medicine journals (RULE-AP).
Publication Type: Journal Article
Year of Publication: 2026
Authors: De Cassai A, Dost B, Augoustides JG, Azamfirei L, Alanoğlu Z, Azi LMaria, Calvache JAndrés, Cerny V, De Hert S, Eldawlatly A, Farber MK, Sobreira-Fernandes D, Fettiplace MR, Galante D, Garg R, Goldstein HV, Abad-Gurumeta A, Gupta L, Hemmings HC, Jones CA, Hochberg MC, Katz JD, Kang H, Talu GKöknel, Kraychete DC, Landau R, Lee S, Lum HD, Lundgren C, Makuloluwa TRekha, Martelletti P, Palermo TM, Peyton PJ, Poisbeau P, Rathmell JP, Roquilly A, Schwarz SKW, Shevade M, Sloan PA, Sweitzer BJ, Neto ASerpa, Stahel PF, Şatırlar ZÖzköse, Turan A, Turk DC, Valeriani M, Werner MU, Young PJ, Zabolotskikh IBorisovich, Zacharowski K, Zdanowski S
Journal: Br J Anaesth
Date Published: 2026 Feb 25
ISSN: 1471-6771
Abstract

This article presents a Delphi consensus developed by a panel of editors-in-chief of anaesthesiology and pain medicine journals to guide the responsible use of large language models (LLMs) in academic publishing. LLMs offer potential benefits for scientific writing, including language editing, summarisation, translation, information organisation, and support for non-native English speakers, but their misuse raises concerns about accuracy, transparency, confidentiality, and research integrity. Through a three-round modified Delphi process involving 53 editors-in-chief or their delegates, 59 statements were generated and categorised into guidance for authors, editors, reviewers, and publishers, with particular attention to LLM disclosure practices and perceived risks. The consensus recognises that LLMs are useful tools in academic publishing for authors, reviewers, and editors. However, their use must be guided by ethics, legality, and principles of transparency and accountability. LLMs may assist with limited editorial and authorial tasks provided that their use is fully disclosed and all outputs are verified by humans. The consensus also emphasises the inappropriateness of using LLMs to generate original or ideative content, which should remain a strictly human responsibility. Moreover, LLMs must not generate data, references, conclusions, or entire manuscripts, nor be used for editorial decisions or peer-review reports. Editors expressed concerns about 'hallucinations', erosion of critical skills, confidentiality breaches, and the proliferation of low-quality LLM-generated manuscripts. The resulting guidance highlights transparency, human accountability, and careful verification as essential principles for integrating LLMs into scholarly workflows while preserving the integrity of scientific publishing.

DOI: 10.1016/j.bja.2026.01.029
Alternate Journal: Br J Anaesth
PubMed ID: 41748337