As more and more authors turn to large language models (LLMs) and other generative artificial intelligence (genAI) tools for writing support, scientific publishers have responded with a patchwork of guidance that varies considerably across journals and publishers. Here, we attempt to navigate this complex environment and summarise the very different requirements.
To support our clients, authors, and fellow writers, we have reviewed the most recent guidance from major publishers (Springer Nature, Elsevier, Wiley, etc.) and summarised the key considerations for genAI use by authors, reviewers and editors. We found substantial differences across publishers in the level of detail, in what is allowed, and in what should be declared (and how).
Only a few general trends emerged:
- Listing LLMs as authors is generally not allowed.
- We understand that most journals won’t treat the mere use of genAI in preparing your paper as grounds for rejection – provided the use is appropriately disclosed and scientific integrity is maintained.
- In contrast, failing to disclose the use of genAI could get your article rejected or retracted.
- Corporate publishers seem particularly concerned about the implications of genAI use for plagiarism, copyright and confidentiality, and most of them impose tight restrictions on the use of open-access genAI tools, especially for peer reviewers.
- Academic publishers are more likely to mention the ethical implications of genAI use: hallucinations, bias, racism and breaches of research integrity.
- The current policy by Wiley (see section 3 below) stands out as the most detailed and comprehensive guidance on the use of genAI and its appropriate disclosure in scientific publications.
The summaries below include links to original sources (last accessed 06 January 2026). Where available, we included links to AI guidance for books and for journal articles. The bulleted summaries are based on the guidance for authors, peer reviewers and editors of journal articles.
If you are reading this, you probably already know that genAI guidance in scientific publishing is a fast-moving field, and there’s a good chance the guidelines have changed since 06 January 2026. Even if a publisher’s guidelines haven’t changed, it’s a good idea to check your target journal for any additional guidance or requirements. It’s also important to stay informed about the latest developments in genAI capabilities and to keep up to date with community discussions of what constitutes ethical and appropriate use of AI in scientific research and publishing.
1. Springer Nature group/Nature portfolio
https://link.springer.com/brands/springer/journal-policies
https://www.nature.com/nature-portfolio/editorial-policies/ai
- LLM authorship is not allowed.
- AI-generated images are not permitted (with a few exceptions).
- GenAI use by authors should be disclosed in the methods section (with the exception of when AI tools were used to improve language and grammar – this doesn’t need to be declared).
- Peer reviewers should not use free genAI tools (however, Springer Nature is working on developing secure genAI tools to assist peer review).
- Nature portfolio journals use AI to generate accessory content, which includes, but is not limited to, key points, editorial summaries, glossary terms, plain language summaries and social media posts. According to their policy, this use is not disclosed.
2. Elsevier
https://www.elsevier.com/en-gb/about/policies-and-standards/generative-ai-policies-for-journals
- LLM authorship is not allowed.
- AI-generated images are not permitted (with specific exceptions).
- GenAI use by authors should be disclosed in the methods section (with the exception of when AI tools were used to improve language and grammar – this doesn’t need to be declared).
- Peer reviewers should not use genAI tools (not even to check grammar).
- Elsevier’s editorial process may be assisted by in-house AI tools (it is unclear whether this use is disclosed to authors or peer reviewers).
Note that high-profile journals that are part of Elsevier have additional guidance on the appropriate use of genAI and disclosure:
- Cell group: https://www.cell.com/cell/information-for-authors/journal-policies#generative
- Lancet group: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(24)02615-1/fulltext
3. Wiley
https://www.wiley.com/en-us/publish/book/resources/ai-guidelines/
https://www.wiley.com/en-gb/publish/article/ai-guidelines/
In our opinion, Wiley’s policy is the most comprehensive, transparent and constructive. We encourage you to follow the links and read their guidance in full. Here is a summary of key points:
- LLM authorship is not allowed.
- AI-generated images are generally allowed (with a number of specific exceptions).
- Detailed guidance is provided on disclosing genAI use by authors – what should be declared, where, and how (with example wording).
- Peer reviewers may use genAI tools for specific tasks and should disclose such use to the editors.
- Wiley editors use AI tools for reviewer matching and research integrity assessment, and they foresee expanding AI support in future to other steps of the peer review process.
4. Cambridge University Press
https://www.cambridge.org/core/services/publishing-ethics/authorship-and-contributorship-journals
- LLM authorship is not allowed.
- GenAI use by authors should be disclosed.
- AI use should not breach the publisher’s plagiarism policy.
- Authors remain responsible and accountable for all content.
5. Oxford University Press
https://academic.oup.com/pages/for-authors/books/author-use-of-artificial-intelligence
https://academic.oup.com/pages/for-authors/journals/preparing-and-submitting-your-manuscript/ethics
- LLM authorship is not allowed.
- GenAI use by authors should be disclosed in the Methods or Acknowledgements section.
6. JAMA network journals
https://jamanetwork.com/journals/jama/fullarticle/2807956
- LLM authorship is not allowed.
- GenAI use by authors should be disclosed.
- Authors remain responsible and accountable for all content.
- Peer reviewers are warned against risks of using free genAI tools and are required to disclose use of genAI in peer review.
- JAMA also has guidance for disclosing the use of LLMs and chatbots in research: https://jamanetwork.com/journals/jama/fullarticle/2816213.
7. SAGE journals
https://www.sagepub.com/journals/publication-ethics-policies/artificial-intelligence-policy
- LLM authorship is not allowed.
- GenAI use by authors should be disclosed in the methods section (with the exception of using AI tools to improve language, grammar and structure – this doesn’t need to be declared).
- Peer reviewers may use genAI to improve language and readability of reviews; they remain responsible for the content of peer reviews.
- Editors may use genAI to identify suitable peer reviewers but must not use it to generate decision letters or summaries of unpublished research.
8. Taylor & Francis
https://taylorandfrancis.com/our-policies/ai-policy
- LLM authorship is not allowed.
- GenAI use by authors should be disclosed.
- GenAI-generated images are not permitted.
- Editors and peer reviewers must not upload files, images or information from unpublished manuscripts into genAI tools.
