Generative AI policies for journals

For authors

The use of generative AI and AI-assisted technologies in scientific writing

This policy applies strictly to the manuscript writing stage. It does not extend to the use of AI tools for data analysis or the generation of research findings as part of the research process itself.

Authors who choose to use generative AI or AI-assisted technologies during the preparation of their manuscripts must limit their application to enhancing language clarity and improving readability. The use of such tools must always be accompanied by human oversight and editorial judgment. Authors are required to carefully review and edit any content produced or modified by AI, as such technologies may generate text that is convincing in tone yet factually inaccurate, incomplete, or biased. Full responsibility and accountability for the content of the work ultimately lie with the authors.

The use of generative AI or AI-assisted tools must be transparently disclosed within the manuscript. A corresponding statement will be included in the final published article. This practice promotes transparency, supports the integrity of scholarly communication, and helps maintain trust among authors, editors, reviewers, readers, and other contributors. Furthermore, such disclosure ensures compliance with the terms of use of the AI tools involved.

Under no circumstances should AI tools or systems be listed as authors or co-authors, nor should they be cited as the source of intellectual contributions. Authorship confers responsibility and requires contributions that only human individuals are capable of making. Each author must be able to affirm the accuracy and integrity of all parts of the work, approve the final manuscript, and agree to its submission. Authors are also responsible for ensuring that the submitted work is original, that all listed authors meet the journal’s criteria for authorship, and that the manuscript does not infringe on the rights of third parties.

Before submitting their work, authors are expected to review and adhere to the journal’s Publishing Ethics policy, which outlines essential standards for responsible research and authorship.

For reviewers

The use of generative AI and AI-assisted technologies in the journal peer review process

When a researcher is invited to review another researcher's manuscript, the document must be treated as strictly confidential. Reviewers must not upload the manuscript, or any part of it, into a generative AI tool, as doing so may violate the confidentiality and proprietary rights of the authors and, if personal data is involved, may also breach data privacy regulations.

This confidentiality requirement also applies to the peer review report, which may contain sensitive information about the manuscript and/or its authors. Therefore, reviewers must not input their review reports into any AI tool, even if the sole purpose is to improve grammar or readability.

The peer review process is a cornerstone of the scholarly ecosystem, and De Jure is committed to upholding the highest standards of integrity throughout it. Reviewing a scientific manuscript involves responsibilities that must be carried out by human reviewers. Generative AI or AI-assisted technologies must not be used to aid in the scientific assessment of a manuscript, as the critical thinking and original judgment required in peer review are beyond the capabilities of such tools. There is a risk that AI may generate conclusions that are inaccurate, incomplete, or biased. Reviewers remain fully responsible and accountable for the content of their review reports.

For editors

The use of generative AI and AI-assisted technologies in the journal editorial process

Submitted manuscripts must be treated as confidential documents. Editors must not upload any part of a manuscript into a generative AI tool, as doing so may compromise the authors’ confidentiality and proprietary rights, and, where personal information is included, may violate data privacy regulations.

This confidentiality obligation also extends to all correspondence related to the manuscript, including notification or decision letters, as these communications may contain sensitive information about the manuscript and/or its authors. For this reason, editors must not input such letters into AI tools, even for the sole purpose of improving language or readability.

The peer review process is a foundational component of the scientific system, and De Jure is committed to maintaining the highest standards of integrity throughout it. Managing the editorial evaluation of scientific manuscripts entails responsibilities that can only be fulfilled by human judgment. Generative AI or AI-assisted technologies must not be used by editors in the assessment or decision-making process, as the critical and original thinking required lies beyond the capacity of such tools. There is a significant risk that AI could produce conclusions that are inaccurate, incomplete, or biased. Editors remain fully responsible and accountable for the editorial process, the final decision, and the communication of that decision to the authors.