AI Policy
AI bots such as ChatGPT cannot be listed as authors on your submission.
Authors must clearly disclose any use of tools based on large language models and generative AI for data or code generation, data collection, cleaning, analysis, or interpretation, stating which tool was used and for what purpose, preferably in the methods or acknowledgements section.
Photographs, videos, or illustrations created wholly or partly using generative AI are not acceptable. The use of non-generative machine learning tools to manipulate, combine, or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review. Concealing the use of AI tools is unethical.
The use of AI-based tools for copyediting, translation, and spell checking does not need to be declared.
AI outputs must not be cited as primary sources to support specific claims.
Editors and Reviewers must ensure the confidentiality of the editorial work and the peer review process.
Editors must not share information about submitted manuscripts or peer review reports with any tools based on large language models and generative AI.
Reviewers must not use any tools based on large language models and generative AI to generate review reports.
Concealing the use of AI tools is unethical and undermines transparency in editorial work and peer review.
The editorial and peer review processes are confidential; submitting a manuscript or review report to an AI tool discloses confidential information to a third party, violating the confidentiality principle and compromising the integrity of the process.