
Generative AI Policy
1. General
This policy sets out the principles for the responsible use of generative artificial intelligence (AI) in the preparation, submission, review, and editorial processing of scientific manuscripts. The document is consistent with the international standards of academic integrity recommended by the Committee on Publication Ethics (COPE), as well as with the editorial policies of Elsevier and Springer. The policy aims to ensure transparency, accuracy, and trust in scientific results produced with the assistance of generative AI tools.
2. Use of generative artificial intelligence by authors
Authors may use generative artificial intelligence tools in an auxiliary capacity, for example for language correction, stylistic editing, technical translation, clarifying formulations, or structuring the text. Generative AI may also be used for code analysis, calculations, and statistical operations, provided that the authors verify the results. The use of AI to prepare supporting illustrative materials is permitted, but authors remain fully responsible for their accuracy.
At the same time, generative AI must not be used to fabricate data, quotations, bibliographic sources, or artificially generated “research results.” It is prohibited to use generative models to produce content presented as an original scientific contribution without the actual participation of the author. It is also not permitted to upload confidential, unpublished, or protected data to AI tools.
3. Disclosure of the use of generative AI
Authors are required to transparently disclose any use of generative AI in the preparation of the manuscript. This disclosure should be placed in the “Acknowledgements” section or in the “Materials and Methods” section and should indicate the name of the tool used, its version (if available), and a clear description of the stages of text preparation or analysis in which it was applied.
Generative AI cannot be listed as a co-author, as it is not capable of taking responsibility for the content of the work. Responsibility for the reliability of the data, the accuracy of the statements, the correctness of references, and compliance with ethical standards lies entirely with the authors.
4. Use of generative AI by reviewers
Reviewers are prohibited from submitting the content of manuscripts to third-party generative AI models, as this violates the confidentiality principle defined by COPE. Reviews must not be produced entirely or substantially by artificial intelligence tools. Such tools may be used only in a technical capacity, to improve the language or structure of the reviewer’s own text, without disclosing the content of the manuscript.
5. Use of generative AI by the editorial board
The editorial team of the collection may use generative AI tools for technical and administrative tasks, for example to improve the language quality of accompanying correspondence or to automate organizational processes. However, such tools shall not be used to make editorial decisions, alter the content of manuscripts, or process confidential materials without editorial oversight.
6. Authors’ Responsibility
Authors are fully responsible for all scientific results, data, statements, citations, and interpretations presented in the manuscript, regardless of whether generative artificial intelligence was used. If the use of AI tools results in inaccuracies, errors, or violations, responsibility for correcting them and for any consequences remains with the authors. In cases of policy violations, the editorial office reserves the right to request revisions, reject the manuscript, or apply ethical procedures in accordance with COPE recommendations.
7. Policy Updates
This policy may be reviewed and updated to reflect developments in generative artificial intelligence technologies, revisions to international standards, and changes in scholarly communication practices.
