Guidelines on the Use of Generative AI Tools
Posted on 2025-12-20

The Journal of Language and Culture recognizes the increasing presence of artificial intelligence (AI) in research, writing, and publishing. These guidelines aim to protect the integrity, transparency, and impartiality that are essential to academic publishing. The guidelines below govern the ethical use of AI and set out expectations for both authors and reviewers in the submission and review process.
For Authors
Authors must declare any AI writing assistants used in preparing their paper, including the name and version of each tool. The declaration should describe the specific types of AI tools used and explain how they were employed in the work.
Most importantly, AI tools must not be listed as authors of any research output. Only humans may be recognized as authors, and contributors who participated only in certain aspects of the research or writing should be given appropriate credit.
Data Privacy and Security
Authors must ensure that all data processed with AI tools complies with the data-protection requirements of the authors’ respective countries.
Authors should protect sensitive information and ensure that electronic files are safeguarded against unauthorized access or breaches.
Plagiarism and Misconduct
Final responsibility for ensuring that the manuscript is entirely original lies with the authors. Any form of research misconduct, including fabrication, falsification, and plagiarism, is strictly prohibited.
AI in Research and Writing
AI-based tools can be valuable aids to research and writing, but they cannot replace critical thinking, creative work, and the authors’ own contribution. Content produced by or largely assisted by AI tools must therefore be attributed appropriately and declared in the manuscript’s acknowledgements (see the template below).
Authors must declare their use of AI tools. If they fail to do so, the submission must be rejected or retracted in line with COPE guidelines.
Disclosure Statement
Authors who use AI-assisted translation, editing, or research analysis must include the following statement in their manuscript:
“The author(s) declare the use of [name and version of AI tool] for [specific uses of the AI tool, such as translation, grammar checking, or data analysis]. The final manuscript has been reviewed and approved by the author(s) to ensure accuracy and integrity.”
For Reviewers
Confidentiality
Reviewers must keep manuscripts under review confidential. Reviewers must not upload a manuscript they have agreed to review, or any part of it, into an AI tool; doing so may compromise author confidentiality, intellectual property, and data privacy. Review reports must not be written by AI platforms; AI may be used only for grammar correction or readability improvement, and responsibility for the content and assessment lies solely with the reviewer. Generative AI cannot match the critical perspective, creativity, and expertise a reviewer brings to assessing a manuscript, and whether a review comment is fair remains a matter of the reviewer’s own judgment.
Use of AI Screening Tools
The JLC may use in-house AI tools, selectively licensed third-party AI solutions, or plagiarism-detection and screening services for certain key tasks to verify the quality of submitted manuscripts. These systems are used under strict security and ethical rules that protect the privacy of both authors and reviewers and help maintain the integrity of research and peer review.