Artificial Intelligence Policy
Guidelines for Responsible AI Usage in Jurnal Educative: Journal of Educational Studies
1. Overview and Core Principles
Jurnal Educative recognizes the rapid development of generative AI and AI-assisted technologies (such as Large Language Models, e.g., ChatGPT) in academic research. While these tools can improve efficiency in writing and data processing, they must be used responsibly. Artificial Intelligence (AI) cannot replace human critical thinking, expertise, and evaluation.
Ultimately, authors are fully responsible and accountable for the contents of their work, including the accuracy, validity, and originality of the manuscript.
2. Policy for Authors
A. Authorship and Accountability
- No AI Authorship: AI tools do not qualify for authorship. Authorship entails responsibilities (such as approving the final version and being accountable for the work's integrity) that can only be carried out by humans. AI tools must not be listed as an author or co-author.
- Human Oversight: Authors must carefully review and verify any output generated by AI. AI can produce biased, incorrect, or fabricated information (hallucinations). Authors are responsible for ensuring the work does not infringe third-party rights or violate plagiarism policies.
B. Disclosure and Transparency
- Authors must be transparent about their use of AI tools. If AI or AI-assisted technologies were used in the writing process, data collection, or analysis, this must be declared.
- Exceptions: Basic tools for checking spelling, grammar, and punctuation (e.g., standard Grammarly, spell-checkers) do not require a specific declaration unless they significantly alter the text.
- Mandatory Declaration: Authors must include a statement at the end of their manuscript (before the References section) using the following format:
Declaration of Generative AI and AI-assisted technologies in the writing process
During the preparation of this work, the author(s) used [NAME OF TOOL / SERVICE] in order to [REASON, e.g., improve readability/generate code/analyze data]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
C. Use of AI in Figures and Images
- In alignment with global ethical standards, the use of generative AI to create or alter images, figures, or scientific artwork is generally not permitted, except when the use of AI is part of the research design/methodology itself (e.g., studying AI-generated patterns in linguistics). If used as part of the methodology, it must be described in detail in the "Methods" section.
3. Policy for Peer Reviewers
Confidentiality is paramount. When a researcher is invited to review a manuscript for Jurnal Educative, they must treat the document as confidential.
- Prohibition on Uploading Manuscripts: Reviewers are strictly prohibited from uploading a submitted manuscript (or any part of it) into a generative AI tool (such as ChatGPT, Claude, etc.). Doing so violates the authors’ confidentiality and proprietary rights, as these tools may use the input data for training purposes.
- Review Reports: Reviewers should not use generative AI to write their peer review reports. The critical assessment required for peer review is a human responsibility.
4. Policy for Editors
Editors must maintain the integrity of the assessment process and protect author data privacy.
- Decision Making: Generative AI should not be used to assist in the final editorial decision-making process. Editorial decisions require human judgment and accountability.
- Data Privacy: Editors must not upload manuscripts, decision letters, or confidential correspondence into public generative AI tools to prevent data leakage and privacy breaches.
