Generative AI Policy

1. Scope and Applicability
• This policy regulates the use of generative artificial intelligence (AI) and AI-assisted technologies in the preparation of manuscripts submitted to the Indonesian Journal of Islamic Law (IJIL).
• The policy applies primarily to the writing and language-production process. The use of AI tools in data analysis, coding, or computational research is not prohibited, provided that such use is transparent, methodologically sound, verifiable, and remains under full human oversight.
• IJIL recognises that generative AI tools may produce incomplete, biased, or inaccurate outputs and therefore cannot replace human scholarly judgement or intellectual responsibility.


2. Permitted Uses
Generative AI tools may be used in a limited, supportive capacity, for purposes such as:

• Improving grammar, spelling, readability, or language clarity.
• Assisting with stylistic refinement or rephrasing under direct human supervision.
• Supporting non-substantive editorial tasks that do not affect the scholarly content.

In all permitted uses:
• Authors must critically review, edit, and verify AI-generated text.
• Authors retain full responsibility and accountability for the accuracy, originality, and integrity of the final manuscript.


3. Prohibited Uses
Generative AI and AI-assisted technologies must not be used in ways that compromise scholarly integrity, including:

• Listing AI tools as authors or co-authors, as authorship entails intellectual accountability that can only be assumed by human researchers.
• Using AI to generate or substantially shape the intellectual core of the manuscript, including research questions, central arguments, normative or theoretical interpretations, original legal analysis, or scholarly conclusions.
• Uploading confidential, sensitive, or proprietary materials (e.g., interview transcripts, unpublished manuscripts, court documents, or personal data) into AI systems in ways that risk privacy, data security, or intellectual property rights.


4. Disclosure Requirement
Authors must disclose any use of generative AI tools in the writing process. Disclosure must appear in a separate section at the end of the manuscript, prior to the references.
Title of Section:
Declaration of Generative AI and AI-Assisted Technologies in the Writing Process

Statement Example:
During the preparation of this work, the author(s) utilised [name of AI tool/service] for [specific purpose, e.g., language refinement]. The author(s) subsequently reviewed, edited, and verified all AI-assisted content and take(s) full responsibility for the final version of the manuscript.
• Disclosure is not required for basic tools such as spell checkers, grammar checkers, or reference management software.
• If no generative AI tools were used, no statement is required.


5. Editorial and Peer Review Process
• Generative AI tools must not be used by editors or reviewers to perform substantive scholarly evaluation, peer review, or editorial decision-making.
• Human judgement, expertise, and accountability remain central to all editorial and peer review decisions at IJIL.

Limited and non-substantive uses of AI by editors (e.g., language polishing of decision letters or administrative support) do not replace editorial responsibility and must not influence evaluative judgements.
IJIL may periodically review and update this policy in response to evolving ethical standards, technological developments, and guidance issued by COPE, Scopus, and other relevant scholarly bodies.


6. Ethical Standards and Consequences
• IJIL adheres to the Committee on Publication Ethics (COPE) principles and emerging guidance on the responsible use of AI in scholarly publishing.
• Failure to disclose the use of generative AI tools, or use of such tools in violation of this policy, constitutes a breach of publication ethics.
• Possible consequences include: 1) rejection of the manuscript during review, 2) correction, expression of concern, or retraction after publication, and 3) notification to the author’s institution where appropriate.