AI POLICY
IJRASHT maintains strict ethical standards governing the use of generative AI and automated tools (e.g., large language models and chatbots) by authors, reviewers, and editors.
1. Guidelines for Authors
Disclosure Requirement: Authors must disclose any use of generative AI in the preparation of their article, particularly when such use goes beyond routine language correction, editing, or formatting.
Human Responsibility: Authors are solely responsible for verifying the validity and accuracy of output generated by any automated tools used in their research or manuscript preparation.
No Authorship for AI: Automated tools and AI cannot be credited as authors or co-authors.
Citation Rule: Generative AI cannot be cited as a source in the references.
Consequences of Misuse: Failure to disclose AI use or misuse of AI tools may result in manuscript rejection, correction, or retraction in accordance with the journal’s publication ethics policies.
Licensing & Rights: Any AI-assisted content must comply with the journal’s copyright and open-access licensing policies and must not infringe third-party rights.
2. Guidelines for Peer Reviewers and Editors
Restrictions on Use: Peer reviewers and editors must not use generative AI to draft their assessments or peer-review reports, as doing so risks breaching manuscript confidentiality and may produce superficial, biased, or fabricated feedback.
Disclosure for Editing: Any use of AI for basic editing or rewriting by reviewers/editors must be clearly disclosed.
3. Journal Operations & Integrity
Human-in-the-Loop: Any automated tools the journal uses for routine tasks remain under human oversight.
Verification: IJRASHT ensures that any integrity issue flagged by automated tools, such as text similarity or figure manipulation, is manually verified by an editor or staff member.