Responsible research
We are currently conducting a large-scale survey of researchers to understand how generative AI can be used in an ethical and responsible way.
Guideline for responsible use of Generative AI in research
Introduction
The integration of generative AI into research methodologies presents transformative potential for scientific inquiry. However, it necessitates stringent oversight to preserve the integrity, accuracy, and ethical standards of scholarly work. These guidelines aim to establish a framework for the responsible use of generative AI across various stages of research.
1. Scope of Use
- Permissible Uses: Researchers are encouraged to utilize generative AI for tasks such as data analysis, literature searching and mapping, proofreading, formatting, stimuli creation, and other supportive functions that enhance productivity without undermining fundamental research tasks*.

*Tasks of this type are cognitively demanding and exhausting, involving repetitive and monotonous activities that offer little in the way of creative or novel challenges. They require sustained mental effort and attention to detail, leading to fatigue without the stimulation of intellectual engagement or innovation.
- Prohibited Uses: Generative AI must not replace the primary intellectual tasks of researchers, such as the formulation of research questions, primary writing, interpretation of results, and critical analysis. Additionally, AI-generated images and multimedia should not be used directly in final research outputs unless explicitly permitted by the journal and relevant to the research design.
2. Disclosure and Transparency
- Mandatory Disclosure: All uses of generative AI must be explicitly disclosed in research manuscripts, particularly in the methods section. This includes detailing the specific roles and outputs of the AI tools employed.
- Declaration of AI Contributions: While generative AI cannot be credited as an author, its usage in the research process must be acknowledged to maintain transparency with peer reviewers, journal editors, and the broader scientific community.
3. Oversight and Verification
- Human Oversight: Every phase of research involving generative AI must be supervised by human researchers to ensure the validity and integrity of the work. Researchers are responsible for critically assessing and verifying all AI-generated content.
- Peer Review and Editing: Most academic journals prohibit the use of generative AI in the peer review process, and submitting unpublished manuscripts to generative AI platforms risks data leakage. Consequently, we advise against using generative AI for manuscript review.
4. Ethical Considerations
- Data Privacy and Security: Researchers must consider data privacy issues, particularly in studies involving sensitive or personal data, ensuring that AI tools comply with all applicable data protection regulations.
- Avoidance of Misinformation: Care must be taken to prevent the dissemination of misinformation through rigorous verification of AI-generated content and cross-checking of AI-suggested citations and data.
5. Continuous Learning and Adaptation
- Training and Development: Researchers should continuously update their knowledge of generative AI capabilities and limitations.
These guidelines will be reviewed periodically to adapt to technological advancements and evolving ethical standards in research. They aim to support the scientific community in exploring the benefits of AI while safeguarding the foundational principles of research integrity and transparency.
16 April 2024