Responsible research
We are currently conducting a large-scale survey of researchers to understand how generative AI should be used ethically and responsibly.
Guidelines for the responsible use of Generative AI in research
Introduction
The integration of generative AI into research methodologies holds transformative potential for scientific inquiry. However, it necessitates stringent oversight to preserve the integrity, accuracy, and ethical standards of scholarly work. These guidelines aim to establish a comprehensive framework for the responsible use of generative AI across the various stages of research, addressing emerging challenges and ensuring alignment with evolving ethical standards. This document is based on an analysis of the research GenAI policies of the top 100 universities*, the European Union guidelines*, and four main publishers*.
*Top 100 universities from ARWU 2024: https://www.shanghairanking.com/
*European Union: https://research-and-innovation.ec.europa.eu/document/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en
*Four main publishers: https://www.gaiforresearch.com/journal-policy
1. Scope of Use
- Permissible Uses: Researchers are encouraged to use generative AI for repetitive and monotonous tasks, including but not limited to literature searching and mapping, stimuli creation, labelling, interviews, programming, data analysis, referencing, proofreading, formatting, and other supportive functions that enhance productivity without undermining fundamental research tasks*.
*These tasks are typically cognitively demanding and exhausting, involving repetitive and monotonous activities that offer little in the way of creative or novel challenges. They require sustained mental effort and attention to detail, leading to fatigue without the stimulation of intellectual engagement or innovation.
- Discouraged Uses: Researchers should avoid using generative AI to replace the primary intellectual tasks of researchers and research subjects, including but not limited to formulating research questions, primary writing, interpreting results, critical analysis, journal peer review, and standing in for participants or interviewees. Additionally, GenAI-generated images and multimedia should not be used directly in final research outputs unless explicitly permitted by the journal and relevant to the research design.
2. Disclosure and Transparency
- Mandatory Disclosure: All uses of generative AI must be explicitly disclosed in research manuscripts, particularly in the methods section. This includes detailing the specific roles, tool versions, purposes, and outputs of the AI tools employed.
- Declaration of AI Contributions: While generative AI cannot be credited as an author, its use in the research process must be acknowledged to maintain transparency with peer reviewers, journal editors, and the broader scientific community. Additionally, comprehensive records of AI-related activities must be maintained to support transparency (a minimal record-keeping sketch follows this list).
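For illustration only, the following is a minimal sketch of how such records could be kept as a structured, append-only log, written in Python using only the standard library. The file name, field names, and JSON Lines format are assumptions chosen for demonstration, not a prescribed standard; researchers should adapt the record to their institution's and target journal's disclosure requirements.

```python
# Minimal, illustrative sketch of a GenAI usage log (not a prescribed standard).
# The file name, field names, and JSON Lines format are assumptions for demonstration.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("genai_usage_log.jsonl")  # hypothetical file name


def record_genai_use(tool: str, version: str, purpose: str,
                     prompt_summary: str, output_summary: str) -> None:
    """Append one structured record of a GenAI interaction to a JSON Lines file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # name of the GenAI system used
        "version": version,                # model/tool version, as needed for disclosure
        "purpose": purpose,                # what the tool was used for
        "prompt_summary": prompt_summary,  # short description of the input
        "output_summary": output_summary,  # what was produced and how it was used
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")


# Example usage (hypothetical values):
record_genai_use(
    tool="<GenAI tool>", version="<version>",
    purpose="literature mapping",
    prompt_summary="asked for related work on the study topic",
    output_summary="candidate reference list; every item verified manually",
)
```

Keeping such records in a structured form makes it straightforward to summarize tool versions, purposes, and outputs in the methods section and in any required AI-use declaration.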
3. Oversight and Verification
- Avoidance of Misinformation: Care must be taken to prevent the dissemination of misinformation: rigorously verify AI-generated content, cross-check AI-suggested citations and data, and implement fact-checking processes so that incorrect GenAI-generated information is not relied upon (an illustrative citation-check sketch follows this list).
- Human Oversight: Every phase of research involving generative AI must be supervised by human researchers to ensure the validity and integrity of the work, as well as adherence to academic and ethical standards. Researchers are responsible for critically assessing and verifying all AI-generated content. AI should complement human decision-making, not replace it, especially in ethical decisions.
- Peer Review and Editing: Most academic journals prohibit the use of generative AI in the peer-review process, and submitting unpublished manuscripts to generative AI platforms may result in data leakage. Consequently, we advise against using generative AI for manuscript review.
4. Data Security and Ethical Considerations
- Data Privacy and Security: Researchers must consider data privacy issues, particularly in studies involving sensitive or personal data, ensuring that AI tools comply with all applicable data protection regulations (e.g., GDPR) and with institutional data protection policies to prevent breaches. Avoid third-party GenAI integrations that pose high risks, especially when handling sensitive data, confidential information, or systems that require high security standards. Implement comprehensive security measures to counter threats posed by GenAI-enabled tools, ensure GenAI platforms undergo thorough risk assessments, and raise awareness of security threats such as GenAI-generated deepfakes and malicious activities (a minimal identifier-redaction sketch follows this list).
- Avoidance of Biases: Researchers should identify and mitigate biases in GenAI outputs to prevent unfair research findings, assess GenAI tools to ensure they do not negatively affect diversity or marginalized groups, and monitor GenAI-generated content for language and cultural biases. Users should evaluate the ethical implications of GenAI usage to avoid moral risks.
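For illustration, the sketch below shows one simple precaution consistent with the privacy point above: removing obvious personal identifiers from text before it is passed to any external GenAI service. The regular expressions are deliberately simplistic assumptions for demonstration and will miss many identifiers; studies involving personal data need institution-approved anonymization procedures and a formal risk assessment.

```python
# Illustrative sketch only: strip obvious personal identifiers from text before it is
# sent to any external GenAI service. The patterns below are simple assumptions for
# demonstration and will miss many identifiers; use your institution's approved
# anonymization tools and procedures for real data.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")  # crude phone-number-like pattern


def redact_identifiers(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholder tags."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED_PHONE]", text)
    return text


# Example usage with fabricated, non-real data:
sample = "Participant P07 (jane.doe@example.org, +31 6 1234 5678) reported..."
print(redact_identifiers(sample))
# -> Participant P07 ([REDACTED_EMAIL], [REDACTED_PHONE]) reported...
```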
5. Intellectual Property and Content Ownership
- Intellectual Property Protection: Avoid unauthorized use of proprietary data in GenAI tools.
- Copyright and Proper Attribution: Ensure GenAI-generated content complies with copyright laws and is properly cited.
- Original Content Safeguarding: Prevent intellectual property infringements when publishing research.
6. Environmental Sustainability
- Resource Management: Encourage the judicious use of GenAI to minimize energy consumption and carbon footprint, and use more environmentally friendly GenAI models.
- Ecological Responsibility: Advocate for sustainable GenAI practices that take environmental impacts into account.
7. Continuous Learning and Adaptation
- Training and Development: Researchers should continuously update their knowledge of generative AI capabilities and limitations and familiarize themselves with the functions and limitations of GenAI tools to ensure their correct application. They should adhere to GenAI policies from external bodies such as publishers and funding agencies, collaborate to adjust GenAI standards as ethical requirements evolve, and advocate for adaptable policies that keep pace with technological advancements.
These guidelines will be reviewed periodically to adapt to technological advancements and evolving ethical standards in research. They aim to support the scientific community in exploring the benefits of AI while safeguarding the foundational principles of research integrity and transparency.
16 Nov 2024
For the older version, please visit this.