Robert M. Davison, Hameed Chughtai, Petter Nielsen, Marco Marabelli, Federico Iannacci, Marjolein van Offenbeek, Monideepa Tarafdar, Manuel Trenz, Angsana A. Techatassanasoontorn, Antonio Díaz Andrade, Niki Panteli
First published: 21 January 2024
Highlights
The use of Generative Artificial Intelligence (GAI) for qualitative data analysis raises several ethical issues that need to be carefully considered by researchers. These issues span data ownership and rights, data privacy and transparency, interpretive sufficiency, biases manifested in GAI, and researcher responsibilities and agency.
Surrendering research data to commercial GAI platforms in exchange for automated analysis could violate data rights and confidentiality agreements with research participants and organizations.
Using GAI for analysis raises privacy concerns when sensitive data is shared with AI tools, especially if participants and organizations are not informed or do not provide consent for this data exchange.
Overreliance on GAI for coding and analysis is problematic: GAI lacks the contextual understanding, empathy and consciousness of human researchers that are crucial for in-depth qualitative interpretation.
GAI models can encode social biases, stereotypes and privileged Western perspectives that get propagated in the automated analysis outputs in ways that are difficult to critically examine and correct.
Researchers have an epistemic responsibility to be accountable for the evidence they use and the claims they make. Blind application of GAI without human agency and oversight is unethical; the researcher must maintain authorship and accountability.
Fixed ethical guidelines for GAI use are not appropriate given the rapid evolution of the technology. Instead, an evolving set of "living guidelines" should be developed through ongoing dialogue to help navigate the ethical complexities.
Abstract
It is important to note that the text of this editorial is entirely written by humans without any Generative Artificial Intelligence (GAI) contribution or assistance. The Editor of the ISJ (Robert M. Davison) was contacted by one of the ISJ's Associate Editors (Marjolein van Offenbeek), who explained that the qualitative data analysis software ATLAS.ti was offering a free-of-charge analysis of research data if the researcher shared the same data with ATLAS.ti for the purpose of training their GAI analysis tool. Marjolein believed that this spawned an ethical dilemma. Robert forwarded Marjolein's email to the ISJ's Senior Editors (SEs) and Associate Editors (AEs) and invited their comments. Nine of the SEs and AEs replied with feedback. We (the 11 contributing authors) then engaged in a couple of rounds of brainstorming before amalgamating the text in a shared document. This document was initially created by Hameed Chughtai, but then commented on and edited by all the members of the team. The final version constitutes the shared opinion of the 11 members of the team, after several rounds of discussion. It is important to emphasise that the 11 authors have contrasting views about whether GAI should be used in qualitative data analysis, but we have reached broad agreement about the ethical issues associated with this use of GAI. Although many other topics related to the use of GAI in research could be discussed, for example, how GAI could be effectively used for qualitative analysis, we believe that ethical concerns overarch many of these other topics. Thus, in this editorial we exclusively focus on the ethics associated with using GAI for qualitative data analysis.