ChatGPT, an artificial intelligence (AI) chatbot developed by OpenAI, has recently been credited as a co-author on at least four published research papers and preprints [1–4], sparking debate among journal editors, researchers, and publishers. The chatbot is a large language model (LLM): it generates convincing sentences by mimicking the statistical patterns of language in a large database of text collated from the Internet. This has raised questions about whether it is appropriate to list AI tools as authors in the scientific literature.
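To make the "statistical patterns of language" idea concrete, here is a minimal sketch of statistical next-word prediction: a toy bigram model that learns which words follow which in a tiny corpus and then samples text one word at a time. This is only an illustration, not how ChatGPT actually works; ChatGPT is a far larger neural network trained on vastly more data, and the corpus, function names, and sampling scheme below are invented for the example.

```python
# Toy bigram language model: illustrative only, not OpenAI's actual model.
import random
from collections import defaultdict

# A tiny, made-up training corpus.
corpus = (
    "the chatbot generates convincing sentences by mimicking the "
    "statistical patterns of language in a large database of text"
).split()

# Record which words follow each word in the corpus.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word that followed the previous one."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Scaled up enormously, and with a neural network in place of the lookup table, this next-word-prediction principle is what lets an LLM produce fluent, plausible prose without any understanding of its content.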
Publishers and preprint servers have stated that AIs such as ChatGPT do not fulfill the criteria for authorship because they cannot take responsibility for the content and integrity of scientific papers. They argue that authorship is a formal role that carries legal responsibility, so only human beings should be listed as authors. However, some publishers suggest that an AI's contribution to writing a paper can be noted elsewhere in the manuscript, for example in the acknowledgments section.
The debate about the appropriate use of AI in scientific research is not new. In recent years there has been growing discussion about the role of AI in scientific discovery, and many have argued that AI tools can make the research process faster and more efficient: they can assist researchers in generating hypotheses, analyzing large data sets, and even writing portions of papers. However, there are also concerns about the potential for these tools to be misused, and about ensuring that the research they help produce is of high quality and ethically sound.
One of the main concerns is the risk of bias in AI-generated research, because the models are trained on large amounts of data that may themselves contain biases. In addition, AI models do not understand the context of the research they are writing about, which can lead to plausible-sounding errors or inaccuracies in the final product.
There are also ethical concerns surrounding the use of AI in research. For instance, researchers using AI tools may not fully understand how those tools arrive at their output, which reduces transparency in the research process. Moreover, AI-generated research may be used to inform important decisions in fields such as medicine, where errors can have serious consequences.
To address these concerns, publishers and researchers need to establish clear guidelines and policies governing the use of AI tools in the research process. These should ensure that the contributions of AI are acknowledged transparently and appropriately, that the limitations of AI-generated research are clearly stated, and that the models used are properly trained, validated, and tested to minimize the risk of bias or error.
In conclusion, the emergence of ChatGPT as a co-author on published research papers highlights the need for such guidelines and policies. AI tools can help speed up the research process and make it more efficient, but only if their contributions are disclosed transparently and the research they help produce remains of high quality and ethically sound.
What do you think about this issue? Feel free to voice your opinion by leaving a comment down below.
References:

1. Kung, T. H. et al. Preprint at medRxiv https://doi.org/10.1101/2022.12.19.22283643 (2022).
2. O'Connor, S. & ChatGPT. Nurse Educ. Pract. 66, 103537 (2023).
3. ChatGPT & Zhavoronkov, A. Oncoscience 9, 82–84 (2022).
4. GPT, Osmanovic Thunström, A. & Steingrimsson, S. Preprint at HAL https://hal.science/hal-03701250 (2022).