Text-generating artificial intelligence (AI) models, such as ChatGPT, have the potential to “revolutionize the field of medical writing” by automating tasks and increasing efficiency throughout the writing process — at least according to a paper generated by ChatGPT.
While the AI model successfully generated almost all of the text in the peer-reviewed paper published in Radiology, it required close editing and careful consideration of several factors, such as ethics, copyright, and accuracy, noted Som Biswas, MD, of the University of Tennessee Health Science Center College of Medicine in Memphis.
Biswas said that he used ChatGPT, a large language model developed by OpenAI, to write the first several sections of the paper using a series of prompts.
“The idea has to come [from] a human and the subheadings have to be conceptualized by a human being, and then it can probably write short paragraphs for each,” Biswas told MedPage Today. “Then it has to be edited by a human.”
“If ChatGPT can write short paragraphs or jokes, why can’t we use it for something more meaningful?” he asked.
Biswas emphasized the need to consider potential ethical and legal pitfalls when using AI models, especially since ChatGPT has a track record of plagiarism. For example, he wrote that using ChatGPT to write letters of recommendation or personal statements could raise concerns about authenticity. Bias and transparency are also areas of concern when using AI models to write research papers, he added.
“The use of AI in the writing process and identification of text that has been generated by a machine should be made clear,” he wrote in the author-generated portion of the paper.
To that point, Biswas asked ChatGPT to include this note at the beginning of the paper: “The human author of this article would like to state that this entire article was written by ChatGPT.”
Despite the advanced technological capabilities they offer, Biswas said that AI models are not well suited for generating innovative ideas. Since “ChatGPT is based on prior data fed to it, eventually it will lead to repetitive text generation and lack of creativity,” he noted.
On the other hand, ChatGPT generated a statement of optimism for its own use in research, writing that “the use of these technologies in medical writing has the potential to improve the speed and accuracy of document creation.”
In an accompanying editorial, Felipe C. Kitamura, MD, PhD, of Universidade Federal de São Paulo, said that the paper covered a compelling list of examples for using AI to assist in research writing.
“Large language models could be useful for real-time assistance or creating drafts for clinical trial protocols, study reports, regulatory documents, patient-facing materials, and translation of medical information into a myriad of languages,” wrote Kitamura, who is also the head of Applied Innovation and AI at Dasa, a large healthcare system in Brazil.
“The possibilities seem endless,” he added.
However, Kitamura also echoed the study limitations that Biswas acknowledged, noting that he is “curious to learn if scientific journals will see an increase in the plagiarism percentage of received manuscripts.”
Despite the limitations, Kitamura said that he believes that ChatGPT is worth testing with the proper “discretion.”
“ChatGPT has raised the bar, bringing writing support tools to the next level,” he wrote. “With proper use, writers can benefit from it.”
The use of AI models, specifically large language models similar to ChatGPT, has become a controversial topic in medical research fields. Several prominent publications, including JAMA journals, have recently announced that they will not accept papers written with any help from these models.
In an editorial published in Science, Editor-in-Chief H. Holden Thorp, PhD, wrote that “an AI program cannot be an author,” and that violating the journal’s policy prohibiting its use “will constitute scientific misconduct no different from altered images or plagiarism of existing works.”
Despite publishing this paper written almost entirely by ChatGPT, Radiology recently published an editorial calling large language models “double-edged swords” that will make “it more challenging for journals to evaluate the scientific integrity of submitted manuscripts.”
As one of the first medical researchers to use ChatGPT to this extent, Biswas concluded that AI language models are a powerful tool that will need to be applied in carefully considered ways.
“It is definitely much easier, but as far as the thought process that goes into an article, I think that humans still have a very significant role,” he said. “We should always remember that the readers are still humans.”
Biswas and Kitamura reported no conflicts of interest.
Source Reference: Biswas S. ChatGPT and the future of medical writing. Radiology 2023; DOI: 10.1148/radiol.223312.
Source Reference: Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology 2023; DOI: 10.1148/radiol.230171.