K.I. Egbuchulem1, H.D. Ogundipe1, K. Uwajeh2

  1. Division of Paediatric Surgery, Department of Surgery, University College Hospital, Ibadan, Nigeria.
  2. Department of Clinical Psychology, University of California Southern, United States of America.

The statement above is not a conundrum but a perspective. Before delving into this point of view, I will share the core issue with you – the relationship between medicine and technology. In medical school, we can attest to the detestation of innovation. Medicine and dogma appear to be inseparable Siamese twins. The delusion is so strong that emancipation comes at the cost of castigation. This blind perspective kills creativity, but these chains will not shackle the next generation.

The issue is not emancipation but catching up after the lag. It is this lag that the current physician must contend with, but we should not fear, for there is hope. Accepting and utilizing technological innovations helps in catching up with this lag in medicine and research undertakings.

The 21st century has witnessed intense technological advancement. One key component revolutionizing technology and the way we do things is artificial intelligence.

Artificial intelligence (AI) is defined as the development of computer systems which can use the information available to them to autonomously perform tasks normally requiring human intelligence.1 In other words, it is the simulation of human intelligence by devices and robots using computer-controlled systems, relying on existing, detailed data produced by human intelligence to work seamlessly. This has led to the automation of several complex tasks into an easy routine which can be activated by voice prompts, the press of a button, or a few strokes of the keypad.

Its penetration into every sphere of our lives is well established, and its introduction and use in article writing, referencing, reviewing, editing, plagiarism checks, and publishing processes over the years is well known. The use of the Microsoft Word editor, Grammarly, Mendeley, EndNote, Zotero, and Turnitin, amongst others, is quite common among researchers and has been of tremendous help.

However, the use of AI in the article writing process has evolved to an unanticipated level over the last year. The concept of generative AI has now led to the automation of the thinking and writing aspects of manuscript preparation! Using large language models (LLMs), developers have successfully found ways of answering questions in a format resembling human thinking and writing.

The introduction of platforms such as GPT 1-5, ChatGPT, Google Bard, et cetera has led to organized, well-styled, detailed, and seemingly relevant manuscripts resembling those written by humans being generated within minutes, a task that takes hours or days when done manually. Literature review is now as easy and fast as the blink of an eye. This is a great relief for anyone who has gone through the rigour of spending hours, days, and months researching, taking notes, and stringing together a literature review for a research paper.

The use of these AI models leaves an author with the duty of conceiving the idea for the research, inputting the question into the bot, and then writing up the resulting output from the platform. An added advantage for non-native English speakers is the fluency of language and style of writing, which eliminates the poor English constructions that often mar their papers and render them unfit for publication in high-impact journals.

However, key issues have arisen following this innovative intervention. Who takes responsibility for the write-up? Concerns about plagiarism, the promotion of false narratives, and wrong submissions, amongst others, are also emerging.

These bots string together information in sentences without properly citing the sources of the information being presented. This can lead to plagiarism, an unethical practice that is penalized in all spheres of academic writing.

Moreover, AI platforms do not vet the sources of their information. In writing a scientific thesis where relevant journal articles are to be cited, the bot could be including unverified information from newspaper articles, opinion blogs, and other unverifiable, unreliable, non-scientific sources. The resulting inaccuracies and false information produced by these LLMs are a key concern, as they lead to misrepresentation of facts and findings, with consequent wrong conclusions that impact medical practice. Unfortunately, it is the author who is left holding the sagging straws of this falsehood.

These concerns bring forth key questions that require well thought out responses.

Having made such a significant contribution to the article, is the AI worthy of being considered an author? Does the ethics committee hold the AI responsible or the human author? What about other AI software that have hitherto been used to assist in research writing? Is there a difference between their use and that of these generative AIs? Should they also be acknowledged as having assisted in the write-up process, or can they too be listed as co-authors?

What about concerns of intellectual laziness? Undergraduate and postgraduate essay assignments are now being written by AIs. Do the students really think about what has been written by the bots? Do they interrogate the information the AI has written for them? How is using AI for an essay assignment different from copying a textbook passage verbatim and submitting it as an assignment? Would learning not become the preserve of the AIs alone, which keep improving as we use them while our own cerebration diminishes as we rely heavily on their output? Would they not gradually orchestrate our extinction by outsmarting us? What about jobs? The world population is increasing, and job opportunities are not realistically increasing to match demand. Yet AIs are now taking up the roles of researchers and research assistants.2 Of course, in the long term, it is more cost-effective and less cumbersome to use an AI. However, given the aforementioned concerns, the quality of such literature may leave a lot to be desired.

These concerns and more have led journals, publishers, and academic institutions to review their policies regarding the use of AI in academic writing. Indeed, Turnitin, a plagiarism checker, recently released an update that detects the level of AI contribution to academic works.3

Since the fourth quarter of 2022, several journal articles have included ChatGPT and other related bots as authors.4 This has led to an uproar in the scientific community, with researchers debating how exactly a bot fits the role of an author.4 One such example was the listing of ChatGPT as an author in an article published by Elsevier;5 following complaints, the authors retracted the authorship role given to the AI.6