EDITORIAL
https://doi.org/10.5005/jp-journals-10077-3277
Artificial Intelligence: A Boon or Bane to Researchers
1Department of Pediatric and Preventive Dentistry, Manav Rachna International Institute of Research and Studies, Faridabad, Haryana, India
2Private Practice, Abbeville Dentistry, Texas, United States
Corresponding Author: Gauri Kalra, Department of Pediatric and Preventive Dentistry, Manav Rachna International Institute of Research and Studies, Faridabad, Haryana, India, Phone: +91 9910329898, e-mail: drgauri_dentist@yahoo.co.in
How to cite this article: Kalra G, Dhillon JK. Artificial Intelligence: A Boon or Bane to Researchers. J South Asian Assoc Pediatr Dent 2023;6(1):49-50.
Source of support: Nil
Conflict of interest: None
Humans developed language as their core means of communication for conveying ideas and concepts. Language models serve a similar purpose in the realm of artificial intelligence (AI). Large language models (LLMs) are AI systems designed to understand and generate human-like text, as they can learn, understand, and process human language efficiently.1 They are trained on vast datasets to perform tasks such as language translation, text generation, question answering, content rewriting, and conversational AI through chatbots. They can assist researchers by automating tasks such as data analysis, literature review, and hypothesis generation, and an upward trend has been observed in their utility and capabilities in scientific research. Some of the most prominent LLMs today are GPT-3 and GPT-4 by OpenAI, LaMDA by Google, Flamingo by DeepMind, and LLaMA by Meta AI. AI-driven tools such as Grammarly (the most commonly used) have been a valuable aid in enhancing language quality and syntax; soon after its release, it became a go-to tool for researchers seeking to overcome language barriers and produce grammatically error-free content. To ensure quality content, a plagiarism check is a pivotal step in publishing research. Earlier this was done manually; however, with AI-driven applications such as iThenticate and other plagiarism checkers that use advanced algorithms, similar content can now be detected across huge databases in the blink of an eye.2
Recently, the release of the Chat Generative Pre-trained Transformer (ChatGPT) (OpenAI, San Francisco, CA, USA), an AI chatbot, has created a sensation across the world's healthcare systems. It has found application in diagnosis and disease-risk assessment, identification of dental and maxillofacial anomalies, and patient scheduling.3 Moreover, it has been effective in conversing with patients and answering their disease-related queries in a human-like manner. ChatGPT has also proven useful in scholarly writing, as it can write and summarize research papers and reviews and translate papers into different languages. Moreover, this AI-based application can generate seemingly high-quality scientific research papers that are hard to spot as the output of machine learning.4
Despite its role in aiding scientific research, there are various ethical concerns that cannot be ignored. Content generated by AI may be unreliable, spam-like, or malicious, and is likely to draw on a pool of low-quality journals, periodicals, blogs, and websites. It may contain scientific errors and plagiarized text that go undetected both by current AI-based software and by manual review.5
With the increasing use of AI in manuscript preparation, another misconduct that caught the attention of editorial board members, peer reviewers, and journal publishers was the listing of AI as a co-author on at least four research papers within two months of the release of ChatGPT. By January 2023, the majority of journal publishers had announced editorial policies restricting AI usage in manuscript preparation and submission. The Committee on Publication Ethics, in conjunction with the World Association of Medical Editors and the Journal of the American Medical Association, also issued an advisory declaring that a chatbot should not be included as a primary author or co-author. It further noted that AI does not fulfill the requirements of authorship, as it cannot bear responsibility for the submitted work, nor can it be held accountable for conflicts of interest or copyright issues. Only human authors can be held responsible for their content and for any ethical misconduct in publication. The role of AI tools in writing a manuscript must be clearly disclosed in the acknowledgments, not through authorship, and such tools should not be cited in the reference section of the manuscript.6 Quartile 1 journals such as Science and The Lancet have issued editorial policies on AI tool usage in scientific research writing, asserting that AI-produced content must be declared and verified, failing which it will be considered plagiarism.7,8 As a result, major scientific journal publishers and their editorial boards have initiated solutions for screening and detecting AI-generated text to safeguard the quality and rigor of research papers.
It is critical that editorial boards and reviewers exercise their own judgment when evaluating research papers. AI tools such as ChatGPT can generate text that seems plausible but is not grounded in fact, which requires due diligence on the part of the researcher to verify the facts. It is therefore recommended that AI-related authorship policies be developed or adopted, and strictly enforced, by all scientific journals worldwide. Publishers must post vigilant policies on their websites, and rigorous training of editorial members and reviewers regarding AI-related scientific misconduct must be instituted. Thus, chatbots may not be listed as authors, and any text contributed by generative AI without proper citation should be considered plagiarized.9
REFERENCES
1. Birhane A, Kasirzadeh A, Leslie D, et al. Science in the age of large language models. Nat Rev Phys 2023;5:277–280. DOI: 10.1038/s42254-023-00581-4
2. Research Through Ages — Evolution of research publishing with the advent of AI! [Internet]. Enago Academy. Available from: https://www.enago.com/academy/research-publishing-advent-of-ai/.
3. OpenAI. ChatGPT. 2022. Available from: https://openai.com/blog/chatgpt/. Accessed July 21, 2023.
4. Alhaidry HM, Fatani B, Alrayes JO, et al. ChatGPT in dentistry: a comprehensive review. Cureus 2023;15(4):e38317. DOI: 10.7759/cureus.38317
5. Park JY. Could ChatGPT help you to write your next scientific paper?: concerns on research ethics related to usage of artificial intelligence tools. J Korean Assoc Oral Maxillofac Surg 2023;49(3):105–106. DOI: 10.5125/jkaoms.2023.49.3.105
6. COPE position statement on AI as an author. [Internet]. Committee on Publication Ethics. [Cited 2023 Aug 12]. Available from: https://publicationethics.org/cope-position-statements/ai-author#:~:text=COPE%20position%20statement,an%20author%20of%20a%20paper
7. Thorp HH. ChatGPT is fun, but not an author. Science 2023;379(6630):313. DOI: 10.1126/science.adg7879
8. Liebrenz M, Schleifer R, Buadze A, et al. Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit Health 2023;5(3):e105–e106. DOI: 10.1016/S2589-7500(23)00019-5
9. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023;613(7945):620–621. DOI: 10.1038/d41586-023-00107-z
________________________
© The Author(s). 2023 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.