Artificial intelligence-assisted academic writing: recommendations for ethical use
Executive Summary
The rapid integration of Large Language Models (LLMs), such as ChatGPT, into academic workflows presents a transformative opportunity for researchers alongside significant risks to scholarly integrity. While these tools offer enhanced efficiency in structuring and refining text, they are prone to "hallucinations," plagiarism, and the fabrication of scientific references. Current consensus among major academic publishers and ethical bodies holds that AI cannot be credited as an author because it cannot be held accountable for the accuracy of the research. To maintain academic standards, researchers must adhere to a tiered approach to AI utilization—prioritizing human critical thinking, ensuring total transparency in disclosure, and rigorously verifying all AI-generated outputs.
The Evolution and Role of Generative AI in Research
Since the widespread release of ChatGPT in late 2022, LLMs have become accessible tools for healthcare professionals and educators. These models are designed to generate coherent responses based on statistical relationships within vast datasets of human-generated content.
Technological Context: AI features are increasingly integrated into transcription systems and data analysis software.
A Shift in Capability: Unlike previous software that merely assisted with research tasks (e.g., statistical analysis or qualitative data organization), generative AI can produce novel written content independently, challenging traditional notions of academic integrity.
The Scholarly Dilemma: Researchers face a conflict between the potential for increased productivity and the risk of undermining the intellectual rigour essential to scholarly development.
Institutional and Publisher Positions on AI
Academic publishers, including those aligned with the Committee on Publication Ethics (COPE), have established clear boundaries regarding the use of generative AI in manuscripts.
Authorship and Accountability
AI tools do not meet the criteria for authorship. Scientific authorship requires both a contribution to the work and the ability to be held accountable for its accuracy and integrity. Because AI cannot vouch for the validity of its content or participate in post-publication investigations, it must never be listed as an author.
Transparency and Disclosure
Transparency is the cornerstone of the ethical use of AI.
Reporting: Authors should explicitly describe how AI tools were used within the Methods section of their manuscript. This includes detailing how AI-generated output was handled, edited, and incorporated.
Visibility: While some journals allow disclosure in the acknowledgements section, placement in the Methods section is preferred to ensure reader awareness.
Critical Risks in AI-Assisted Writing
Studies of LLM capabilities have identified three primary challenges that threaten the accuracy and utility of medical manuscripts:
AI Hallucination: The generation of convincing but entirely fabricated or factually inaccurate content. Current models lack "fact-checking" components and rely solely on statistical associations learned from their training data.
Plagiarism and Bias: LLMs may reproduce copyrighted content verbatim from their training data. Furthermore, they tend to amplify implicit or explicit biases present in the source data.
Reference Fabrication: AI is notoriously unreliable regarding citations.
One study found that 38% of references generated by ChatGPT had wrong or fabricated DOIs, and 16% of articles were entirely non-existent.
Another study revealed that only 7% of AI-generated references were completely authentic and accurate.
Tiered Framework for Ethical AI Application
To guide researchers, AI use cases can be categorized into three "ethical tiers" based on the balance between tool utility and human critical thinking.
Limitations of Tier 1 and 2 Uses
Translation: LLMs may miss cultural nuances or slang; a human native speaker should always review translations.
Brainstorming: Ideas generated by AI must be edited and attributed correctly to avoid the risk of plagiarism.
Author Checklist for Ethical Compliance
Authors are encouraged to reflect on four key questions before submitting AI-assisted manuscripts. A "No" answer to any of them indicates that the use of AI may be ethically suspect.
Conclusion and Future Directions
A central goal of academic research is the development of scholars capable of deep reasoning and evidence-based problem-solving. Outsourcing primary intellectual tasks—such as data interpretation or de novo writing—to AI risks the long-term diminishment of these human skills.
While specialized tools (e.g., Scopus AI) are emerging that draw from more reliable, peer-reviewed data sources, the requirement for human vetting remains absolute. As technology evolves, the healthcare simulation and broader research communities must maintain a continuous discourse to ensure that AI serves as a support mechanism rather than a replacement for human scholarship.