
AI as Co-Author: Guidelines for Transparency in Academic Publishing

AI cannot be listed as a co-author on academic papers—it doesn’t meet authorship requirements for accountability, copyright, or intellectual contribution. However, transparency is mandatory: you must disclose any AI assistance in your manuscript, typically in the methods, acknowledgments, or a dedicated declaration section. This guide explains where, how, and why to disclose AI use, plus citation formats (APA/MLA) and publisher-specific requirements for 2026.

Introduction: The “AI Co-Author” Question

As AI tools like ChatGPT, Claude, and Gemini become routine in research and writing, a persistent question emerges: “Can I list AI as a co-author?”

The short answer is no—and the longer answer reveals why transparency, not authorship, is the cornerstone of ethical AI use in academic publishing. This isn’t just semantics; it’s about accountability, integrity, and the fundamental principles of scholarly communication.

This guide distills current policies from major publishers, style authorities, and research ethics boards to give you actionable clarity on:

  • Why AI fails as a co-author (and what that means for accountability)
  • Exactly where and how to disclose AI use (with real examples)
  • How to cite AI tools in APA, MLA, and Chicago styles
  • Publisher-specific requirements for Elsevier, Springer Nature, Wiley, and others
  • Ethical boundaries—what AI can and cannot do in your research
  • Common mistakes that lead to retractions or accusations of misconduct

By the end, you’ll have concrete templates and decision frameworks to navigate AI assistance responsibly—and protect your academic reputation.


Why AI Cannot Be a Co-Author: The Accountability Gap

Academic authorship carries three non-negotiable responsibilities:

  1. Accountability for content – Authors must stand behind every claim, data point, and interpretation.
  2. Copyright and licensing – Authors sign legal agreements and manage permissions.
  3. Intellectual contribution – Authors must contribute meaningfully to conception, design, analysis, or writing.

AI tools fail all three:

  • No accountability: AI cannot take legal or ethical responsibility for inaccuracies, fabricated content, or copyright violations. If an AI-generated figure misrepresents data, who is responsible? The tool cannot be sanctioned, sued, or censured.
  • No copyright ownership: In most jurisdictions (including the U.S.), copyright law requires human authorship. The U.S. Copyright Office has repeatedly stated that works created solely by AI are not eligible for protection.
  • No true intellectual contribution: AI generates outputs based on patterns in training data; it doesn’t understand, reason, or contribute novel ideas. As COPE (Committee on Publication Ethics) states: “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work.”

This is the universal consensus across major publishers:

Elsevier, Springer Nature, Wiley, Taylor & Francis, Science, Nature, and PLOS all explicitly prohibit listing AI as an author, co-author, or contributor in the author byline.

The consequence? Human authors remain fully responsible for everything in the paper—including any AI-generated content. If your AI creates a fabricated reference or an impossible anatomical diagram, you are liable.


The Correct Approach: Mandatory Disclosure, Not Authorship

Since AI can’t be an author, the path forward is transparency through disclosure. Journals now require authors to declare:

  • Which AI tools were used (name, version)
  • For what purposes (writing, data analysis, figure generation, etc.)
  • That the human authors reviewed, edited, and take full responsibility for the final content

This isn’t just bureaucratic—it builds trust, ensures reproducibility, and helps editors detect inappropriate AI use.

The Core Disclosure Components

A complete AI disclosure statement should include:

  1. Tool identification: “We used ChatGPT-4o (OpenAI, May 2024 version)”
  2. Specific use: “to summarize literature on renewable energy policies” or “to assist with language editing of the first draft”
  3. Verification clause: “All AI-generated content was reviewed and verified by the authors, who take full responsibility for the final content”
  4. Location: Declared in the appropriate section (see below)
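
As a quick illustration, the first three components above are mechanical enough to assemble programmatically, which can help keep disclosures consistent across several manuscripts. This is a hypothetical sketch (the function name and wording are placeholders to adapt to your journal's template), not an official tool:

```python
def build_disclosure(tool: str, vendor: str, version: str, purpose: str) -> str:
    """Assemble a minimal AI disclosure statement from its core components:
    tool identification, specific use, and the verification clause."""
    return (
        f"We used {tool} ({vendor}, {version}) {purpose}. "
        "All AI-generated content was reviewed and verified by the authors, "
        "who take full responsibility for the final content."
    )

print(build_disclosure(
    "ChatGPT-4o", "OpenAI", "May 2024 version",
    "to assist with language editing of the first draft",
))
```

The fourth component, location, is still a human decision: the same statement may belong in the methods, the acknowledgments, or a dedicated declaration, depending on the journal.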

Where and How to Disclose AI Use

Where you disclose depends on how you used the AI. Different sections serve different functions:

1. Methods Section (for data analysis, methodology)

If AI was used to process data, run simulations, or develop methodology, disclose it in the methods:

“Data analysis was performed using Python scripts with scikit-learn libraries. Additionally, ChatGPT-4 was used to assist in drafting the statistical analysis plan (see Supplementary Material for full prompts). All statistical outputs were independently verified using R.”

Why here? Methods describe how the research was conducted. AI involvement in analysis or methodology directly impacts reproducibility and validity.

2. Acknowledgments Section (for writing, editing, brainstorming)

For AI used to improve language, structure arguments, or generate ideas, place disclosure in acknowledgments:

“The authors acknowledge the use of ChatGPT-4 to improve the clarity and flow of the manuscript. All suggestions were critically evaluated and edited by the authors, who retain full responsibility for the content.”

Some journals (like Elsevier) accept acknowledgments, but others prefer a dedicated section.

3. Dedicated “Declaration of Generative AI” Section

Many journals now require a separate declaration before the references or in the submission system. This is particularly common for:

  • Elsevier journals: Requires statement about AI use in writing, images, or data
  • Wiley journals: Asks for documentation of AI use, including prompts
  • Springer Nature: Mandates disclosure in a dedicated declaration section

Example template for a dedicated section:

Declaration of Generative AI

The authors declare the use of the following AI tools in the preparation of this manuscript:

1. ChatGPT-4o (OpenAI, version: May 2024) was used to draft initial literature review summaries and to suggest phrasing for the discussion section. All generated text was rewritten, fact-checked, and substantively edited by the authors.

2. Claude 3.5 Sonnet (Anthropic) assisted with code debugging for the Python analysis scripts. The authors verified all code functionality and bear responsibility for any errors.

No AI-generated images or data were included. The authors reviewed and approved the final manuscript in its entirety.

4. Cover Letter (often required)

Many journals ask for AI disclosure in the cover letter as well, separate from the manuscript. Keep it concise:

“This submission contains no AI-generated content that has not been properly disclosed and verified by the authors. All AI assistance has been documented in the manuscript’s Declaration section.”


How to Cite AI Tools: APA, MLA, and Chicago

Citing AI is different from citing a person. The key: cite the tool as software or as a source, not as an author.

APA Style (7th Edition)

APA treats AI as software with the company as the “author.”

Reference list format:

OpenAI. (2024). ChatGPT (May 24 version) [Large language model]. https://chat.openai.com

In-text citation: (OpenAI, 2024)

Practical tips:

  • Include the specific version and date you used (AI tools update frequently)
  • Add “[Large language model]” in square brackets after the title to clarify the content type
  • Describe your use in the methods or caption if you’re quoting or paraphrasing AI output

Example in paper:

“The initial coding framework was developed using ChatGPT-4 for thematic analysis suggestions. The framework was refined through manual review (OpenAI, 2024).”
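
Because the APA pattern above is purely positional (company, year, tool, version, bracketed descriptor, URL), it can be templated if you cite AI tools across several papers. A hypothetical helper; the function names are illustrative and not from any citation library:

```python
def apa_ai_reference(company: str, year: int, tool: str,
                     version: str, url: str) -> str:
    """Format an AI tool as an APA 7th-edition software reference entry."""
    return f"{company}. ({year}). {tool} ({version}) [Large language model]. {url}"

def apa_in_text(company: str, year: int) -> str:
    """Parenthetical in-text citation for the same source."""
    return f"({company}, {year})"

print(apa_ai_reference("OpenAI", 2024, "ChatGPT", "May 24 version",
                       "https://chat.openai.com"))
print(apa_in_text("OpenAI", 2024))  # (OpenAI, 2024)
```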

MLA Style (9th Edition)

MLA focuses on the specific prompt and conversation rather than the tool as author.

Works Cited format:

"Explain the theory of relativity in simple terms." prompt. ChatGPT, 24 May version, OpenAI, 7 Jul. 2023, chat.openai.com/chat.

In-text citation: ("Explain the theory")

Practical tips:

  • MLA prioritizes the prompt as the “title”
  • Include the tool name, version, company, and access date
  • If you used multiple prompts, you may need multiple citations or a general acknowledgment

Chicago Style (18th Edition)

Chicago offers two systems. For AI tools:

Notes and Bibliography:

OpenAI, ChatGPT-4 response to "Summarize the French Revolution," May 15, 2024, https://chat.openai.com.

Author-Date:

OpenAI. 2024. ChatGPT-4 response to "Summarize the French Revolution," May 15. https://chat.openai.com.

Publisher Policies Overview: What You Need to Know in 2026

Policies evolve, but as of early 2026, the landscape is relatively stable. All major publishers share core principles:

  • No AI as author (non-negotiable)
  • Disclosure mandatory for AI-generated text, data, or images
  • Human accountability remains absolute
  • AI-generated images/figures often restricted or require special permission
  • Basic tools (spell-check, Grammarly for minor edits) typically exempt

Specific Publisher Requirements

| Publisher | AI as Author? | Disclosure Required? | Where to Disclose | Notes |
| --- | --- | --- | --- | --- |
| Elsevier | Prohibited | Yes | Methods, Acknowledgments, or dedicated Declaration | Requires detailed description of tool, version, and use |
| Springer Nature | Prohibited | Yes | Dedicated Declaration section | “No generative AI tools were used” is acceptable if true |
| Wiley | Prohibited | Yes | Methods or Acknowledgments | Must include prompts if AI used for writing |
| Taylor & Francis | Prohibited | Yes | Acknowledgments or Declaration | Distinguish “AI-assisted copyediting” (often allowed) from “AI-generated text” |
| PLOS | Prohibited | Yes | Methods section | Emphasizes validation of AI outputs |
| Science/Nature | Prohibited | Yes | Cover letter + manuscript | Strictest; AI-generated text considered plagiarism unless disclosed and minimal |

Important: Always check the journal’s “Information for Authors” page. Some journals have moved from “disclosure optional” to “disclosure mandatory” in the past two years.


Ethical Use vs Misuse: Where’s the Line?

Ethical AI use isn’t just about following rules—it’s about maintaining the integrity of your research.

Acceptable Uses

  • Brainstorming research questions or hypotheses
  • Improving language and clarity (like a sophisticated editor)
  • Summarizing literature for background sections (with verification)
  • Translating text between languages (with human verification)
  • Formatting code or debugging errors
  • Creating rough outlines or structure suggestions
  • Generating example data for teaching materials

Key condition: You must critically evaluate, verify, and edit all AI outputs. Never accept AI content as fact without checking sources and accuracy.

Unacceptable Uses

  • Generating core research findings or data
  • Creating figures or images (especially data visualizations) without disclosure and permission
  • Fabricating references or citations
  • Writing entire sections of a paper without significant human editing and verification
  • Using AI to circumvent language barriers without acknowledgment (this can be considered deceptive)
  • Embedding hidden prompts to manipulate AI-powered peer review systems

The “30%” Guideline (Rule of Thumb)

While no publisher sets a strict percentage, a practical heuristic emerges from policy discussions and editorial comments:

If more than ~30% of your manuscript’s substantive content (excluding methodology details, reference lists, or standard boilerplate) was directly generated by AI without substantial human rewriting and verification, you risk crossing from “assistance” into “ghostwriting.”

But this isn’t a safe harbor—disclosure is required regardless of percentage. Better to disclose and be transparent than to hide use and risk later retraction.


Common Mistakes That Lead to Problems

1. Listing AI as Co-Author (With or Without Permission)

Problem: Including “ChatGPT” or “Claude” in the author list.

Consequence: Immediate desk rejection or post-publication correction/retraction. AI cannot sign copyright forms, respond to reviewer comments, or take responsibility.

Solution: Never, under any circumstances, list an AI as an author. If you’re unsure whether something constitutes authorship, remember: authorship requires accountability, not just contribution.

2. Failing to Disclose AI Use

Problem: Using AI for writing, analysis, or figures but not mentioning it anywhere.

Consequence: Considered research misconduct. Increasingly, journals use AI-detection tools on submissions. Undisclosed AI-generated text triggers investigations, retractions, and institutional reporting.

Solution: When in doubt, disclose. Basic spell-check and grammar tools are generally exempt, but anything beyond that (generating sentences, suggesting restructuring, creating content) should be declared.

3. Over-Citing or Under-Citing

Problem: Citing AI for every minor edit (clutters references) OR not citing AI when you’ve quoted/paraphrased its output extensively.

Solution: Use a hybrid approach. Acknowledge general AI use in a disclosure statement for the overall writing process. Reserve formal citations (APA/MLA) for specific instances where you directly quote or paraphrase unique AI-generated content that readers might want to locate.

4. Not Verifying AI Output

Problem: Trusting AI-generated references, data, or factual claims without checking.

Consequence: Fabricated citations, incorrect statistics, or false statements enter the literature. This is academic fraud, even if unintentional.

Solution: Verify every AI-generated reference by checking the actual source (if it exists). Fact-check all data and claims. Use AI as a starting point, not an endpoint.
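
Reference checking scales better with a script. The Crossref REST API (api.crossref.org) returns a 404 for DOIs that were never registered, which catches many fabricated citations before submission. A minimal sketch; the function names and regex are my own, and note that a passing format check alone does not prove a reference is real:

```python
import re
import urllib.error
import urllib.request

# Basic structural check for a DOI: "10.", a 4-9 digit registrant, "/", a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Cheap syntactic check before hitting the network."""
    return bool(DOI_PATTERN.match(doi))

def doi_is_registered(doi: str) -> bool:
    """Ask the Crossref REST API whether this DOI actually exists.
    Returns False for unregistered (possibly fabricated) DOIs."""
    if not looks_like_doi(doi):
        return False
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

print(looks_like_doi("10.1038/s41586-020-2649-2"))  # True
print(looks_like_doi("not-a-doi"))                  # False
```

Even for registered DOIs, you still need to open the record and confirm the title and authors match what the AI claimed; hallucinated references sometimes attach real DOIs to the wrong papers.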

5. Using AI to Generate Images Without Permission

Problem: Creating figures or diagrams with DALL-E, Midjourney, or similar tools without checking journal policy.

Consequence: Many journals (Science, Nature, Elsevier) require explicit permission for AI-generated images. Some prohibit them altogether for primary research.

Solution: Check journal policy before using AI for visual content. If allowed, disclose tool, prompt, and modifications. Better: generate images yourself or use licensed/creative commons sources.


Real-World Consequences: The Rising Cost of AI Misuse

AI-related retractions surged from a handful in 2022 to nearly 100 by late 2023 and continue climbing. Here’s what happens when disclosure fails:

Case 1: The “Certainly, Here Is…” Paper

In 2025, a Scopus-indexed Q1 journal retracted a paper whose introduction began: “Certainly, here is a possible introduction for your topic…” The entire manuscript was AI-generated, with no disclosure or human oversight.

Consequence: Retraction, author sanctions, institutional notification.

Case 2: Fabricated References

Multiple papers have been retracted when AI-generated reference lists included non-existent DOIs, fake authors (e.g., “Bill Franks”), or “tortured phrases” that only occur in AI training corpora.

Consequence: Loss of credibility, difficulty publishing future work, potential research misconduct investigations.

Case 3: AI-Generated Figures with Anatomical Errors

Springer Nature’s Neurosurgical Review retracted scores of commentaries and letters in 2025 after discovering AI-generated figures with impossible anatomical structures (rats with incorrectly placed genitalia, duplicated organs).

Consequence: Massive retraction, damage to journal reputation, questions about peer review efficacy.

Case 4: “Unintentional Co-Authorship”

Researchers have sometimes accidentally left ChatGPT in the author list during submission. These papers are corrected (with AI removed) but still count as author list corrections, which appear on the record and raise red flags for future submissions.

Consequence: Corrections, editorial scrutiny on future submissions, potential loss of trust.


Sample Disclosure Statements: Templates You Can Use

Scenario 1: General Writing Assistance

“The authors used ChatGPT-4 (OpenAI, May 2024 version) to assist with language editing, sentence structure, and overall readability. All AI-generated suggestions were critically reviewed, edited, and validated by the human authors. The authors take full responsibility for the final content.”

Scenario 2: Data Analysis

“Data analysis was conducted using R (v4.3.1). Additionally, Claude 3.5 Sonnet (Anthropic) was consulted to optimize code efficiency and suggest alternative statistical approaches. All code and results were independently verified by the first author.”

Scenario 3: Literature Review / Summarization

“We employed ChatGPT-4 to summarize and synthesize the 50 most relevant articles identified in our systematic review. These summaries served as initial drafts that were expanded, contextualized, and fact-checked against the original sources by all authors.”

Scenario 4: Figure Generation (if journal allows)

“Figure 3 was created using DALL-E 3 (OpenAI, version: October 2024) based on the prompt: ‘Scientific diagram showing the proposed mechanism of action, with clear labels for receptor A and pathway B.’ The AI output was heavily edited in Adobe Illustrator and reviewed for accuracy by the authors.”

Scenario 5: No AI Used

“The authors did not use any generative AI tools (including but not limited to ChatGPT, Claude, Gemini, Copilot, or DALL-E) in the preparation of this manuscript, the analysis of data, or the creation of figures.”


Decision Framework: What Should You Do?

When deciding how to handle AI tools, ask these questions:

  1. Did AI generate text or content that ended up in the final manuscript?
    Yes = Disclose.
    No (only minor grammar/spell-check) = No disclosure needed.
  2. Did AI help with data analysis, code generation, or statistical interpretation?
    Yes = Disclose in methods.
    No = Proceed.
  3. Are you using AI to create or alter images/figures?
    Yes = Check journal policy first. Many require permission. Disclose if allowed.
    No = Proceed.
  4. Can you verify everything AI produced?
    No = Don’t use AI for that task. Verification is your responsibility.
    Yes = Disclose the assistance.
  5. Would you feel comfortable explaining to your supervisor or collaborators exactly what role AI played?
    No = Reconsider your use.
    Yes = Document and disclose.
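
The five questions above collapse into a simple decision function. A hypothetical sketch, with my own illustrative parameter names; question 5 (would you explain it to your supervisor?) stays a human judgment and is not modeled:

```python
def disclosure_decision(generated_text: bool = False,
                        helped_analysis: bool = False,
                        made_images: bool = False,
                        can_verify: bool = True) -> str:
    """Walk the decision questions and return the recommended action."""
    if made_images:
        return "check journal policy first; disclose if allowed"
    if not can_verify:
        return "do not use AI for this task"
    if generated_text or helped_analysis:
        return "disclose"
    return "no disclosure needed"

# AI polished the prose and you verified every change:
print(disclosure_decision(generated_text=True))  # disclose
# Only a basic spell-checker was used:
print(disclosure_decision())                     # no disclosure needed
```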

Next Steps for Researchers

  1. Review your target journal’s policy before submission. Look for “AI policy,” “generative AI,” or “machine learning” in author guidelines.
  2. Document your AI use as you work: tool names, version numbers, prompts, dates. This makes disclosure easy and accurate.
  3. Verify everything: Follow every AI-generated reference to its source. Check data calculations. Review images for errors.
  4. Write your disclosure early (don’t leave it for the final draft). It’s easier to iterate on it as your manuscript develops.
  5. Stay updated: Policies change. Subscribe to journal alerts or check publisher websites annually.
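
Step 2 above, documenting AI use as you work, is easiest with a running log you append to at the moment of use. A minimal sketch; the CSV columns and file name are my own suggestion, not any journal's requirement:

```python
import csv
from datetime import date

def log_ai_use(path: str, tool: str, version: str,
               prompt: str, purpose: str) -> None:
    """Append one AI-use record so the disclosure statement can later
    be written from documented facts rather than memory."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), tool, version, prompt, purpose]
        )

log_ai_use("ai_use_log.csv", "ChatGPT-4o", "May 2024",
           "Summarize these five abstracts", "literature review drafting")
```

When it is time to submit, the log already contains every tool name, version, prompt, and date your declaration (and some publishers, like Wiley) may ask for.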

Conclusion: Transparency Builds Trust

The question “Can AI be a co-author?” reflects a deeper search for how to integrate powerful tools into scholarly work without compromising integrity. The answer, crystallized across publishers and ethics bodies, is clear:

AI’s role is that of an assistant, not a collaborator; a tool, not an author.

Your responsibility is to use AI ethically, disclose its use transparently, and verify every output. This protects your reputation, advances the field responsibly, and maintains the trust that underlies academic publishing.

The next time you’re tempted to let AI “co-write” a section or generate a figure without acknowledgment, remember: transparency isn’t an admission of wrongdoing—it’s a mark of scholarly rigor.

