On PaperOrchestra: AI Manuscript Generation and the Authorship Problem
Google Cloud AI Research recently introduced PaperOrchestra, a multi-agent system that autonomously converts unstructured pre-writing materials into a submission-ready LaTeX manuscript.
At first glance, this is a boon to researchers facing grant cycles, conference deadlines, and the immense pressure to produce and publish. The promise is appealing: a submission-ready manuscript in under an hour, so you can get back to the science. Think of the possibilities: Half-written manuscripts would be finished, deadlines would be met with ease, and hours of writing could be redirected toward developing new experiments.
But before you hand over the writing, it's worth asking what it means to put your name on work you didn't produce.
What Does PaperOrchestra Actually Do?
First, the (human) researcher inputs information such as an idea summary, experimental log, LaTeX template, and conference guidelines. Then, the five specialized agents of PaperOrchestra work in tandem, performing the following tasks:
- The Outline Agent produces "a section-level writing plan including high-level content bullets and a comprehensive list of citation hints for all core external dependencies."
- The Plotting Agent generates "conceptual diagrams and statistical plots" through "a closed-loop refinement system where a VLM critic evaluates rendered images against design objectives, iteratively revising text descriptions and regenerating images."
- The Literature Review Agent is instructed directly to "write the introduction and related work section of a paper" — conducting web searches, verifying citations via Semantic Scholar, and mandating that "at least 90% of the gathered literature pool must be actively integrated and cited."
- The Section Writing Agent is told to "complete a research paper by writing the missing sections" and to "always provide detailed ablation studies and qualitative analysis of the experimental results: what works, what does not, and why."
- The Content Refinement Agent "iteratively optimizes the manuscript using simulated peer-review feedback," accepting or reverting revisions based on score changes.
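That last accept-or-revert loop amounts to greedy hill climbing on a simulated review score. The sketch below shows the basic shape in Python; the scoring function and revision step are toy stand-ins of my own, not PaperOrchestra's actual implementation:

```python
def refine(manuscript, score_fn, revise_fn, iterations=5):
    """Greedy accept-or-revert loop: keep a revision only if the
    simulated review score improves; otherwise roll it back."""
    best = manuscript
    best_score = score_fn(best)
    for _ in range(iterations):
        candidate = revise_fn(best)
        candidate_score = score_fn(candidate)
        if candidate_score > best_score:
            # Accept: the revision scored higher
            best, best_score = candidate, candidate_score
        # Otherwise revert: `best` is left unchanged
    return best, best_score

# Toy stand-ins: the "score" is just text length capped at 50,
# and each "revision" appends a sentence.
score = lambda text: min(len(text), 50)
revise = lambda text: text + " More detail."

draft, final_score = refine("Abstract.", score, revise)
```

The greedy structure means a bad revision never survives, but it also means the loop can only chase whatever its reviewer model happens to reward.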
Interestingly, the researchers position PaperOrchestra as "an advanced assistive tool designed to accelerate the drafting process, rather than an independent entity capable of claiming authorship" and state that "human researchers must retain full accountability for the factual accuracy, originality, and validity of any generated manuscript."
Indeed, accountability rests with human authors, but accountability and authorship are not the same thing. How can a tool be considered "assistive" when it is performing the core work of authorship? Or more precisely, what remains of the human researcher’s authorship when the AI has written the literature review, structured the argument, generated the figures, and formatted the manuscript?
The Publishing World Has Already Drawn Lines
In response to the proliferation of generative AI in academic and research writing, journals and publishers have established formal AI policies. A tool like PaperOrchestra might outright violate policy at only some journals, but its use would have to be disclosed in submissions to nearly every major one.
Researchers build their reputations with each submission, and the standards for AI use set forth by scientific journals reflect the broader professional consensus on authorship. Here are some standout points from well-known scientific publishers:
Nature
"The use of an LLM (or other AI-tool) for 'AI assisted copy editing' purposes does not need to be declared. In this context, we define the term 'AI assisted copy editing' as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation."
Science (AAAS)
"AI-assisted technologies [such as large language models (LLMs), chatbots, and image creators] do not meet the Science journals' criteria for authorship and therefore may not be listed as authors or coauthors, nor may sources cited in Science journal content be authored or coauthored by AI tools."
"Editors may decline to move forward with manuscripts if AI is used inappropriately."
ACS
"AI tools cannot meet the requirements for authorship as they cannot take responsibility and accountability for the published work."
"The editor may, at their discretion, determine that the AI use in a given submission is too extensive, including (but not limited to) AI tools used to generate substantive commentary or extensive literature reviews. This determination may result in manuscript rejection or a request for revision to remove or reduce AI-generated portions of the manuscript."
Publishers are right to be concerned. A recent Nature analysis suggested that tens of thousands of papers published in 2025 included AI-hallucinated citations. PaperOrchestra attempts to address this through API-grounded citation verification but ultimately maintains that "users are responsible for verifying the outputs to prevent the propagation of LLM-derived biases or misinformation." The efficiency gain is smaller than it seems: when an AI tool generates over 40 citations to sources you may never have seen and the responsibility of checking their accuracy falls on you, the time saved in writing has simply been redistributed to source verification, with the same accountability attached.
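PaperOrchestra's grounding step is not public, but title-based citation checking against a scholarly search API has a simple core. The sketch below keeps the matching logic pure (no network call); the normalization and exact-match heuristic are my assumptions, and `search_results` stands in for the JSON a service such as Semantic Scholar's paper-search endpoint would return:

```python
import re

def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace for fuzzy title comparison."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def verify_citation(claimed_title: str, search_results: list) -> bool:
    """Return True if any result's title matches the claimed title
    after normalization. `search_results` is a list of dicts like
    those returned by a scholarly search API."""
    target = normalize(claimed_title)
    return any(normalize(r.get("title", "")) == target for r in search_results)

# Example: one real-looking citation and one hallucinated one
results = [{"title": "Attention Is All You Need"}]
verify_citation("Attention is all you need!", results)      # → True
verify_citation("A Totally Invented Paper (2025)", results)  # → False
```

Even a check like this only confirms that a paper with that title exists; it cannot confirm that the paper actually supports the claim it is cited for, which is precisely the verification burden left to the human author.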
I've Collected the Data. Isn't That Enough?
According to the researchers at Google, PaperOrchestra needs only "unconstrained pre-writing materials" to produce a submission-ready article, but the work of science does not end when data collection does.
Writing is not dissociable from the scientific process; rather, it is the process through which you honestly confront your claims, methodology, results, and interpretation of those findings. This is intuitive for working scientists: Writing your methods section reveals gaps in and future improvements for your experiment, and discussing your results forces you to confront and defend what you believe you have proved.
Until now, no tool has proposed to sever writing from the labor of science; suddenly, we must defend writing as essential to the production of science. The prose, even if it later needs heavy editing or translation, should come from you, the human.
Writing Is the Work
The development of such AI frameworks incorrectly suggests that writing is an inefficiency in the production of scientific research. The pressure scientists face to produce is real, but a tool that removes writing from the scientific process doesn't relieve that pressure. Instead, it trades long-term integrity for the appearance of productivity.
With developments like PaperOrchestra, AI can now generate a manuscript, but it still can't tell you how your work will be received. Your research deserves to be read the way that a peer or journal reviewer would: with scrutiny and full attention to what your writing is arguing. Before you submit, Scribendi's scientific experts can stress-test your manuscript from a reviewer's perspective, identifying vulnerabilities in your argument, assessing how your work is positioned in the field, and ensuring your intellectual fingerprint stays intact.
Learn how your work will be read. Work with an expert scientific editor.