A Cornell University study reveals that AI tools like ChatGPT have increased researchers' paper output by a third or more, particularly benefiting non-native English speakers. However, this surge in polished manuscripts is complicating peer review and funding decisions, as many lack substantial scientific value. The findings highlight a shift in global research dynamics and call for updated policies on AI use in academia.
Since ChatGPT's widespread adoption in late 2022, scientists have reported higher productivity, with journal editors noting an influx of well-written but low-value submissions. A Cornell study, published on December 18, 2025, in Science, analyzed over 2 million preprints from arXiv, bioRxiv, and SSRN, spanning January 2018 to June 2024. Researchers developed a detector to identify LLM-assisted papers by comparing them to pre-2023 human-written ones.
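The article does not describe how the Cornell detector works internally. As a loose illustration of the general idea it names, comparing a paper's writing against a pre-2023 human-written baseline, here is a toy sketch. Everything in it is invented for demonstration: the corpora, the vocabulary, and the scoring rule bear no relation to the study's actual method.

```python
# Toy stylistic scorer: compares word-frequency features of a text against
# two tiny hand-made reference sets, crudely analogous to contrasting
# suspected LLM-assisted papers with pre-2023 human-written ones.
# All data and weights below are invented for illustration only.
from collections import Counter

VOCAB = ["delve", "meticulously", "comprehensive", "pivotal", "raw", "failed"]

def featurize(text: str) -> list[float]:
    """Relative frequency of each vocabulary word in the text."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in VOCAB]

def score(text: str) -> float:
    """Positive = more 'LLM-flavored' vocabulary; negative = more plain."""
    feats = featurize(text)
    llm_weight = sum(feats[:4])    # stereotypically LLM-ish words
    human_weight = sum(feats[4:])  # plainer, report-style words
    return llm_weight - human_weight

print(score("we delve into a comprehensive framework"))  # positive
print(score("raw numbers failed"))                       # negative
```

A real detector would of course be trained on millions of documents with far richer features; this sketch only shows the shape of the comparison.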
The results show a clear productivity boost: authors likely using LLMs posted about one-third more papers on arXiv and over 50% more on bioRxiv and SSRN. The gains were most pronounced for non-native English speakers, with researchers from Asian institutions increasing output by 43% to 89.3%, depending on the platform. "It is a very widespread pattern, across different fields of science," said Yian Yin, assistant professor of information science at Cornell's Ann S. Bowers College of Computing and Information Science.
Beyond writing, AI search tools like Bing Chat improved literature reviews by surfacing newer, more diverse sources. First author Keigo Kusumegi noted, "People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas."
Yet challenges emerge in evaluation. Historically, complex language in human-written papers has signaled quality and predicted higher journal acceptance rates. In contrast, LLM-assisted papers, despite their sophisticated prose, are less likely to be accepted, suggesting that polish no longer reliably indicates value. This disconnect could mislead editors, reviewers, and funders, as raw publication counts become less meaningful.
The observational study calls for experimental follow-ups and policy updates. Yin is hosting a symposium on March 3-5, 2026, in Ithaca to discuss AI's role in research. Co-authors include Xinyu Yang, Paul Ginsparg, Mathijs de Vaan, and Toby Stuart; funding came from the National Science Foundation.
As AI evolves into a "co-scientist," Yin emphasizes transparency: "The question is, how exactly have you used AI and whether it's helpful or not."