AI boosts scientific productivity but erodes paper quality

A Cornell University study reveals that AI tools like ChatGPT have increased researchers' paper output by up to 50%, particularly benefiting non-native English speakers. However, this surge in polished manuscripts is complicating peer review and funding decisions, as many lack substantial scientific value. The findings highlight a shift in global research dynamics and call for updated policies on AI use in academia.

Since ChatGPT's widespread adoption in late 2022, scientists have reported higher productivity, with journal editors noting an influx of well-written but low-value submissions. A Cornell study, published on December 18, 2025, in Science, analyzed over 2 million preprints from arXiv, bioRxiv, and SSRN, spanning January 2018 to June 2024. The researchers developed a detector that identifies LLM-assisted papers by comparing their language to that of pre-2023 papers, which predate widespread LLM use and can be presumed human-written.
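The article does not describe the detector's internals, but a common baseline for this kind of task is a text classifier trained on confirmed human-written text versus LLM output. Below is a minimal sketch in Python using scikit-learn; the training corpora, feature choice, and scoring shown here are illustrative assumptions, not the study's actual method.

```python
# Minimal sketch of an LLM-text detector: a classifier trained to separate
# text known to be human-written (posted before 2023, when LLM assistance
# was rare) from text generated by an LLM. All data below is illustrative;
# the Cornell study's actual detector may work very differently.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training corpora (real use would need thousands of abstracts).
human_abstracts = [
    "We measure the critical temperature of the lattice model directly.",
    "Our experiments show a modest but consistent gain over the baseline.",
]
llm_abstracts = [
    "In this paper, we delve into a comprehensive and multifaceted framework.",
    "We present a novel paradigm that seamlessly leverages cutting-edge methods.",
]
texts = human_abstracts + llm_abstracts
labels = [0] * len(human_abstracts) + [1] * len(llm_abstracts)

# Word- and bigram-frequency features plus a linear classifier: LLM prose
# tends to overuse certain words and phrasings, which such features pick up.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Score an unseen abstract; a probability above some chosen threshold
# flags the paper as likely LLM-assisted.
new_abstract = "This manuscript elucidates a comprehensive novel framework."
prob_llm = detector.predict_proba([new_abstract])[0, 1]
print(f"P(LLM-assisted) = {prob_llm:.2f}")
```

In practice, such detectors are calibrated on large corpora and validated against papers with known provenance; applied at scale, per-paper probabilities can be aggregated to estimate population-level trends of the kind the study reports.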

The results show a clear productivity boost: authors likely using LLMs posted about one-third more papers on arXiv and over 50% more on bioRxiv and SSRN. The gains were most pronounced for non-native English speakers, with researchers from Asian institutions increasing output by 43% to 89.3%, depending on the platform. "It is a very widespread pattern, across different fields of science," said Yian Yin, assistant professor of information science at Cornell's Ann S. Bowers College of Computing and Information Science.

Beyond writing, AI search tools like Bing Chat improved literature reviews by surfacing newer, more diverse sources. First author Keigo Kusumegi noted, "People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas."

Yet challenges emerge in evaluation. In human-written papers, complex language has historically signaled quality and predicted higher journal acceptance rates. LLM-assisted papers, despite their sophisticated prose, are less likely to be accepted, suggesting that polish no longer reliably indicates value. This disconnect could mislead editors, reviewers, and funders, for whom polished writing and raw publication counts become unreliable signals of merit.

The observational study calls for experimental follow-ups and policy updates. Yin is hosting a symposium on March 3-5, 2026, in Ithaca to discuss AI's role in research. Co-authors include Xinyu Yang, Paul Ginsparg, Mathijs de Vaan, and Toby Stuart; funding came from the National Science Foundation.

As AI evolves into a "co-scientist," Yin emphasizes transparency: "The question is, how exactly have you used AI and whether it's helpful or not."
