AI boosts scientific productivity but erodes paper quality

A Cornell University study reveals that AI tools like ChatGPT have increased researchers' paper output by up to 50%, particularly benefiting non-native English speakers. However, this surge in polished manuscripts is complicating peer review and funding decisions, as many lack substantial scientific value. The findings highlight a shift in global research dynamics and call for updated policies on AI use in academia.

Since ChatGPT's widespread adoption in late 2022, scientists have reported higher productivity, with journal editors noting an influx of well-written but low-value submissions. A Cornell study, published on December 18, 2025, in Science, analyzed over 2 million preprints from arXiv, bioRxiv, and SSRN, spanning January 2018 to June 2024. The researchers built a detector that flags LLM-assisted papers by comparing their language against papers posted before 2023, when authorship could safely be assumed to be human.
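The article does not detail the detector itself, but work in this area typically compares word-usage statistics against a pre-ChatGPT human baseline. A minimal sketch of that idea in Python follows, with the marker words, baseline rate, and threshold all invented purely for illustration; this is not the Cornell team's actual method:

    from collections import Counter
    import re

    # Toy frequency-based detector: flags text whose rate of "LLM-flavored"
    # words far exceeds a baseline estimated from pre-2023 papers.
    # The marker set, baseline, and threshold are illustrative only.
    LLM_MARKERS = {"delve", "leverage", "multifaceted", "intricate", "pivotal"}

    def marker_rate(text: str) -> float:
        """Fraction of tokens that belong to the marker set."""
        tokens = re.findall(r"[a-z']+", text.lower())
        if not tokens:
            return 0.0
        counts = Counter(tokens)
        return sum(counts[w] for w in LLM_MARKERS) / len(tokens)

    def likely_llm_assisted(text: str, baseline_rate: float = 0.0005,
                            ratio_threshold: float = 4.0) -> bool:
        """True if the marker-word rate far exceeds the human baseline.

        In practice, baseline_rate would be estimated from a large corpus
        of pre-ChatGPT papers; the default here is a placeholder.
        """
        return marker_rate(text) > baseline_rate * ratio_threshold

    print(likely_llm_assisted("We delve into the multifaceted, pivotal role of..."))  # True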

The results show a clear productivity boost: authors likely using LLMs posted about one-third more papers on arXiv and over 50% more on bioRxiv and SSRN. The gains were most pronounced for non-native English speakers, with researchers from Asian institutions increasing output by 43% to 89.3%, depending on the platform. "It is a very widespread pattern, across different fields of science," said Yian Yin, assistant professor of information science at Cornell's Ann S. Bowers College of Computing and Information Science.

Beyond writing, AI search tools like Bing Chat improved literature reviews by surfacing newer, more diverse sources. First author Keigo Kusumegi noted, "People using LLMs are connecting to more diverse knowledge, which might be driving more creative ideas."

Yet challenges emerge in evaluation. Among human-written papers, complex language has traditionally signaled quality and correlated with higher journal acceptance rates. LLM-assisted papers, by contrast, are less likely to be accepted despite their sophisticated prose, suggesting that polish no longer reliably indicates value. This disconnect could hinder editors, reviewers, and funders, as both polished writing and raw publication counts become misleading signals.

The observational study calls for experimental follow-ups and policy updates. Yin is hosting a symposium on March 3-5, 2026, in Ithaca to discuss AI's role in research. Co-authors include Xinyu Yang, Paul Ginsparg, Mathijs de Vaan, and Toby Stuart; funding came from the National Science Foundation.

As AI evolves into a "co-scientist," Yin emphasizes transparency: "The question is, how exactly have you used AI and whether it's helpful or not."

Related articles

OpenAI releases ChatGPT-5.2 to boost work productivity

OpenAI has launched ChatGPT-5.2, a new family of AI models designed to enhance reasoning and productivity, particularly for professional tasks. The release follows an internal alert from CEO Sam Altman about competition from Google's Gemini 3. The update includes three variants aimed at different user needs, starting with paid subscribers.

AI coding agents from companies like OpenAI, Anthropic, and Google enable extended work on software projects, including writing apps and fixing bugs under human oversight. These tools rely on large language models but face challenges like limited context processing and high computational costs. Understanding their mechanics helps developers decide when to deploy them effectively.

As AI platforms shift toward ad-based monetization, researchers warn that the technology could shape users' behavior, beliefs, and choices in invisible ways. The move marks a reversal for OpenAI: CEO Sam Altman once called the combination of ads and AI "unsettling," but now says he is confident that advertising in AI apps can preserve user trust.

Japan's Fair Trade Commission plans to open a fact-finding investigation into search engines that use generative AI. The engines may have used news articles from media organizations without permission, which could constitute an abuse of dominant position in violation of the Antimonopoly Act. The targets are major U.S. tech companies such as Google and Microsoft.

Commonly used AI models, including ChatGPT and Gemini, often fail to provide adequate advice for urgent women's health issues, according to a new benchmark test. Researchers found that 60 percent of responses to specialized queries were insufficient, highlighting biases in AI training data. The study calls for improved medical content to address these gaps.

Music labels and tech companies are addressing the unauthorized use of artists' work in training AI music generators like Udio and Suno. Recent settlements with major labels aim to create new revenue streams, while innovative tools promise to remove unlicensed content from AI models. Artists remain cautious about the technology's impact on their livelihoods.

AerynOS, an alpha-stage Linux distribution, has implemented a policy banning large language models in its development and community activities. The move addresses ethical issues with training data, environmental impacts, and quality risks. Exceptions are limited to translation and accessibility needs.
