Generative AI outperforms human teams in analyzing medical data

Researchers at UC San Francisco and Wayne State University found that generative AI can process complex medical datasets faster than traditional human teams, sometimes yielding stronger results. The study focused on predicting preterm birth using microbiome data from about 1,200 pregnant women, reducing analysis time from months to minutes in some cases.

Scientists at UC San Francisco and Wayne State University conducted a real-world test of generative AI in health research, comparing its performance to human experts. The task involved predicting preterm birth, a leading cause of newborn death in the United States, where about 1,000 babies are born prematurely each day. The researchers used microbiome data compiled from approximately 1,200 pregnant women across nine studies, sourced from the March of Dimes Preterm Birth Data Repository.

To evaluate AI capabilities, the team drew on datasets from the DREAM crowdsourcing competition, which previously involved over 100 global teams developing machine learning models for preterm birth risks and gestational age estimation. Human participants in that competition took about three months to build models, followed by nearly two years to consolidate and publish findings.

In the new study, eight AI chatbots were given natural language prompts to generate analytical code without direct human programming. Only four of the systems produced usable code, but those that succeeded matched or exceeded the performance of the human teams. For instance, a junior pair, UCSF master's student Reuben Sarwal and high school student Victor Tarca, developed prediction models with AI support, generating functional code in minutes rather than the hours or days typically required of experienced programmers.
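To make the workflow concrete, here is a minimal sketch of the kind of analysis code such a chatbot might be asked to generate: fit a simple classifier on tabular abundance features to predict a binary outcome. Everything below is illustrative; the data is synthetic and the model is a plain logistic regression, not the study's actual pipelines or the DREAM models.

```python
# Illustrative only: synthetic data standing in for microbiome abundance
# features, with a from-scratch logistic regression (no external libraries).
import math
import random

random.seed(0)

def make_synthetic_data(n=200, n_features=5):
    """Generate synthetic samples; feature 0 is weakly shifted for positives."""
    X, y = [], []
    for _ in range(n):
        label = random.random() < 0.5
        feats = [random.gauss(0.5 + (0.3 if label and j == 0 else 0.0), 0.2)
                 for j in range(n_features)]
        X.append(feats)
        y.append(1 if label else 0)
    return X, y

def train_logistic(X, y, lr=0.5, epochs=200):
    """Per-sample gradient-descent logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for feats, label in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, feats)) + b
            z = max(-30.0, min(30.0, z))          # numerical safety clamp
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid
            err = p - label
            w = [wi - lr * err * xi for wi, xi in zip(w, feats)]
            b -= lr * err
    return w, b

def accuracy(w, b, X, y):
    """Fraction of samples where the sign of the score matches the label."""
    correct = 0
    for feats, label in zip(X, y):
        z = sum(wi * xi for wi, xi in zip(w, feats)) + b
        correct += (z > 0) == (label == 1)
    return correct / len(y)

X, y = make_synthetic_data()
w, b = train_logistic(X, y)
print(f"training accuracy: {accuracy(w, b, X, y):.2f}")
```

In the study, code of roughly this shape, only against real repository data and with proper cross-validation, is what the chatbots produced from natural language prompts alone.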

The entire process, from inception to journal submission, took just six months. "These AI tools could relieve one of the biggest bottlenecks in data science: building our analysis pipelines," said Marina Sirota, PhD, professor of Pediatrics at UCSF and principal investigator of the March of Dimes Prematurity Research Center. Co-senior author Adi L. Tarca, PhD, from Wayne State University, added, "Thanks to generative AI, researchers with a limited background in data science won't always need to form wide collaborations or spend hours debugging code. They can focus on answering the right biomedical questions."

The study, co-authored by Sirota and Tarca, emphasizes that AI requires human oversight to avoid misleading results. It was published in Cell Reports Medicine on February 17, highlighting potential for faster progress in understanding preterm birth risk factors.

