Generative AI matches or outperforms human teams in analyzing medical data

Researchers at UC San Francisco and Wayne State University found that generative AI can process complex medical datasets faster than traditional human teams, sometimes yielding stronger results. The study focused on predicting preterm birth using data from approximately 1,200 pregnant women, and the approach reduced analysis time from months to minutes in some cases.

Scientists at UC San Francisco and Wayne State University conducted a real-world test of generative AI in health research, comparing its performance to human experts. The task involved predicting preterm birth, a leading cause of newborn death in the United States, where about 1,000 babies are born prematurely each day. The researchers used microbiome data compiled from approximately 1,200 pregnant women across nine studies, sourced from the March of Dimes Preterm Birth Data Repository.

To evaluate AI capabilities, the team drew on datasets from the DREAM crowdsourcing competition, which previously involved over 100 global teams developing machine learning models for preterm birth risks and gestational age estimation. Human participants in that competition took about three months to build models, followed by nearly two years to consolidate and publish findings.

In the new study, eight AI chatbots were given natural language prompts to generate analytical code without direct human programming. Only four of the systems produced usable code, but those that succeeded matched or exceeded the performance of human teams. For instance, a junior pair—a UCSF master's student, Reuben Sarwal, and a high school student, Victor Tarca—developed prediction models with AI support, generating functional code in minutes rather than the hours or days required by experienced programmers.
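To make the workflow concrete, here is a minimal sketch of the kind of analysis pipeline a chatbot might generate from a prompt like "build a model that predicts preterm birth from microbiome data." This is an illustrative assumption, not the study's actual code: the data below are synthetic, and the cohort size, feature count, and choice of logistic regression with cross-validated AUC are all hypothetical stand-ins.

```python
# Hypothetical sketch of AI-generated analysis code (not from the study).
# Synthetic data stand in for real microbiome relative abundances.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_taxa = 200, 50          # assumed cohort size and taxa count
X = rng.random((n_samples, n_taxa))  # stand-in for microbial abundances
y = rng.integers(0, 2, n_samples)    # stand-in labels: 1 = preterm birth

# Fit a simple classifier and report cross-validated discrimination.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {scores.mean():.2f}")
```

In practice, the human role the article emphasizes comes in here: checking that the generated pipeline evaluates the model properly (e.g., cross-validation rather than training-set accuracy) and that the results are not misleading.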

The entire process, from inception to journal submission, took just six months. "These AI tools could relieve one of the biggest bottlenecks in data science: building our analysis pipelines," said Marina Sirota, PhD, professor of Pediatrics at UCSF and principal investigator of the March of Dimes Prematurity Research Center. Co-senior author Adi L. Tarca, PhD, from Wayne State University, added, "Thanks to generative AI, researchers with a limited background in data science won't always need to form wide collaborations or spend hours debugging code. They can focus on answering the right biomedical questions."

The study, co-authored by Sirota and Tarca, emphasizes that AI requires human oversight to avoid misleading results. It was published in Cell Reports Medicine on February 17, highlighting potential for faster progress in understanding preterm birth risk factors.

Related articles


Study finds radiologists and AI models struggle to spot AI-generated “deepfake” X-rays


A study published March 24, 2026 in *Radiology* reports that AI-generated “deepfake” X-rays can be convincing enough to mislead radiologists and several multimodal AI systems. In testing, radiologists’ average accuracy rose from 41% when they were not told fakes were included to 75% when they were warned, highlighting potential risks for medical imaging security and clinical decision-making.

Researchers at the University of Michigan have developed an AI system called Prima that interprets brain MRI scans in seconds, identifying neurological conditions with up to 97.5% accuracy. The tool also flags urgent cases like strokes and brain hemorrhages, potentially speeding up medical responses. Findings from the study appear in Nature Biomedical Engineering.


At the Game Developers Conference 2026 in San Francisco, generative AI tools drew mixed reactions, with demos from Google highlighting potential uses amid widespread developer skepticism. A recent industry report showed 52% of companies using the technology, but only 36% of workers incorporating it into their jobs, and 52% viewing it as harmful to the sector.

Researchers at the University of Geneva have developed MangroveGS, an AI model that predicts cancer metastasis risk with nearly 80% accuracy. The tool analyzes gene expression patterns in tumor cells, initially from colon cancer, and applies to other types like breast and lung. Published in Cell Reports, it aims to enable more personalized treatments.


A poll by the Korea Chamber of Commerce and Industry (KCCI) shows South Korean workers have cut their hours by an average of 8.4 per week, or 17.8 percent overall, thanks to generative AI platforms. More than half of respondents use such tools daily, with the highest adoption in the information and telecommunications sector.

A New York Times analysis shows Google's AI Overviews, powered by Gemini, answering correctly only 90% to 91% of questions in a standard benchmark. This translates to tens of millions of incorrect responses daily across searches. Google disputes the test's relevance.
