Illustration of a patient undergoing brain monitoring while listening to a podcast, with neural activity layers mirroring AI language model processing.
AI-generated image

Study links step-by-step brain responses during speech to layered processing in large language models

Fact-checked

A new study reports that as people listen to a spoken story, neural activity in key language regions unfolds over time in a way that mirrors the layer-by-layer computations inside large language models. The researchers, who analyzed electrocorticography recordings from epilepsy patients during a 30-minute podcast, also released an open dataset intended to help other scientists test competing theories of how meaning is built in the brain.

Scientists have reported evidence that the brain’s processing of spoken language unfolds in a sequence that resembles the layered operations of modern large language models.

The research, published in Nature Communications on Nov. 26, 2025, was led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, with collaborators including Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University.

Listening experiment and neural recordings

The team analyzed electrocorticography (ECoG) recordings from nine epilepsy patients as they listened to a 30-minute audio podcast, “Monkey in the Middle” (NPR, 2017). The researchers modeled neural responses to each word in the story using contextual embeddings drawn from multiple hidden layers of the GPT2-XL model and from Llama 2.

They focused on several regions along a ventral language-processing pathway, including areas in the superior temporal gyrus, the inferior frontal gyrus (which includes Broca’s area), and the temporal pole.

A layered time course of meaning

The study reports that brain responses matched the models’ internal representations in a time-ordered pattern: earlier neural signals aligned more strongly with earlier model layers, while later neural activity corresponded more closely to deeper layers that integrate broader context. The association was described as particularly strong in higher-level language regions such as Broca’s area.
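The analysis described above pairs neural activity at successive time lags with embeddings from successive model layers. The toy sketch below illustrates that logic on synthetic data; it is not the study's code and uses no real recordings. It fits a closed-form ridge encoding model for each (lag, layer) pair and checks which layer best predicts the signal at each lag, mimicking the reported early-to-deep progression.

```python
# Hypothetical sketch of a layer-wise encoding analysis (synthetic data,
# not the published pipeline). Neural signals at later lags are generated
# from deeper "layers", so the analysis should recover that ordering.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_layers, dim = 500, 8, 32
n_lags = n_layers

# Stand-in for contextual embeddings from each hidden layer of a language model.
layer_embeddings = rng.standard_normal((n_layers, n_words, dim))

# Synthetic "neural" response at each lag after word onset: lag t is driven
# by layer t plus noise.
neural = np.stack([
    layer_embeddings[lag] @ rng.standard_normal(dim)
    + 0.5 * rng.standard_normal(n_words)
    for lag in range(n_lags)
])  # shape: (n_lags, n_words)

def ridge_fit_predict(X_tr, y_tr, X_te, alpha=1.0):
    """Closed-form ridge regression; returns predictions for X_te."""
    d = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ y_tr)
    return X_te @w

# Train on the first half of the words, evaluate on the second half,
# and record the best-encoding layer at each lag.
half = n_words // 2
best_layer = []
for lag in range(n_lags):
    scores = []
    for layer in range(n_layers):
        X, y = layer_embeddings[layer], neural[lag]
        pred = ridge_fit_predict(X[:half], y[:half], X[half:])
        scores.append(np.corrcoef(pred, y[half:])[0, 1])
    best_layer.append(int(np.argmax(scores)))

print(best_layer)  # best-fitting layer deepens with lag on this synthetic data
```

In the study itself, the lag-by-layer pattern was computed from ECoG electrode signals rather than synthetic vectors, but the encoding-model comparison follows the same general shape.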

“What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models,” Goldstein said, according to a summary released by the Hebrew University of Jerusalem.

Implications and data release

The findings are presented as a challenge to strictly rule-based accounts of language comprehension, suggesting instead that context-sensitive, statistical representations may explain real-time neural activity more effectively than traditional linguistic units such as phonemes and morphemes.

The researchers also released a public dataset intended to support further work in language neuroscience, including neural recordings aligned with linguistic features.

Separate from the Nature Communications report, a related data descriptor in the journal Scientific Data describes a “Podcast” ECoG dataset from nine participants with 1,330 electrodes listening to the same 30-minute stimulus, along with extracted features ranging from phonetic information to large language model embeddings and accompanying tutorials for analysis.

Related news


Study maps how brain connectivity predicts activity across cognitive functions


Scientists at The Ohio State University have charted how patterns of brain wiring can predict activity linked to many mental functions across the entire brain. Each region shows a distinct “connectivity fingerprint” tied to roles such as language and memory. The peer‑reviewed findings in Network Neuroscience offer a baseline for studying healthy young adult brains and for comparisons with neurological or psychiatric conditions.

Researchers at Rutgers Health have identified how the brain integrates fast and slow processing through white matter connections, influencing cognitive abilities. Published in Nature Communications, the study analyzed data from nearly 1,000 people to map these neural timescales. Variations in this system may explain differences in thinking efficiency and hold promise for mental health research.


Researchers at UNSW Sydney report evidence that auditory verbal hallucinations in schizophrenia-spectrum disorders may involve a breakdown in the brain’s normal ability to dampen responses to self-generated inner speech, causing internally generated thoughts to be processed more like external sounds.

Scientists at Brown University have identified a subtle brain activity pattern that can forecast Alzheimer's disease in people with mild cognitive impairment up to two and a half years in advance. Using magnetoencephalography and a custom analysis tool, the researchers detected changes in neuronal electrical signals linked to memory processing. This noninvasive approach offers a potential new biomarker for early detection.


Researchers at Concordia University have discovered that people blink less when concentrating on speech amid background noise, highlighting a link between eye behavior and cognitive effort. This pattern persists regardless of lighting conditions, suggesting it's driven by mental demands rather than visual factors. The findings, published in Trends in Hearing, could offer a simple way to measure brain function during listening tasks.

Scientists from the Allen Institute and Japan’s University of Electro-Communications have built one of the most detailed virtual models of the mouse cortex to date, simulating roughly 9 million neurons and 26 billion synapses across 86 regions on the Fugaku supercomputer.


Researchers have developed a noninvasive method using EEG brain scans to detect movement intentions in people with spinal cord injuries. By capturing signals from the brain and potentially routing them to spinal stimulators, the approach aims to bypass damaged nerves. While promising, the technology still struggles with precise control, especially for lower limbs.
