Illustration of a patient undergoing brain monitoring while listening to a podcast, with neural activity layers mirroring AI language model processing.
AI-generated image

Study links step-by-step brain responses during speech to layered processing in large language models

Fact-checked

A new study reports that as people listen to a spoken story, neural activity in key language regions unfolds over time in a way that mirrors the layer-by-layer computations inside large language models. The researchers, who analyzed electrocorticography recordings from epilepsy patients during a 30-minute podcast, also released an open dataset intended to help other scientists test competing theories of how meaning is built in the brain.

Scientists have reported evidence that the brain’s processing of spoken language unfolds in a sequence that resembles the layered operations of modern large language models.

The research, published in Nature Communications on Nov. 26, 2025, was led by Dr. Ariel Goldstein of the Hebrew University of Jerusalem, with collaborators including Dr. Mariano Schain of Google Research and Prof. Uri Hasson and Eric Ham of Princeton University.

Listening experiment and neural recordings

The team analyzed electrocorticography (ECoG) recordings from nine epilepsy patients as they listened to a 30-minute audio podcast, “Monkey in the Middle” (NPR, 2017). The researchers modeled neural responses to each word in the story using contextual embeddings drawn from multiple hidden layers of the GPT2-XL model and from Llama 2.
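
For readers curious what this kind of analysis involves in practice, the sketch below shows the generic first step of an encoding-model pipeline: extracting per-token activations from every hidden layer of GPT2-XL with the Hugging Face transformers library. This illustrates the general technique, not the authors' actual code; the input text is a placeholder, and pooling sub-word tokens into word-level embeddings is only noted in a comment.

```python
# Sketch: pull activations from every hidden layer of GPT2-XL,
# the generic first step of a layer-wise encoding-model analysis.
# Illustrative only -- not the study's pipeline. The smaller "gpt2"
# checkpoint works as a lighter stand-in for the ~6 GB gpt2-xl.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2Model.from_pretrained("gpt2-xl", output_hidden_states=True)
model.eval()

text = "So the monkeys had nowhere left to go."  # placeholder, not the podcast transcript
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    out = model(**enc)

# out.hidden_states is a tuple of 49 tensors (embedding layer + 48
# transformer layers), each shaped (batch, n_tokens, 1600) for GPT2-XL.
layers = torch.stack(out.hidden_states).squeeze(1)  # (49, n_tokens, 1600)
print(layers.shape)

# To get one vector per *word*, sub-word token vectors are typically
# pooled per word (e.g., averaged, or the last token taken) before
# regressing neural activity onto them.
```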

They focused on several regions along a ventral language-processing pathway, including areas in the superior temporal gyrus, the inferior frontal gyrus (which includes Broca’s area), and the temporal pole.

A layered time course of meaning

The study reports that brain responses matched the models’ internal representations in a time-ordered pattern: earlier neural signals aligned more strongly with earlier model layers, while later neural activity corresponded more closely to deeper layers that integrate broader context. The association was described as particularly strong in higher-level language regions such as Broca’s area.
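
The reported pattern can be made concrete with a small numerical sketch. Below, synthetic encoding-performance curves stand in for real layer-by-lag correlations; the analysis step, finding each layer's best lag relative to word onset and testing whether deeper layers peak later, mirrors the logic of the result described above. All numbers are fabricated for illustration only.

```python
# Sketch of a lag-layer analysis: for each model layer, find the lag
# (relative to word onset) at which an encoding model best predicts the
# neural signal, then test whether deeper layers peak at later lags.
# The performance values here are synthetic, not study data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_layers = 48
lags_ms = np.arange(-500, 1001, 25)  # lags relative to word onset

# Fake encoding performance: each layer's curve peaks a bit later than
# the previous one, plus noise (this is the claimed temporal effect).
true_peaks = np.linspace(0, 400, n_layers)
perf = np.exp(-((lags_ms[None, :] - true_peaks[:, None]) / 200.0) ** 2)
perf += rng.normal(0, 0.05, perf.shape)

peak_lag = lags_ms[perf.argmax(axis=1)]  # best lag per layer
rho, p = spearmanr(np.arange(n_layers), peak_lag)
print(f"layer depth vs. peak lag: Spearman rho={rho:.2f}, p={p:.1e}")
```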

“What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models,” Goldstein said, according to a summary released by the Hebrew University of Jerusalem.

Implications and data release

The findings are presented as a challenge to strictly rule-based accounts of language comprehension, suggesting instead that context-sensitive, statistical representations may explain real-time neural activity more effectively than traditional linguistic units such as phonemes and morphemes.

The researchers also released a public dataset intended to support further work in language neuroscience, including neural recordings aligned with linguistic features.

Separate from the Nature Communications report, a related data descriptor in the journal Scientific Data describes a “Podcast” ECoG dataset of recordings from nine participants (1,330 electrodes in total) listening to the same 30-minute stimulus, along with extracted features ranging from phonetic information to large language model embeddings, and tutorials for analyzing them.
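
If the released recordings follow the iEEG-BIDS convention, a common layout for datasets of this kind (an assumption here; the Scientific Data descriptor and its tutorials define the actual structure), loading one participant's data could look like the sketch below. The root path, subject label, and task name are placeholders.

```python
# Hypothetical loading sketch, assuming an iEEG-BIDS layout. All names
# below (root, subject, task) are placeholders -- consult the data
# descriptor for the dataset's real organization.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(root="path/to/podcast_dataset",  # placeholder root
                     subject="01", task="podcast",    # placeholder labels
                     datatype="ieeg", suffix="ieeg")
raw = read_raw_bids(bids_path)  # returns an mne.io.Raw object
print(raw.info["nchan"], raw.info["sfreq"])

# Word-level annotations (onsets and labels), if shipped with the data,
# would let you align each spoken word with the neural signal.
```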

Related articles


Study explores why human language isn’t compressed like computer code


A new model from linguists Richard Futrell and Michael Hahn suggests that many hallmark features of human language—such as familiar words, predictable ordering and meaning built up step by step—reflect constraints on sequential information processing rather than a drive for maximum data compression. The work was published in Nature Human Behaviour.

Researchers at Rutgers Health have identified how the brain integrates fast and slow processing through white matter connections, influencing cognitive abilities. Published in Nature Communications, the study analyzed data from nearly 1,000 people to map these neural timescales. Variations in this system may explain differences in thinking efficiency and hold promise for mental health research.


A new brain imaging study has found that recalling facts and personal experiences activates nearly identical neural networks, challenging long-held views on memory systems. Researchers from the University of Nottingham and University of Cambridge used fMRI scans on 40 participants to compare these memory types. The results, published in Nature Human Behaviour, suggest a rethink in how memory is studied and could inform treatments for Alzheimer's and dementia.

Neuroscientists have identified eight body-like maps in the visual cortex that mirror the organization of touch sensations, enabling the brain to physically feel what it sees in others. This discovery, based on brain scans during movie viewing, enhances understanding of empathy and holds promise for treatments in autism and advancements in AI. The findings were published in Nature.


Scientists at the Keck School of Medicine of the University of Southern California have identified a four-layer organization of neuron types in the mouse hippocampus’s CA1 region, a key hub for memory, navigation, and emotion. The study, published in Nature Communications in December 2025, uses advanced RNA imaging to chart genetic activity in tens of thousands of neurons and reveals shifting bands of specialized cells that may help explain behavioral differences and disease vulnerabilities.

Australia-based start-up Cortical Labs has announced plans to construct two data centres using neuron-filled chips. The facilities in Melbourne and Singapore will house its CL1 biological computers, which have demonstrated the ability to play video games like Doom. The initiative aims to scale up cloud-based brain-computing services while reducing energy consumption.


An evolutionarily ancient midbrain region, the superior colliculus, can independently carry out visual computations long attributed mainly to the cortex, according to a PLOS Biology study. The work suggests that attention-guiding mechanisms with roots more than 500 million years old help separate objects from backgrounds and highlight salient details.

