Illustration depicting linguists studying why human language resists compression like computer code, contrasting brain processing with digital efficiency.

Study explores why human language isn’t compressed like computer code


A new model from linguists Richard Futrell and Michael Hahn suggests that many hallmark features of human language—such as familiar words, predictable ordering and meaning built up step by step—reflect constraints on sequential information processing rather than a drive for maximum data compression. The work was published in Nature Human Behaviour.

Human language is remarkably rich and intricate. From an information-theory standpoint, the same ideas could, in principle, be transmitted in far more compact strings—similar to how computers represent information using binary digits.
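The redundancy this refers to is easy to see with a general-purpose compressor. The following sketch (illustrative only, not drawn from the paper) shows that ordinary English text with repeated, predictable structure can be represented in far fewer bytes than its raw encoding:

```python
import zlib

# An English passage with predictable structure compresses well: a
# general-purpose compressor exploits the same redundancy that, in
# principle, a "maximally compressed" code would eliminate.
sentence = b"the five green cars are parked outside the house " * 8
compressed = zlib.compress(sentence, level=9)

print(len(sentence), len(compressed))
```

The compressed form is a fraction of the original size, yet no human could read or produce it in real time, which is the tension the study addresses.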

Michael Hahn, a linguist at Saarland University in Saarbrücken, Germany, and Richard Futrell of the University of California, Irvine, set out to explain why everyday speech does not resemble a tightly compressed digital code. In a paper published in Nature Human Behaviour in November 2025, the researchers present a model in which "natural-language-like" structure arises when communication is constrained by limits on sequential prediction: how much information must be carried forward from what has already been heard to anticipate what comes next.

In that framework, language benefits from patterns that are easy for people to process as a stream. A ScienceDaily summary of the work, citing materials from the University of Osaka, uses examples to illustrate the idea: an invented word such as “gol” for a hybrid concept (half cat and half dog) would be hard to understand because it does not map cleanly onto shared experience, and a scrambled blend like “gadcot” is similarly difficult to interpret. By contrast, “cat and dog” is immediately meaningful.

The researchers also point to word order as a signal that helps listeners reduce uncertainty in real time. The ScienceDaily release highlights the German noun phrase “Die fünf grünen Autos” (“the five green cars”) as an example of how meaning can be built incrementally as each word narrows the set of plausible interpretations. Reordering those words—for example, “Grünen fünf die Autos”—disrupts that predictability and makes comprehension harder.
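The incremental-narrowing idea can be sketched as a toy filter over candidate referents (a hypothetical illustration, not the authors' model): each successive word of "Die fünf grünen Autos" eliminates the objects no longer consistent with what has been heard.

```python
# Toy illustration: each word heard so far filters the set of objects
# still consistent with the phrase, so uncertainty shrinks word by word.
objects = [
    {"kind": "car",  "color": "green", "count": 5},
    {"kind": "car",  "color": "red",   "count": 5},
    {"kind": "car",  "color": "green", "count": 3},
    {"kind": "bike", "color": "green", "count": 5},
]

def narrow(candidates, predicate):
    """Keep only the candidates compatible with the latest word."""
    return [o for o in candidates if predicate(o)]

remaining = objects
remaining = narrow(remaining, lambda o: o["count"] == 5)       # "fünf"
remaining = narrow(remaining, lambda o: o["color"] == "green") # "grünen"
remaining = narrow(remaining, lambda o: o["kind"] == "car")    # "Autos"

print(len(remaining))  # a single referent survives
```

A scrambled order would force the listener to hold all candidates in memory until a late word finally disambiguates, which is exactly the sequential-processing cost the model penalizes.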

Beyond explaining why language is not “maximally compressed,” the paper’s discussion connects the findings to machine learning. Futrell and Hahn argue that natural language is structured in a way that makes next-token prediction comparatively easier under cognitive constraints, a point they say is relevant to modern large language models.
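The "easier next-token prediction" point can be made concrete with a minimal bigram model (a hypothetical sketch using an invented toy corpus, not the paper's experiments): under counts from text with conventional word order, a natural sequence accrues less surprisal than a scrambled permutation of the same words.

```python
import math
from collections import Counter

# Toy bigram model: measure how "surprising" a word sequence is, in bits,
# under add-alpha-smoothed bigram counts from a tiny corpus.
corpus = "the five green cars the five red cars the five green bikes".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
VOCAB = len(unigrams)

def surprisal(sequence, alpha=0.1):
    """Total -log2 probability of the sequence's bigram transitions."""
    total = 0.0
    for prev, word in zip(sequence, sequence[1:]):
        p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * VOCAB)
        total += -math.log2(p)
    return total

natural = "the five green cars".split()
scrambled = "green five the cars".split()

# The natural order follows attested transitions; the scrambled order
# does not, so its total surprisal is far higher.
print(surprisal(natural), surprisal(scrambled))
```

In this framing, conventional word order is precisely what keeps per-word prediction cheap for a sequential processor, whether that processor is a human listener or a language model.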

