Illustration depicting linguists studying why human language resists compression like computer code, contrasting brain processing with digital efficiency.

Study explores why human language isn’t compressed like computer code


A new model from linguists Richard Futrell and Michael Hahn suggests that many hallmark features of human language—such as familiar words, predictable ordering and meaning built up step by step—reflect constraints on sequential information processing rather than a drive for maximum data compression. The work was published in Nature Human Behaviour.

Human language is remarkably rich and intricate. From an information-theory standpoint, the same ideas could, in principle, be transmitted in far more compact strings—similar to how computers represent information using binary digits.
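To see that point concretely, here is a minimal sketch (ours, not the researchers'): a general-purpose compressor such as Python's zlib shrinks ordinary English text because its patterns repeat, and that repetition is exactly the redundancy the study asks about.

```python
import zlib

# Ordinary English text; its repeated words and predictable
# structure let a generic compressor encode it in fewer bytes.
text = ("the five green cars and the five green vans are parked "
        "beside the five green trucks").encode("utf-8")

compressed = zlib.compress(text, level=9)
print(f"raw:        {len(text)} bytes")
print(f"compressed: {len(compressed)} bytes")
```

The compressed bytes are denser but useless as speech, which is the puzzle: why does language keep the "wasteful" form?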

Michael Hahn, a linguist at Saarland University in Saarbrücken, Germany, and Richard Futrell of the University of California, Irvine, set out to address why everyday speech does not resemble a tightly compressed digital code. In a paper published in Nature Human Behaviour in November 2025, the researchers present a model in which “natural-language-like” structure arises when communication is constrained by limits on sequential prediction—how much information must be carried forward from what has already been heard to anticipate what comes next.
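As a toy illustration of that quantity (our own sketch, not the authors' model), one can roughly estimate how many bits of uncertainty about the next word are removed by carrying a single word of context forward:

```python
import math
from collections import Counter

# Toy sketch: compare next-word uncertainty with no memory versus
# one word of memory, using rough maximum-likelihood estimates.
words = "the cat and dog ran and the cat slept and the dog slept".split()

unigrams = Counter(words)
bigrams = Counter(zip(words, words[1:]))
n = len(words)

# Entropy of the next word with no memory at all.
h_none = -sum(c / n * math.log2(c / n) for c in unigrams.values())

# Approximate conditional entropy of the next word given the
# previous word (crude estimates on this tiny string).
h_prev = -sum(c / (n - 1) * math.log2(c / unigrams[prev])
              for (prev, _), c in bigrams.items())

print(f"no memory:       {h_none:.2f} bits/word")
print(f"one-word memory: {h_prev:.2f} bits/word")
print(f"information carried forward: {h_none - h_prev:.2f} bits")
```

The gap between the two entropies is the predictive information the listener must hold in mind; the model treats limits on that quantity as the bottleneck shaping linguistic structure.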

In that framework, language benefits from patterns that are easy for people to process as a stream. A ScienceDaily summary of the work, citing materials from the University of Osaka, uses examples to illustrate the idea: an invented word such as “gol” for a hybrid concept (half cat and half dog) would be hard to understand because it does not map cleanly onto shared experience, and a scrambled blend like “gadcot” is similarly difficult to interpret. By contrast, “cat and dog” is immediately meaningful.

The researchers also point to word order as a signal that helps listeners reduce uncertainty in real time. The ScienceDaily release highlights the German noun phrase “Die fünf grünen Autos” (“the five green cars”) as an example of how meaning can be built incrementally as each word narrows the set of plausible interpretations. Reordering those words—for example, “Grünen fünf die Autos”—disrupts that predictability and makes comprehension harder.
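The effect can be simulated with a small hedged experiment (our sketch, using a generic add-one-smoothed bigram model rather than anything from the paper): consistently ordered phrases share word-to-word transitions, so their next words cost fewer bits to predict than scrambled variants built from the same words.

```python
import math
from collections import Counter

def avg_surprisal(sentences):
    """Average next-word surprisal in bits under an add-one-smoothed
    bigram model trained on the same tiny corpus."""
    bigrams, prev_counts, vocab = Counter(), Counter(), set()
    for s in sentences:
        tokens = ["<s>"] + s.split()
        vocab.update(tokens)
        for prev, cur in zip(tokens, tokens[1:]):
            bigrams[(prev, cur)] += 1
            prev_counts[prev] += 1
    v = len(vocab)
    total = count = 0
    for s in sentences:
        tokens = ["<s>"] + s.split()
        for prev, cur in zip(tokens, tokens[1:]):
            p = (bigrams[(prev, cur)] + 1) / (prev_counts[prev] + v)
            total -= math.log2(p)
            count += 1
    return total / count

ordered = ["the five green cars", "the two red cars", "the five green vans"]
scrambled = ["green five the cars", "red the two cars", "five vans green the"]

# Consistent ordering reuses transitions like "the five" and
# "five green", so each next word is easier to anticipate.
print(f"ordered:   {avg_surprisal(ordered):.2f} bits/word")
print(f"scrambled: {avg_surprisal(scrambled):.2f} bits/word")
```

On this toy corpus the ordered phrases come out near 1.9 bits per word versus roughly 2.3 for the scrambled ones, mirroring the intuition that reordering makes real-time comprehension harder.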

Beyond explaining why language is not “maximally compressed,” the paper’s discussion connects the findings to machine learning. Futrell and Hahn argue that natural language is structured in a way that makes next-token prediction comparatively easier under cognitive constraints, a point they say is relevant to modern large language models.

Related articles

Illustration of a patient undergoing brain monitoring while listening to a podcast, with neural activity layers mirroring AI language model processing.

Study links step-by-step brain responses during speech to layered processing in large language models


A new study reports that as people listen to a spoken story, neural activity in key language regions unfolds over time in a way that mirrors the layer-by-layer computations inside large language models. The researchers, who analyzed electrocorticography recordings from epilepsy patients during a 30-minute podcast, also released an open dataset intended to help other scientists test competing theories of how meaning is built in the brain.

Neuroscientists at Princeton University report that the brain achieves flexible learning by reusing modular cognitive components across tasks. In experiments with rhesus macaques, researchers found that the prefrontal cortex assembles these reusable “cognitive Legos” to adapt behaviors quickly. The findings, published November 26 in Nature, underscore differences from current AI systems and could eventually inform treatments for disorders that impair flexible thinking.


Researchers from Purdue University and the Georgia Institute of Technology have proposed a new computer architecture for AI models inspired by the human brain. This approach aims to address the energy-intensive 'memory wall' problem in current systems. The study, published in Frontiers in Science, highlights potential for more efficient AI in everyday devices.

A Cornell University study reveals that AI tools like ChatGPT have increased researchers' paper output by up to 50%, particularly benefiting non-native English speakers. However, this surge in polished manuscripts is complicating peer review and funding decisions, as many lack substantial scientific value. The findings highlight a shift in global research dynamics and call for updated policies on AI use in academia.


The Japanese internet often looks cluttered to outsiders because of its dense, information-packed design, which has been shaped by cultural values and practical needs. The gap stood out at the 2025 World Expo in Osaka, where confusing digital interfaces hampered visitors. Experts note that in Japan, minimalism can signal underdevelopment or isolation.

Engineers at the University of Pennsylvania have discovered that bubbles in everyday foams constantly shift positions while maintaining the foam's overall shape, following mathematical principles akin to those in deep learning for AI. This challenges traditional views of foams as glass-like and suggests learning behaviors may underpin diverse systems from materials to cells. The findings, published in Proceedings of the National Academy of Sciences, could inform adaptive materials and biological structures.


Researchers have produced the most detailed maps yet of how human DNA folds and reorganizes in three dimensions and over time. This work, led by scientists at Northwestern University as part of the 4D Nucleome Project, highlights how genome architecture influences gene activity and disease risk. The findings, published in Nature, could accelerate the discovery of genetic mutations linked to illnesses like cancer.
