Illustration depicting linguists studying why human language resists compression like computer code, contrasting brain processing with digital efficiency.
AI-generated image

Study explores why human language isn’t compressed like computer code

Fact-checked

A new model from linguists Richard Futrell and Michael Hahn suggests that many hallmark features of human language—such as familiar words, predictable ordering and meaning built up step by step—reflect constraints on sequential information processing rather than a drive for maximum data compression. The work was published in Nature Human Behaviour.

Human language is remarkably rich and intricate. From an information-theory standpoint, the same ideas could, in principle, be transmitted in far more compact strings—similar to how computers represent information using binary digits.
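To make that gap concrete, a general-purpose compressor shows how much redundancy ordinary text carries. The sketch below is our own illustration (the sample phrase and the use of zlib are not from the study): a maximally compressed code would not shrink further, while repetitive natural-language text shrinks dramatically.

```python
import zlib

# A redundant natural-language string: the same phrase repeated,
# much as everyday speech reuses familiar words and patterns.
text = b"the five green cars " * 50

compressed = zlib.compress(text)
ratio = len(compressed) / len(text)
print(f"{len(text)} bytes -> {len(compressed)} bytes ({ratio:.0%})")
```

The large shrinkage indicates the original string sits far above its information-theoretic minimum, which is exactly the "inefficiency" the study seeks to explain.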

Michael Hahn, a linguist at Saarland University in Saarbrücken, Germany, and Richard Futrell of the University of California, Irvine, set out to explain why everyday speech does not resemble a tightly compressed digital code. In a paper published in Nature Human Behaviour in November 2025, the researchers present a model in which "natural-language-like" structure arises when communication is constrained by limits on sequential prediction: how much information must be carried forward from what has already been heard to anticipate what comes next.

In that framework, language benefits from patterns that are easy for people to process as a stream. A ScienceDaily summary of the work, citing materials from the University of Osaka, uses examples to illustrate the idea: an invented word such as “gol” for a hybrid concept (half cat and half dog) would be hard to understand because it does not map cleanly onto shared experience, and a scrambled blend like “gadcot” is similarly difficult to interpret. By contrast, “cat and dog” is immediately meaningful.

The researchers also point to word order as a signal that helps listeners reduce uncertainty in real time. The ScienceDaily release highlights the German noun phrase “Die fünf grünen Autos” (“the five green cars”) as an example of how meaning can be built incrementally as each word narrows the set of plausible interpretations. Reordering those words—for example, “Grünen fünf die Autos”—disrupts that predictability and makes comprehension harder.
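The incremental build-up of meaning can be sketched as a filter over candidate referents, with each word eliminating possibilities. The toy inventory below is our own invention for illustration, not data or code from the paper:

```python
# Toy sketch: each successive content word of "the five green cars"
# eliminates candidate referents, so uncertainty falls word by word.
inventory = [
    {"count": 5, "color": "green", "noun": "cars"},
    {"count": 3, "color": "green", "noun": "cars"},
    {"count": 5, "color": "red", "noun": "cars"},
    {"count": 5, "color": "green", "noun": "bikes"},
]

remaining = inventory
for feature, value in [("count", 5), ("color", "green"), ("noun", "cars")]:
    remaining = [c for c in remaining if c[feature] == value]
    print(f"after '{value}': {len(remaining)} candidate(s) left")
```

With the words in their natural order, every step prunes the candidate set; a scrambled order would force the listener to hold all possibilities open until the end.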

Beyond explaining why language is not “maximally compressed,” the paper’s discussion connects the findings to machine learning. Futrell and Hahn argue that natural language is structured in a way that makes next-token prediction comparatively easier under cognitive constraints, a point they say is relevant to modern large language models.
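A minimal way to see why natural ordering eases next-token prediction is to score word sequences with a tiny n-gram model. The bigram model and mini-corpus below are our own illustration under simplifying assumptions, not the authors' model:

```python
import math
from collections import Counter

# Train a tiny bigram model on a made-up mini-corpus.
corpus = "the five green cars the five red cars the three green cars".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(words):
    """Total next-word surprisal in bits, with add-one smoothing."""
    vocab = len(unigrams)
    total = 0.0
    for prev, nxt in zip(words, words[1:]):
        p = (bigrams[(prev, nxt)] + 1) / (unigrams[prev] + vocab)
        total -= math.log2(p)
    return total

natural = "the five green cars".split()
scrambled = "green five the cars".split()
print(f"natural:   {surprisal(natural):.2f} bits")
print(f"scrambled: {surprisal(scrambled):.2f} bits")
```

The scrambled order accrues more surprisal because its word-to-word transitions never occur in the training data, mirroring the claim that natural word order keeps next-token prediction cheap.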
