Foam bubbles move in ways that mirror learning in artificial intelligence

Engineers at the University of Pennsylvania have discovered that bubbles in everyday foams constantly shift positions while maintaining the foam's overall shape, following mathematical principles akin to those in deep learning for AI. This challenges traditional views of foams as glass-like and suggests learning behaviors may underpin diverse systems from materials to cells. The findings, published in Proceedings of the National Academy of Sciences, could inform adaptive materials and biological structures.

Foams, found in products like soap suds and mayonnaise, were long considered to mimic glass, with bubbles fixed in disordered positions. However, new computer simulations by University of Pennsylvania researchers reveal that bubbles in wet foams persistently wander through various arrangements without settling, even as the foam retains its form.

This dynamic behavior mirrors the process of deep learning in artificial intelligence systems. In AI training, parameters adjust iteratively via methods like gradient descent, avoiding overly precise fits that hinder generalization. Instead, systems explore broader regions of viable solutions. "Foams constantly reorganize themselves," noted John C. Crocker, professor of chemical and biomolecular engineering and co-senior author. "It's striking that foams and modern AI systems appear to follow the same mathematical principles."
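
To make the parallel concrete, the sketch below is a minimal, hypothetical Python illustration (not code from the study): noisy gradient descent on two one-dimensional energy landscapes, one with a sharp valley and one with a flat valley. The curvatures, learning rate, and noise level are illustrative choices only.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(curvature, steps=20_000, lr=0.01, noise=0.1):
        """Noisy gradient descent on the toy energy E(x) = 0.5 * curvature * x**2."""
        x, visited = 0.0, []
        for _ in range(steps):
            grad = curvature * x                          # dE/dx
            x -= lr * grad + noise * np.sqrt(lr) * rng.normal()
            visited.append(x)
        return np.std(visited[steps // 2:])               # late-time wandering

    # Sharp valley: the state settles and barely moves.
    print("sharp valley (curvature 50):  ", simulate(50.0))
    # Flat valley: under identical noise, the state keeps drifting among
    # near-equivalent positions, much like the rearranging bubbles above.
    print("flat valley  (curvature 0.05):", simulate(0.05))

In this toy setting, the sharp valley pins the state near one configuration, while the flat valley lets it wander indefinitely among many equally good ones.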

Traditional physics modeled foam bubbles as particles rolling to low-energy states, like rocks in a valley. Yet, data from nearly two decades ago showed discrepancies. "When we actually looked at the data, the behavior of foams didn't match what the theory predicted," Crocker explained. The team applied AI-inspired optimization insights, finding bubbles linger in flat energy landscapes with multiple equivalent configurations.
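
The same point can be stated in generic textbook form, using a standard Langevin-style caricature rather than the paper's own formalism (the symbols E, eta, T, and k here are generic and not taken from the study):

    % One noisy-gradient-descent step on an energy landscape E(x),
    % with step size \eta and effective temperature T:
    \[
      x_{t+1} \;=\; x_t \;-\; \eta\,\nabla E(x_t) \;+\; \sqrt{2\eta T}\,\xi_t,
      \qquad \xi_t \sim \mathcal{N}(0,1).
    \]
    % Near a minimum of curvature k, where E(x) \approx \tfrac{1}{2} k x^{2},
    % the long-time fluctuations settle at
    \[
      \langle x^{2} \rangle \;\approx\; \frac{T}{k},
    \]
    % so a flat landscape (small k) supports large, persistent rearrangement
    % among nearly equivalent configurations, while a deep, sharp valley
    % (large k) pins the system in place.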

Co-senior author Robert Riggleman, also in chemical and biomolecular engineering, highlighted a parallel: "The key insight was realizing that you don't actually want to push the system into the deepest possible valley." Keeping AI in such flatter areas enables better performance on new data, much like foam's ongoing motion.

The study reopens questions in foam research and extends to living systems, such as the cell cytoskeleton, which reorganizes while preserving structure. "Why the mathematics of deep learning accurately characterizes foams is a fascinating question," Crocker said. Supported by the National Science Foundation, the work involved co-authors Amruthesh Thirumalaiswamy and Clary Rodríguez-Cruz, with full details in the 2025 PNAS paper on viscous ripening foams.

Related news


Princeton study reveals brain’s reusable ‘cognitive Legos’ for flexible learning


Neuroscientists at Princeton University report that the brain achieves flexible learning by reusing modular cognitive components across tasks. In experiments with rhesus macaques, researchers found that the prefrontal cortex assembles these reusable “cognitive Legos” to adapt behaviors quickly. The findings, published November 26 in Nature, underscore differences from current AI systems and could eventually inform treatments for disorders that impair flexible thinking.

Researchers at Duke University have developed an artificial intelligence framework that reveals straightforward rules underlying highly complex systems in nature and technology. Published on December 17 in npj Complexity, the tool analyzes time-series data to produce compact equations that capture essential behaviors. This approach could bridge gaps in scientific understanding where traditional methods fall short.


Researchers from Purdue University and the Georgia Institute of Technology have proposed a new computer architecture for AI models inspired by the human brain. This approach aims to address the energy-intensive 'memory wall' problem in current systems. The study, published in Frontiers in Science, highlights potential for more efficient AI in everyday devices.

Researchers at Karolinska Institutet have identified how alpha oscillations in the brain help distinguish the body from the surroundings. Faster alpha rhythms enable precise integration of visual and tactile signals, strengthening the feeling of bodily self. The findings, published in Nature Communications, could inform treatments for conditions like schizophrenia and improve prosthetic designs.


Researchers behind a new review in Frontiers in Science argue that rapid progress in artificial intelligence and brain technologies is outpacing scientific understanding of consciousness, raising the risk of ethical and legal mistakes. They say developing evidence-based tests for detecting awareness—whether in patients, animals or emerging artificial and lab-grown systems—could reshape medicine, welfare debates and technology governance.

A new research paper argues that AI agents are mathematically destined to fail, challenging the hype from big tech companies. While the industry remains optimistic, the study suggests full automation by generative AI may never happen. Published in early 2026, it casts doubt on promises for transformative AI in daily life.


Researchers have experimentally observed a hidden quantum geometry in materials that steers electrons similarly to how gravity bends light. The discovery, made at the interface of two oxide materials, could advance quantum electronics and superconductivity. Published in Science, the findings highlight a long-theorized effect now confirmed in reality.