Study suggests brain-inspired algorithms to cut AI energy use

Researchers from Purdue University and the Georgia Institute of Technology have proposed a new, brain-inspired computer architecture for AI models. The approach aims to address the 'memory wall' bottleneck that makes current systems so energy-intensive. The study, published in Frontiers in Science, highlights the potential for more efficient AI in everyday devices.

The rapid growth of AI has exacerbated challenges in computer design, particularly the separation of processing and memory in traditional systems. A study published on Monday in the journal Frontiers in Science outlines a brain-inspired solution to this issue. Led by Kaushik Roy, a computer engineering professor at Purdue University, the research argues for rethinking AI architecture to make it more energy-efficient.

Current computers follow the von Neumann architecture, developed in 1945, which keeps memory and processing separate. This design creates a bottleneck known as the 'memory wall,' a term coined by University of Virginia researchers in the 1990s. As AI models, especially language models, have expanded 5,000-fold in size over the past four years, the disparity between memory speed and processing power has become more pressing. IBM recently emphasized this problem in a report.
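To make the bottleneck concrete, the sketch below works through a standard back-of-envelope calculation. It is not taken from the study, and the model size and memory bandwidth are illustrative assumptions; the point is that simply streaming a large model's weights out of memory can cap throughput, and dominate energy use, no matter how fast the processor is.

```python
# Illustrative back-of-envelope for the "memory wall" in large-model inference.
# All figures are assumptions chosen for this example, not numbers from the study.

params = 70e9          # assumed model size: 70 billion parameters
bytes_per_param = 2    # 16-bit weights
bandwidth = 3e12       # assumed memory bandwidth: 3 TB/s

weight_bytes = params * bytes_per_param        # bytes that must move for one pass
time_per_token = weight_bytes / bandwidth      # seconds spent just moving weights
max_tokens_per_second = 1 / time_per_token

print(f"Weights streamed per token: {weight_bytes / 1e9:.0f} GB")
print(f"Memory-bound ceiling: about {max_tokens_per_second:.0f} tokens per second")
```

Raising processor speed does not lift this ceiling; only reducing data movement does, which is the target of the brain-inspired approach described below.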

The proposed solution draws from how the brain operates, using spiking neural networks (SNNs). These algorithms, which communicate through brief, discrete spikes rather than continuous values and therefore compute only when events occur, were once criticized for being slow and inaccurate but have improved significantly in recent years. The researchers also advocate 'compute-in-memory' (CIM), which integrates computing directly into the memory system. As stated in the paper's abstract, "CIM offers a promising solution to the memory wall problem by integrating computing capabilities directly into the memory system."
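As a rough illustration of why spiking networks can be frugal, below is a minimal leaky integrate-and-fire neuron, a generic textbook spiking model rather than the specific design proposed in the study. The neuron accumulates input into a membrane potential and emits a binary spike only when a threshold is crossed; most timesteps produce no spike, so downstream computation is triggered only by sparse events.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic textbook spiking model,
# not the specific architecture proposed in the Frontiers in Science study.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a binary spike train for a sequence of input currents."""
    v = 0.0                         # membrane potential
    spikes = []
    for current in inputs:
        v = leak * v + current      # integrate input, with leak toward rest
        if v >= threshold:          # fire only when the threshold is crossed
            spikes.append(1)
            v = 0.0                 # reset after a spike
        else:
            spikes.append(0)        # otherwise stay silent: no downstream work
    return spikes

# Mostly weak input produces mostly zeros, i.e. sparse, event-driven activity.
print(lif_neuron([0.2, 0.1, 0.9, 0.05, 0.8, 0.0, 0.3]))
```

Hardware that computes only on spike events, and that keeps the associated weights inside the memory arrays as CIM proposes, can in principle avoid much of the constant data shuttling required by dense, clocked arithmetic.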

Roy noted, "Language processing models have grown 5,000-fold in size over the last four years. This alarmingly rapid expansion makes it crucial that AI is as efficient as possible. That means fundamentally rethinking how computers are designed."

Co-author Tanvi Sharma, a Purdue researcher, added, "AI is one of the most transformative technologies of the 21st century. However, to move it out of data centers and into the real world, we need to dramatically reduce its energy use." She explained that this could enable AI in compact devices like medical tools, vehicles, and drones, with longer battery life and less data transfer.

By minimizing energy waste, the approach could make AI more accessible beyond large data centers, supporting broader applications in resource-constrained environments.
