Study suggests brain-inspired algorithms to cut AI energy use

Researchers from Purdue University and the Georgia Institute of Technology have proposed a new computer architecture for AI models inspired by the human brain. This approach aims to address the energy-intensive 'memory wall' problem in current systems. The study, published in Frontiers in Science, highlights potential for more efficient AI in everyday devices.

The rapid growth of AI has exacerbated challenges in computer design, particularly the separation of processing and memory in traditional systems. A study published on Monday in the journal Frontiers in Science outlines a brain-inspired solution to this issue. Led by Kaushik Roy, a computer engineering professor at Purdue University, the research argues for rethinking AI architecture to make it more energy-efficient.

Current computers follow the von Neumann architecture, developed in 1945, which keeps memory and processing separate. This design creates a bottleneck known as the 'memory wall,' a term coined by University of Virginia researchers in the 1990s. As AI models, especially language processors, have expanded 5,000-fold in size over the past four years, the disparity between memory speed and processing power has become more pressing. IBM recently emphasized this problem in a report.

The proposed solution draws from how the brain operates, using spiking neural networks (SNNs), algorithms that communicate through sparse electrical pulses, or 'spikes,' much as biological neurons do. Once criticized as slow and inaccurate, SNNs have improved significantly in recent years. The researchers also advocate 'compute-in-memory' (CIM), which integrates computing directly into the memory system. As stated in the paper's abstract, "CIM offers a promising solution to the memory wall problem by integrating computing capabilities directly into the memory system."
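In a typical SNN, each unit behaves as a leaky integrate-and-fire neuron: it accumulates input, its potential decays over time, and it emits a spike only when a threshold is crossed, which is what keeps computation sparse and energy-efficient. The sketch below is a generic illustration of that idea, not code from the study; the threshold and leak values are arbitrary.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Simulate a single leaky integrate-and-fire (LIF) neuron.

    Each step, the membrane potential decays by the leak factor and
    integrates the incoming current. When it crosses the threshold,
    the neuron emits a binary spike and resets. Parameter values are
    illustrative, not taken from the paper.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # stay silent: no spike, no downstream work
    return spikes

# A steady sub-threshold input produces only occasional spikes:
print(lif_neuron([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

The sparsity is the point: downstream neurons only do work when a spike arrives, which is the behavior CIM hardware aims to exploit by computing where the weights are stored.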

Roy noted, "Language processing models have grown 5,000-fold in size over the last four years. This alarmingly rapid expansion makes it crucial that AI is as efficient as possible. That means fundamentally rethinking how computers are designed."

Co-author Tanvi Sharma, a Purdue researcher, added, "AI is one of the most transformative technologies of the 21st century. However, to move it out of data centers and into the real world, we need to dramatically reduce its energy use." She explained that this could enable AI in compact devices like medical tools, vehicles, and drones, with longer battery life and less data transfer.

By minimizing energy waste, the approach could make AI more accessible beyond large data centers, supporting broader applications in resource-constrained environments.

Related articles


Princeton study reveals brain’s reusable ‘cognitive Legos’ for flexible learning


Neuroscientists at Princeton University report that the brain achieves flexible learning by reusing modular cognitive components across tasks. In experiments with rhesus macaques, researchers found that the prefrontal cortex assembles these reusable “cognitive Legos” to adapt behaviors quickly. The findings, published November 26 in Nature, underscore differences from current AI systems and could eventually inform treatments for disorders that impair flexible thinking.

Researchers at a university in South Korea have developed a dual-output artificial synapse to improve the energy efficiency of multitasking AI systems, the university announced. The device emits electrical and optical signals simultaneously, enabling parallel processing. Tests showed computing speeds up to 47% higher and energy consumption up to 32 times lower than conventional GPU hardware.


Australia-based start-up Cortical Labs has announced plans to construct two data centres using neuron-filled chips. The facilities in Melbourne and Singapore will house its CL1 biological computers, which have demonstrated the ability to play video games like Doom. The initiative aims to scale up cloud-based brain-computing services while reducing energy consumption.

A global shortage of RAM, driven by AI data center demands, has caused PC memory prices to surge by 40 to 70 percent in 2025, leading to higher costs and lower specs for computers in 2026. This development is dampening the hype around so-called AI PCs, as manufacturers shift focus amid waning consumer interest. Analysts predict volatility in PC sales this year, with shortages persisting beyond 2026.


A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

At the India AI Impact Summit, Prime Minister Narendra Modi described artificial intelligence as a turning point in human history that could reset the direction of civilisation. He expressed concern over the form of AI to be handed to future generations and emphasised making it human-centric and responsible. Experts have warned about risks including data privacy, deepfakes, and autonomous weapons.


Chinese researchers have introduced photonic AI chips that promise significant speed advantages in specific generative tasks. These chips use photons instead of electrons, enabling greater parallelism through optical interference. The development could mark a step forward in AI hardware, though claims are limited to narrowly defined applications.
