Study suggests brain-inspired algorithms to cut AI energy use

Researchers from Purdue University and the Georgia Institute of Technology have proposed a new computer architecture for AI models inspired by the human brain. This approach aims to address the energy-intensive 'memory wall' problem in current systems. The study, published in Frontiers in Science, highlights potential for more efficient AI in everyday devices.

The rapid growth of AI has exacerbated challenges in computer design, particularly the separation of processing and memory in traditional systems. A study published on Monday in the journal Frontiers in Science outlines a brain-inspired solution to this issue. Led by Kaushik Roy, a computer engineering professor at Purdue University, the research argues for rethinking AI architecture to make it more energy-efficient.

Current computers follow the von Neumann architecture, developed in 1945, which keeps memory and processing separate. This design creates a bottleneck known as the 'memory wall,' a term coined by University of Virginia researchers in the 1990s. As AI models, especially large language models, have grown 5,000-fold in size over the past four years, the disparity between memory speed and processing power has become more pressing. IBM recently emphasized this problem in a report.

The proposed solution draws from how the brain operates, using spiking neural networks (SNNs). These algorithms, once criticized for being slow and inaccurate, have improved significantly in recent years. The researchers advocate for 'compute-in-memory' (CIM), which integrates computing directly into the memory system. As stated in the paper's abstract, "CIM offers a promising solution to the memory wall problem by integrating computing capabilities directly into the memory system."
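The paper itself does not include code, but the event-driven idea behind SNNs can be sketched with a minimal leaky integrate-and-fire neuron (parameters and names here are illustrative, not drawn from the study): the neuron accumulates input, leaks charge over time, and emits a binary spike only when its membrane potential crosses a threshold, so downstream work happens only when a spike occurs.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a
# spiking neural network. Illustrative sketch only; the leak and
# threshold values are not taken from the paper.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Return a binary spike train: 1 whenever the membrane potential
    crosses the threshold, 0 otherwise."""
    v = 0.0  # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x          # leak previous charge, integrate new input
        if v >= threshold:
            spikes.append(1)      # fire a spike...
            v = 0.0               # ...and reset the potential
        else:
            spikes.append(0)
    return spikes

# Sparse input: most timesteps carry little signal, so the neuron rarely fires.
print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 1.2, 0.0]))
# → [0, 0, 1, 0, 0, 1, 0]
```

Because activity is binary and sparse, later layers only compute when a spike arrives; the compute-in-memory hardware the authors advocate is aimed at exactly this kind of sparse, event-driven workload.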

Roy noted, "Language processing models have grown 5,000-fold in size over the last four years. This alarmingly rapid expansion makes it crucial that AI is as efficient as possible. That means fundamentally rethinking how computers are designed."

Co-author Tanvi Sharma, a Purdue researcher, added, "AI is one of the most transformative technologies of the 21st century. However, to move it out of data centers and into the real world, we need to dramatically reduce its energy use." She explained that this could enable AI in compact devices like medical tools, vehicles, and drones, with longer battery life and less data transfer.

By minimizing energy waste, the approach could make AI more accessible beyond large data centers, supporting broader applications in resource-constrained environments.

Related articles

Princeton study reveals brain’s reusable ‘cognitive Legos’ for flexible learning

Neuroscientists at Princeton University report that the brain achieves flexible learning by reusing modular cognitive components across tasks. In experiments with rhesus macaques, researchers found that the prefrontal cortex assembles these reusable “cognitive Legos” to adapt behaviors quickly. The findings, published November 26 in Nature, underscore differences from current AI systems and could eventually inform treatments for disorders that impair flexible thinking.

A research team at Korea University has developed a dual-output artificial synapse to improve the energy efficiency of multitasking AI systems, the university announced. The device emits electrical and optical signals simultaneously, strengthening its parallel-processing capability. The researchers report computation speeds up to 47 percent faster and energy consumption 32 times lower than conventional GPU-based hardware.

Australia-based start-up Cortical Labs has announced plans to construct two data centres using neuron-filled chips. The facilities in Melbourne and Singapore will house its CL1 biological computers, which have demonstrated the ability to play video games like Doom. The initiative aims to scale up cloud-based brain-computing services while reducing energy consumption.

A global shortage of RAM, driven by AI data center demands, has caused PC memory prices to surge by 40 to 70 percent in 2025, leading to higher costs and lower specs for computers in 2026. This development is dampening the hype around so-called AI PCs, as manufacturers shift focus amid waning consumer interest. Analysts predict volatility in PC sales this year, with shortages persisting beyond 2026.

A CNET commentary argues that describing AI as having human-like qualities such as souls or confessions misleads the public and erodes trust in the technology. It highlights how companies like OpenAI and Anthropic use such language, which obscures real issues like bias and safety. The piece calls for more precise terminology to foster accurate understanding.

At the India AI Impact Summit, Prime Minister Narendra Modi described artificial intelligence as a turning point in human history that could reset the direction of civilisation. He expressed concern over the form of AI to be handed to future generations and emphasised making it human-centric and responsible. Experts have warned about risks including data privacy, deepfakes, and autonomous weapons.

Chinese researchers have introduced photonic AI chips that promise significant speed advantages in specific generative tasks. These chips use photons instead of electrons, enabling greater parallelism through optical interference. The development could mark a step forward in AI hardware, though claims are limited to narrowly defined applications.
