Analogue chips could speed up AI training 1000 times

Researchers at Peking University have developed analogue computer chips that solve key matrix equations for AI training with high accuracy and speed. These chips promise up to 1000 times faster processing and 100 times less energy use compared to digital GPUs. The innovation addresses the rising energy demands of AI in data centres.

Analogue computers, which process data using continuous quantities like electrical resistance rather than binary digits, have long offered advantages in speed and efficiency over digital systems but often at the expense of accuracy. Now, Zhong Sun at Peking University in China and his team have created a pair of analogue chips to tackle this issue, focusing on matrix equations essential for AI model training, telecommunications, and scientific simulations.

The setup involves two chips working in tandem. The first provides a rapid low-precision solution to matrix calculations with an error rate of about 1 per cent. The second chip then applies an iterative refinement algorithm, analyzing errors and improving precision. After three cycles, the error drops to 0.0000001 per cent, matching the accuracy of standard digital computations, according to Sun.
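The two-chip scheme described above is standard iterative refinement: solve roughly, measure the residual, and solve again on the residual to correct the estimate. Below is a minimal numerical sketch in Python/NumPy, where a noisy call to `np.linalg.solve` stands in for the analogue chip's roughly 1-per-cent-error solver (the noise model and matrix are illustrative assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# A well-conditioned 16x16 system, the size the current chips handle.
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

def low_precision_solve(A, rhs, noise=0.01):
    """Stand-in for the analogue chip: returns the solution
    perturbed by ~1 per cent elementwise error."""
    x = np.linalg.solve(A, rhs)
    return x * (1 + noise * rng.standard_normal(x.shape))

x = low_precision_solve(A, b)            # fast, rough first pass (~1% error)
for _ in range(3):                       # three refinement cycles, as in the article
    r = b - A @ x                        # residual of the current estimate
    x = x + low_precision_solve(A, r)    # correct it with another rough solve

x_exact = np.linalg.solve(A, b)
rel_err = np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact)
print(rel_err)  # error shrinks by roughly the noise factor each cycle
```

Because each cycle multiplies the remaining error by roughly the solver's noise level, three cycles take a 1 per cent error down by several more orders of magnitude, which is why the refinement chip can recover digital-grade accuracy from an imprecise analogue core.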

Currently, the chips handle 16 by 16 matrices, involving 256 variables, suitable for smaller problems. Scaling to the massive matrices in modern AI models would require circuits up to a million by a million, Sun notes. A key benefit is that solving larger matrices does not slow down analogue chips, unlike digital ones, whose solving time grows steeply as matrices get larger. For a 32 by 32 matrix, the team's chip would outperform an Nvidia H100 GPU in throughput—the amount of data processed per second.

Theoretically, further scaling could achieve 1000 times the throughput of digital chips like GPUs while using 100 times less energy. However, Sun cautions that real-world applications may not fully leverage this, as the chips are limited to matrix computations. "It’s only a comparison of speed, and for real applications, the problem may be different," Sun says. "Our chip can only do matrix computations. If matrix computation occupies most of the computing task, it represents a very significant acceleration for the problem, but if not, it will be a limited speed-up."

Sun anticipates hybrid chips, integrating analogue circuits into GPUs for specific tasks, though this remains years away. James Millen at King’s College London highlights the potential: "Analogue computers are tailored to specific tasks, and in this way can be incredibly fast and efficient. This work uses an analogue computing chip to speed up a process called matrix inversion, which is a key process in training certain AI models. Doing this more efficiently could help reduce the huge energy demands of our ever-growing reliance on AI."

The research appears in Nature Electronics (DOI: 10.1038/s41928-025-01477-0).
