Moore Threads unveils Huagang GPU architecture at developer conference

Chinese GPU developer Moore Threads has introduced its Huagang architecture, promising significant advances in gaming and AI performance. Set for a 2026 launch, the design targets self-reliance in semiconductors amid global export curbs. While details remain sparse, the company highlighted ambitious benchmarks for upcoming products.

At the recent MUSA Developer Conference, Moore Threads presented its next-generation Huagang architecture, dubbed "Flowerpot" in some translations. This platform aims to power both gaming and artificial intelligence applications, with a full rollout planned for 2026. The announcement focused on performance projections rather than in-depth technical breakdowns, underscoring China's efforts to build domestic GPU capabilities in the face of international restrictions.

Central to the reveal is the Lushan gaming GPU, which will succeed the existing MTT S80 and S90 models. Moore Threads asserts that Lushan will deliver a 15-fold increase in AAA game rendering speed and a 50-fold improvement in ray tracing. It incorporates a second-generation hardware ray tracing engine and full DirectX 12 Ultimate support for broader software compatibility. Memory capacity is expected to reach 64 GB, quadruple the 16 GB of GDDR6 in current models. Additional touted gains include 64 times faster AI computation, 16 times better geometry processing, four times higher texture fill rates, and eight times quicker atomic memory operations. The architecture introduces UniTE, a unified rendering system with an integrated AI processing unit.

Complementing this, the Huashan AI GPU features a dual-chiplet configuration equipped with nine HBM modules. The firm claims its performance rivals Nvidia's Hopper and Blackwell series, with memory bandwidth surpassing that of the Nvidia B200. Huashan supports a range of precision formats from FP4 to FP64, including proprietary MTFP4, MTFP6, and MTFP8 options. Scalability extends to clusters exceeding 100,000 units through MTLink 4.0, offering 1,314 GB/s interconnect speed. Compared to current offerings, it promises a 50 percent rise in compute density and tenfold efficiency improvements.

Although no gaming demonstrations were shown, a benchmark on the forthcoming MTT S5000 GPU—unrelated to Huashan—ran the DeepSeek V3 model at 1,000 tokens per second in decoding and 4,000 in prefill phases, edging out Nvidia's Hopper performance. These developments reflect Beijing's drive toward technological independence, though the claims await validation as products near market.
