Huawei unveils new GPU memory, AI supercomputer
Huawei turns to innovative packaging and open ecosystems to overcome chip bans.

To combat technology restrictions, Huawei is innovating with hardware techniques, full-stack hardware designs, and open source.
I wasn't at Huawei Connect 2025 held in Shanghai this year, but reading through the published announcements and keynotes, I can't help but be impressed with the breadth and scope of the strategy.
Six years of escalating restrictions
In 2019, Huawei was targeted by the US and blocked from various software platforms, component suppliers, and even the ability to contract non-US companies such as TSMC to make its products. That was just the beginning.
This has since evolved into a multi-year, escalating campaign that prevents Huawei and a growing list of Chinese firms from buying advanced chip-making hardware made in the Netherlands.
The HBM barrier
While Huawei has successfully pivoted by exiting certain markets, replacing components, and turning to Chinese chip makers, it continues to lag in AI systems. The two most critical gaps are high-performance AI chips and high-bandwidth memory (HBM).
Huawei can only access chip-making technology that is two to three generations behind Nvidia's 4nm Blackwell GPUs. That's a significant handicap, but it isn't the worst of it.
HBM, a key component of AI accelerators, is the real problem. Chinese firms cannot make HBM today, and estimates point to a lag of at least four years.
The announcements that matter
This is where Huawei pulled off some clever tricks. On the first day, it unveiled HiBL 1.0, designed to offer HBM-like performance. It offers decent performance but runs hotter due to aggressive packaging. How did Huawei solve the heat problem? By integrating cold plates for liquid cooling directly into the design, a simple solution that works.
Huawei also announced UnifiedBus 2.0, a competitor to Nvidia's NVLink that promises to be cheaper, scale higher, and deliver lower latency. And Huawei released the technical specifications for free. Finally, Huawei showed off its new SuperPoD system that can be combined to build a "SuperCluster" of up to half a million AI accelerators. And yes, Huawei is opening up the entire SuperPoD hardware for others to build.
The cynic might argue that Huawei can only win on scale because its GPUs don't perform as fast as Nvidia's. That's partially true, but it misses the bigger picture. Huawei isn't trying to beat Nvidia at its own game; it is leveraging its strengths. Open specifications mean more competition, faster iteration, and potentially lower costs across the entire stack.
Will this work? Time will tell.