The HBM dispute escalates

2025-10-22

The generative AI revolution reveals a harsh reality: raw computing power means nothing if the beast can't be fed. In massive AI data centers with thousands of GPUs, the real bottleneck isn't processing speed, but memory bandwidth.

While engineers have been obsessed with FLOPS for decades, the industry now faces a new iron rule: If you can't move data fast enough, your trillion-dollar AI infrastructure will become an expensive paperweight.
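
A back-of-the-envelope roofline calculation makes the point concrete. The Python sketch below uses round, illustrative numbers, loosely in the range of a current flagship accelerator rather than any vendor's datasheet, to show how far below the compute ceiling a bandwidth-starved workload sits.

```python
# Roofline sketch: when does memory bandwidth, not compute, set the speed limit?
# All figures are illustrative assumptions, not vendor specifications.

PEAK_FLOPS = 2.0e15   # ~2 PFLOP/s of dense low-precision compute
MEM_BW = 8.0e12       # ~8 TB/s of aggregate HBM bandwidth

# Ridge point of the roofline model: the arithmetic intensity (FLOPs per byte
# moved) below which the chip stalls on memory rather than on math.
ridge = PEAK_FLOPS / MEM_BW
print(f"ridge point: {ridge:.0f} FLOPs/byte")                 # -> 250 FLOPs/byte

# A matrix-vector product (the core of LLM token generation) performs about
# 2 FLOPs per weight element of ~2 bytes in FP16, i.e. ~1 FLOP/byte.
gemv_intensity = 1.0
print(f"compute utilization: {gemv_intensity / ridge:.1%}")   # -> 0.4%
```

At well under 1% utilization, the arithmetic units spend almost all their time waiting, which is why every added terabyte per second of memory bandwidth translates almost directly into delivered performance.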

HBM4: SK Hynix takes the lead

The imminent arrival of High Bandwidth Memory 4 (HBM4), a 3D stacked memory technology promising unprecedented per-chip bandwidth, will help determine which companies dominate and which disappear in the AI space. This isn't just another incremental upgrade: it will shape whether the next breakthrough AI model takes weeks or months to train, and whether inference is profitable or burns cash on every query.

Earlier this year, JEDEC finalized the HBM4 memory standard for high-performance AI. The new version raises both per-pin speed and interface width over the previous HBM3 generation, targeting 8 Gbps per pin on a 2,048-bit interface, for a bandwidth of 2 TB/s per memory stack. With a bus twice as wide as the 1,024-bit interface used through HBM3E, that is more than double the bandwidth of standard HBM3 stacks, a significant advance for AI accelerators.

Another improvement comes in the form of increased capacity. HBM4 supports stacking up to 16 layers (16 memory chips bonded together), with densities of 24 Gb or 32 Gb per chip, for a maximum of 64 GB per stack. In other words, a single HBM4 stack can hold as much data as the entire memory of a current high-end GPU.
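
Both headline figures fall directly out of the published numbers. Here is a minimal sketch of the arithmetic, using the JEDEC baseline above plus the faster pin speeds that come up later in this article:

```python
# Per-stack bandwidth and capacity implied by the HBM4 figures quoted above.

def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 2048) -> float:
    """Bandwidth in GB/s: pin rate (Gbit/s) x bus width (bits) / 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# JEDEC baseline, plus the faster bins SK Hynix and Micron discuss below.
for pins in (8.0, 10.0, 11.0):
    print(f"{pins:>4} Gbps/pin -> {stack_bandwidth_gbs(pins) / 1000:.2f} TB/s per stack")
# ->  8.0 Gbps/pin -> 2.05 TB/s per stack
# -> 10.0 Gbps/pin -> 2.56 TB/s per stack
# -> 11.0 Gbps/pin -> 2.82 TB/s per stack

# Capacity: 16 layers of 32 Gbit DRAM dice, 8 bits per byte.
layers, gbit_per_die = 16, 32
print(f"{layers * gbit_per_die / 8:.0f} GB per stack")   # -> 64 GB
```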


Despite its higher speed, HBM4 was also designed with power in mind: the standard permits lower I/O and core voltages, improving energy efficiency. These improvements are tailored to the demands of generative AI. Training large language models or running large-scale recommendation systems means constantly moving terabytes of data between GPUs, and faster, larger memory reduces this bottleneck, letting each GPU process data more quickly.
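
To see what this means for inference economics, consider token generation, where each output token requires streaming essentially all model weights from memory once. The sketch below gives a rough lower bound under loudly stated assumptions: a hypothetical 70B-parameter model in FP16, eight stacks per GPU, weights only, with KV-cache and communication traffic ignored.

```python
# Rough lower bound on per-token latency for memory-bound LLM decoding.
# Hypothetical model and stack counts; KV-cache and interconnect traffic ignored.

params = 70e9                # 70B-parameter model (illustrative)
bytes_per_param = 2          # FP16 weights
weight_bytes = params * bytes_per_param

# Assume 8 stacks per GPU: ~1.2 TB/s each for HBM3E, ~2.0 TB/s for HBM4.
for label, bw_tb_s in (("8x HBM3E", 1.2 * 8), ("8x HBM4 ", 2.0 * 8)):
    seconds = weight_bytes / (bw_tb_s * 1e12)
    print(f"{label}: >= {seconds * 1e3:.1f} ms/token, <= {1 / seconds:.0f} tokens/s")
# -> 8x HBM3E: ~14.6 ms/token, ~69 tokens/s
# -> 8x HBM4 : ~8.8 ms/token, ~114 tokens/s
```

The bandwidth step from HBM3E to HBM4 flows straight through to single-stream token throughput, and therefore to the cost of serving each query.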


However, developing and manufacturing HBM4 presents significant challenges. Currently, only three memory suppliers (SK Hynix, Micron, and Samsung) have the DRAM and 3D stacking expertise required to mass-produce it. Whether, and when, each of them reaches mass production will directly shape the AI hardware roadmaps for future GPUs and accelerators from companies like NVIDIA, AMD, and Broadcom.

SK Hynix is the undisputed leader in the HBM4 field, and it holds multiple firsts in HBM: it supplied the first-generation HBM for AMD GPUs in 2015 and has stayed ahead with major customers through HBM2, HBM2E, and HBM3. According to Counterpoint Research, SK Hynix's market share reached 62% in the second quarter of 2025, far exceeding its competitors. This advantage stems from its close alliance with NVIDIA.

SK Hynix began shipping HBM4 samples even before the official JEDEC specification was finalized: it delivered the world's first 12-layer HBM4 samples in March 2025, demonstrating the readiness of its stacking technology. The company has since announced that HBM4 development is complete and it is ready for mass production. Joohwan Cho, head of HBM development at SK Hynix, stated, "By promptly delivering products that meet customer needs for performance, power efficiency, and reliability, the company will meet time-to-market and maintain its competitive position."


SK Hynix confirmed that its HBM4 had met all specification requirements by September 2025. Its parts operate at 10 Gbps per pin, 25% faster than the 8 Gbps JEDEC baseline, a speed that meets Nvidia's requirements for its next-generation Rubin GPUs. SK Hynix has hinted that its design may exceed the JEDEC specification precisely to give Nvidia the performance headroom it needs.

SK Hynix is using its mature 1b DRAM process (fifth-generation 10nm-class node) to manufacture the HBM4 DRAM dice. The node sits a step behind the cutting edge, but it offers lower defect density and higher yield, which is crucial when stacking a dozen or more dice. SK Hynix has not publicly disclosed the process for the base logic die beneath the DRAM stack, though there is speculation that it may use TSMC's 12nm or 5nm node.
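
The yield argument is easy to quantify with a deliberately simplified model: if every die in a stack must be good and failures are independent, per-die yield compounds with each added layer. (Real production lines mitigate this with known-good-die screening before bonding, which this sketch ignores.)

```python
# Simplified compound-yield model for stacked memory.
# Ignores known-good-die screening and assembly losses, so real figures differ.

def stack_yield(die_yield: float, layers: int) -> float:
    """Probability that every die in a stack is good, assuming independence."""
    return die_yield ** layers

# 0.65 echoes the 1c trial-run figure reported later in this article.
for die_yield in (0.95, 0.80, 0.65):
    print(f"die yield {die_yield:.0%}: "
          f"12-high -> {stack_yield(die_yield, 12):.1%}, "
          f"16-high -> {stack_yield(die_yield, 16):.1%}")
# -> die yield 95%: 12-high -> 54.0%, 16-high -> 44.0%
# -> die yield 80%: 12-high -> 6.9%, 16-high -> 2.8%
# -> die yield 65%: 12-high -> 0.6%, 16-high -> 0.1%
```

The steep drop-off is why a mature, low-defect node beats a cutting-edge one for tall stacks.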

The company's philosophy appears to be "reliability first, performance later," which befits the conservative, steady style of the HBM market leader. By the end of 2025, SK Hynix should be ready to ramp up HBM4 production as soon as customers demand it. It has not announced a specific shipping date, but all indications are that volume shipments will begin in early 2026, following final certification.

Nvidia's flagship GPUs are the obvious first destination. Industry reports indicate that SK Hynix's HBM4 will be integrated first into the Rubin GPU platform, and given the close relationship between the two companies, SK Hynix is likely to supply the majority of the initial memory for Rubin in 2026. That puts it in a prime position to be the first to ship HBM4 in volume.

SK Hynix's market leadership has also translated into significant financial gains this year: in the second quarter of 2025, the company reported that 77% of its sales came from HBM and related AI memory. Yet despite this dominance, the competition for HBM4 supply is far from settled, and rivals are scrambling to catch up.

Micron and Samsung are coming on strong

SK Hynix may be far ahead, but Samsung and Micron are chasing hard.

Micron, a latecomer to the HBM market, overtook Samsung over the past year, reaching a 21% market share against Samsung's 17%. That is a significant development considering that Micron had virtually no HBM business just a few years ago; the catalyst has been surging demand for generative AI.

Micron's success rests primarily on HBM3E. The company has secured supply agreements with six HBM customers spanning GPUs and accelerators, and it has become a supplier for Nvidia's AI GPUs. Nvidia has historically dual-sourced memory for redundancy, and Micron and SK Hynix have split much of that business between them.

Micron's HBM business is expanding rapidly. In its September 2025 quarterly report, the company said quarterly HBM revenue was approaching $2 billion, meaning HBM has grown from a niche product into a double-digit percentage of total revenue in a very short time. Micron even stated that its full-year 2025 HBM output is sold out, with orders for 2026 also largely booked.

Riding this momentum, Micron began shipping HBM4 samples in June 2025: 36 GB, 12-layer stacks delivered to major customers, reportedly including Nvidia. The company has continued to tune the chip, and in the fourth quarter of 2025 it announced that its HBM4 samples run at speeds exceeding 11 Gbps per pin, for throughput above 2.8 TB/s per stack.

Micron's HBM4 is likely to enter mass production in 2026. The company has already booked billions of dollars in HBM3E orders for 2026, and major buyers, including cloud computing giants and GPU vendors, are counting on Micron in their 2026 supply chains. With Nvidia expected to source Rubin memory from both SK Hynix and Micron, Micron will fill the gap if SK Hynix cannot meet all of Nvidia's needs, or simply give Nvidia the flexibility of a second supplier.

Samsung, the other South Korean giant, has also been busy with its HBM business, because it finds itself in an unfamiliar position in the HBM4 race: playing catch-up. Despite its formidable manufacturing capabilities, Samsung lagged in the early stages of HBM.


Samsung's difficulties became most apparent with HBM3E. While SK Hynix and Micron mass-produced 8- and 12-layer HBM3E for customers, Samsung's 12-layer HBM3E struggled to pass certification; it reportedly took Samsung 18 months and multiple attempts to meet Nvidia's quality and performance standards. In the third quarter of 2025, Samsung finally won Nvidia's validation, with its fifth-generation 12-layer HBM3E passing all tests. Until then, Samsung's HBM3E had appeared only in AMD's MI300-series accelerators. Following the certification, Nvidia has reportedly agreed to purchase 30,000 to 50,000 units of Samsung's 12-layer HBM3E for use in liquid-cooled AI servers, and Samsung HBM3E also began shipping for AMD accelerators in mid-2025.

A key cause of this delay was Samsung's attempt to apply its cutting-edge 1c DRAM process (sixth-generation 10nm-class node) to its 12-layer HBM3E and the upcoming HBM4, which ran into yield problems. As of July 2025, the yield of 1c trial runs was only around 65%, a serious obstacle to mass production. Samsung was forced to recalibrate: modifying the DRAM design, improving the substrate, and enhancing thermal management.

Samsung plans to begin mass production of HBM4 in the first half of 2026. In the third quarter of 2025, it started shipping HBM4 samples to Nvidia in volume for early certification. The company also holds a strategic trump card: a deepening partnership with AMD (and, by extension, OpenAI). In October 2025, news broke that AMD had signed a major agreement to supply OpenAI with Instinct MI450 GPU systems, and Samsung is reportedly the primary HBM4 supplier for the MI450 accelerators.

Who will win?

Ultimately, the competition for HBM4 supply is not a zero-sum game. All three vendors will strive to deliver the highest-performance memory modules for generative AI. The true winners will be those who can overcome the technical challenges and achieve scaled delivery.

For the market as a whole, success for all three would be ideal: more supply would ease a hard constraint and advance AI capabilities for researchers and businesses alike. Either way, 2026 will be the decisive year in this memory race. Which vendor reaches volume production first will reveal the true winner of this round, and whose AI product plans may need adjusting for having bet on the laggards.


Source: Semiconductor Industry Observer

Reference link: https://www.theregister.com/2025/10/16/race_to_supply_advanced_memory/

