Samsung HBM Planning: HBM4 to lead shipments this year, HBM5 substrates upgraded to 2 nanometers

Samsung Electronics is accelerating development of its next-generation high-bandwidth memory (HBM) roadmap. As HBM4 officially enters mass production this year, Samsung is already looking further ahead, planning to upgrade the HBM5 base-die process from 4 nanometers to 2 nanometers, with 1d DRAM serving as the core stacked memory for HBM5E. Meanwhile, HBM4 is expected to account for over half of Samsung's total HBM shipments this year, with overall HBM output more than tripling compared to last year.

According to ETNews and Yonhap News Agency, Hwang Sang-jun, Vice President and head of memory development at Samsung Electronics, revealed these plans at the NVIDIA GTC conference. He said the base die for HBM5 will use Samsung's 2nm wafer process, a generational upgrade from the 4nm process used for HBM4 and HBM4E, to meet the higher memory performance demands of next-generation AI workloads.

Regarding capacity targets, Hwang Sang-jun said Samsung plans for HBM4 to make up more than 50% of all HBM shipments this year, with total HBM production more than tripling last year's volume. This underscores Samsung's commitment to expanding in the AI memory market and will directly affect the high-end DRAM supply landscape and downstream AI accelerator supply chains.

In addition to the memory roadmap, Hwang Sang-jun disclosed that the Groq 3 inference chip is being produced at Samsung's Pyeongtaek campus, that mass production is targeted for late Q3 to early Q4 this year, and that orders have already exceeded expectations. This marks Samsung's further evolution from a pure memory supplier into a full-stack AI accelerator partner.

HBM5 Base-Die Process: From 4nm to 2nm

According to ETNews, Hwang Sang-jun explicitly stated at NVIDIA GTC that the HBM5 base die will be built on Samsung's 2nm process, a significant upgrade from the 4nm process used for HBM4 and HBM4E. Moving the base die to a more advanced process node typically helps improve memory bandwidth and energy efficiency.

Hwang Sang-jun pointed out that adopting advanced process nodes will increase costs, but that introducing cutting-edge technology is essential to hit HBM's performance targets. This underscores Samsung's approach of driving performance leaps in high-end AI memory through process upgrades.

Regarding HBM5E, ETNews reports that Hwang Sang-jun said the product will use 1d DRAM for its core stacked memory dies, a further upgrade from the 1c DRAM used in HBM4 and HBM4E.

The 1d DRAM used for HBM5E is still in Samsung’s internal development stage and has not yet been commercialized. However, ETNews cites sources indicating that Samsung has achieved strong performance metrics and testing yields in this technology, signaling positive progress toward mass production.

HBM4 to Dominate Shipments This Year, Capacity More Than Tripling

Yonhap News Agency reports that Hwang Sang-jun stated Samsung’s goal this year is for HBM4 to constitute over 50% of all HBM shipments, with total HBM output more than tripling last year’s volume.

HBM4 only entered mass production this year. As it ramps HBM4 output, Samsung plans to significantly expand overall HBM capacity to meet rising demand for high-bandwidth memory from the AI chip market. If these expansion plans are realized, they will substantially reshape the supply landscape for high-end DRAM.
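Taken together, the two targets imply a steep ramp: if total HBM output triples and HBM4 is more than half of it, then HBM4 shipments alone would exceed 1.5 times last year's entire HBM volume. A toy calculation with a normalized baseline (the 1.0 baseline is illustrative, not a reported figure):

```python
last_year_total = 1.0                    # normalize last year's total HBM shipments
this_year_total = 3.0 * last_year_total  # "more than tripling" (lower bound)
hbm4_share = 0.5                         # "over 50% of shipments" (lower bound)

# HBM4 volume this year, expressed in units of last year's total output
hbm4_volume = this_year_total * hbm4_share
print(hbm4_volume / last_year_total)     # -> 1.5
```

Both inputs are stated as lower bounds, so 1.5x is the floor implied by the targets.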

Groq 3 Foundry: Samsung Expanding Role in NVIDIA Ecosystem

Beyond memory, Samsung is further extending its position in the AI accelerator supply chain by manufacturing Groq 3 inference chips at its foundry.

According to Yonhap News Agency, Hwang Sang-jun said NVIDIA CEO Jensen Huang has publicly recognized Samsung's contributions to Groq 3. The chips are being produced at Samsung's Pyeongtaek campus; mass production is targeted for late Q3 to early Q4 this year, and current orders have already exceeded expectations.

Yonhap reports that the Groq 3 die exceeds 700 square millimeters, yielding only about 64 chips per wafer, far fewer than the 400 to 600 typical of smaller dies. About 70% to 80% of the chip area is SRAM, enabling fast on-chip inference without relying on external HBM. Hwang Sang-jun also revealed that even before signing its licensing agreement with NVIDIA, Groq was already a customer of Samsung's wafer foundry services for Groq 3.
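The roughly 64-chips-per-wafer figure is consistent with a back-of-envelope estimate for a ~700 mm² die on a 300 mm wafer. A minimal sketch using a common gross-die approximation (the formula and numbers are illustrative estimates, not from the article):

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Estimate gross dies per wafer: wafer area divided by die area,
    minus a standard correction term for partial dies lost at the edge."""
    wafer_radius = wafer_diameter_mm / 2
    wafer_area = math.pi * wafer_radius ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

# A ~700 mm2 die gives on the order of 75 gross dies per 300 mm wafer;
# after edge exclusion and yield loss, ~64 usable chips is plausible.
print(gross_dies_per_wafer(700))   # -> 75

# A more typical ~120 mm2 die lands in the 400-600 range cited above.
print(gross_dies_per_wafer(120))   # -> 528
```

The approximation ignores scribe lines and edge-exclusion zones, so real usable counts run somewhat lower than the gross figure.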

SEDaily reports that Samsung's foundry production of Groq 3 LPU chips is widely regarded as a key milestone in establishing the company as a core partner in next-generation full-stack AI accelerator platforms. Since entering NVIDIA's supply chain, Samsung's role has expanded from memory supply alone to LPU manufacturing, deepening its collaboration within the NVIDIA ecosystem.
