Rambus has detailed its next-generation HBM4 memory controller, which will deliver significant improvements over existing HBM3 and HBM3E solutions. As JEDEC finalizes the HBM4 memory specification, we are getting first-hand details of the next-generation solution. Targeting the AI and data center markets, HBM4 memory will continue to extend the capabilities of existing HBM DRAM designs.
Starting with the details, Rambus's HBM4 memory controller supports speeds of over 6.4 Gb/s per pin, matching the pin speed of the first-generation HBM3 solution while delivering more bandwidth than HBM3E thanks to HBM4's wider 2048-bit interface, and it supports the same 16-Hi stacks with a 64 GB maximum capacity. HBM4 has a starting bandwidth of 1,638 GB/s per stack, which is 33% higher than HBM3E and 2x that of HBM3.
Currently, HBM3E operates at speeds of up to 9.6 Gb/s and offers up to 1.229 TB/s of bandwidth per stack. HBM4 will scale to speeds of up to 10 Gb/s, for up to 2.56 TB/s per HBM interface, more than twice the bandwidth of HBM3E, though the full potential of HBM4 will take time to materialize and will only arrive after production ramps up. Other features of the HBM4 memory controller include ECC, RMW (Read-Modify-Write), Error Scrubbing, and more.
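The bandwidth figures above follow directly from pin speed multiplied by interface width. A quick sketch of the arithmetic, assuming the 1024-bit per-stack interface of HBM3/HBM3E and the 2048-bit interface of HBM4 (widths the article's numbers imply but does not state):

```python
def stack_bandwidth_gbs(pin_speed_gbps: float, interface_bits: int) -> float:
    """Per-stack bandwidth in GB/s: pin speed (Gb/s) x interface width (bits) / 8 bits per byte."""
    return pin_speed_gbps * interface_bits / 8

# HBM3: 6.4 Gb/s over a 1024-bit interface
print(stack_bandwidth_gbs(6.4, 1024))   # 819.2 GB/s
# HBM3E: 9.6 Gb/s over a 1024-bit interface
print(stack_bandwidth_gbs(9.6, 1024))   # 1228.8 GB/s
# HBM4 at launch: 6.4 Gb/s over a 2048-bit interface
print(stack_bandwidth_gbs(6.4, 2048))   # 1638.4 GB/s
# HBM4 target: 10 Gb/s over a 2048-bit interface
print(stack_bandwidth_gbs(10.0, 2048))  # 2560.0 GB/s, i.e. 2.56 TB/s
```

This also shows why HBM4 at the same 6.4 Gb/s pin speed still outruns HBM3E at 9.6 Gb/s: the doubled interface width more than compensates for the lower per-pin rate.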
SK hynix has reportedly started mass production of its 12-layer HBM3E memory, with capacities of up to 36 GB and speeds of 9.6 Gbps, while its next-generation HBM4 memory is expected to launch this month. Samsung, meanwhile, is expected to launch its HBM4 memory this quarter, with mass production targeted for the end of 2025.
NVIDIA's Rubin GPU, expected to be available in 2026, will be the first AI platform to support HBM4 memory, while AMD's Instinct MI400 is also expected to feature the next-generation memory, though AMD has yet to confirm this.