Samsung Starts Mass Producing HBM2, the Fastest Generation of High Bandwidth Memory

Samsung is now mass producing HBM2, the second generation of High Bandwidth Memory (HBM). The company announced the development yesterday, about six months after AMD unveiled the first GPUs equipped with first-generation HBM.

This new generation of HBM is designed to be faster and more power-efficient than the first-generation HBM used in AMD's current Radeon R9 Fury, Fury X and Nano graphics cards.

Like its predecessor, HBM2 relies on a silicon interposer that routes the connections between the GPU and the memory stacks. HBM2 offers two distinct advantages over the original HBM: more DRAM capacity per stack and higher bandwidth.

HBM2's specs are more impressive than those of the first-generation HBM AMD uses: each DRAM die holds 8 Gb (gigabits), compared with 2 Gb previously, which limited first-generation stacks to 1 GB. Every Samsung HBM2 package stacks four of these 8 Gb dies, giving each stack a capacity of 4 GB.
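To make that arithmetic concrete, the short Python sketch below rolls the per-die capacity up to the per-stack capacity using the die counts cited in this article; the variable and function names are illustrative only, not Samsung's.

```python
# Illustrative arithmetic only; figures are those cited in this article.
GBIT_PER_DIE_HBM2 = 8   # each HBM2 DRAM die holds 8 gigabits
GBIT_PER_DIE_HBM1 = 2   # each first-generation HBM die holds 2 gigabits
DIES_PER_STACK = 4      # both generations stack four dies per package here

def stack_capacity_gb(gbit_per_die: int, dies: int = DIES_PER_STACK) -> float:
    """Return stack capacity in gigabytes (8 gigabits = 1 gigabyte)."""
    return gbit_per_die * dies / 8

print(stack_capacity_gb(GBIT_PER_DIE_HBM2))  # 4.0 GB per HBM2 stack
print(stack_capacity_gb(GBIT_PER_DIE_HBM1))  # 1.0 GB per first-gen HBM stack
```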

The latest HBM2 is designed for high-performance computing, such as advanced graphics and networking systems, and for enterprise servers. Samsung claims the new memory will deliver unprecedented performance in these applications.

The bandwidth of an HBM2 package will be more than seven times the performance limit of conventional DRAM. This will enable faster responsiveness in high-end computing tasks such as machine learning, graphics rendering and parallel computing.
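As a rough check on the "more than seven times" claim, the snippet below compares a per-stack HBM2 bandwidth of 256 GB/s against roughly 36 GB/s for a single GDDR5 chip; both figures are assumptions drawn from Samsung's announcement rather than from this article.

```python
# Rough bandwidth comparison; both figures are assumptions taken from
# Samsung's HBM2 announcement, not from this article.
HBM2_STACK_BANDWIDTH_GBPS = 256  # GB/s per HBM2 stack
GDDR5_CHIP_BANDWIDTH_GBPS = 36   # GB/s for a single GDDR5 chip

ratio = HBM2_STACK_BANDWIDTH_GBPS / GDDR5_CHIP_BANDWIDTH_GBPS
print(f"HBM2 stack vs. GDDR5 chip: {ratio:.1f}x")  # ~7.1x, matching the claim
```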

Sewon Chun, Senior Vice President of Memory Marketing at Samsung Electronics, is upbeat about the latest product. "By mass producing next-generation HBM2 DRAM, we can contribute much more to the rapid adoption of next-generation HPC systems by global IT companies," Chun explains.

He added that Samsung's 3D memory technology in HBM2 will let the company proactively meet the multi-layered needs of the global IT market, while strengthening the foundation of the DRAM market and ensuring further growth of the industry.
