Today Samsung Electronics Co., Ltd. announced that it has developed what it refers to as the industry’s first High Bandwidth Memory (HBM) integrated with artificial intelligence (AI) processing power, the Samsung HBM-PIM. PIM stands for processing-in-memory, an architecture that brings AI compute capabilities inside high-performance memory. The company claims this will result in double the system performance at a 70% reduction in energy consumption. The HBM-PIM is aimed at accelerating large-scale processing in data centers, high-performance computing (HPC) systems, and AI-enabled mobile applications.

Samsung HBM-PIM

Most computing systems are based on the von Neumann architecture: a processing unit, a control unit, memory, external storage, and input and output mechanisms. The von Neumann architecture has worked quite well over the last 75 years or so. However, it isn’t without fault. There is a well-known phenomenon, the von Neumann bottleneck, in which the limited throughput between the CPU and memory constrains overall performance. With data volumes larger than ever, and growing pressure to move that data quickly, Samsung is looking to sidestep this bottleneck.
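As a rough illustration of why the bottleneck bites, consider a simple element-wise vector add: the bytes that must cross the CPU–memory bus dwarf the arithmetic performed. The sketch below compares the workload's arithmetic intensity against a machine's compute-to-bandwidth ratio; the hardware figures are hypothetical, chosen only to make the point.

```python
def arithmetic_intensity(n_elements: int, bytes_per_element: int = 4) -> float:
    """FLOPs per byte moved for c[i] = a[i] + b[i]."""
    flops = n_elements                                # one add per element
    bytes_moved = 3 * n_elements * bytes_per_element  # read a, read b, write c
    return flops / bytes_moved

# Hypothetical CPU: 100 GFLOP/s of compute vs. 25 GB/s of memory bandwidth.
machine_balance = 100e9 / 25e9  # FLOPs per byte the CPU could sustain

ai = arithmetic_intensity(1_000_000)
print(f"arithmetic intensity: {ai:.3f} FLOPs/byte")
print(f"machine balance:      {machine_balance:.1f} FLOPs/byte")
print("memory-bound" if ai < machine_balance else "compute-bound")
```

With roughly 0.08 FLOPs per byte against a machine balance of 4, the memory bus, not the ALU, limits throughput; this is the gap PIM designs try to close.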

Samsung HBM-PIM looks to remove the bottleneck by placing processing power where the data is stored, through a DRAM-optimized AI engine inside each memory bank. These storage sub-units enable parallel processing and minimize data movement. Samsung states that applying HBM-PIM to its existing HBM2 Aquabolt solution delivers the double system performance noted above with 70% less energy consumption. No hardware or software changes are needed, so the Samsung HBM-PIM can be dropped into existing systems. The HBM-PIM is aimed at easing the bottleneck in AI-driven workloads such as HPC, training, and inference.
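The idea of per-bank compute can be sketched in a toy model. This is not Samsung's actual programming model; the `Bank` class and `local_reduce` method are hypothetical names used only to show how reducing data inside each bank shrinks the traffic that crosses the memory bus.

```python
from typing import Callable, List


class Bank:
    """One DRAM bank with a small local compute unit (PIM sketch, assumed)."""

    def __init__(self, data: List[float]):
        self.data = data

    def local_reduce(self, fn: Callable[[float], float]) -> float:
        # The in-bank engine applies fn and reduces locally; only one
        # partial result per bank travels back to the host.
        return sum(fn(x) for x in self.data)


banks = [Bank([1.0, 2.0]), Bank([3.0, 4.0])]

# Conventional path: every element crosses the bus (4 values here).
host_total = sum(x * x for bank in banks for x in bank.data)

# PIM-style path: banks square and sum locally in parallel; only
# 2 partial sums cross the bus instead of 4 raw elements.
pim_total = sum(bank.local_reduce(lambda x: x * x) for bank in banks)

print(host_total, pim_total)  # same answer, far less data movement
```

Both paths compute the same result; the difference is that the PIM-style path moves one value per bank rather than every element, which is the source of the energy and bandwidth savings Samsung is claiming.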

Availability

The Samsung HBM-PIM is now being tested inside AI accelerators by leading AI solution partners, with all validations expected to be completed within the first half of this year.

Samsung
