Toward Ideal Memory Compression

Seokin Hong is currently an assistant professor at Kyungpook National University. His research experiences and interests include the design of low-power, reliable, and high-performance microprocessors and memory systems. He received a Ph.D. in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2015. During his Ph.D. studies, he invented a cost-efficient and reliable microprocessor architecture. In 2015, he joined the Memory Product Group at Samsung Electronics as a senior engineer, where for two years he was involved in a project developing 3D-stacked memory (HBM). In 2017, he moved to the IBM T.J. Watson Research Center, where he worked on secure processor architectures and emerging memory/storage systems. He has won two best paper awards, from the International Conference on Computer Design (ICCD) and the Design, Automation & Test in Europe Conference (DATE).

The memory system is one of the major performance and energy bottlenecks in modern computing systems. Recent trends in computing systems and applications demand higher capacity, bandwidth, and energy efficiency from memory systems. However, current memory technologies face fundamental scaling limits. Data compression is seen as a simple technique to increase memory capacity and bandwidth. Unfortunately, existing compression techniques incur area and bandwidth overheads in maintaining the compression metadata.
This talk first covers the fundamentals of memory compression. It then presents two recent research works that aim to mitigate the overheads of compression. I will first introduce a practical technique that enables on-chip cache compression without any area overhead in the tag or data arrays. I will then describe an efficient approach to reducing the overhead of metadata accesses in main memory. For both works, we will discuss how a probability-based approach can be used to solve these technical problems efficiently.
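To make the metadata overhead concrete, here is a toy sketch (not from the talk itself) of a simple compression scheme: a 64-byte memory line is compressed by eliding all-zero 8-byte words, and an 8-bit "zero map" records which words were elided. That zero map is exactly the kind of compression metadata a real memory controller must store and fetch alongside the data, which is where the area and bandwidth overheads come from. All names here are illustrative.

```python
# Toy illustration of compression metadata (hypothetical scheme, for exposition only):
# compress a 64-byte line by dropping all-zero 8-byte words; the 8-bit
# zero_map is the metadata needed to reconstruct the original line.

def compress(line: bytes):
    assert len(line) == 64
    words = [line[i:i + 8] for i in range(0, 64, 8)]
    zero_map = 0
    payload = b""
    for i, w in enumerate(words):
        if w == b"\x00" * 8:
            zero_map |= 1 << i      # metadata: mark word i as elided
        else:
            payload += w            # data: keep non-zero words
    return zero_map, payload        # metadata + compressed payload

def decompress(zero_map: int, payload: bytes) -> bytes:
    out, pos = b"", 0
    for i in range(8):
        if zero_map & (1 << i):
            out += b"\x00" * 8      # restore an elided zero word
        else:
            out += payload[pos:pos + 8]
            pos += 8
    return out

line = b"\x00" * 48 + b"ABCDEFGHijklmnop"
meta, data = compress(line)
assert decompress(meta, data) == line
print(len(data))  # 16 bytes of payload instead of 64, plus 1 byte of metadata
```

Note that every read of a compressed line requires the metadata first; avoiding a separate memory access for it is precisely the kind of overhead the talk's second work targets.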