HW/SW co-design for efficient deep learning
The computation requirements of AI applications are increasing rapidly, and these applications differ widely in their latency and performance needs. Optimizing software and hardware for deep learning is therefore increasingly important to meet the demands of evolving AI applications on existing and upcoming hardware. Recently, several commercial accelerators have adopted low-precision computation to maximize throughput within the same chip area. However, current quantization algorithms are applicable only to unoptimized networks, which limits the benefit we can obtain from this advanced hardware. In this presentation, a new quantization technique, PROFIT, will be introduced, which is designed to minimize the bit-width of optimized networks with minimal accuracy loss.
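To make the low-precision setting concrete, below is a minimal sketch of symmetric uniform weight quantization, the generic building block such accelerators exploit. This is an illustration only: PROFIT's actual progressive training procedure is not shown, and the function name and 4-bit choice are assumptions for the example.

```python
import numpy as np

def quantize_uniform(w, bits=4):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Generic illustration of low-precision computation; not PROFIT itself.
    Returns the dequantized approximation and the integer codes.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax                  # map largest |w| to qmax
    codes = np.clip(np.round(w / scale), -qmax, qmax)  # integer levels
    return codes * scale, codes

w = np.random.randn(8).astype(np.float32)
w_hat, codes = quantize_uniform(w, bits=4)
# w_hat approximates w using at most 2*qmax + 1 = 15 distinct values
```

The accuracy loss the talk addresses comes from exactly this rounding step: already-optimized (e.g. pruned or compact) networks have less redundancy to absorb it, which is the regime PROFIT targets.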
Eunhyeok Park is an Assistant Professor in the Department of Computer Science and Engineering / Graduate School of AI at POSTECH, South Korea. He obtained his Ph.D. in Computer Science from Seoul National University in 2020, his M.S. in Electrical Engineering from POSTECH in 2015, and his B.S. in Electrical Engineering and Physics from POSTECH in 2014. His research interests are neural network optimization, energy-efficient accelerator design, and optimization automation.
ID : 897 821 7407
PW : 1nTQDY