[Abstract]
Tiny machine learning aims to enable machine learning applications on tiny embedded systems, which are typically built around low-power microcontrollers. Deploying machine learning models on such systems is challenging because of their limited resources, with their small memory capacity, often less than one megabyte, being the primary constraint. It is therefore crucial to optimize the memory usage of machine learning models for tiny machine learning. This talk will discuss how to optimize machine learning models for tiny embedded systems to reduce their memory requirements. First, this talk will introduce a graph-level memory optimizer, which automatically transforms a machine learning model to reduce its peak memory usage without altering the weights of the model. Second, this talk will present an automatic code generator, which translates a machine learning model into minimal code, thereby reducing the binary size of the application. Finally, this talk will present experimental results showing that the proposed approaches can enable more intelligent machine learning applications on tiny embedded systems by easing their resource requirements.
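As a rough illustration of why graph-level transformations can lower peak memory without touching weights, the sketch below computes the peak tensor memory of two hypothetical execution schedules for the same graph. The schedules, tensor sizes, and the `peak_memory` helper are illustrative assumptions, not the optimizer described in the talk.

```python
# Hypothetical sketch: peak memory of a layer-by-layer execution schedule.
# Sizes and schedules are invented for illustration only.

def peak_memory(schedule):
    """Given (alloc_bytes, free_bytes) per step, return peak live memory."""
    current = peak = 0
    for alloc, free in schedule:
        current += alloc          # allocate this step's output tensor
        peak = max(peak, current)
        current -= free           # release tensors no longer needed
    return peak

# One execution order keeps both branch outputs of a graph live at once...
naive = [(64_000, 0), (64_000, 0), (32_000, 96_000)]
# ...while a reordered (transformed) schedule frees one branch early.
optimized = [(64_000, 0), (32_000, 32_000), (64_000, 96_000)]

print(peak_memory(naive))      # 160000
print(peak_memory(optimized))  # 128000
```

The model computes the same outputs either way; only the order in which intermediate tensors are materialized and freed changes, which is what determines whether the graph fits in a sub-megabyte memory.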
[Biography]
Seonyeong Heo received the B.S. and Ph.D. degrees in computer science and engineering from the Pohang University of Science and Technology, Pohang, South Korea, in 2016 and 2021, respectively. Formerly, she was a postdoctoral researcher in the Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland. She is currently an assistant professor in the School of Computing, Kyung Hee University, Yongin, South Korea. Her research interests include compiler optimization, real-time embedded systems, and tiny machine learning for embedded systems.