Learning Invariant Representations
Minimizing the traditional training error leads a model to absorb all of the spurious correlations in a given training dataset. These spurious correlations undermine fairness and degrade generalization on out-of-distribution data. Learning invariant representations, i.e., representations that are invariant across domains or environments, helps overcome this limitation.
There are diverse perspectives on invariant representation learning, such as data augmentation and explicit regularization. In this seminar, I will give an overview of these current research directions. Moreover, I will present the limitations of existing work and introduce our new research toward improved representation learning.
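To make the explicit-regularization perspective concrete, below is a minimal sketch of one well-known instance, the IRMv1 penalty from Invariant Risk Minimization (Arjovsky et al.): the per-environment risk is augmented with the squared gradient of that risk with respect to a fixed scalar classifier w = 1.0 placed on top of the representation. This is an illustrative NumPy sketch with a squared-error risk, not the speaker's method; the function names are hypothetical.

```python
import numpy as np

def irm_penalty(phi_x, y):
    """IRMv1-style penalty for one environment.

    phi_x: 1-D array of representation outputs Phi(x) for this environment.
    y:     1-D array of targets.
    With squared-error risk R(w) = mean((w * phi_x - y)^2), the gradient
    at the fixed "dummy" classifier w = 1.0 is mean(2 * (phi_x - y) * phi_x);
    the penalty is that gradient squared.
    """
    grad = np.mean(2.0 * (phi_x - y) * phi_x)
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Sum of per-environment risks plus lam times the invariance penalty.

    envs: list of (phi_x, y) pairs, one per training environment.
    """
    risk = sum(np.mean((phi - y) ** 2) for phi, y in envs)
    penalty = sum(irm_penalty(phi, y) for phi, y in envs)
    return risk + lam * penalty
```

A representation whose optimal readout differs across environments (a sign that it leans on environment-specific, spurious features) incurs a nonzero penalty, pushing training toward features whose predictor is simultaneously optimal in every environment.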
Kyungwoo Song is an assistant professor at the University of Seoul, where he directs the Machine Learning and Artificial Intelligence Lab. His research in machine learning focuses on multimodal learning, invariant representation learning for improved generalization, and their applications to diverse domains (e.g., recommender systems, medicine) for machine learning deployment in the real world. He earned his Ph.D., M.S., and B.S. from KAIST. In 2018, he was a visiting researcher at Naver Clova AI.