Seminar Announcement

Shortcut learning in Machine Learning: Challenges, Examples, Solutions

2022-02-08
Sanghyuk Chun, Lead Research Scientist at NAVER AI Lab / 2022.02.10

Title: Shortcut learning in Machine Learning: Challenges, Examples, Solutions

Speaker: Sanghyuk Chun, Lead Research Scientist at NAVER AI Lab

Date: Thursday, February 10, 2022, 2 PM

Abstract: Recent advances in machine learning (ML) have opened a new era of practical AI applications. However, emerging studies have shown that ML models often rely on easy-to-learn but spurious features, e.g., estimating object categories from backgrounds rather than the objects themselves. This phenomenon, known as shortcut learning, is emerging as a key limitation of the current generation of ML models. Despite its significance, shortcut learning remains overlooked and underexplored. In this talk, I will introduce the shortcut learning problem in real-world applications and present recent attempts to solve it. This talk is based on my recent studies, including “ICML’20 Learning De-biased Representations with Biased Representations”, “ICLR’21 AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights”, “NeurIPS’21 SWAD: Domain Generalization by Seeking Flat Minima”, “ICLR’22 Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective”, “ArXiv Preprint StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures”, and “ArXiv Preprint Learning Fair Classifiers with Partially Annotated Group Labels”.
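As a rough illustration (a toy sketch of the phenomenon, not code from the talk or the papers above), the snippet below builds a synthetic dataset in which an easy “background” feature is perfectly correlated with the label during training but not at test time. A standard linear classifier latches onto this shortcut, so its accuracy drops once the correlation is broken. The feature names and data-generating process are purely hypothetical.

```python
# Toy illustration of shortcut learning: the label is truly determined by a
# noisy "core" feature, but a clean "background" feature happens to match the
# label in the training data. The model learns the easier background cue and
# fails when that spurious correlation disappears at test time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, background_matches_label):
    y = rng.integers(0, 2, size=n)
    core = y + rng.normal(0.0, 1.5, size=n)            # informative but noisy
    if background_matches_label:
        background = y + rng.normal(0.0, 0.1, size=n)  # spurious shortcut, easy to learn
    else:
        background = rng.integers(0, 2, size=n) + rng.normal(0.0, 0.1, size=n)
    X = np.column_stack([core, background])
    return X, y

X_train, y_train = make_data(5000, background_matches_label=True)
X_test, y_test = make_data(5000, background_matches_label=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # near-perfect via the shortcut
print("test accuracy: ", clf.score(X_test, y_test))    # drops once the shortcut breaks
print("learned weights [core, background]:", clf.coef_[0])
```

The large weight on the background feature shows that the classifier preferred the shortcut cue over the intended one, which is exactly the failure mode the talk addresses.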

Bio: I am a lead research scientist at NAVER AI Lab, working on machine learning and its applications. In particular, my research focuses on bridging the gap between two broad topics: reliable machine learning (e.g., robustness, de-biasing or domain generalization, algorithmic fairness, uncertainty estimation, explainability, and fair evaluation) and learning with large-scale extra data but limited annotations (e.g., multi-modal learning, weakly-supervised learning, and self-supervised learning). I have also contributed to large-scale machine learning algorithms at NAVER AI Lab. Prior to NAVER, I worked as a research engineer on the advanced recommendation team at Kakao from 2016 to 2018.
I received a master’s degree in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2016, where I researched scalable algorithms for robust subspace clustering. Before my master’s studies, I worked at IUM-SOCIUS as a software engineering intern in 2012. I also did research internships at the Networked and Distributed Computing System Lab at KAIST and at NAVER Labs, in summer 2013 and fall 2015, respectively.
