Seminar Announcement

Topics in ML research – security and uncertainty

2019-05-13
Dr. Seong Joon Oh (NAVER) / 2019.06.05
Bldg. 2, Room 102 / 16:00~
Biography
Seong Joon Oh received his master's degree in mathematics from the University of Cambridge in 2014 and his PhD in computer vision from the Max Planck Institute for Informatics in 2018. He currently works as a research scientist at Clova AI Research (LINE Plus Corp.), South Korea. His research interests are computer vision and machine learning.
ABSTRACT
Machine learning is finally starting to work! Its application areas now span large-scale recognition (e.g. image search, cloud photo organizers), recommendations (e.g. Netflix), personal assistants (e.g. Clova), and biometrics (e.g. unlocking phones with face capture), to name a few. Despite these apparent breakthroughs, certain hurdles must be overcome before the technology can be applied more widely, among them the security and uncertainty aspects of machine learning. The talk will introduce two prior works on these respective topics.
Security. ML models are expensive intellectual property: a massive amount of labelled data is needed to train deep neural networks, and huge numbers of GPU-hours go into engineering good hyperparameters. What if a deployed model (from Naver or Google, say) could be copied locally and re-sold? To understand the threat, we first need a threat model. I will talk about my prior work on building an adversarial system that extracts a model's hyperparameters just by querying it and observing the corresponding outputs, as sketched below. [1]
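For a rough sense of the query-based approach in [1], the following is a minimal sketch of the "metamodel" idea: train many local models whose hyperparameters are known, query each with a shared set of probe inputs, and fit a classifier that maps the concatenated outputs to a hyperparameter label. The random placeholder data and all variable names below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_models, n_probes, n_classes = 200, 10, 10

# Placeholder for real query results: one feature vector per known local
# model, built by concatenating its softmax outputs on the shared probes.
features = rng.random((n_models, n_probes * n_classes))
# Placeholder labels: the known hyperparameter of each local model
# (e.g. 0 = ReLU activations, 1 = tanh activations).
labels = rng.integers(0, 2, size=n_models)

# The metamodel: any classifier over query-output features.
metamodel = LogisticRegression(max_iter=1000).fit(features, labels)

# At attack time, query the black-box victim with the same probe inputs
# and let the metamodel predict its hidden hyperparameter.
victim_features = rng.random((1, n_probes * n_classes))
print(metamodel.predict(victim_features))
```

The key design point is that the attacker never sees the victim's weights; the only interface is input-output queries, which is exactly what a deployed prediction API exposes.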
Uncertainty. ML models tend to be confident even on types of inputs that were never presented during training (e.g. random noise); this property undermines the reliability of deployed models. Can a model be trained to be less confident on uncertain inputs? I will talk about my recent work on training instance embedding models (used e.g. for image retrieval) that are equipped with probabilistic estimates of input uncertainty. [2]
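As a rough sketch of the idea in [2]: instead of a point embedding, the encoder outputs a Gaussian per input, and the match probability of a pair is estimated by Monte Carlo sampling; a diffuse Gaussian corresponds to a "hedged", less confident embedding. The architecture, dimensions, and the sigmoid match function below are simplified placeholders, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class HedgedEmbedding(nn.Module):
    """Encoder that outputs a Gaussian N(mu(x), diag(sigma(x)^2)) per input."""
    def __init__(self, in_dim=784, emb_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, emb_dim)       # embedding mean
        self.logvar_head = nn.Linear(256, emb_dim)   # embedding log-variance

    def forward(self, x, n_samples=8):
        h = self.backbone(x)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        std = torch.exp(0.5 * logvar)
        # Reparameterised samples, shape (n_samples, batch, emb_dim);
        # a larger std yields a more "hedged" (uncertain) embedding.
        eps = torch.randn(n_samples, *mu.shape)
        return mu + std * eps

def match_probability(z1, z2, a=1.0, b=0.0):
    # Soft match probability sigmoid(-a * ||z1 - z2|| + b), averaged over
    # all pairs of Monte Carlo samples from the two inputs.
    d = (z1.unsqueeze(1) - z2.unsqueeze(0)).norm(dim=-1)
    return torch.sigmoid(-a * d + b).mean()

model = HedgedEmbedding()
x1, x2 = torch.randn(1, 784), torch.randn(1, 784)
print(f"match probability: {match_probability(model(x1), model(x2)).item():.3f}")
```

Training pushes the sampled match probability up for matching pairs and down otherwise, so the model learns to widen the Gaussian, i.e. hedge, on ambiguous inputs rather than commit to a confident point estimate.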
I will also talk a bit about other research areas we are looking into at Clova, such as ML robustness. Future research directions will be discussed throughout the talk.
[1] Towards Reverse-Engineering Black-Box Neural Networks, ICLR’18.
[2] Modeling Uncertainty with Hedged Instance Embeddings, ICLR’19.
[3] CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features, arXiv’19.