Scalable and Efficient Deep Learning

2018-03-23


Biography

01/2018 ~ Assistant Professor, School of Computing, KAIST

08/2014 ~ 12/2017 Assistant Professor, School of Electrical and Computer Engineering, UNIST

09/2013 ~ 08/2014 Postdoctoral Researcher, Disney Research

08/2013 Ph.D. in Computer Science, University of Texas at Austin

 

Abstract 

Recently, deep neural networks have achieved near human-level performance on a number of tasks such as object categorization and machine translation. While this is an impressive result, deep learning has yet to bring high impact to our everyday life, due to the small scale of the problems considered. For example, in the case of visual object recognition, there exist hundreds of thousands of nameable objects, and this set of categories keeps growing with the plethora of products newly introduced to our world every day. Thus, a truly practical categorization system should be able to recognize millions of object categories. However, current state-of-the-art deep learning models achieve at most about 30% accuracy when classifying tens of thousands of classes. This low performance results from new challenges introduced by large-scale deep learning, such as increased confusion due to the large number of classes, data and class imbalance, difficulty in finding the optimal network structure, and the need to handle a larger number of parameters and longer training time. In this talk, I will discuss some of the recent models and algorithms I have developed to tackle the new challenges posed by the large-scale deep learning problem.
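As a purely illustrative aside (not a method from the talk), one common way to address the class-imbalance challenge mentioned above is to weight the cross-entropy loss inversely to class frequency, so that rare classes are not drowned out by frequent ones. The sketch below assumes PyTorch and hypothetical class counts.

```python
# Minimal sketch (assumed example, not the speaker's method):
# frequency-weighted cross-entropy for imbalanced classification.
import torch
import torch.nn as nn

def class_balanced_loss(class_counts):
    """Cross-entropy loss whose per-class weights are inversely
    proportional to how often each class appears in training data."""
    counts = torch.tensor(class_counts, dtype=torch.float32)
    weights = counts.sum() / (len(counts) * counts)  # rare classes get larger weight
    return nn.CrossEntropyLoss(weight=weights)

# Toy usage: 3 classes with highly imbalanced (hypothetical) training counts.
loss_fn = class_balanced_loss([100_000, 5_000, 50])
logits = torch.randn(8, 3)            # batch of 8 examples, 3 classes
labels = torch.randint(0, 3, (8,))
print(loss_fn(logits, labels))
```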

 
