[Special Seminar] Reducing Annotation Cost in Visual Recognition
*Engineering Building 2, Room 102, 17:00~
Bio: Donggeun Yoo received his BS in 2011, MS in 2013, and Ph.D. in 2019, all from the School of Electrical Engineering at KAIST, South Korea. His dissertation (composed of four chapters, all published), written under the supervision of Prof. In So Kweon, was titled Deep Learning Based Visual Recognition Robust Against Background Clusters. In 2013, during his Ph.D. studies, he co-founded Lunit, a Seoul-based medical AI startup, with his lifelong friends. As VP of Research at Lunit, he is devoted to developing advanced medical AI for radiology and oncology. During an internship at Adobe Research in the US, he worked on large-scale video representation learning. His research interests span most visual recognition problems approached with deep learning.
Abstract: Current empirical studies suggest that the performance of recent deep networks is not yet saturated with respect to the size of the training data. Moreover, a higher proportion of fine-grained annotations yields superior performance. This is why annotation labor and time costs remain a burden. This talk addresses the challenge of reducing annotation cost in the medical image domain. The first approach is weakly supervised learning, which uses weak annotations that are much cheaper than full annotations. For example, for medical image segmentation it is much cheaper to obtain image-level labels than pixel-level labels, or to use noisy labels automatically extracted from clinical reports rather than manual labels. This is advantageous for constructing a large-scale training dataset, but it remains challenging to train a model that outperforms a fully supervised one. The second approach is active learning, in which a model asks a human to annotate the data it perceives as most uncertain. Since hard examples are more beneficial for improving a model than random ones, active learning reduces annotation cost by picking uncertain examples. This talk provides an in-depth review of recent active learning methods that work well with current deep networks and large-scale data.
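To illustrate the active learning loop described above, the following is a minimal sketch of uncertainty sampling, the simplest acquisition strategy: score each unlabeled example by the entropy of the model's predicted class probabilities and send the highest-scoring ones to a human annotator. The function names and the entropy criterion here are illustrative assumptions, not the specific methods covered in the talk.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy of each row of predicted class probabilities;
    # eps guards against log(0).
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_uncertain(probs, budget):
    # Return indices of the `budget` most uncertain (highest-entropy)
    # unlabeled samples, i.e. the ones worth a human's annotation effort.
    scores = entropy(np.asarray(probs, dtype=float))
    return np.argsort(-scores)[:budget]

# Example: model predictions on 4 unlabeled samples, 2 classes.
probs = [[0.99, 0.01],   # confident prediction -> low entropy
         [0.55, 0.45],   # uncertain -> high entropy
         [0.90, 0.10],
         [0.50, 0.50]]   # maximally uncertain
picked = select_uncertain(probs, budget=2)  # indices to send for labeling
```

In practice this scoring would run over a large unlabeled pool after each training round, and the newly labeled examples would be added to the training set before retraining.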