Seminar Announcement

Applying Deep Learning to 360° and Event Cameras

2019-04-05
Prof. Kuk-Jin Yoon (KAIST) / 2019.04.17
Bldg. 2, Room 102 / 16:00~
Biography
Kuk-Jin Yoon received the B.S., M.S., and Ph.D. degrees in electrical engineering and computer science from the Korea Advanced Institute of Science and Technology (KAIST) in 1998, 2000, and 2006, respectively. He was a Post-Doctoral Fellow with the PERCEPTION team at INRIA, Grenoble, France, from 2006 to 2008, and an Assistant/Associate Professor at the School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, South Korea, from 2008 to 2018. He is now an Associate Professor in the Department of Mechanical Engineering at KAIST. His research interests include stereo, visual object tracking, SLAM, and structure-from-motion.

Abstract
In this talk, I will introduce recent work on applying CNNs to 360° and event cameras. First of all, event cameras have many advantages over traditional cameras, such as low latency, high temporal resolution, and high dynamic range. However, existing vision algorithms cannot be applied to event streams directly, so generating intensity images from events is in high demand for downstream tasks. Specifically, we unlock the potential of event camera-based conditional generative adversarial networks to create images/videos from an adjustable portion of the event data stream. I will also show that event cameras are useful for generating high dynamic range (HDR) images as well as non-blurred images under rapid motion, and demonstrate the possibility of generating very high frame rate videos, theoretically up to 1 million frames per second (FPS).
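To make the event-to-image idea concrete, here is a minimal sketch, assuming events are given as (x, y, t, polarity) tuples: it accumulates an adjustable time slice of the stream into a two-channel count frame and feeds it to a tiny stand-in network. The function events_to_frame and the generator layout are illustrative assumptions, not the actual cGAN architecture presented in the talk.

```python
import numpy as np
import torch
import torch.nn as nn

def events_to_frame(events, h, w, t0, t1):
    """Accumulate an adjustable slice [t0, t1) of the event stream into a
    2-channel frame: per-pixel counts of positive and negative events."""
    frame = np.zeros((2, h, w), dtype=np.float32)
    for x, y, t, p in events:
        if t0 <= t < t1:
            frame[0 if p > 0 else 1, int(y), int(x)] += 1.0
    return frame

# Hypothetical tiny convolutional generator standing in for the cGAN generator.
generator = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),  # predicted intensity in [-1, 1]
)

# Usage: random events on a 32x32 sensor, taking the first half of the stream.
rng = np.random.default_rng(0)
events = np.stack([rng.integers(0, 32, 100), rng.integers(0, 32, 100),
                   rng.uniform(0.0, 1.0, 100), rng.choice([-1, 1], 100)], axis=1)
x = torch.from_numpy(events_to_frame(events, 32, 32, 0.0, 0.5)).unsqueeze(0)
intensity = generator(x)  # (1, 1, 32, 32) synthesized intensity image
```

Because the slice boundaries t0 and t1 are free parameters, the same stream can be rendered at many different (in principle very high) frame rates by sliding the window.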
On the other hand, omni-directional cameras also have many advantages over conventional cameras. Several approaches have recently been proposed to apply convolutional neural networks (CNNs) to omni-directional images for classification and detection problems. However, most of them use image representations defined in Euclidean space. This transformation causes shape distortion, owing to nonuniform spatial resolving power, as well as loss of continuity, and these effects make it difficult for existing convolution kernels to extract meaningful information. In this talk, I introduce a novel method that resolves these problems. The proposed method uses a spherical polyhedron to represent omni-directional views, which minimizes the variance of spatial resolving power over the sphere's surface, and it comes with new convolution and pooling methods designed for this representation. The proposed approach can also be adopted by existing CNN-based methods.
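To illustrate the spherical-polyhedron idea, below is a minimal numpy sketch, assuming a precomputed face-adjacency table and a subdivision hierarchy mapping each coarse face to its four children. The names face_conv and face_pool and the placeholder arrays are hypothetical; the convolution and pooling operators in the proposed method may differ in detail.

```python
import numpy as np

def face_conv(feat, neighbors, weights, bias):
    """Convolve over each triangular face and its 3 edge-adjacent faces.

    feat:      (F, C_in)  features per face
    neighbors: (F, 3)     edge-adjacent face indices
    weights:   (4, C_in, C_out) kernel slots for [self, n0, n1, n2]
    bias:      (C_out,)
    """
    stacked = np.stack(
        [feat, feat[neighbors[:, 0]], feat[neighbors[:, 1]], feat[neighbors[:, 2]]],
        axis=0,
    )  # (4, F, C_in)
    out = np.einsum("kfc,kcd->fd", stacked, weights) + bias
    return np.maximum(out, 0.0)  # ReLU

def face_pool(feat, children):
    """Max-pool the 4 child faces of each coarse face: (F, C) -> (F/4, C)."""
    return feat[children].max(axis=1)

# Usage with random data: 80 faces (an icosahedron subdivided once).
rng = np.random.default_rng(0)
feat = rng.standard_normal((80, 8)).astype(np.float32)
neighbors = rng.integers(0, 80, size=(80, 3))  # placeholder adjacency table
children = np.arange(80).reshape(20, 4)        # placeholder hierarchy
w = rng.standard_normal((4, 8, 16)).astype(np.float32)
b = np.zeros(16, dtype=np.float32)
pooled = face_pool(face_conv(feat, neighbors, w, b), children)  # (20, 16)
```

Pooling over the four child faces of one subdivision step plays the same role that 2×2 pooling plays on planar images, so standard CNN architectures transfer to the polyhedral representation.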