Computational Video: Enhancing User Experience of Video Playback
Sunghyun Cho is an assistant professor at POSTECH. Before joining POSTECH, he was an assistant professor at DGIST from April 2017 to August 2019. He also worked for Samsung Electronics as a senior engineer from April 2014 to April 2017, and for Adobe Research in Seattle as a research scientist from March 2012 to March 2014. He received his Ph.D. in Computer Science from POSTECH in Feb. 2012, and B.S. degrees in Computer Science and in Mathematics from POSTECH in 2005. He spent six months in Beijing in 2006 as an intern at Microsoft Research Asia, and four months in Seattle in 2010 as an intern at Adobe Research. In 2008, he was awarded the Microsoft Research Asia 2008/09 Graduate Research Fellowship. His research interests include computational photography, image/video processing, computer vision, and computer graphics.
In this talk, I will introduce two of my recent research results on improving the user experience of video playback.

The first work is interactive and automatic navigation for 360 video playback. A common way to view a 360 video on a 2D display is to crop and render a part of the video as a normal field-of-view (NFoV) video. While this approach lets users enjoy natural-looking NFoV videos, they must constantly adjust the viewing direction by hand so as not to miss interesting events in the video. In this work, we propose an interactive and automatic navigation system for 360 video playback, which finds a virtual camera path showing the most salient areas through the video in an online manner, reflecting user interaction. Our experimental results, including user studies, show that our system provides a more pleasant experience of watching 360 videos than existing approaches.

While many people enjoy shooting and sharing videos of their activities and everyday lives, shooting a high-quality video is still challenging for casual users. Videos captured by casual users often show severely shaky and slanted content, which not only degrades aesthetic quality but also makes a video visually uncomfortable to watch, sometimes even causing dizziness. In the second work, we propose a novel video upright adjustment method that can reliably correct slanted video content. Our approach combines deep learning and Bayesian inference to estimate accurate rotation angles from video frames. We also propose a joint approach to video stabilization and upright adjustment. Experimental results show that our video upright adjustment method can effectively correct slanted video content, and that its combination with video stabilization achieves visually pleasing results from shaky and slanted videos.
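To make the NFoV rendering step mentioned above concrete: cropping an NFoV view from a 360 video amounts to casting a ray through each output pixel of a virtual pinhole camera and sampling the equirectangular panorama at the corresponding longitude/latitude. The sketch below is purely illustrative (the function name, rotation convention, and pinhole model are my assumptions, not the talk's actual pipeline):

```python
import math

def nfov_to_equirect(x, y, w, h, yaw, pitch, fov):
    """Map NFoV output pixel (x, y) of a w-by-h virtual camera with
    horizontal field of view `fov`, oriented by (yaw, pitch) radians,
    to normalized equirectangular coordinates (u, v) in [0, 1].
    Illustrative sketch only; not the speaker's implementation."""
    # Focal length from the horizontal field of view (pinhole model).
    f = (w / 2) / math.tan(fov / 2)
    # Ray through the pixel in camera coordinates (z points forward).
    vx = (x - w / 2) / f
    vy = (y - h / 2) / f
    vz = 1.0
    # Rotate the ray: pitch about the x-axis, then yaw about the y-axis.
    vy, vz = (vy * math.cos(pitch) - vz * math.sin(pitch),
              vy * math.sin(pitch) + vz * math.cos(pitch))
    vx, vz = (vx * math.cos(yaw) + vz * math.sin(yaw),
              -vx * math.sin(yaw) + vz * math.cos(yaw))
    # Ray direction -> longitude/latitude -> equirectangular (u, v).
    lon = math.atan2(vx, vz)
    lat = math.asin(vy / math.sqrt(vx * vx + vy * vy + vz * vz))
    return lon / (2 * math.pi) + 0.5, lat / math.pi + 0.5
```

For example, the center pixel of a forward-looking camera (yaw = pitch = 0) maps to the center of the panorama, `(0.5, 0.5)`; the automatic-navigation system can then be thought of as choosing `(yaw, pitch)` over time to keep salient content inside the crop.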