Computational Media: from Analysis to Generation of Media Contents

2018-10-12

Bio

Jean-Charles Bazin is the Director of the Computational Media Lab and Assistant Professor at KAIST, where he holds a joint appointment at the Graduate School of Culture Technology (CT) and the School of Electrical Engineering (EE). He works on the analysis, processing and creation of media content. His research topics include video editing, VR/AR, AI (deep learning), multi-modal data processing, robust fitting and global optimization. His work has been covered in the news by Discovery Channel, Gizmodo, Engadget, The Verge, and TechCrunch, among many others. He has published several papers in the premier venues of computer vision (TPAMI, CVPR, ICCV and ECCV), computer graphics (TOG, SIGGRAPH, SIGGRAPH Asia and Eurographics), robotics (IJRR) and multimedia (MobileHCI and ICME).

Prior to joining KAIST, he was an Associate Research Scientist at Disney Research Zurich (The Walt Disney Company) and, at the same time, an Adjunct Lecturer at ETH Zurich, Switzerland (2014-2016). Before this, he was a Postdoc and a Senior Research Associate at ETH Zurich (2011-2014), and a Postdoctoral Fellow at the University of Tokyo, Japan (2010-2011). He received an MS in Computer Science from the Université de Technologie de Compiègne, France (2006) and a PhD in Electrical Engineering from KAIST, South Korea (2011). He is an invited Expert in AI for the World Economic Forum Expert Network.

 

Abstract

Computational Media refers to the automatic analysis and creation of media content, such as pictures, videos, VR/AR experiences, and music, with computer algorithms. It encompasses several disciplines, including computer vision, computer graphics, VR/AR, machine learning, robotics and multimedia. Exploring the complementarity of these disciplines provides exciting research opportunities and opens up a wide range of novel and unconventional applications. In this talk, I will present some of our representative computational media projects: gaze correction in Skype calls, editing of facial performance videos, scene-space video processing, VR content stabilization, and video editing via motion data.
