Seminar Announcement
Understanding Infinite-Width Deep Neural Networks
2022-02-03
Jaehoon Lee (Senior Research Scientist at Google Brain) / 2022.02.04
- Date: Feb. 4, Friday, 10AM
- Abstract: Deep neural networks have shown remarkable success across many domains of artificial intelligence. However, classical tools for analyzing these models and their learning algorithms are insufficient to explain that success. Recently, the infinite-width limit of neural networks has become one of the key breakthroughs in our understanding of deep learning. This limit is unique in giving an exact theoretical description of large-scale neural networks, and because of this we believe it will continue to play a transformative role in deep learning theory. In this talk, we will first review some of the interesting theoretical questions in deep learning research. We will then survey recent progress in the study of the infinite-width limit, focusing on the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK). This correspondence lets us understand very wide neural networks as kernel-based machine learning models: it provides a way to do exact Bayesian inference without ever initializing or training a network, and it gives closed-form solutions for the network function under gradient descent training. As an attempt to better understand the correspondence, we will describe our large-scale empirical study of the relationship between wide neural networks and kernel methods. Through controlled and careful analysis, this study resolves a variety of open questions about infinitely wide neural networks. We will also discuss some of the (very biased) recent advances in applying infinite-width neural networks. Lastly, if time permits, we will briefly overview Neural Tangents, the open-source Python library that powers all of our empirical research and applications.
- Bio: Jaehoon Lee is a Senior Research Scientist on the Google Brain team. His main research interest is the fundamental, scientific understanding of deep neural networks; he is actively working on the infinite-width limit of neural networks and its correspondence to kernel methods. In 2017, Jaehoon joined Google and began his research career in machine learning as part of the Google Brain Residency program. Before that, he was a postdoctoral fellow at the University of British Columbia from 2015 to 2017, working on theoretical high-energy physics. Jaehoon obtained his PhD in physics at the Center for Theoretical Physics, Massachusetts Institute of Technology (MIT), in 2015.
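To give a concrete feel for the NNGP correspondence mentioned in the abstract ("exact Bayesian inference without ever initializing or training a network"), below is a minimal NumPy sketch for a fully connected ReLU network, where the NNGP kernel has a closed form (the arc-cosine kernel recursion). The function names, hyperparameters (`depth`, `sw2`, `sb2`, `noise`), and variance conventions here are illustrative assumptions for this sketch, not the talk's code or the Neural Tangents API.

```python
import numpy as np

def nngp_kernel(X1, X2, depth=1, sw2=2.0, sb2=0.0):
    """NNGP kernel of a depth-`depth` infinite-width ReLU network.

    Returns the cross-kernel K(X1, X2) plus the diagonal kernels
    K(X1, X1) and K(X2, X2) needed by the ReLU recursion.
    """
    d = X1.shape[1]
    # Layer 0: kernel of a linear layer with weight variance sw2/d, bias variance sb2.
    K12 = sb2 + sw2 * (X1 @ X2.T) / d
    K11 = sb2 + sw2 * np.sum(X1 * X1, axis=1) / d
    K22 = sb2 + sw2 * np.sum(X2 * X2, axis=1) / d
    for _ in range(depth):
        # ReLU arc-cosine recursion: E[relu(u) relu(v)] for (u, v) ~ N(0, K).
        norm = np.sqrt(np.outer(K11, K22))
        cos = np.clip(K12 / norm, -1.0, 1.0)
        theta = np.arccos(cos)
        K12 = sb2 + sw2 / (2 * np.pi) * norm * (np.sin(theta) + (np.pi - theta) * cos)
        # On the diagonal, E[relu(z)^2] = K/2 for zero-mean Gaussian z.
        K11 = sb2 + sw2 / 2.0 * K11
        K22 = sb2 + sw2 / 2.0 * K22
    return K12, K11, K22

def nngp_posterior_mean(Xtr, ytr, Xte, depth=1, noise=1e-6):
    """Exact GP posterior mean on test inputs -- no network is ever trained."""
    Ktt, _, _ = nngp_kernel(Xtr, Xtr, depth)
    Kst, _, _ = nngp_kernel(Xte, Xtr, depth)
    return Kst @ np.linalg.solve(Ktt + noise * np.eye(len(Xtr)), ytr)
```

The point of the sketch is that prediction reduces to one kernel construction and one linear solve; the Neural Tangents library discussed in the talk automates this construction for much richer architectures.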