Bao Wang, University of California, Los Angeles
Conference ID: 91835482921
PIN Code: 777578
This talk consists of two parts. In the first part, I will present recent work on developing partial differential equation (PDE)-principled robust neural architectures and optimization algorithms for robust, accurate, private, and efficient deep learning. In the second part, I will discuss recent progress on leveraging Nesterov-accelerated-gradient-style momentum to accelerate deep learning, which again involves designing stochastic optimization algorithms and mathematically principled neural architectures.
Bao Wang is currently a postdoctoral researcher in the Department of Mathematics at UCLA under the mentorship of Professors Andrea L. Bertozzi and Stanley J. Osher. He received his Ph.D. in applied mathematics from Michigan State University in 2016. His current research focuses on developing mathematically principled algorithms for trustworthy deep learning and is supported by NSF grant ATD-1924935. In 2020, he will join the University of Utah as an assistant professor, jointly appointed between the Scientific Computing and Imaging Institute and the Department of Mathematics.