Zoom ID: 950-805-04766
If you cannot log in with the Zoom ID above, please use the following one instead:
Zoom ID: 266-664-3379
Deep learning builds upon the mysterious abilities of gradient-based optimization algorithms. Not only can these algorithms often achieve low loss on complicated non-convex training objectives, but the solutions found can also generalize remarkably well to unseen test data. Towards explaining these mysteries, I will present some recent results that take into account the trajectories taken by the gradient descent algorithm – these trajectories turn out to exhibit special properties that enable the successes of optimization and generalization.
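As background for the abstract, a minimal sketch of the gradient descent trajectory being referred to (this example is illustrative only and not from the talk; the quadratic loss f(x) = x^2 and step size 0.1 are assumed choices):

```python
def gradient_descent(grad, x0, lr=0.1, steps=50):
    """Run plain gradient descent and return the full trajectory of iterates."""
    trajectory = [x0]
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # standard update: step against the gradient
        trajectory.append(x)
    return trajectory

# For f(x) = x^2, grad f(x) = 2x; the iterates shrink geometrically
# toward the minimizer at 0, tracing out the optimization trajectory.
traj = gradient_descent(lambda x: 2 * x, x0=1.0)
```

On non-convex deep learning objectives the trajectory is far less well understood, which is what the results in the talk address.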