Short Course on Machine Learning Theory

Course Introduction

Machine learning has been a useful tool for data science and artificial intelligence for decades, and its theory is well developed for classical tasks such as regression and classification. A recent hot topic in machine learning is deep learning, which has produced impressive results in many practical applications involving big data. The purpose of this course is to train students in the mathematical foundations of machine learning, covering the well-developed theory of regression, classification, and sparsity as well as the less developed theory of deep learning with deep neural networks.

Many computer science courses on machine learning emphasize implementations and application domains of machine learning algorithms. Distinctive features of this course are its rigorous mathematical foundations and its treatment of the hot topic of deep learning. The course covers the latest developments in machine learning theory, including some currently active research topics.

Speaker

Dingxuan Zhou, City University of Hong Kong

Venue

Room 602, Pao Yue-Kong Library, Shanghai Jiao Tong University

Application and Registration

No registration fee. Please register online. Apply Online

Prerequisites

Basic knowledge of probability, linear algebra, and functional analysis.

Schedule

July 3, 09:00–11:40
July 5, 09:00–11:40
July 7, 09:00–11:40

Course Plan

Chapter 1: Learning theory of least squares regression

Framework of the least squares regression, empirical risk minimization, hypothesis space, reproducing kernel Hilbert space, sample error, probability inequalities, covering number, approximation error
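To give a concrete flavor of the regularized least squares scheme over a reproducing kernel Hilbert space, here is a minimal numerical sketch of kernel ridge regression with a Gaussian kernel (not part of the course materials; function names, the kernel width, and the regularization parameter are illustrative):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix K[i, j] = exp(-|x_i - y_j|^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_ridge_fit(X, y, lam=1e-3, sigma=1.0):
    """Solve (K + lam * m * I) c = y for the coefficient vector c."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def kernel_ridge_predict(X_train, c, X_test, sigma=1.0):
    """Estimator f(x) = sum_i c_i K(x, x_i) in the RKHS."""
    return gaussian_kernel(X_test, X_train, sigma) @ c

# Fit a noisy sine curve on [0, 1].
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(40)
c = kernel_ridge_fit(X, y, lam=1e-4, sigma=0.2)
y_hat = kernel_ridge_predict(X, c, X, sigma=0.2)
```

The representer theorem guarantees that the minimizer of the regularized empirical risk has exactly this finite kernel expansion over the sample.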

Chapter 2: Learning theory of classification

Regularization scheme, representer theorem, reduction of optimization problems, binary classification, support vector machines, misclassification error, Bayes rule, separable distributions, comparison theorem, error bounds
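As an illustration of support vector machine classification, here is a sketch of primal subgradient descent on the regularized hinge loss (a Pegasos-style update; the data, step sizes, and names are all illustrative, not the course's own code):

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, epochs=200, seed=0):
    """Subgradient descent on lam/2 |w|^2 + (1/m) sum max(0, 1 - y_i <w, x_i>)."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(m):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            w = (1 - eta * lam) * w        # shrinkage from the regularizer
            if y[i] * (X[i] @ w) < 1:      # margin violated: hinge subgradient
                w += eta * y[i] * X[i]
    return w

# Linearly separable toy data; a constant feature plays the role of the bias.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2, 2], 0.5, (30, 2)),
               rng.normal([-2, -2], 0.5, (30, 2))])
X = np.hstack([X, np.ones((60, 1))])
y = np.array([1] * 30 + [-1] * 30)
w = pegasos_train(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

For separable distributions such as this one, the learned halfspace classifies the sample correctly and the misclassification error is controlled by the hinge-loss comparison theorem.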

Chapter 3: Kernel methods in learning theory and data science

Dimensionality reduction, Laplacian eigenmap, spectral clustering, kernel PCA, semi-supervised learning on manifolds
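A minimal sketch of kernel PCA, one of the kernel methods listed above, can be written in a few lines: form the Gaussian Gram matrix, center it, and project onto its top eigenvectors (the kernel width and variable names are illustrative choices, not prescribed by the course):

```python
import numpy as np

def kernel_pca(X, n_components=2, sigma=1.0):
    """Project the sample onto the leading principal directions in feature space."""
    m = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Center the kernel matrix: K_c = H K H with H = I - (1/m) 1 1^T.
    H = np.eye(m) - np.ones((m, m)) / m
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # keep the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    # Projection of sample point i onto component j: (Kc vecs / sqrt(vals))[i, j].
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))
Z = kernel_pca(X, n_components=2, sigma=2.0)
```

Replacing the centered kernel matrix by a graph Laplacian and taking the smallest nontrivial eigenvectors yields the Laplacian eigenmap and spectral clustering variants in the same spirit.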

Chapter 4: Sparsity in machine learning

LASSO, elastic net, sparsity, kernel projection machine, empirical features, gradient learning and variable selection
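The LASSO listed above admits a short coordinate descent implementation built on soft-thresholding; the following sketch (sample sizes and the regularization parameter are illustrative) recovers a sparse coefficient vector:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of the l1 norm: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam=0.1, n_iter=200):
    """Coordinate descent for min_w (1/(2m)) |y - X w|^2 + lam |w|_1."""
    m, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / m
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r / m
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

# Sparse ground truth: only coordinates 0 and 3 are active.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10)
w_true[0], w_true[3] = 3.0, -2.0
y = X @ w_true + 0.01 * rng.standard_normal(100)
w = lasso_cd(X, y, lam=0.1)
```

The l1 penalty sets the inactive coordinates exactly to zero, which is the variable selection effect; adding an l2 term to the penalty gives the elastic net.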

Chapter 5: Online learning and stochastic gradient descent

Online learning, Kaczmarz algorithm, mirror descent algorithms, pairwise learning and ranking, stochastic gradient descent in deep learning
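The randomized Kaczmarz algorithm studied in this chapter processes one equation of a linear system per step, projecting the iterate onto the chosen row constraint. A minimal sketch with the standard squared-row-norm sampling (the system below is an illustrative consistent one):

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=2000, seed=0):
    """Solve a consistent system A x = b, one random row projection per step."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    row_sq = (A ** 2).sum(axis=1)
    probs = row_sq / row_sq.sum()   # sample rows proportionally to |a_i|^2
    x = np.zeros(d)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        # Project x onto the hyperplane {z : <a_i, z> = b_i}.
        x += (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
```

Each step is a stochastic gradient step on a least squares objective, which is the viewpoint connecting Kaczmarz iterations to online learning and to SGD in deep learning.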

Chapter 6: Distributed learning with big data

Distributed learning with regularized least squares, integral operators, minimax rates of convergence, distributed learning with spectral algorithms
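The divide-and-conquer scheme behind distributed learning with regularized least squares can be sketched in a few lines: split the sample across machines, solve a local ridge regression on each, and average the local estimators (sizes and the regularization parameter below are illustrative):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Local regularized least squares estimator on one machine."""
    d = X.shape[1]
    n = len(X)
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

def distributed_ridge(X, y, lam, n_machines):
    """Fit ridge regression on each data block and average the estimators."""
    splits = np.array_split(np.arange(len(X)), n_machines)
    ws = [ridge_fit(X[s], y[s], lam) for s in splits]
    return np.mean(ws, axis=0)

rng = np.random.default_rng(5)
X = rng.standard_normal((1000, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(1000)
w_dist = distributed_ridge(X, y, lam=1e-3, n_machines=10)
```

The theory in this chapter, phrased through integral operators, identifies when this averaged estimator still attains the minimax rates of convergence of the single-machine estimator.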

Chapter 7: Deep learning with deep neural networks

Approximation theory of shallow networks, deep neural networks, convolutional deep neural networks, recurrent deep neural networks
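Two small, exact constructions give the flavor of the approximation theory of shallow networks: the absolute value is represented by two ReLU units, and the piecewise-linear "hat" function (a standard building block in ReLU approximation constructions) by three. The sketch below is illustrative, not the course's notation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def shallow_net(x, W, b, c):
    """One-hidden-layer network f(x) = sum_k c_k relu(w_k x + b_k)."""
    return relu(np.outer(x, W) + b) @ c

# |x| = relu(x) + relu(-x): an exact two-unit shallow network.
W_abs = np.array([1.0, -1.0])
b_abs = np.array([0.0, 0.0])
c_abs = np.array([1.0, 1.0])

# Hat function h(x) = relu(x) - 2 relu(x - 1/2) + relu(x - 1):
# a tent on [0, 1] with peak h(1/2) = 1/2, used in constructions for x^2.
W_hat = np.array([1.0, 1.0, 1.0])
b_hat = np.array([0.0, -0.5, -1.0])
c_hat = np.array([1.0, -2.0, 1.0])

x = np.linspace(-2, 2, 9)
abs_vals = shallow_net(x, W_abs, b_abs, c_abs)
hat_peak = shallow_net(np.array([0.5]), W_hat, b_hat, c_hat)[0]
```

Composing such hat functions across layers is how depth buys faster approximation rates, a theme of this chapter.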

Extended List of References

  1. F. Cucker and D. X. Zhou, Learning Theory: An Approximation Theory Viewpoint, Cambridge University Press, 2007.

  2. N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, 2000.

  3. V. Vapnik, Statistical Learning Theory, John Wiley & Sons, 1998.

  4. B. Schoelkopf and A. J. Smola, Learning with Kernels, MIT Press, Cambridge, 2002.

  5. M. Belkin and P. Niyogi, Semi-supervised learning on Riemannian manifolds, Machine Learning 56 (2004), 209–239.

  6. R. Tibshirani, Regression shrinkage and selection via the lasso, J. Royal Statist. Soc. B 58 (1996), 267–288.

  7. Y. Ying and D. X. Zhou, Online regularized classification algorithms, IEEE Trans. Inform. Theory 52 (2006), 4775–4788.

  8. J. H. Lin and D. X. Zhou, Learning theory of randomized Kaczmarz algorithm, J. Machine Learning Research 16 (2015), 3341–3365.

  9. S. B. Lin, X. Guo, and D. X. Zhou, Distributed learning with regularized least squares, J. Machine Learning Research, 2017, to appear.

  10. Z. C. Guo, S. B. Lin, and D. X. Zhou, Learning theory of distributed spectral algorithms, Inverse Problems, 2017, to appear.

Lecture Notes

Lecture Notes.pdf