Machine learning has been a useful tool for data science and artificial intelligence for decades. Its theory is well developed for several classical tasks such as regression and classification. A recent hot topic in machine learning is deep learning, which has produced impressive results in many practical applications dealing with big data. The purpose of this course is to train students in the mathematical foundations of machine learning, including the well-developed theory for regression, classification, and sparsity, and the less developed study of deep learning with deep neural networks.
Many computer science courses on machine learning emphasize the implementation and application domains of machine learning algorithms. Distinctive features of this course are its rigorous mathematical foundations and its treatment of the hot topic of deep learning. The course presents the latest developments in learning theory, including some currently active research topics.
Dingxuan Zhou, City University of Hong Kong
Room 602, Pao Yue-Kong Library, Shanghai Jiao Tong University
No registration fee. Please register online.
Basic knowledge of probability, linear algebra, and functional analysis.
| July 3 | 09:00 – 11:40 |
| July 5 | 09:00 – 11:40 |
| July 7 | 09:00 – 11:40 |
Framework of the least squares regression, empirical risk minimization, hypothesis space, reproducing kernel Hilbert space, sample error, probability inequalities, covering number, approximation error
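The topics above fit into a standard framework (sketched here for orientation; the notation follows common usage in learning theory, not necessarily the course notes):

```latex
% Least squares regression: a sample z = {(x_i, y_i)}_{i=1}^m is drawn
% i.i.d. from a measure \rho on X x Y. The generalization error and its
% empirical counterpart are
\[
  \mathcal{E}(f) = \int_{X \times Y} (f(x) - y)^2 \, d\rho, \qquad
  \mathcal{E}_{\mathbf z}(f) = \frac{1}{m} \sum_{i=1}^m (f(x_i) - y_i)^2 .
\]
% Empirical risk minimization over a hypothesis space \mathcal{H}:
\[
  f_{\mathbf z} = \arg\min_{f \in \mathcal{H}} \mathcal{E}_{\mathbf z}(f).
\]
% Error decomposition into sample error and approximation error, where
% f_\rho is the regression function and f_{\mathcal H} minimizes
% \mathcal{E} over \mathcal{H}:
\[
  \mathcal{E}(f_{\mathbf z}) - \mathcal{E}(f_\rho)
  \le \underbrace{\mathcal{E}(f_{\mathbf z}) - \mathcal{E}_{\mathbf z}(f_{\mathbf z})
      + \mathcal{E}_{\mathbf z}(f_{\mathcal H}) - \mathcal{E}(f_{\mathcal H})}_{\text{sample error}}
  \; + \; \underbrace{\mathcal{E}(f_{\mathcal H}) - \mathcal{E}(f_\rho)}_{\text{approximation error}} .
\]
```

The sample error is controlled by probability inequalities and covering numbers of the hypothesis space; the approximation error is a question of approximation theory.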
Regularization scheme, representer theorem, reduction of optimization problems, binary classification, support vector machines, misclassification error, Bayes rule, separable distributions, comparison theorem, error bounds
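As a minimal illustration of the regularization scheme and the representer theorem, the following NumPy sketch solves regularized least squares in a Gaussian RKHS; the kernel width and regularization parameter are illustrative choices, and the function names are my own:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.3):
    # Gram matrix K[i, j] = exp(-|x_i - y_j|^2 / (2 sigma^2))
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def krr_fit(X, y, lam=1e-3, sigma=0.3):
    # Regularized least squares in an RKHS: by the representer theorem,
    # the minimizer of (1/m) sum_i (f(x_i) - y_i)^2 + lam ||f||_K^2
    # has the form f = sum_i c_i K(x_i, .), and the coefficient vector
    # solves the finite linear system (K + lam * m * I) c = y.
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, c, X_test, sigma=0.3):
    return gaussian_kernel(X_test, X_train, sigma) @ c
```

The representer theorem is what reduces the infinite-dimensional optimization over the RKHS to an m-dimensional linear system.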
Dimensionality reduction, Laplacian eigenmap, spectral clustering, kernel PCA, semi-supervised learning on manifolds
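A toy sketch of spectral clustering via the normalized graph Laplacian (a simple two-cluster version, with an illustrative Gaussian similarity; not the course's own code):

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    # Similarity graph with Gaussian weights and the normalized graph
    # Laplacian L = I - D^{-1/2} W D^{-1/2}.
    sq = np.sum(X**2, axis=1)
    W = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * sigma**2))
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))
    # The eigenvector for the second-smallest eigenvalue (the Fiedler
    # vector) is nearly constant on each well-connected group; its sign
    # pattern yields a two-way partition.
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] >= 0).astype(int)
```

The same Laplacian eigenvectors serve as low-dimensional coordinates in the Laplacian eigenmap for dimensionality reduction.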
LASSO, elastic net, sparsity, kernel projection machine, empirical features, gradient learning and variable selection
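A minimal coordinate-descent sketch for the LASSO, showing how the l1 penalty induces sparsity through soft-thresholding (parameter choices and names are illustrative):

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding: the proximal map of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Coordinate descent for the LASSO:
    # minimize (1/(2m)) ||y - X w||^2 + lam * ||w||_1.
    m, p = X.shape
    w = np.zeros(p)
    col_sq = np.sum(X**2, axis=0) / m
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed, then a
            # one-dimensional soft-thresholded least squares update.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / m
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w
```

The elastic net modifies only the one-dimensional update, adding an l2 term that shrinks the denominator.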
Online learning, Kaczmarz algorithm, mirror descent algorithms, pairwise learning and ranking, stochastic gradient descent in deep learning
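The randomized Kaczmarz algorithm is a simple instance of online/stochastic iteration; here is a minimal sketch for a consistent linear system (the sampling scheme follows the standard row-norm weighting; iteration count is illustrative):

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=3000, seed=0):
    # Randomized Kaczmarz for a consistent system A x = b: sample row i
    # with probability ||a_i||^2 / ||A||_F^2 and project the current
    # iterate onto the hyperplane {x : a_i . x = b_i}.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.sum(A**2, axis=1)
    probs = row_sq / row_sq.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x = x + (b[i] - A[i] @ x) / row_sq[i] * A[i]
    return x
```

Each projection uses a single row of the data, which is exactly the online setting; stochastic gradient descent generalizes this one-sample-at-a-time update to general loss functions.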
Distributed learning with regularized least squares, integral operators, minimax rates of convergence, distributed learning with spectral algorithms
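The divide-and-conquer idea behind distributed regularized least squares can be sketched in a few lines: each machine solves kernel ridge regression on its own subsample, and the global estimator averages the local ones (kernel, parameters, and names here are illustrative):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.3):
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def distributed_krr_predict(splits, X_test, lam=1e-3, sigma=0.3):
    # Divide-and-conquer: machine j solves regularized least squares on
    # its subsample (X_j, y_j); the global estimator is the average
    # f(x) = (1/k) sum_j f_j(x) of the k local estimators.
    preds = []
    for Xj, yj in splits:
        mj = len(yj)
        cj = np.linalg.solve(
            gaussian_kernel(Xj, Xj, sigma) + lam * mj * np.eye(mj), yj)
        preds.append(gaussian_kernel(X_test, Xj, sigma) @ cj)
    return np.mean(preds, axis=0)
```

The theoretical question treated in the lecture is when this averaged estimator attains the same minimax rates of convergence as the estimator trained on the whole sample.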
Approximation theory of shallow networks, deep neural networks, convolutional deep neural networks, recurrent deep neural networks
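As a tiny illustration of what shallow networks can represent (a standard fact, not taken from the course notes): two ReLU units reproduce the absolute value exactly, the simplest instance of univariate shallow ReLU networks realizing continuous piecewise linear functions.

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

def shallow_net(x, W, b, c):
    # One-hidden-layer network: x -> sum_k c_k * relu(W_k * x + b_k).
    return c @ relu(W * x + b)

# |x| = relu(x) + relu(-x), realized with two hidden units.
W = np.array([1.0, -1.0])
b = np.array([0.0, 0.0])
c = np.array([1.0, 1.0])
```

Approximation theory quantifies how the error of such representations scales with the number of units, depth, and the smoothness of the target function.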
F. Cucker and D. X. Zhou, Learning Theory: An Approximation Theory Viewpoint, Cambridge University Press, 2007.
N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, 2000.
V. Vapnik, Statistical Learning Theory, John Wiley & Sons, 1998.
B. Schölkopf and A. J. Smola, Learning with Kernels, MIT Press, 2002.
M. Belkin and P. Niyogi, Semi-supervised learning on Riemannian manifolds, Machine Learning 56 (2004), 209–239.
R. Tibshirani, Regression shrinkage and selection via the lasso, J. Royal Statist. Soc. B 58 (1996), 267–288.
Y. Ying and D. X. Zhou, Online regularized classification algorithms, IEEE Trans. Inform. Theory 52 (2006), 4775–4788.
J. H. Lin and D. X. Zhou, Learning theory of randomized Kaczmarz algorithm, J. Machine Learning Research 16 (2015), 3341–3365.
S. B. Lin, X. Guo, and D. X. Zhou, Distributed learning with regularized least squares, J. Machine Learning Research, 2017, to appear.
Z. C. Guo, S. B. Lin, and D. X. Zhou, Learning theory of distributed spectral algorithms, Inverse Problems, 2017, to appear.