Conference ID: 921-848-82909
It is widely believed, particularly by researchers outside the traditional optimization community, that second-order methods such as Newton's method are no longer applicable to large-scale optimization problems. This is partially true, but only for optimization models that require neither an accurate solution nor fast computation. In this talk, we shall use some large-scale statistical optimization problems arising from machine learning to explain why second-order methods, in particular sparse nonsmooth Newton methods, can be much faster than first-order methods when used wisely. The key point is to exploit the second-order sparsity of the optimal solutions, in addition to the data sparsity, so that at each iteration the computational cost of a second-order method can be comparable to, or even lower than, that of a first-order method. More interestingly, the advantage of these second-order methods grows as the problems become larger.
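To make the idea concrete, here is a small toy sketch (not from the talk) of how second-order sparsity can shrink the cost of a Newton-type step. The setting, chosen here purely for illustration, is the Lasso problem min 0.5‖Ax−b‖² + λ‖x‖₁: a first-order (proximal gradient) step touches all n variables at O(mn) cost, while a Newton-type step restricted to the current support only needs to solve an r×r system, where r is the number of nonzeros and typically r ≪ n. All problem sizes and thresholds below are arbitrary choices for the demo.

```python
import numpy as np

# Toy Lasso instance: m observations, n variables, k-sparse ground truth
rng = np.random.default_rng(0)
m, n, k = 50, 200, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:k] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)
lam = 0.1 * np.max(np.abs(A.T @ b))  # regularization level (heuristic choice)

def prox_l1(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# First-order baseline: proximal gradient (ISTA); each step costs O(m*n)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(200):
    x = prox_l1(x - (A.T @ (A @ x - b)) / L, lam / L)

# Second-order sparsity: the generalized Hessian only "sees" the support,
# so the Newton-type linear system is r x r instead of n x n.
S = np.flatnonzero(np.abs(x) > 1e-8)   # estimated support
r = S.size                             # typically r << n
As = A[:, S]
# Solve the optimality system restricted to the support and signs
xs = np.linalg.solve(As.T @ As, As.T @ b - lam * np.sign(x[S]))
x_newton = np.zeros(n)
x_newton[S] = xs
print(f"full dimension n = {n}, reduced Newton system size r = {r}")
```

The point of the sketch is the last block: once the support is (approximately) identified, the expensive linear algebra happens only in the r-dimensional subspace, which is why a well-implemented second-order method can match or beat the per-iteration cost of a first-order method on sparse problems.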
Professor Defeng Sun is currently Chair Professor of Applied Optimization and Operations Research at the Hong Kong Polytechnic University. He publishes mainly in convex and non-convex continuous optimization. Together with Professor Kim-Chuan Toh and Dr Liuqin Yang, he was awarded the triennial 2018 Beale–Orchard-Hays Prize for Excellence in Computational Mathematical Programming by the Mathematical Optimization Society. He served as editor-in-chief of the Asia-Pacific Journal of Operational Research from 2011 to 2013, and he now serves as associate editor of Mathematical Programming, SIAM Journal on Optimization, Journal of the Operations Research Society of China, Journal of Computational Mathematics, and Science China: Mathematics. He was elected a SIAM Fellow in 2020.