Workshop on AI & Mathematics

Introduction

This workshop aims to bring together researchers working on the interaction between mathematics and artificial intelligence. Topics include, but are not limited to, the interaction of AI with disciplines in applied and computational mathematics such as approximation theory, numerical analysis, optimization, optimal transport, dynamical systems, and statistical mechanics.

Date

December 5, 2020

Venue

Room 305, No.5 Science Building, Minhang Campus, Shanghai Jiao Tong University

Application and Registration

There is no registration fee. Please register online: Apply Online

Organizing Committee

Speakers

Schedule

Time Speaker Title
09:00 - 09:30 Hao Wu Optimal Transport, Machine Learning and Applications
09:30 - 10:00 Aimin Zhou Evolutionary Multi-objective Optimization Made Faster
10:00 - 10:30 Ke Wei Vectorized Hankel Lift: A Convex Approach for Blind Super-Resolution of Point Sources
10:30 - 11:00 Tea Break  
11:00 - 11:30 Lei Zhang Numerical Homogenization and Neural Network Solutions for Nonlinear Elliptic Multiscale Problem
11:30 - 12:00 Lei Li Normalizing Flows via Unbalanced OT
12:00 - 14:00 Lunch  
14:00 - 14:10 Shi Jin Welcome Remarks
14:10 - 14:40 Jian Sun Deep Learning in Non-Euclidean Space
14:40 - 15:10 Bin Dong Learning to Solve PDEs with Hypernetworks
15:10 - 15:40 Tea Break  
15:40 - 16:10 Xudong Li A Dynamic Programming Approach for Generalized Nearly Isotonic Optimization
16:10 - 16:40 Nana Liu Security and robustness in quantum machine learning: progress in adversarial quantum learning

Program

Optimal Transport, Machine Learning and Applications

Hao Wu, Department of Mathematical Sciences, Tsinghua University

Abstract:
We will briefly review some basic theory and numerical methods for Optimal Transport and introduce its applications in seismology and natural language processing. We will also present our recent work in machine learning.
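Among the numerical methods for optimal transport, the Sinkhorn algorithm for entropy-regularized transport is the standard workhorse. The following is a minimal sketch, not taken from the talk; the histograms and cost matrix are made-up toy data:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=2000):
    """Entropy-regularized optimal transport between histograms mu and nu
    with cost matrix C. Returns the transport plan P (illustrative sketch)."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)            # scale columns to match nu
        u = mu / (K @ v)              # scale rows to match mu
    return u[:, None] * K * v[None, :]

# toy example: transport between two 4-point histograms on a line
x = np.linspace(0, 1, 4)
C = (x[:, None] - x[None, :]) ** 2    # squared-distance cost
mu = np.array([0.4, 0.3, 0.2, 0.1])
nu = np.array([0.1, 0.2, 0.3, 0.4])
P = sinkhorn(mu, nu, C)               # rows sum to mu, columns to nu
```

The entropic regularization parameter `eps` trades off accuracy against the speed and stability of the iteration; smaller values approach the unregularized transport plan but converge more slowly.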


Evolutionary Multi-objective Optimization Made Faster

Aimin Zhou, School of Computer Science and Technology, East China Normal University

Abstract:
In the 1880s, the economists F. Y. Edgeworth and V. Pareto began to study the optimality of multi-objective optimization problems (MOPs), creating a new field of research. Unlike in traditional optimization problems, the optimum of an MOP usually consists of a set of trade-off solutions, called Pareto optimal solutions. In the late 1990s, after more than a century of research, it became possible for the first time to approximate the whole set of Pareto optimal solutions of an MOP in a single run by using evolutionary algorithms. Our work aims to make evolutionary multi-objective optimization faster by using problem-specific knowledge, either given by the decision makers or extracted online by machine learning techniques. This talk will briefly outline recent advances in evolutionary multi-objective optimization, introduce our work on making it faster, and present some examples in artificial intelligence and engineering.
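The notion of Pareto optimality can be made concrete with a small example: for a minimization MOP, a solution is non-dominated if no other solution is at least as good in every objective and strictly better in at least one. A minimal sketch of filtering the non-dominated set (toy data, not from the talk):

```python
def pareto_front(points):
    """Return the non-dominated points of a minimization MOP.

    q dominates p if q <= p in every objective and q != p
    (which forces strict improvement in at least one objective).
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))  # -> [(1, 5), (2, 3), (4, 1)]
```

This brute-force filter is O(n^2) in the number of candidate points; evolutionary multi-objective algorithms maintain and refine such a non-dominated set across generations.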


Vectorized Hankel Lift: A Convex Approach for Blind Super-Resolution of Point Sources

Ke Wei, School of Data Science, Fudan University

Abstract:
We consider the problem of resolving $r$ point sources from $n$ samples at the low end of the spectrum when the point spread functions (PSFs) are not known. Assuming that the spectrum samples of the PSFs lie in a low-dimensional subspace of dimension $s$, this problem can be reformulated as a matrix recovery problem. By exploiting the low-rank structure of the vectorized Hankel matrix associated with the target matrix, we propose a convex approach called Vectorized Hankel Lift. It is shown that $n\gtrsim rs\log^4 n$ samples are sufficient for Vectorized Hankel Lift to achieve exact recovery. In addition, a new variant of the MUSIC method for spectrum estimation in the multiple-snapshot scenario arises naturally from the vectorized Hankel lift framework and is of independent interest.
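The low-rank structure exploited here can be illustrated in the scalar case: the Hankel matrix built from a noiseless sum of $r$ complex sinusoids has rank exactly $r$. A quick numerical check (illustrative only; the signal length and frequencies are made up):

```python
import numpy as np

# a signal that is a sum of r complex sinusoids with distinct frequencies
n, r = 64, 3
freqs = [0.1, 0.25, 0.4]
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)

# build the Hankel matrix H[i, j] = x[i + j]
p = n // 2
H = np.array([[x[i + j] for j in range(n - p + 1)] for i in range(p)])

# H factors as (Vandermonde) @ diag @ (Vandermonde)^T, so its rank is r
rank = np.linalg.matrix_rank(H)
```

Vectorized Hankel Lift generalizes this observation to the blind setting by lifting the unknown signal-subspace coefficients into a structured matrix and penalizing its nuclear norm.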


Numerical Homogenization and Neural Network Solutions for Nonlinear Elliptic Multiscale Problem

Lei Zhang, Institute of Natural Sciences, Shanghai Jiao Tong University

Abstract:
In this talk, we consider the numerical solution of nonlinear monotone elliptic problems with multiple scales, for example, multiscale p-Laplacian equations. We first introduce an iterated numerical homogenization method based on the so-called quasi-norm. We will then introduce a DNN-based numerical method for the same problem. The first part is joint work with Xinliang Liu (SJTU) and Eric Chung (CUHK), and the second part is joint work with Xi’an Li (SJTU) and Zhiqin John Xu (SJTU).


Normalizing Flows via Unbalanced OT

Lei Li, Institute of Natural Sciences, Shanghai Jiao Tong University

Abstract:
A normalizing flow is a mapping between an arbitrary distribution and the standard normal distribution. We propose a normalizing flow based on unbalanced optimal transport. On the one hand, building in the optimal transport term regularizes the trajectories of the particles; on the other hand, the changing weights of the particles allow faster convergence. This is an ongoing project, and I will show some partial results.
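The change-of-variables identity underlying normalizing flows, $\log p_X(x) = \log p_Z(f(x)) + \log|f'(x)|$, can be sketched in one dimension with an affine flow (a generic illustration of the flow mechanism, not the unbalanced-OT construction of the talk):

```python
import numpy as np

# toy affine flow f(x) = (x - mu) / sigma maps N(mu, sigma^2) to N(0, 1)
mu, sigma = 2.0, 3.0

def log_density_via_flow(x):
    """Log-density of x under the flow model via change of variables."""
    z = (x - mu) / sigma                        # forward map f(x)
    log_det = -np.log(sigma)                    # log |f'(x)|
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))  # standard normal log-density
    return log_pz + log_det

def log_normal(x):
    """Closed-form N(mu, sigma^2) log-density, for comparison."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
```

The two functions agree up to floating-point rounding; practical flows compose many learned invertible maps and accumulate the log-determinant terms in the same way.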


Deep Learning in Non-Euclidean Space

Jian Sun, School of Mathematics and Statistics, Xi’an Jiao Tong University

Abstract:
Traditional deep networks are commonly defined in Euclidean space, either over 3D/2D image domains or sequential data. In realistic scenarios, however, the data may be irregular or distributed on a manifold or graph. In such cases, traditional deep networks fail to work or do not fully exploit the underlying data structure in non-Euclidean space. Along this research direction, in this talk I will introduce the research background, recent advances, and our preliminary research on deep learning in non-Euclidean space, with applications to 3D object recognition, image segmentation, and domain adaptation.


Learning to Solve PDEs with Hypernetworks

Bin Dong, Institute for Artificial Intelligence, Peking University

Abstract:
Deep learning continues to dominate machine learning and has been successful in computer vision, natural language processing, and other fields. Its impact has now expanded to many research areas in science and engineering. In this talk, I will focus on some recent impact of deep learning on computational mathematics. I will present our preliminary attempt to establish a deep-reinforcement-learning-based framework for solving 1D scalar conservation laws, and a meta-learning approach for solving linear parameterized PDEs based on the multigrid method. Both approaches adopt properly designed hypernetworks, which grant the proposed solvers superior generalization ability.


A Dynamic Programming Approach for Generalized Nearly Isotonic Optimization

Xudong Li, School of Data Science, Fudan University

Abstract:
In this talk, we propose and study a generalized nearly isotonic optimization (GNIO) model, which recovers, as special cases, many classic problems in shape-constrained statistical regression, such as isotonic regression, nearly isotonic regression, and unimodal regression. We develop an efficient and easy-to-implement dynamic programming algorithm for solving the proposed model, whose recursive nature is carefully uncovered and exploited. For special ℓ2-GNIO problems, implementation details and the optimal O(n) running-time analysis of our algorithm are discussed. Numerical experiments on both simulated and real data sets, including a comparison between our approach and the powerful commercial solver Gurobi for solving ℓ1-GNIO and ℓ2-GNIO problems, are presented to demonstrate the high efficiency and robustness of the proposed algorithm.
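As a point of reference for the classic special case recovered by GNIO, isotonic regression under the ℓ2 loss admits the well-known Pool Adjacent Violators algorithm, itself a simple O(n) stack-based recursion (an illustrative sketch, not the GNIO algorithm from the talk):

```python
def isotonic_regression(y):
    """Pool Adjacent Violators: least-squares fit of y subject to
    x_1 <= x_2 <= ... <= x_n. Each element is pushed and popped at
    most once, so the total cost is O(n)."""
    levels, counts = [], []           # stack of pooled blocks (mean, size)
    for v in y:
        level, count = float(v), 1
        # merge with previous blocks while monotonicity is violated
        while levels and levels[-1] > level:
            prev_level, prev_count = levels.pop(), counts.pop()
            level = (prev_level * prev_count + level * count) / (prev_count + count)
            count += prev_count
        levels.append(level)
        counts.append(count)
    # expand the blocks back to a full-length solution
    fit = []
    for level, count in zip(levels, counts):
        fit.extend([level] * count)
    return fit

print(isotonic_regression([3.0, 1.0, 2.0, 4.0]))  # -> [2.0, 2.0, 2.0, 4.0]
```

GNIO generalizes this setting by allowing asymmetric penalties on the violations rather than hard monotonicity constraints, which is why a more general dynamic programming recursion is needed.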


Security and robustness in quantum machine learning: progress in adversarial quantum learning

Nana Liu, Institute of Natural Sciences, Shanghai Jiao Tong University

Abstract:
The central questions in adversarial quantum learning lie in identifying the security vulnerabilities of quantum machine learning, protecting learning algorithms against security attacks, and leveraging quantum features for security and other robustness advantages. This is relevant not only for real-life deployment of quantum machine learning algorithms in future quantum networks, but is also an important platform for asking fundamental questions in machine learning with quantum data.
In this talk, I’ll introduce the emerging area of adversarial quantum learning and outline its recent progress. I’ll present some key open questions and connections between this area and more traditional topics in quantum information theory, then suggest ways forward to seek alternative measures of quantum advantage for machine learning in terms of resistance to adversaries.