International Workshop on Recent Advances on Mathematical Imaging and Data Science (July 2-6, 2019, SJTU)

Learning Guarantees with Mirror Stratifiable Regularization

Speaker

Jalal Fadili, Ecole Nationale Supérieure d'Ingénieurs de Caen, France

Time

03 Jul, 10:50 - 11:20

Abstract

Low-complexity non-smooth convex regularizers are routinely used to impose structure on the coefficients of linear predictors in inverse problems and machine learning. Model consistency then amounts to provably selecting the correct structure (e.g., support or rank) by regularized empirical risk minimization. We will present results showing that model consistency holds whenever an appropriate non-degeneracy condition is satisfied. However, this non-degeneracy condition typically fails for highly correlated designs, and regularization methods are then observed to select larger models. We will provide the theoretical underpinning of this behaviour by introducing a class of convex regularizers with a strong geometric structure, which we coin "mirror-stratifiable". This class encompasses all regularizers routinely used in image (and data) processing, machine learning, and statistics. We show that the mirror-stratifiable structure makes it possible to establish sharp model consistency guarantees that apply both to optimal solutions of the learning problem and to the iterates computed by a certain class of stochastic proximal splitting algorithms.
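
As a concrete illustration of the setting described in the abstract, the sketch below (not from the talk) runs the forward-backward (proximal gradient) iteration on the Lasso, the prototypical low-complexity regularized problem: under a non-degeneracy condition of the kind mentioned above, the iterates identify the correct support after finitely many steps. All names, parameter values, and the example data are illustrative assumptions, not part of the original page.

    import numpy as np

    def soft_threshold(x, tau):
        # Proximity operator of tau * ||.||_1 (soft-thresholding).
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def lasso_forward_backward(A, y, lam, n_iter=500):
        # Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
        # Step size 1/L, with L the Lipschitz constant of the smooth gradient.
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                   # forward (gradient) step
            x = soft_threshold(x - grad / L, lam / L)  # backward (proximal) step
        return x

    # Illustrative usage: a sparse ground truth whose support the iterates recover.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = lasso_forward_backward(A, y, lam=0.1)
    print("estimated support:", np.flatnonzero(np.abs(x_hat) > 1e-8))

Here the l1 norm plays the role of the mirror-stratifiable regularizer, and "model consistency" means the printed support matches {3, 17, 42}; with highly correlated columns of A, the same recipe would tend to select a larger support, which is the phenomenon the talk analyzes.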
