On Bias in Stochastic Gradient Descent

Speaker

Jingwei Liang, University of Cambridge

Time

2020.01.09 09:30-10:30

Venue

Room 306, No.5 Science Building

Abstract

In this talk, I will present a uniform analysis of biased stochastic gradient methods for minimizing convex, strongly convex, and non-convex composite objectives, and identify settings where bias is useful in stochastic gradient estimation. In particular, I will show that biased gradient estimators can achieve better convergence rates than unbiased estimators when the bias decreases the estimator's mean-squared error. The analysis allows us to extend proximal support to biased algorithms, including SAG and SARAH, for the first time in the convex setting. The framework also allows us to develop a new algorithm, Stochastic Average Recursive GradiEnt (SARGE), that achieves the oracle complexity lower bound for non-convex, finite-sum objectives and requires strictly fewer calls to a stochastic gradient oracle per iteration than SVRG and SARAH. Numerical experiments supporting these results will be discussed.
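To illustrate what a biased gradient estimator looks like in practice, the sketch below implements a SAG-style estimator for a finite-sum least-squares problem. This is a minimal, hypothetical example (the problem, step size, and all names are assumptions for illustration, not the speaker's implementation): SAG stores the most recently computed gradient of each component, and the average of this table is a biased estimate of the full gradient because most stored entries are stale, yet only one fresh stochastic gradient is evaluated per iteration.

```python
import numpy as np

# Finite-sum least-squares objective:
#   f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def component_grad(x, i):
    """Gradient of the i-th component 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def sag(x0, step=0.05, iters=2000):
    """SAG-style biased stochastic gradient descent (illustrative sketch)."""
    x = x0.copy()
    table = np.zeros((n, d))         # stored component gradients (mostly stale)
    avg = table.mean(axis=0)         # their average: a biased full-gradient estimate
    for _ in range(iters):
        i = rng.integers(n)          # one fresh stochastic gradient per iteration
        g_new = component_grad(x, i)
        avg += (g_new - table[i]) / n  # update the running average in O(d)
        table[i] = g_new
        x -= step * avg              # descend along the biased estimate
    return x

x_sag = sag(np.zeros(d))
full_grad = A.T @ (A @ x_sag - b) / n
print("norm of full gradient at SAG iterate:", np.linalg.norm(full_grad))
```

In this sketch the bias comes from averaging stale component gradients; as the iterates stabilize, the stale entries approach the true component gradients, so the estimator's error shrinks even though each update uses only one fresh oracle call.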