[Statlist] REMINDER: Next talk of Foundations of Data Science Seminar with Francis Bach, INRIA and Ecole normale superieure PSL Research University, Paris, France, June 26, 2019

Maurer Letizia letiziamaurer at ethz.ch
Mon Jun 24 08:56:09 CEST 2019


REMINDER:


ETH Foundations of Data Science



We are pleased to announce the following talk:


Organisers:

Profs. Bölcskei Helmut - Bühlmann Peter - Buhmann Joachim M. - Hofmann Thomas - Krause Andreas - Lapidoth Amos - Loeliger Hans-Andrea - Maathuis Marloes H. - Meinshausen Nicolai - Rätsch Gunnar - Van de Geer Sara
_______________________________________________________________________________________________________________________________________________________________________________________

with Francis Bach, INRIA and Ecole normale superieure PSL Research University, Paris, France

Wednesday, June 26, 2019,
ETH Zurich, HG F 3
at 16:00
***************************************************************************************************************


The talk will be followed by an apéro in HG G 69 (D-MATH common room).




Title:

Statistical Optimality of Stochastic Gradient Descent on Hard Learning Problems through Multiple Passes



Abstract:

We consider stochastic gradient descent (SGD) for least-squares regression with potentially several passes over the data. While several passes have been widely reported to perform better in practice in terms of predictive performance on unseen data, the existing theoretical analysis of SGD suggests that a single pass is statistically optimal. While this is true for low-dimensional easy problems, we show that for hard problems, multiple passes lead to statistically optimal predictions while a single pass does not; we also show that in these hard models, the optimal number of passes over the data increases with the sample size. In order to define the notion of hardness and show that our predictive performance is optimal, we consider potentially infinite-dimensional models and notions typically associated with kernel methods, namely, the decay of the eigenvalues of the covariance matrix of the features and the complexity of the optimal predictor as measured through the covariance matrix. We illustrate our results on synthetic experiments with non-linear kernel methods and on a classical benchmark with a linear model. (Joint work with Loucas Pillaud-Vivien and Alessandro Rudi.)
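For readers who want a concrete picture of the setting, the sketch below shows multi-pass SGD for least-squares regression on synthetic data (Python/NumPy). It only illustrates the mechanics of sweeping the data several times; the dimensions, step size, noise level, and numbers of passes are illustrative assumptions, not values from the talk or the paper.

    # Minimal sketch (not the authors' code): multi-pass SGD for least-squares
    # regression on synthetic data. All problem sizes and step sizes below are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    n, d = 1000, 50                          # sample size and feature dimension (assumed)
    X = rng.standard_normal((n, d))
    w_star = rng.standard_normal(d) / np.sqrt(d)
    y = X @ w_star + 0.1 * rng.standard_normal(n)

    def sgd_least_squares(X, y, n_passes, step=0.01):
        """Plain SGD on the squared loss, sweeping the data n_passes times."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_passes):
            for i in rng.permutation(n):
                grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x_i^T w - y_i)^2
                w -= step * grad
        return w

    for n_passes in (1, 5, 20):
        w = sgd_least_squares(X, y, n_passes)
        mse = np.mean((X @ w - y) ** 2)
        print(f"{n_passes:2d} pass(es): training MSE = {mse:.4f}")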



