Quarterly of Applied Mathematics

Online ISSN 1552-4485; Print ISSN 0033-569X



Universal halting times in optimization and machine learning

Authors: Levent Sagun, Thomas Trogdon and Yann LeCun
Journal: Quart. Appl. Math. 76 (2018), 289-301
MSC (2010): Primary 68Q32, 78M50
DOI: https://doi.org/10.1090/qam/1483
Published electronically: September 20, 2017

Abstract: We present empirical evidence that the halting times for a class of optimization algorithms are universal. The algorithms we consider come from quadratic optimization, spin glasses, and machine learning. A universality theorem is given in the case of the quadratic gradient descent flow. More precisely, given an algorithm, which we take to comprise both the optimization routine and the form of the random landscape, the fluctuations of its halting time follow a distribution that, after centering and scaling, appears invariant under changes in the distribution on the landscape; that is, universality is present.
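The experiment described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's exact protocol: it runs gradient descent on random quadratic objectives drawn from two different entry distributions (Gaussian and Bernoulli), records the halting time as the first iteration at which the gradient norm drops below a tolerance, and then centers and scales each sample of halting times. The matrix normalization, eigenvalue shift, step size, and tolerance are all illustrative choices, not values taken from the paper.

```python
import numpy as np

def halting_time(A, x0, eps=1e-6, max_iter=100_000):
    """Iterations of gradient descent on f(x) = 0.5 x^T A x until ||grad f|| < eps."""
    lr = 1.0 / np.linalg.eigvalsh(A)[-1]  # step size 1/lambda_max (eigvalsh sorts ascending)
    x = x0.copy()
    for t in range(1, max_iter + 1):
        g = A @ x
        if np.linalg.norm(g) < eps:
            return t
        x = x - lr * g
    return max_iter

def sample_times(entry_sampler, n=50, trials=200, seed=0):
    """Halting times over random positive-definite quadratics with i.i.d.-based entries."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(trials):
        M = entry_sampler(rng, n)
        A = (M + M.T) / np.sqrt(2 * n)  # Wigner-type symmetrization and scaling
        A = A + 3.0 * np.eye(n)         # shift spectrum (roughly [-2, 2] -> [1, 5]) to make A pos. def.
        x0 = rng.standard_normal(n)
        times.append(halting_time(A, x0))
    return np.array(times, dtype=float)

# Two entry distributions with matching mean and variance.
gauss = lambda rng, n: rng.standard_normal((n, n))
bern = lambda rng, n: rng.choice([-1.0, 1.0], size=(n, n))

t_g = sample_times(gauss)
t_b = sample_times(bern)

# Center and scale each sample of halting times; under the universality
# claim, the two normalized empirical distributions should look alike.
z_g = (t_g - t_g.mean()) / t_g.std()
z_b = (t_b - t_b.mean()) / t_b.std()
```

Comparing histograms (or quantiles) of `z_g` and `z_b` is the kind of evidence the abstract refers to: the raw halting times depend on the ensemble, but their centered and scaled fluctuations appear distribution-independent.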


Additional Information

Levent Sagun
Affiliation: Department of Mathematics, New York University, New York, New York 10012
Email: sagun@cims.nyu.edu

Thomas Trogdon
Affiliation: Department of Mathematics, University of California, Irvine, California 92697
Email: ttrogdon@math.uci.edu

Yann LeCun
Affiliation: Department of Computer Science, New York University, New York, New York 10012
Email: yann@cs.nyu.edu

Received by editor(s): June 16, 2017
Article copyright: © Copyright 2017 Brown University

American Mathematical Society