Quarterly of Applied Mathematics

Online ISSN 1552-4485; Print ISSN 0033-569X

Universal halting times in optimization and machine learning


Authors: Levent Sagun, Thomas Trogdon and Yann LeCun
Journal: Quart. Appl. Math. 76 (2018), 289-301
MSC (2010): Primary 68Q32, 78M50
DOI: https://doi.org/10.1090/qam/1483
Published electronically: September 20, 2017
MathSciNet review: 3769897

Abstract: We present empirical evidence that the halting times for a class of optimization algorithms are universal. The algorithms we consider come from quadratic optimization, spin glasses and machine learning. A universality theorem is given in the case of the quadratic gradient descent flow. More precisely, given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time of the algorithm follow a distribution that, after centering and scaling, appears invariant under changes in the distribution on the landscape — universality is present.
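To make the centering-and-scaling procedure concrete, the following is a minimal illustrative sketch, not the authors' code: the Wishart-type landscape H = W W^T / M, the step size rule, the stopping tolerance eps, and the helper names halting_time and centered_scaled are assumptions made here for illustration. The sketch runs gradient descent on random quadratics f(x) = (1/2) x^T H x, records the number of iterations until the gradient norm falls below eps, and compares the centered and scaled halting times obtained from two different entry distributions for W.

    import numpy as np

    def halting_time(N=100, eps=1e-8, entries="gaussian", rng=None):
        """Iterations of gradient descent on f(x) = 0.5 * x^T H x until ||grad f|| < eps."""
        rng = rng if rng is not None else np.random.default_rng()
        M = 2 * N  # aspect ratio 1/2 keeps H well conditioned (illustrative choice)
        if entries == "gaussian":
            W = rng.standard_normal((N, M))
        else:  # mean-zero, variance-one uniform entries
            W = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(N, M))
        H = W @ W.T / M                          # Wishart-type random landscape (assumption)
        x = rng.standard_normal(N)               # random starting point
        step = 0.9 / np.linalg.eigvalsh(H)[-1]   # safe step size below 2 / lambda_max
        t, g = 0, H @ x
        while np.linalg.norm(g) > eps:
            x = x - step * g
            g = H @ x
            t += 1
        return t

    def centered_scaled(samples):
        """Center and scale halting times to mean 0 and variance 1."""
        s = np.asarray(samples, dtype=float)
        return (s - s.mean()) / s.std()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        times_g = [halting_time(entries="gaussian", rng=rng) for _ in range(200)]
        times_u = [halting_time(entries="uniform", rng=rng) for _ in range(200)]
        # If halting times are universal, the two centered-and-scaled histograms
        # should be close despite the different entry distributions of W.
        bins = np.linspace(-3, 3, 13)
        print(np.histogram(centered_scaled(times_g), bins=bins)[0])
        print(np.histogram(centered_scaled(times_u), bins=bins)[0])

Under the universality hypothesis of the paper, the two printed histograms should look approximately the same even though the entries of W are drawn from different distributions; only the raw means and standard deviations of the halting times are expected to depend on the ensemble.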





Additional Information

Levent Sagun
Affiliation: Department of Mathematics, New York University, New York, New York 10012
Email: sagun@cims.nyu.edu

Thomas Trogdon
Affiliation: Department of Mathematics, University of California, Irvine, California 92697
MR Author ID: 965414
ORCID: 0000-0002-6955-4154
Email: ttrogdon@math.uci.edu

Yann LeCun
Affiliation: Department of Computer Science, New York University, New York, New York 10012
MR Author ID: 759382
Email: yann@cs.nyu.edu

Received by editor(s): June 16, 2017
Published electronically: September 20, 2017
Article copyright: © Copyright 2017 Brown University