QN-like variable storage conjugate gradients

Published in: Mathematical Programming

Abstract

Both conjugate gradient and quasi-Newton methods are quite successful at minimizing smooth nonlinear functions of several variables, and each has its advantages. In particular, conjugate gradient methods require much less storage to implement than a quasi-Newton code and therefore find application when storage is limited. They are, however, slower, so there have recently been attempts to combine CG and QN algorithms so as to obtain an algorithm with good convergence properties and low storage requirements. One such method is the code CONMIN due to Shanno and Phua; it has proven quite successful, but it has one limitation: it has no middle ground. It either operates as a quasi-Newton code using O(n²) storage locations, or as a conjugate gradient code using 7n locations, but it cannot take advantage of the not unusual situation in which more than 7n locations are available yet a quasi-Newton code would require an excessive amount of storage.
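The low-storage regime described above can be made concrete. The following sketch (purely illustrative — it is neither CONMIN nor the authors' method, and all function names are our own) is a Fletcher–Reeves nonlinear conjugate gradient iteration [5] with an Armijo backtracking line search; it keeps only a few vectors of length n, in contrast to the O(n²) matrix a full quasi-Newton code stores.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-8):
    """Fletcher-Reeves nonlinear CG sketch with Armijo backtracking.
    Storage is only a few n-vectors (x, g, d) -- the low-memory
    regime the abstract contrasts with O(n^2) quasi-Newton codes."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:                    # safeguard: fall back to steepest descent
            d = -g
        t, fx = 1.0, f(x)                    # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        if (k + 1) % len(x0) == 0:           # periodic restart every n steps
            d = -g_new
        x, g = x_new, g_new
    return x

# usage: minimize the convex quadratic f(x) = 0.5 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = nonlinear_cg(f, grad, np.zeros(2))  # approaches solve(A, b)
```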

In this paper we present a way of looking at conjugate gradient algorithms which was in fact given by Shanno and Phua but which we carry further, emphasize and clarify. This applies in particular to Beale's 3-term recurrence relation. Using this point of view, we develop a new combined CG-QN algorithm which can use whatever storage is available; CONMIN occurs as a special case. We present numerical results to demonstrate that the new algorithm is never worse than CONMIN and that it is almost always better if even a small amount of extra storage is provided.
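The variable-storage idea can be illustrated in general terms by a limited-memory quasi-Newton update in the spirit of Nocedal [8] — a sketch only, not a reconstruction of the authors' algorithm; the function names and the Armijo line search are our own choices. Keeping just the last m correction pairs costs O(mn) storage, so m tunes the method between a CG-like iteration and a full quasi-Newton code.

```python
import numpy as np
from collections import deque

def lbfgs(f, grad, x0, m=5, max_iter=200, tol=1e-8):
    """Limited-memory BFGS sketch: store only the last m (s, y)
    correction pairs, i.e. O(m*n) storage, so m interpolates between
    a CG-like method (small m) and a full quasi-Newton code (large m)."""
    x = x0.astype(float)
    g = grad(x)
    pairs = deque(maxlen=m)                  # most recent m correction pairs
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # two-loop recursion: compute d = -H g from the stored pairs
        q = g.copy()
        alphas = []
        for s, y in reversed(pairs):         # newest pair first
            a = s.dot(q) / y.dot(s)
            alphas.append(a)
            q -= a * y
        if pairs:
            s, y = pairs[-1]
            q *= y.dot(s) / y.dot(y)         # scaled initial Hessian approximation
        for (s, y), a in zip(pairs, reversed(alphas)):
            b = y.dot(q) / y.dot(s)
            q += (a - b) * s
        d = -q
        t, fx = 1.0, f(x)                    # Armijo backtracking line search
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if y.dot(s) > 1e-12:                 # keep the pair only if curvature is positive
            pairs.append((s, y))
        x, g = x_new, g_new
    return x
```

With m = 0 the iteration reduces to steepest descent; small m behaves roughly like a conjugate gradient method, and increasing m spends whatever extra storage is available on a better Hessian approximation — the kind of middle ground the abstract argues for.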


Bibliography

  1. A. Buckley, “A combined conjugate gradient quasi-Newton minimization algorithm”, Mathematical Programming 15 (1978) 200–210.
  2. A. Buckley, “Extending the relationship between the conjugate gradient and BFGS algorithms”, Mathematical Programming 15 (1978) 343–348.
  3. A. Buckley, “Conjugate gradient methods”, in: M.J.D. Powell, ed., Nonlinear Optimization 1981, Proceedings of the NATO Advanced Research Institute on Nonlinear Optimization (Academic Press, London, 1982) pp. 17–22.

  4. A. Buckley, “A portable package for testing minimization algorithms”, in: John M. Mulvey, ed., Proceedings of the COAL Conference on Mathematical Programming Software, Boulder, Colorado (Springer, New York, 1982) pp. 226–235.

  5. R. Fletcher and C.M. Reeves, “Function minimization by conjugate gradients”, Computer Journal 7 (1964) 149–154.

  6. R. Fletcher, “A Fortran subroutine for minimization by the method of conjugate gradients”, Report R7073, U.K. A.E.R.E., Harwell, England (1972).

  7. L. Nazareth, “A relationship between the BFGS and conjugate gradient algorithms and its implications for new algorithms”, SIAM Journal on Numerical Analysis 16 (1979) 794–800.

  8. J. Nocedal, “Updating quasi-Newton matrices with limited storage”, Mathematics of Computation 35 (1980) 773–782.

  9. S.S. Oren and E. Spedicato, “Optimal conditioning of self-scaling variable metric algorithms”, Mathematical Programming 10 (1976) 70–90.

  10. A. Perry, “A modified conjugate gradient algorithm”, Discussion paper 229, Center for Mathematical Studies in Economics and Management Science, Northwestern University (1976).

  11. M.J.D. Powell, “Restart procedures for the conjugate gradient method”, Mathematical Programming 12 (1977) 241–254.

  12. D.F. Shanno, “Conjugate gradient methods with inexact searches”, Mathematics of Operations Research 3 (1978) 244–256.

  13. D.F. Shanno and K.-H. Phua, “Numerical comparison of several variable metric algorithms”, Journal of Optimization Theory and Applications 25 (1978) 507–518.

  14. D.F. Shanno, “Remark on Algorithm 500”, ACM Transactions on Mathematical Software 6 (1980) 618–622.

  15. Ph. Toint, “Some numerical results using a sparse matrix updating formula in unconstrained optimization”, Mathematics of Computation 32 (1978) 839–851.

Additional information

The authors wish to express their appreciation for the support of the Natural Sciences and Engineering Research Council through Operating Grant A8962 (Buckley) and of the National Research Council of Canada through a Postgraduate Scholarship (LeNir).


Buckley, A., LeNir, A. QN-like variable storage conjugate gradients. Mathematical Programming 27, 155–175 (1983). https://doi.org/10.1007/BF02591943

