Proximal methods for nonlinear programming: double regularization and inexact subproblems


Abstract

This paper describes the first phase of a project attempting to construct an efficient general-purpose nonlinear optimizer using an augmented Lagrangian outer loop with a relative error criterion, and an inner loop employing a state-of-the-art conjugate gradient solver. The outer loop can also employ double-regularized proximal kernels, a fairly recent theoretical development that leads to fully smooth subproblems. We first enhance the existing theory to show that our approach is globally convergent in both the primal and dual spaces when applied to convex problems. We then present an extensive computational evaluation using the CUTE test set, showing that some aspects of our approach are promising, but some are not. These conclusions in turn lead to additional computational experiments suggesting where to focus our theoretical and computational efforts next.
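To make the outer/inner architecture concrete, here is a minimal Python/SciPy sketch of a generic augmented Lagrangian (method of multipliers) loop whose smooth subproblems are solved inexactly by a nonlinear conjugate gradient routine. It is an illustration under stated assumptions only: the equality-constrained model problem, the function names, and the simple feasibility-based inner tolerance are ours, and the sketch does not implement the paper's relative error acceptance criterion or its double-regularized proximal kernels.

```python
# Illustrative sketch (not the authors' code): method of multipliers for
#   min f(x)  subject to  c(x) = 0,
# with each smooth subproblem solved inexactly by nonlinear conjugate gradients.
import numpy as np
from scipy.optimize import minimize


def augmented_lagrangian(f, grad_f, c, jac_c, x0, lam0,
                         rho=10.0, tol=1e-6, max_outer=50):
    x = np.asarray(x0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    for _ in range(max_outer):
        lam_k = lam.copy()  # freeze multipliers for this subproblem

        # Augmented Lagrangian and its gradient for the current multipliers.
        def L(x):
            cx = c(x)
            return f(x) + lam_k @ cx + 0.5 * rho * (cx @ cx)

        def grad_L(x):
            cx = c(x)
            return grad_f(x) + jac_c(x).T @ (lam_k + rho * cx)

        # Inner loop: solve the smooth subproblem inexactly with nonlinear CG,
        # using a loose-then-tight tolerance as a crude stand-in for a true
        # relative error criterion.
        inner_tol = max(tol, 0.1 * np.linalg.norm(c(x)))
        res = minimize(L, x, jac=grad_L, method="CG",
                       options={"gtol": inner_tol, "maxiter": 500})
        x = res.x

        # Outer loop: first-order multiplier update, then an approximate
        # KKT check for the original problem.
        lam = lam_k + rho * c(x)
        stationarity = np.linalg.norm(grad_f(x) + jac_c(x).T @ lam)
        feasibility = np.linalg.norm(c(x))
        if stationarity < tol and feasibility < tol:
            break
    return x, lam


# Tiny example: min x0^2 + x1^2  s.t.  x0 + x1 = 1; the solution is (0.5, 0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
grad_f = lambda x: 2.0 * x
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1.0]])
x_star, lam_star = augmented_lagrangian(f, grad_f, c, jac_c, [0.0, 0.0], [0.0])
print(x_star, lam_star)
```

Loosening the inner tolerance early and tightening it as the iterates approach feasibility only roughly imitates inexact subproblem solution; the relative error criterion analyzed in the paper instead ties the allowed inexactness to quantities generated by the proximal step itself.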


Author information

Corresponding author

Correspondence to Jonathan Eckstein.

Additional information

Dedicated to José Mario Martínez on the occasion of his 60th birthday.

This research was supported in part by a Faculty Research Grant from Rutgers Business School—Newark and New Brunswick.

This research was partially carried out while Paulo J.S. Silva was visiting RUTCOR and IMECC-UNICAMP. Supported by CNPq (grant 303030/2007-0), FAPESP (grant 2008/03823-0), and PRONEX-Optimization.

About this article

Cite this article

Eckstein, J., Silva, P.J.S. Proximal methods for nonlinear programming: double regularization and inexact subproblems. Comput Optim Appl 46, 279–304 (2010). https://doi.org/10.1007/s10589-009-9274-1
