Invited Review
Semi-infinite programming

https://doi.org/10.1016/j.ejor.2006.08.045

Abstract

A semi-infinite programming problem is an optimization problem in which finitely many variables appear in infinitely many constraints. This model arises naturally in a wealth of applications in different fields of mathematics, economics and engineering. The paper, intended as a compromise between an introduction and a survey, treats the theoretical basis, numerical methods, applications and historical background of the field.

Introduction

A semi-infinite program (SIP) is an optimization problem in finitely many variables $x=(x_1,\dots,x_n)\in\mathbb{R}^n$ on a feasible set described by infinitely many constraints:

$$P:\quad \min_x\ f(x)\quad \text{s.t.}\quad g(x,t)\le 0\quad \forall t\in T, \tag{1}$$

where $T$ is an infinite index set. For the sake of brevity, we omit additional equality constraints $h_i(x)=0$, $i=1,\dots,m$.

By $F$ we denote the feasible set of $P$, whereas $v\coloneqq\inf\{f(x)\mid x\in F\}$ is the optimal value, and $S\coloneqq\{\bar{x}\in F\mid f(\bar{x})=v\}$ is the optimal set or set of minimizers of the problem. We say that $P$ is feasible or consistent if $F\neq\emptyset$, and set $v=+\infty$ when $F=\emptyset$. With the only exception of Section 4 (Linear SIP) and Section 6.3 (Homotopy methods), we assume that $f$ is continuously differentiable on $\mathbb{R}^n$, that $T$ is a compact set in $\mathbb{R}^m$, and that the functions $g(\cdot,\cdot)$ and $\nabla_x g(\cdot,\cdot)$ are continuous on $\mathbb{R}^n\times T$.

An important special case is the linear semi-infinite problem (LSIP), where the objective function $f$ and the constraint function $g$ are linear in $x$:

$$P:\quad \min_x\ c^\top x\quad \text{s.t.}\quad a(t)^\top x\ge b(t)\quad \forall t\in T. \tag{2}$$

We also study here the generalized SIP (GSIP), for which the index set $T=T(x)$ is allowed to depend on $x$:

$$P:\quad \min_x\ f(x)\quad \text{s.t.}\quad g(x,t)\le 0\quad \forall t\in T(x). \tag{3}$$

During the last five decades the field of semi-infinite programming has undergone a tremendous development. More than 1000 articles and 10 books have been published on the theory, numerical methods and applications of SIP.

Although the origins of SIP are related to Chebyshev approximation, to the classical work of Haar on linear semi-infinite systems [41], and to the Fritz John optimality condition [52], the term SIP was coined in 1962 by Charnes, Cooper and Kortanek in some papers devoted to LSIP [16], [17], [18]. The last author, who contributed significantly to the development of the first applications of SIP in economics, game theory, mechanics, statistical inference, etc., has recently pointed out [63] the historical role of a paper published by Tschernikow in 1963 [95]. Gustafson and Kortanek proposed, during the early 1970s, the first numerical methods for SIP models arising in applications (see, for instance, [40]). The publication around 1980 of the following six books converted SIP into a mature and independent subfield of optimization: two volumes of lecture notes, edited by Hettich [43] and by Fiacco and Kortanek [24], and four monographs, by Tichatschke [93], Glashoff and Gustafson [26], Hettich and Zencke [46] (providing numerical methods and applications to approximation problems) and Brosowski [11] (devoted to stability in SIP). More recently, Goberna and López presented in [30] an extensive approach to LSIP, covering both theory and numerical aspects. Reputed optimization books devote some chapters to SIP, e.g., Krabs [64], Anderson and Nash [2], Guddat, Guerra and Jongen [39], Bonnans and Shapiro [8] and Polak [73]. We also mention the review article of Polak [72] (on the mathematical foundations of feasible-direction methods) and [48], where Hettich and Kortanek surveyed, in a superb manner, theoretical results, methods and applications of SIP. Recently Goberna [27] and Goberna and López [31], [32] reviewed the LSIP model.
Following the tracks of [3], [11], stability analysis in SIP has become an important research issue in recent years (see, e.g., [12], [13], [14], [15], [25], [29], [33], [34], [56], [62], [79], [80], as a sample of recent papers on this topic). Since a first contribution [47], the GSIP model (3) has become a topic of intensive research. The interested reader is referred, e.g., to [57], [60], [84].

The paper is organized as follows. After fixing the notation in Section 1.4, Section 2 gives an account of a representative collection of motivating SIP and GSIP models in many different application fields. Section 3 presents the first order (primal and dual) optimality conditions. Section 4 is focused on different families of LSIP problems (continuous, FM and LFM), and the Haar duality theory is discussed in detail. Also in Section 4, some cone constrained optimization problems (in particular, the semi-definite and the second order conic programming problems) are presented as special cases of LSIP. Section 5 surveys the second order optimality conditions, although proofs are not included due to the technical difficulties of the subject. Section 6 introduces the principles of the main algorithmic approaches. Special attention is paid to the discretization methods, including some results about the discretizability of the general LSIP (with arbitrary index set), and to the exchange methods, which are, in general, more efficient than the pure discretization algorithms. The last section, Section 7, is devoted to exploring the relationship between the GSIP model and other important classes of optimization problems, such as bi-level problems and mathematical programs with equilibrium constraints. The paper finishes with the list of cited references, but the reader will find an exhaustive bibliography on SIP and GSIP in [100].

$0_p$ will denote the null-vector of the Euclidean space $\mathbb{R}^p$, and $\|\cdot\|$ represents the Euclidean norm.

If $X\subset\mathbb{R}^p$, by $\operatorname{aff}X$, $\operatorname{conv}X$, $\operatorname{cone}X$, $D(X,x)$, and $X^{0}$ we shall denote the affine hull of $X$, the convex hull of $X$, the conical convex hull of $X$ (always including the null-vector), the cone of feasible directions of $X$ at $x$, and the positive polar cone of $X$ ($X^{0}\coloneqq\{d\in\mathbb{R}^p\mid d^\top x\ge 0\ \forall x\in X\}$), respectively. From the topological side, $\operatorname{int}X$, $\operatorname{cl}X$ and $\operatorname{bd}X$ represent the interior, the closure and the boundary of $X$, respectively, whereas $\operatorname{rint}X$ and $\operatorname{rbd}X$ are the relative interior and the relative boundary of $X$, respectively (the interior and the boundary in the topology relative to $\operatorname{aff}X$).

Given a function $f:\mathbb{R}^p\to[-\infty,+\infty[$, the set $\{(x,\alpha)\in\mathbb{R}^{p+1}\mid \alpha\le f(x)\}$ is called the hypograph of $f$ and is denoted by $\operatorname{hypo}f$. The function $f$ is a proper concave function if $\operatorname{hypo}f$ is a convex set in $\mathbb{R}^{p+1}$ and $\operatorname{dom}f\coloneqq\{x\in\mathbb{R}^p\mid f(x)>-\infty\}\neq\emptyset$. The closure $\operatorname{cl}f$ of a proper concave function $f$ is another proper concave function, defined as the upper-semicontinuous hull of $f$, i.e., $(\operatorname{cl}f)(x)=\limsup_{y\to x}f(y)$. The following facts are well known: $\operatorname{hypo}(\operatorname{cl}f)=\operatorname{cl}(\operatorname{hypo}f)$, $\operatorname{dom}f\subset\operatorname{dom}(\operatorname{cl}f)\subset\operatorname{cl}(\operatorname{dom}f)$, and the functions $f$ and $\operatorname{cl}f$ coincide except perhaps at points of $\operatorname{rbd}(\operatorname{dom}f)$.

The vector $u$ is a subgradient of the proper concave function $f$ at the point $x\in\operatorname{dom}f$ if, for every $y\in\mathbb{R}^p$, $f(y)-f(x)\le u^\top(y-x)$. The set of all the subgradients of $f$ at $x$ is called the subdifferential of $f$ at $x$, and is denoted by $\partial f(x)$. The subdifferential is a closed convex set, and the differentiability of $f$ at $x$ is equivalent to $\partial f(x)=\{\nabla f(x)\}$. Moreover, $\partial f(x)\neq\emptyset$ if $x\in\operatorname{rint}(\operatorname{dom}f)$, and $\partial f(x)$ is a non-empty compact set if and only if $x\in\operatorname{int}(\operatorname{dom}f)$.
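As a simple illustration (ours, not from the paper), consider the concave function $f(x)=-|x|$ on $\mathbb{R}$, for which $\partial f(0)=[-1,1]$. The following minimal sketch checks the defining inequality numerically for a few candidate subgradients:

```python
# Numeric check of the concave subgradient inequality
# f(y) - f(x) <= u*(y - x) for f(x) = -|x| at x = 0.
def f(x):
    return -abs(x)

def is_subgradient(u, x, samples):
    """u is a subgradient of the concave f at x iff the defining
    inequality holds at every test point y (small tolerance for rounding)."""
    return all(f(y) - f(x) <= u * (y - x) + 1e-12 for y in samples)

ys = [k / 10.0 for k in range(-50, 51)]  # test points in [-5, 5]

inside = [u / 10.0 for u in range(-10, 11)]  # candidates in [-1, 1]
print(all(is_subgradient(u, 0.0, ys) for u in inside))  # True: all of [-1,1] works
print(is_subgradient(1.5, 0.0, ys))                     # False: 1.5 is outside
```

The check only samples finitely many points, so it can refute but not prove membership in $\partial f(0)$; for this piecewise-linear $f$ the sampled verdict happens to coincide with the exact one.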

For later purposes we give two theorems of the alternative (see [30] for a proof).

Lemma 1 Generalized Gordan lemma

Let $A\subset\mathbb{R}^p$ be a set such that $\operatorname{conv}A$ is closed (e.g., $A$ compact). Then exactly one of the following alternatives is true:

  • (i)

    $0_p\in\operatorname{conv}A$.

  • (ii)

    There exists some $d\in\mathbb{R}^p$ such that $a^\top d<0$ for all $a\in A$.

Lemma 2 Generalized Farkas lemma

Let $S\subset\mathbb{R}^{p+1}$ be an arbitrary non-empty set of vectors $(a,b)$, $a\in\mathbb{R}^p$, $b\in\mathbb{R}$, such that the system $\sigma=\{a^\top z\ge b,\ (a,b)\in S\}$ is feasible. Then the following statements are equivalent:

  • (i)

    The inequality $c^\top z\ge\gamma$ is a consequence of the system $\sigma$.

  • (ii)

    $(c,\gamma)\in\operatorname{cl}K$, where $K=\operatorname{cone}\{(a,b),\ (a,b)\in S;\ (0_p,-1)\}$.
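As a minimal worked instance (ours, not from the text): take $p=1$ and $S=\{(1,1)\}$, so that $\sigma=\{z\ge 1\}$. The inequality $2z\ge 1$ is a consequence of $\sigma$, and indeed $(c,\gamma)=(2,1)$ lies in $K$:

```latex
\binom{c}{\gamma}=\binom{2}{1}
  = 2\binom{1}{1} + 1\cdot\binom{0_1}{-1}
  \in K=\operatorname{cone}\left\{\binom{1}{1},\ \binom{0_1}{-1}\right\}.
```

Here no closure operation is needed, since a finitely generated cone such as this $K$ is already closed.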


Examples and applications

In the review papers [48], [72], as well as in [30], the reader will find many applications of SIP in different fields such as Chebyshev approximation, robotics, mathematical physics, engineering design, optimal control, transportation problems, fuzzy sets, cooperative games, robust optimization, etc. There are also significant applications in statistics [19], [30], e.g., the generalized Neyman–Pearson problem (present at the origin of linear programming), optimal experimental design in regression,

First order optimality conditions

In this section, first order optimality conditions are derived for the SIP problem P in (1).

A feasible point $\bar{x}\in F$ is called a local minimizer of $P$ if there is some $\varepsilon>0$ such that

$$f(x)-f(\bar{x})\ge 0\quad \forall x\in F \text{ such that } \|x-\bar{x}\|<\varepsilon.$$

The minimizer $\bar{x}$ is said to be global if this relation holds for every $\varepsilon>0$. We call $\bar{x}\in F$ a strict local minimizer of order $p>0$ if there exist some $q>0$ and $\varepsilon>0$ such that

$$f(x)-f(\bar{x})\ge q\|x-\bar{x}\|^{p}\quad \forall x\in F \text{ such that } \|x-\bar{x}\|<\varepsilon.$$

For $\bar{x}\in F$ we consider the active index set $T_a(\bar{x})\coloneqq\{t\in T\mid g(\bar{x},t)=0\}$. Since g is

Different models in LSIP

This section deals with the general LSIP

$$P:\quad \min\ c^\top x\quad \text{s.t.}\quad a_t^\top x\ge b_t\quad \forall t\in T.$$

Here $T$ is an arbitrary (infinite) set, and the vectors $c,a_t\in\mathbb{R}^n$, as well as the scalars $b_t\in\mathbb{R}$, are also arbitrary. The functions $t\mapsto a_t\eqqcolon a(t)$ and $t\mapsto b_t\eqqcolon b(t)$ need not have any special property. As an intersection of closed halfspaces, the feasible set of $P$ is a closed convex set.

We introduce different families of LSIP problems through some properties of their constraint systems σ={atxbt,tT}, which have a great influence on

Second order optimality conditions

A natural way to obtain optimality conditions for SIP is the so-called reduction approach. This approach goes back to Wetterling [98] and has been developed further in [45], [46]. It is based on strong assumptions (which for SIP are generically fulfilled at local minimizers; cf. the remarks at the end of this section).

In many papers second order necessary and sufficient optimality conditions have been derived under essentially weaker assumptions. We refer for SIP problems, e.g., to [78], [8,

Numerical methods

Nowadays the numerical approach to SIP has become an active research area. An excellent review on SIP algorithms is [74]. Implementations of numerical methods are described e.g., in [36], [37], [42], [44]. Recently, the NEOS Server has included the program NSIPS, coded in AMPL [96], [97]. From a conceptual viewpoint, as in finite programming, we can distinguish between primal and dual solution methods. In the so-called discretization methods the SIP problem is directly replaced by a finite
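The discretization and exchange ideas can be made concrete on a toy instance (our sketch, not one of the cited implementations): consider the SIP $\min x$ s.t. $\sin(t)-x\le 0$ for all $t\in[0,3]$, whose optimal value is $v=1$ (attained at $t=\pi/2$). Each finite subproblem here has the closed-form solution $x=\max_{t\in T_k}\sin(t)$, so both strategies reduce to a few lines:

```python
import math

# Toy SIP:  min x  s.t.  g(x, t) = sin(t) - x <= 0  for all t in [0, 3].
# Optimal value v = 1, attained at the single active index t = pi/2.

def solve_discretized(grid):
    # The finite subproblem min x s.t. sin(t) <= x (t in grid)
    # has the closed-form solution x = max over the grid of sin(t).
    return max(math.sin(t) for t in grid)

# Pure discretization: uniform grids; the optimal values approach v = 1.
for n in (4, 10, 100):
    grid = [3 * k / (n - 1) for k in range(n)]
    print(n, solve_discretized(grid))

# Exchange-type refinement: instead of refining everywhere, repeatedly
# add a most violated index (found here by coarse sampling of T).
grid = [0.0, 3.0]
for _ in range(20):
    x = solve_discretized(grid)
    t_star = max((3 * k / 999 for k in range(1000)),
                 key=lambda t: math.sin(t) - x)  # most violated index
    if math.sin(t_star) - x <= 1e-9:
        break                                    # (approximately) feasible
    grid.append(t_star)
print(abs(solve_discretized(grid) - 1.0) < 1e-3)  # close to v = 1
```

In a genuine implementation the finite subproblems are solved by an NLP or LP solver and the inner maximization of $g(x,\cdot)$ over $T$ is itself a (global) optimization problem; the sketch only illustrates why exchange typically needs far fewer indices than a uniformly fine grid.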

GSIP and related problems

In this section we briefly mention the recent developments in GSIP (see, e.g., [57], [84]). This class of programs appears to be essentially harder than standard SIP. In GSIP the feasible set can be non-closed and may possess so-called re-entrant corner points. These phenomena are stable under smooth perturbations and are not known in standard SIP.

Example 4

Non-closedness [84]: The feasible set

$$F=\{x\in\mathbb{R}^2\mid g(x,y)\coloneqq y\le 0\ \forall y\in Y(x)\}\quad\text{with}\quad Y(x)=\{y\in\mathbb{R}\mid y\ge x_1,\ y\le x_2\}$$

can be described as

$$F=\{x\mid y\le 0\ \forall y\in[x_1,x_2]\}=\{x\mid x_1>x_2\}\cup\{x\mid x_2\le 0\},$$

which is not a closed set.
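The non-closedness can be checked numerically. In this example $x$ is feasible exactly when $Y(x)=[x_1,x_2]$ is empty (i.e., $x_1>x_2$) or its largest element $x_2$ is nonpositive; a minimal sketch (ours, not from [84]):

```python
# Feasibility test for the GSIP feasible set
# F = {x in R^2 | y <= 0 for all y in Y(x)},  Y(x) = [x1, x2]:
# x is feasible iff Y(x) is empty (x1 > x2) or max Y(x) = x2 <= 0.
def feasible(x1, x2):
    return x1 > x2 or x2 <= 0.0

# The points (1 + 1/k, 1) are feasible, since Y(x) is empty ...
print(all(feasible(1 + 1 / k, 1.0) for k in range(1, 100)))  # True
# ... but their limit (1, 1) is infeasible: Y((1,1)) = {1} and g = 1 > 0.
print(feasible(1.0, 1.0))  # False
```

A sequence of feasible points thus converges to an infeasible point, so $F$ is not closed; this situation cannot occur in standard SIP, where $F$ is an intersection of closed sets.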

References (100)

  • J.F. Bonnans et al., Perturbation Analysis of Optimization Problems (2000).
  • J.M. Borwein, The limiting Lagrangian as a consequence of Helly's theorem, Journal of Optimization Theory and Applications (1981).
  • J.M. Borwein, Semi-infinite programming duality: How special is it?, in: A.V. Fiacco, K.O. Kortanek (Eds.),...
  • B. Brosowski, Parametric Semi-infinite Optimization (1982).
  • M.J. Cánovas et al., Upper semicontinuity of the feasible set mapping for linear inequality systems, Set-Valued Analysis (2002).
  • M.J. Cánovas et al., Stability in the discretization of a parametric semi-infinite convex inequality system, Mathematics of Operations Research (2002).
  • M.J. Cánovas et al., Stability and well-posedness in linear semi-infinite programming, SIAM Journal on Optimization (1999).
  • M.J. Cánovas et al., Solving strategies and well-posedness in linear semi-infinite programming, Annals of Operations Research (2001).
  • A. Charnes et al., Duality, Haar programs and finite sequence spaces, Proceedings of the National Academy of Sciences (1962).
  • A. Charnes et al., Duality in semi-infinite programs and some works of Haar and Carathéodory, Management Science (1963).
  • A. Charnes et al., On the theory of semi-infinite programming and a generalization of the Kuhn–Tucker saddle point theorem for arbitrary convex functions, Naval Research Logistics Quarterly (1969).
  • M. Dall'Aglio, On some applications of LSIP to probability and statistics.
  • M. Dambrine et al., About stability of equilibrium shapes, Mathematical Modelling and Numerical Analysis (2000).
  • U. Faigle et al., Algorithmic Principles of Mathematical Programming (2002).
  • M.D. Fajardo et al., Locally Farkas–Minkowski systems in convex semi-infinite programming, Journal of Optimization Theory and Applications (1999).
  • A. Ferrer, Applying global optimization to a problem in short-term hydrothermal scheduling.
  • A.V. Fiacco, K.O. Kortanek (Eds.), Semi-Infinite Programming and Applications, Lecture Notes in Economics and...
  • V. Gayá et al., Stability in convex semi-infinite programming and rates of convergence of optimal solutions of discretized finite subproblems, Optimization (2003).
  • K. Glashoff et al., Linear Optimization and Approximation (1983).
  • M.A. Goberna, Linear semi-infinite optimization: Recent advances, in: V. Jeyakumar, A.M. Rubinov (Eds.), Continuous...
  • M.A. Goberna et al., Optimal value function in semi-infinite programming, Journal of Optimization Theory and Applications (1988).
  • M.A. Goberna et al., Topological stability of linear semi-infinite inequality systems, Journal of Optimization Theory and Applications (1996).
  • M.A. Goberna et al., Linear Semi-Infinite Optimization (1998).
  • M.A. Goberna et al., A comprehensive survey of linear semi-infinite optimization theory.
  • M.A. Goberna et al., Stability theory for linear inequality systems, SIAM Journal on Matrix Analysis and Applications (1996).
  • M.A. Goberna et al., Stability theory for linear inequality systems II: Upper semicontinuity of the solution set mapping, SIAM Journal on Optimization (1997).
  • W. Gómez et al., Cutting plane algorithms for robust conic convex optimization problems, Optimization Methods and Software (2006).
  • S. Görner, Ein Hybridverfahren zur Lösung nicht-linearer semi-infiniter Optimierungsprobleme, Ph.D. Thesis, Technische...
  • G. Gramlich, R. Hettich, A software package for semi-infinite optimization, in: Numerical Methods of Nonlinear...
  • F. Guerra, J.A. Orozco, J.-J. Rückmann, On constraint qualifications in semi-infinite optimization, Parametric...
  • J. Guddat, F. Guerra-Vázquez, H.Th. Jongen, Parametric optimization: Singularities, pathfollowing and jumps, B.G....
  • S.-A. Gustafson et al., Numerical treatment of a class of semi-infinite programming problems, Naval Research Logistics Quarterly (1973).
  • A. Haar, Über lineare Ungleichungen, Acta Mathematica Szeged (1924).
  • E. Haaren-Retagne, A semi-infinite programming algorithm for robot trajectory planning, Dissertation, University Trier,...
  • R. Hettich, An implementation of a discretization method for semi-infinite programming, Mathematical Programming (1986).
  • R. Hettich et al., Semi-infinite programming: Conditions of optimality and applications.
  • R. Hettich et al., Numerische Methoden der Approximation und der semi-infiniten Optimierung (1982).
  • R. Hettich, G. Still, On generalized semi-infinite programming problems, Working paper, University of Trier,...
  • R. Hettich et al., Semi-infinite programming: Theory, methods and applications, SIAM Review (1993).