Invited Review
Semi-infinite programming
Introduction
A semi-infinite program (SIP) is an optimization problem in finitely many variables on a feasible set described by infinitely many constraints:

P: minimize f(x) subject to g(x, t) ⩽ 0 for all t ∈ T, x ∈ ℝ^n,     (1)

where T is an infinite index set. For brevity, we omit additional equality constraints h_i(x) = 0, i = 1, 2, …, m.
By F = {x ∈ ℝ^n : g(x, t) ⩽ 0 for all t ∈ T} we denote the feasible set of P, whereas v = inf{f(x) : x ∈ F} is the optimal value, and S = {x ∈ F : f(x) = v} is the optimal set, or set of minimizers, of the problem. We say that P is feasible or consistent if F ≠ ∅, and set v = +∞ when F = ∅. With the only exception of Section 4 (Linear SIP) and Section 6.3 (Homotopy methods), we assume that f is continuously differentiable on ℝ^n, that T is a compact set in ℝ^m, and that the functions g(·, ·) and ∇_x g(·, ·) are continuous on ℝ^n × T.
An important special case is the linear semi-infinite problem (LSIP), where the objective function f and the constraint function g are linear in x:

P: minimize c^⊤x subject to a_t^⊤x ⩾ b_t for all t ∈ T.     (2)

We also study here the generalized SIP (GSIP), for which the index set T = T(x) is allowed to depend on x:

P: minimize f(x) subject to g(x, t) ⩽ 0 for all t ∈ T(x).     (3)

During the last five decades the field of semi-infinite programming has seen tremendous development: more than 1000 articles and 10 books have been published on the theory, numerical methods and applications of SIP.
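For illustration, consider a hypothetical LSIP (our own toy instance, not taken from the text) whose infinitely many constraints describe the unit disc as an intersection of halfplanes; a crude first idea is to replace T by a fine finite grid and solve the resulting ordinary linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Toy LSIP (hypothetical instance): minimize -x1 - x2 subject to
# x1*cos(t) + x2*sin(t) <= 1 for all t in T = [0, 2*pi].
# The feasible set is the unit disc; the exact optimal value is -sqrt(2).
t_grid = np.linspace(0.0, 2.0 * np.pi, 2001)   # finite discretization of T
A_ub = np.column_stack((np.cos(t_grid), np.sin(t_grid)))
b_ub = np.ones_like(t_grid)

res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2)       # x is unrestricted in sign
print(res.x, res.fun)                          # optimal value close to -sqrt(2)
```

With 2001 grid points the discretized optimal value already agrees with the exact value −√2 to high accuracy; Section 6 discusses discretization and exchange methods that make this idea rigorous and efficient.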
Although the origins of SIP are related to Chebyshev approximation, to the classical work of Haar on linear semi-infinite systems [41], and to the Fritz John optimality condition [52], the term SIP was coined in 1962 by Charnes, Cooper and Kortanek in papers devoted to LSIP [16], [17], [18]. The last author, who contributed significantly to the first applications of SIP in economics, game theory, mechanics, statistical inference, etc., has recently pointed out [63] the historical role of a paper published by Tschernikow in 1963 [95]. Gustafson and Kortanek proposed, during the early 1970s, the first numerical methods for SIP models arising in applications (see, for instance, [40]). The publication around 1980 of the following six books turned SIP into a mature and independent subfield of optimization: two volumes of lecture notes, edited by Hettich [43] and by Fiacco and Kortanek [24], and four monographs, by Tichatschke [93], Glashoff and Gustafson [26], Hettich and Zencke [46] (providing numerical methods and applications to approximation problems) and Brosowski [11] (devoted to stability in SIP). More recently, Goberna and López presented in [30] an extensive treatment of LSIP, covering both theory and numerical aspects. Well-known optimization books devote chapters to SIP, e.g., Krabs [64], Anderson and Nash [2], Guddat, Guerra and Jongen [39], Bonnans and Shapiro [8] and Polak [73]. We also mention the review article of Polak [72] (on the mathematical foundations of feasible-direction methods) and [48], where Hettich and Kortanek surveyed, in a superb manner, theoretical results, methods and applications of SIP. Recently, Goberna [27] and Goberna and López [31], [32] reviewed the LSIP model.
Following [3], [11], stability analysis in SIP has become an important research issue in recent years (see, e.g., [12], [13], [14], [15], [25], [29], [33], [34], [56], [62], [79], [80], as a sample of recent papers on this topic). Since the first contribution [47], the GSIP model (3) has become a topic of intensive research; the interested reader is referred, e.g., to [57], [60], [84].
The paper is organized as follows. After fixing the notation in Section 1.4, Section 2 gives an account of a representative collection of motivating SIP and GSIP models from many different application fields. Section 3 presents the first order (primal and dual) optimality conditions. Section 4 focuses on different families of LSIP problems (continuous, FM and LFM), and the Haar duality theory is discussed in detail. Also in Section 4, some cone-constrained optimization problems (in particular, semi-definite and second order conic programming problems) are presented as special cases of LSIP. Section 5 surveys the second order optimality conditions, although proofs are not included due to the technical difficulty of the subject. Section 6 introduces the principles of the main algorithmic approaches. Special attention is paid to discretization methods, including some results on the discretizability of the general LSIP (with arbitrary index set), and to exchange methods, which are, in general, more efficient than pure discretization algorithms. The last section, Section 7, is devoted to exploring the relationship between the GSIP model and other important classes of optimization problems, such as bilevel problems and mathematical programs with equilibrium constraints. The paper finishes with the list of cited references; an exhaustive bibliography on SIP and GSIP can be found in [100].
0_p will denote the null-vector in the Euclidean space ℝ^p, and ∥·∥ represents the Euclidean norm.
If X ⊂ ℝ^p, by aff X, conv X, cone X, D(X, x), and X° we shall denote the affine hull of X, the convex hull of X, the conical convex hull of X (always including the null-vector), the cone of feasible directions of X at x, and the positive polar cone of X, respectively. From the topological side, int X, cl X and bd X represent the interior, the closure and the boundary of X, respectively, whereas rint X and rbd X are the relative interior and the relative boundary of X (the interior and the boundary in the topology relative to aff X).
Given a function f : ℝ^p → [−∞, +∞], the set {(x, λ) ∈ ℝ^{p+1} : λ ⩽ f(x)} is called the hypograph of f and is denoted by hypo f. The function f is a proper concave function if hypo f is a convex set in ℝ^{p+1}, f never takes the value +∞, and dom f := {x ∈ ℝ^p : f(x) > −∞} ≠ ∅. The closure cl f of a proper concave function f is another proper concave function, defined as the upper-semi-continuous hull of f, i.e., (cl f)(x) = lim sup_{y→x} f(y). The following facts are well-known: hypo(cl f) = cl(hypo f), dom f ⊂ dom(cl f) ⊂ cl(dom f), and the functions f and cl f coincide except perhaps at points of rbd(dom f).
The vector u is a subgradient of the proper concave function f at the point x ∈ dom f if f(y) ⩽ f(x) + u^⊤(y − x) for every y ∈ ℝ^p. The set of all subgradients of f at x is called the subdifferential of f at x, and is denoted by ∂f(x). The subdifferential is a closed convex set, and f is differentiable at x if and only if ∂f(x) is a singleton, in which case ∂f(x) = {∇f(x)}. Moreover, ∂f(x) ≠ ∅ if x ∈ rint(dom f), and ∂f(x) is a non-empty compact set if and only if x ∈ int(dom f).
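As a quick numerical sanity check of the subgradient inequality for concave functions (a standard textbook example, not from this paper), take f(x) = −|x|, whose subdifferential at 0 is the whole interval [−1, 1]:

```python
import numpy as np

# Concave f(x) = -|x|: every u in [-1, 1] is a subgradient at x = 0,
# i.e. f(y) <= f(0) + u*(y - 0) holds for all y.
f = lambda x: -abs(x)
ys = np.linspace(-5.0, 5.0, 101)
for u in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert all(f(y) <= f(0.0) + u * y + 1e-12 for y in ys)
print("subgradient inequality holds for every u in [-1, 1] at x = 0")
```

At the kink x = 0 the function is not differentiable, and indeed ∂f(0) = [−1, 1] is not a singleton.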
For later purposes we give two theorems of the alternative (see [30] for a proof).

Lemma 1 Generalized Gordan lemma

Let A ⊂ ℝ^n be a set such that conv A is closed (e.g., A compact). Then exactly one of the following alternatives is true:

- (i)

0_n ∈ conv A.

- (ii)

There exists some d ∈ ℝ^n such that a^⊤d < 0 for all a ∈ A.
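For a finite set A, Lemma 1 can be checked numerically; the helper LP below is an illustrative construction of our own (minimize s subject to a^⊤d ⩽ s over a box), which finds a direction d with a^⊤d < 0 for all a ∈ A whenever one exists:

```python
import numpy as np
from scipy.optimize import linprog

def gordan_alternative(A):
    """Decide Gordan's alternatives for the rows of A via the LP
    min s  s.t.  a_i^T d <= s,  -1 <= d_j <= 1.  Optimal value s < 0 means
    a direction d with a^T d < 0 for all rows exists; s = 0 means 0 in conv A."""
    k, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                      # minimize s
    A_ub = np.hstack((A, -np.ones((k, 1))))          # a_i^T d - s <= 0
    b_ub = np.zeros(k)
    bounds = [(-1.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun < -1e-9, res.x[:n]

sep, d = gordan_alternative(np.array([[1.0, 0.0], [0.0, 1.0]]))
print(sep, d)   # True: e.g. d = (-1, -1) gives a^T d < 0 for both rows

sep2, _ = gordan_alternative(np.array([[1.0, 0.0], [-1.0, 0.0],
                                       [0.0, 1.0], [0.0, -1.0]]))
print(sep2)     # False: 0 lies in conv A, so no such d exists
```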
Lemma 2 Generalized Farkas lemma

Let S ⊂ ℝ^{n+1} be an arbitrary non-empty set of vectors (a, b), a ∈ ℝ^n, b ∈ ℝ, such that the system σ = {a^⊤z ⩾ b, (a, b) ∈ S} is feasible. Then the following statements are equivalent:

- (i)

The inequality c^⊤z ⩾ γ is a consequence of the system σ.

- (ii)

(c, γ) ∈ cl K, where K = cone(S ∪ {(0_n, −1)}).
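For a finite system σ, statement (i) can be tested directly by linear programming, since c^⊤z ⩾ γ is a consequence of σ exactly when the minimum of c^⊤z over the solution set of σ is at least γ; a small sketch on a toy instance of our own:

```python
import numpy as np
from scipy.optimize import linprog

def is_consequence(A, b, c, gamma):
    """For the finite system sigma = {a_i^T z >= b_i}, the inequality
    c^T z >= gamma is a consequence of sigma iff min{c^T z : z in sigma} >= gamma
    (an unbounded minimum means it is not a consequence)."""
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * A.shape[1])
    return res.status == 0 and res.fun >= gamma - 1e-9

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # sigma: z1 >= 0, z2 >= 0
b = np.zeros(2)
print(is_consequence(A, b, np.array([1.0, 1.0]), 0.0))   # True
print(is_consequence(A, b, np.array([1.0, 1.0]), 1.0))   # False
```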
Examples and applications
In the review papers [48], [72], as well as in [30], the reader will find many applications of SIP in different fields such as Chebyshev approximation, robotics, mathematical physics, engineering design, optimal control, transportation problems, fuzzy sets, cooperative games, robust optimization, etc. There are also significant applications in statistics [19], [30], e.g., the generalized Neyman–Pearson problem (present at the origin of linear programming), optimal experimental design in regression, etc.
First order optimality conditions
In this section, first order optimality conditions are derived for the SIP problem P in (1).
A feasible point x̄ ∈ F is called a local minimizer of P if there is some ε > 0 such that

f(x) ⩾ f(x̄) for all x ∈ F with ∥x − x̄∥ ⩽ ε.

The minimizer x̄ is said to be global if this relation holds for every ε > 0. We call x̄ a strict local minimizer of order p > 0 if there exist some q > 0 and ε > 0 such that

f(x) ⩾ f(x̄) + q∥x − x̄∥^p for all x ∈ F with ∥x − x̄∥ ⩽ ε.

For x̄ ∈ F we consider the active index set

T_a(x̄) = {t ∈ T : g(x̄, t) = 0}.

Since g is
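As a numerical illustration (a toy instance of our own, with the disc constraints g(x, t) = x₁cos t + x₂sin t − 1), the active index set at a boundary point can be approximated by scanning a fine grid over T:

```python
import numpy as np

# Disc constraints g(x, t) = x1*cos(t) + x2*sin(t) - 1 (toy instance):
# at the boundary point x = (1/sqrt(2), 1/sqrt(2)), the unique active
# index is t = pi/4, where the constraint holds with equality.
g = lambda x, t: x[0] * np.cos(t) + x[1] * np.sin(t) - 1.0
x_bar = np.array([1.0, 1.0]) / np.sqrt(2.0)
t_grid = np.linspace(0.0, 2.0 * np.pi, 100001)
active = t_grid[np.abs(g(x_bar, t_grid)) < 1e-8]   # approximate T_a(x_bar)
print(active)                                      # points clustered at pi/4
```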
Different models in LSIP
This section deals with the general LSIP

P: minimize c^⊤x subject to a_t^⊤x ⩾ b_t, t ∈ T.

Here T is an arbitrary (infinite) index set, and the vectors a_t ∈ ℝ^n, as well as the scalars b_t ∈ ℝ, are also arbitrary. The functions t ↦ a_t ≡ a(t) and t ↦ b_t ≡ b(t) need not have any special property. As an intersection of closed halfspaces, the feasible set of P is a closed convex set.
We introduce different families of LSIP problems through some properties of their constraint systems σ = {a_t^⊤x ⩾ b_t, t ∈ T}, which have a great influence on
Second order optimality conditions
A natural way to obtain optimality conditions for SIP is the so-called reduction approach. This approach goes back to Wetterling [98] and has been developed further in [45], [46]. It is based on strong assumptions (which for SIP are generically fulfilled at local minimizers; cf. the remarks at the end of this section).
In many papers, second order necessary and sufficient optimality conditions have been derived under essentially weaker assumptions. For SIP problems we refer, e.g., to [78], [8,
Numerical methods
The numerical treatment of SIP is nowadays an active research area. An excellent review of SIP algorithms is [74]. Implementations of numerical methods are described, e.g., in [36], [37], [42], [44]. Recently, the NEOS Server has included the program NSIPS, coded in AMPL [96], [97]. From a conceptual viewpoint, as in finite programming, we can distinguish between primal and dual solution methods. In the so-called discretization methods the SIP problem is directly replaced by a finite
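The flavour of an exchange-type method can be sketched on the toy disc LSIP used earlier (our own illustrative instance, not a production algorithm): solve the current discretized LP, locate the most violated index, add it to the index set, and repeat:

```python
import numpy as np
from scipy.optimize import linprog

# Exchange-type sketch (illustrative only) on the toy LSIP:
# min -x1 - x2  s.t.  x1*cos(t) + x2*sin(t) <= 1 for all t in [0, 2*pi].
t_fine = np.linspace(0.0, 2.0 * np.pi, 10001)       # surrogate for max_t g(x, t)
T_k = [0.0, np.pi / 2.0, np.pi, 3.0 * np.pi / 2.0]  # initial finite index set

for _ in range(50):
    A_ub = np.array([[np.cos(t), np.sin(t)] for t in T_k])
    res = linprog([-1.0, -1.0], A_ub=A_ub, b_ub=np.ones(len(T_k)),
                  bounds=[(None, None)] * 2)         # solve discretized LP
    x = res.x
    viol = x[0] * np.cos(t_fine) + x[1] * np.sin(t_fine) - 1.0
    j = int(np.argmax(viol))
    if viol[j] <= 1e-8:                              # x is (numerically) feasible for P
        break
    T_k.append(t_fine[j])                            # add the most violated index

print(x, res.fun)                                    # approaches the exact value -sqrt(2)
```

Only a handful of well-placed indices are needed here, which is why exchange methods are typically more efficient than solving one huge discretized LP.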
GSIP and related problems
In this section we briefly mention recent developments in GSIP (see, e.g., [57], [84]). This class of programs appears to be essentially harder than standard SIP. In GSIP the feasible set can be non-closed and may possess so-called re-entrant corner points. These phenomena are stable under smooth perturbations and do not occur in standard SIP. Example 4 (Non-closedness [84]): The feasible set can be described as
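A simpler toy instance of our own (not Example 4 itself) shows how an x-dependent index set produces a non-closed feasible set: if g(x, t) > 0 whenever T(x) ≠ ∅, then x is feasible exactly when T(x) is empty:

```python
import numpy as np

# Toy GSIP (our own illustration): T(x) = {t in [0, 1] : t <= x} and
# g(x, t) = 1, so the constraint g(x, t) <= 0 fails whenever T(x) != empty.
# Hence x is feasible iff T(x) is empty, i.e. iff x < 0: the feasible set
# (-inf, 0) is open. In standard SIP (fixed T) this cannot happen.
g = lambda x, t: 1.0

def feasible(x, grid=np.linspace(0.0, 1.0, 1001)):
    T_x = grid[grid <= x]                   # discretized index set T(x)
    return all(g(x, t) <= 0.0 for t in T_x)

print(feasible(-1e-9), feasible(0.0))       # True False: x = 0 is a limit of
                                            # feasible points but infeasible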
References (100)

- et al., Robust solutions of uncertain linear programs, Operations Research Letters (1999)
- et al., Equilibrium constrained optimization problems, European Journal of Operational Research (2006)
- et al., Linear semi-infinite programming theory: An updated survey, European Journal of Operational Research (2002)
- et al., On generalized semi-infinite optimization and bilevel optimization, European Journal of Operational Research (2002)
- Generalized semi-infinite programming: Theory and methods, European Journal of Operational Research (1999)
- et al., Strong duality for inexact linear programming problems, Optimization (2001)
- et al., Linear Programming in Infinite Dimensional Spaces (1987)
- et al., Non-Linear Parametric Optimization (1983)
- Practical bilevel optimization (1998)
- et al., Optimization problems with perturbations: A guided tour, SIAM Review (1998)