A New Criterion for Optimality in Nonlinear Programming

We establish a sufficient condition for the existence of minimizers of real-valued convex functions on closed subsets of finite-dimensional spaces. We compare this condition with other related results.


1. Introduction
Necessary and/or sufficient conditions for determining whether an optimization problem has an optimum have been studied for centuries. For instance, a very well-known classical result is the Bolzano–Weierstrass Theorem, which states that any continuous function attains its minimum value on a compact subset of its domain. Many other conditions have been proposed and, in general, these criteria are useful not just for theoretical purposes, but also from an algorithmic point of view.
Here, for a real-valued convex objective and a closed constraint set, we propose a new condition for establishing the existence of optima. First, let us introduce some notation which will be used in the sequel. If C ⊂ R^n, then the recession cone of C, which is denoted by 0^+C, is the set of directions contained in C, i.e., 0^+C = {v ∈ R^n | C + tv ⊂ C for all t ≥ 0}. We denote by conv(C) the convex hull of C, i.e., the 'smallest' convex set that contains C, that is to say, the set of all convex combinations of elements of C. The set cl(C) will stand for the closure of C. For a function f : R^n → R and α ∈ R, the α-level set of f is given by Γ_α = {x ∈ R^n | f(x) ≤ α}. Finally, argmin_{x∈C} f(x) is the set of minimizers of f on C.
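As a minimal numerical sketch of this notation, consider the one-dimensional sets used later in the remarks: C = [1, +∞) and f(x) = max{0, x + 1}. The membership tests and sample grids below are our own illustrative choices, not part of the paper.

```python
# Illustrative 1-D example of the notation: C = [1, +inf), f(x) = max{0, x + 1}.
def in_C(x):
    return x >= 1.0

def f(x):
    return max(0.0, x + 1.0)

# Recession cone 0^+C: directions v with C + t*v contained in C for all t >= 0.
# For C = [1, +inf) this is [0, +inf); we check candidate directions only on a
# finite sample of base points and step sizes (a necessary-condition check).
def in_recession_cone(v, samples=(1.0, 2.0, 10.0), steps=(0.0, 0.5, 1.0, 100.0)):
    return all(in_C(x + t * v) for x in samples for t in steps)

assert in_recession_cone(0.0) and in_recession_cone(1.0)  # v >= 0 recedes
assert not in_recession_cone(-1.0)                        # v < 0 leaves C

# Level set Γ_α = {x : f(x) <= α} with α = f(0) = 1 is (-inf, 0].
alpha = f(0.0)
assert f(-3.0) <= alpha and f(0.0) <= alpha and not (f(0.5) <= alpha)
```

The sampling test can only certify violations, not membership, but it conveys how the definitions operate on a concrete set.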

2. The optimality criterion
In this section we propose our optimality criterion: a set of conditions which ensures the existence of an optimum for a constrained nonlinear problem with convex objective function.
Theorem 2.1. Let Ω ⊂ R^n be closed and nonempty, and let f : R^n → R be convex. If (i) f is bounded below on Ω, and (ii) 0^+(cl(conv(Ω))) ∩ Γ_{f(0)} = {0}, then argmin_{x∈Ω} f(x) ≠ ∅.
Proof. By virtue of the Bolzano–Weierstrass Theorem, we can assume that Ω is unbounded. Let inf_{x∈Ω} f(x) = ν ∈ R and let {x_k} ⊂ Ω be a minimizing sequence, i.e., such that lim_{k→∞} f(x_k) = ν. If {x_k} has a bounded subsequence then, since Ω is closed and f is continuous (because it is a real-valued convex function), there exists x̄ ∈ Ω such that f(x̄) = ν; therefore, x̄ ∈ argmin_{x∈Ω} f(x) and the proof is complete.
So let us assume that {x_k} has no bounded subsequences, so that ‖x_k‖ → ∞. Refining the sequence if necessary, we can assume that x_k/‖x_k‖ → x̂ for some x̂ ∈ R^n with ‖x̂‖ = 1. We claim that x̂ ∈ 0^+(cl(conv(Ω))). Indeed, take any x ∈ cl(conv(Ω)) and t ≥ 0. Since {x_k} ⊂ Ω and ‖x_k‖ → ∞, for k large enough the point (1 − t/‖x_k‖)x + (t/‖x_k‖)x_k is a convex combination of elements of cl(conv(Ω)), and these points converge to x + t x̂. Hence, x + t x̂ ∈ cl(conv(Ω)) and our claim is true.
Let us now see that x̂ ∈ Γ_{f(0)}. Indeed, f(x̂) = lim_{k→∞} f(x_k/‖x_k‖) = lim_{k→∞} f((1/‖x_k‖)x_k + (1 − 1/‖x_k‖)0) ≤ lim_{k→∞} [(1/‖x_k‖)f(x_k) + (1 − 1/‖x_k‖)f(0)] = f(0), using the continuity of f in the first equality, its convexity in the inequality, and the facts that f(x_k) → ν ∈ R and 1/‖x_k‖ → 0 in the last equality. Thus, x̂ ∈ Γ_{f(0)} and, therefore, x̂ ∈ 0^+(cl(conv(Ω))) ∩ Γ_{f(0)} = {0}, in contradiction with the fact that ‖x̂‖ = 1.
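The normalization argument in the proof can be sanity-checked numerically on a hypothetical instance where hypothesis (ii) fails and, consistently, the infimum is not attained: f(x) = e^{−x} on Ω = [0, +∞). The specific sequence and grid below are our own illustrative choices.

```python
import math

# Hypothetical example where hypothesis (ii) fails: f(x) = exp(-x), Ω = [0, +inf).
# Here inf f = 0 is not attained; the minimizing sequence x_k = k is unbounded.
def f(x):
    return math.exp(-x)

xs = [float(k) for k in range(1, 200)]

# The normalized sequence x_k / |x_k| converges to the direction xhat = 1.
xhat = xs[-1] / abs(xs[-1])
assert xhat == 1.0

# xhat lies in 0^+(Ω) = [0, +inf) and also in Γ_{f(0)}, since f(1) = e^{-1} <= 1 = f(0):
assert f(xhat) <= f(0.0)

# So 0^+(cl(conv(Ω))) ∩ Γ_{f(0)} contains the nonzero direction xhat and (ii) fails,
# consistent with the infimum 0 not being attained on the sampled sequence:
assert min(f(x) for x in xs) > 0.0
```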

3. Final remarks
Clearly, the novelty of Theorem 2.1 lies in hypothesis (ii), and thus it is worthwhile to discuss it further. We observe first that it implies that Ω ∩ Γ_{f(0)} is bounded. Indeed, since this set is contained in U = [cl(conv(Ω))] ∩ Γ_{f(0)}, it suffices to establish the boundedness of U. The set U is closed and convex, by the continuity and convexity of f. If it were unbounded, then a well-known property of the recession cone (see Rockafellar, 1970) entails that there exists a nonzero u ∈ 0^+(U) ⊂ [0^+(cl(conv(Ω)))] ∩ [0^+(Γ_{f(0)})], using another well-known property of the recession cone in the inclusion. Since 0 belongs to Γ_{f(0)}, it follows that u = 0 + u belongs to Γ_{f(0)}, and hence to 0^+(cl(conv(Ω))) ∩ Γ_{f(0)}, contradicting (ii) and thus establishing the boundedness of Ω ∩ Γ_{f(0)}.

Now, since Ω ∩ Γ_{f(0)} is clearly closed, if it were not only bounded but also nonempty, it would be compact, in which case, by the Bolzano–Weierstrass result, f would attain its minimum on Ω ∩ Γ_{f(0)}; but a minimizer of f on this set obviously also minimizes f on Ω, establishing the result of Theorem 2.1 in a direct way. In other words, under (ii), the result of Theorem 2.1 is rather immediate when Γ_{f(0)} intersects the feasible set Ω. The point here is that we do not assume that Γ_{f(0)} intersects Ω. In cases where Ω ∩ Γ_{f(0)} is empty, the above-mentioned boundedness conclusion becomes void, and in principle it says nothing about the existence of solutions of the optimization problem. For instance, taking n = 1, f(x) = max{0, x + 1} and Ω = [1, +∞), we have Γ_{f(0)} = (−∞, 0], so that Ω ∩ Γ_{f(0)} = ∅; but, on the other hand, 0^+(cl(conv(Ω))) = [0, +∞) and (ii) holds, as does the conclusion of Theorem 2.1.
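The example from the remarks can be verified numerically. This is a minimal sketch: the sample points and the search grid are our own illustrative choices.

```python
# The example from the remarks: n = 1, f(x) = max{0, x + 1}, Ω = [1, +inf).
def f(x):
    return max(0.0, x + 1.0)

alpha = f(0.0)  # f(0) = 1, so Γ_{f(0)} = (-inf, 0]

# Ω ∩ Γ_{f(0)} is empty: every feasible x has f(x) = x + 1 >= 2 > 1.
assert all(f(x) > alpha for x in [1.0, 1.5, 10.0, 1e6])

# 0^+(cl(conv(Ω))) = [0, +inf) and Γ_{f(0)} = (-inf, 0] meet only at 0, so
# hypothesis (ii) holds, and indeed the minimum is attained at x = 1 with value 2
# (f is increasing on Ω, so a grid search over [1, 101) finds the left endpoint):
grid = [1.0 + 0.01 * k for k in range(10000)]
best = min(grid, key=f)
assert best == 1.0 and f(best) == 2.0
```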
Secondly, we mention that Theorem 2.1 has a certain resemblance to the following result, which appears as Theorem 1 in Graña Drummond et al. (2008). In fact, the proof lines of both Theorems 2.1 and 3.1 are rather similar, but it is important to point out that, despite this resemblance (resulting basically from the similarity of assumption (ii) in both theorems), there is an essential difference which makes them quite independent of each other. Indeed, in Theorem 2.1 the set which is equal to {0} according to assumption (ii) is a subset of the domain of f, namely R^n, while in assumption (ii) of Theorem 3.1 such a set is a subset of the codomain of F, namely R^m. This is made clear also in the closedness assumptions of both theorems: in Theorem 2.1 we assume that Ω is closed, while in Theorem 3.1, F(Ω) is assumed to be closed.

We mention that if we look at Theorem 3.1 in the scalar case, namely m = 1, it becomes trivial. Taking, without loss of generality, w = 1, we get H_w = {0}, so that (ii) holds automatically, and the remaining assumptions just indicate that F(Ω) ⊂ R is closed and bounded below, so that it has a minimum, and hence F has a minimizer in Ω. On the other hand, in the case of Theorem 2.1 (for which we always have m = 1), assumption (ii) is not automatically satisfied, and we indeed need an additional property of f, namely its convexity, in order to establish that it attains its minimum on Ω (note that Theorem 3.1 does not require any convexity properties of F, and in fact not even its continuity; closedness of its image, in conjunction with assumption (ii), does the job).
Finally, we mention the following related existence result, which appears as Theorem 4.3 in Iusem and Sosa (2003).

Theorem 3.2. Let Ω ⊂ R^n be closed and convex, and let f : R^n → R ∪ {+∞} be a proper, convex and lower semicontinuous function. If the following auxiliary problem (AP): find x ∈ R^n such that ‖x‖ = 1 and f(x + y) ≤ f(y) for all y ∈ Ω, does not have solutions, then argmin_{x∈Ω} f(x) ≠ ∅.

We remark that the result of Theorem 3.2 might also be rephrased in terms of a recession cone, making it look more like Theorems 2.1 and 3.1, but it has an essential difference with regard to them: its validity is
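A hypothetical one-dimensional instance of Theorem 3.2 can be checked numerically. The function f(x) = x^2 with Ω = R, and the finite grid of test points, are our own illustrative choices; a single violation on the grid certifies that a candidate direction does not solve (AP).

```python
# Hypothetical 1-D instance of Theorem 3.2: f(x) = x^2, Ω = R (closed, convex);
# f is proper, convex and lower semicontinuous.
def f(x):
    return x * x

# In R, the only candidates for (AP) with ||x|| = 1 are x = +1 and x = -1.
# We test f(x + y) <= f(y) over a sample grid of y in Ω; one violation
# already shows x does not solve (AP).
ys = [k / 10.0 - 3.0 for k in range(61)]  # grid over [-3, 3]

def solves_AP(x):
    return all(f(x + y) <= f(y) for y in ys)

assert not solves_AP(1.0) and not solves_AP(-1.0)  # (AP) has no solution

# Theorem 3.2 then guarantees a minimizer exists; here it is x = 0:
assert min(ys, key=f) == 0.0
```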