# [SOLVED] Nonlinear programming problem, mathematical programming

#### malamenm

##### New member
Hello,

I need to

minimize $\{-f(x) \mid a \le x \le b\}$,

where $f(x)$ is a concave and twice-differentiable function. In addition, $a$ and $b$ are positive constants such that $a < b$. Assume that $-f(x)$ exists on the given interval $[a, b]$. Show that

if the optimal solution is at $x^* = a$, then $\delta f(a) < 0$ must hold, and
if the optimal solution is at $x^* = b$, then $\delta f(b) > 0$ must hold.

Any help is much appreciated

#### Ackbach

##### Indicium Physicus
Staff member
I have a number of questions here:

1. Does $f:\mathbb{R} \to \mathbb{R}$? That is, is $f$ a function from the reals to the reals?

2. What do you mean by $\delta f$? Do you mean the derivative $df/dx$?

It sounds to me like your question is asking you to prove that if the minimum is at the left-hand endpoint, then the derivative must be positive, and if the minimum is at the right-hand endpoint, then the derivative must be negative. Is this what you're asked to do?

#### malamenm

##### New member
Hi, yes, I mean the derivative $df/dx$. I am assuming that it is a real-to-real function, but the only information provided is that it is a nonlinear programming problem.

You are completely right: I am trying to prove that if the minimum is at the left-hand endpoint, then the derivative must be positive, and if the minimum is at the right-hand endpoint, then the derivative must be negative, but I don't really know whether my methodology is right. Here is what I have so far:

The problem can be reformulated:

$\text{minimize } -f(x),\ a \le x \le b \quad \equiv \quad \text{maximize } f(x),\ a \le x \le b.$

$f(x)$ being concave and twice differentiable implies it is also continuous on $[a,b]$.

Proof by contradiction. If the (unique) optimal solution is at $x^* = a$, then $f(x^*) > f(x)$ for all $x \in (a, b]$. Now assume instead that $\nabla f(a) \ge 0$. This means there exists $\delta > 0$ such that, within some neighbourhood of $a$, $f(a + \delta) \ge f(a) = f(x^*)$ (as $f$ is continuous on $[a,b]$). But this directly contradicts $f(x^*) > f(x)$ for all $x \in (a, b]$. Therefore $\nabla f(a) < 0$ must hold.

Similarly, if the (unique) optimal solution is at $x^* = b$, then $f(x^*) > f(x)$ for all $x \in [a, b)$. Now assume instead that $\nabla f(b) \le 0$. This means there exists $\delta > 0$ such that, within some neighbourhood of $b$, $f(b - \delta) \ge f(b) = f(x^*)$ (as $f$ is continuous on $[a,b]$). But this directly contradicts $f(x^*) > f(x)$ for all $x \in [a, b)$. Therefore $\nabla f(b) > 0$ must hold.
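As a quick numerical sanity check of the claim (not part of the proof), here is a short Python sketch with a concrete concave function, $f(x) = -(x-2)^2$; the intervals and grid size are my own choices for illustration:

```python
# Sanity check: for a concave f, if the minimizer of -f on [a, b] is the
# left endpoint a, then f'(a) < 0; if it is the right endpoint b, then f'(b) > 0.

def f(x):
    return -(x - 2.0) ** 2   # concave: f''(x) = -2 < 0 everywhere

def fprime(x):
    return -2.0 * (x - 2.0)  # derivative of f

def argmin_neg_f(a, b, n=10001):
    """Grid search for the minimizer of -f on [a, b]."""
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    return min(xs, key=lambda x: -f(x))

# Case 1: on [3, 5], f is decreasing, so -f is minimized at x* = a = 3,
# and indeed f'(3) = -2 < 0.
print(argmin_neg_f(3.0, 5.0), fprime(3.0))

# Case 2: on [0, 1], f is increasing, so -f is minimized at x* = b = 1,
# and indeed f'(1) = 2 > 0.
print(argmin_neg_f(0.0, 1.0), fprime(1.0))
```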

Could you please advise whether that is close to the actual proof and whether it makes sense?

Thank you

#### Ackbach

##### Indicium Physicus
Staff member
You might be making this more complicated than it needs to be. Take the first case. You know that $-f(x)$ is convex; since $f$ is twice differentiable, that is equivalent to $-f''(x)\ge 0$ everywhere. Then, if a point inside the interval were the minimum, it would have to be a critical point, by Fermat's Theorem, and hence $-f'(x)=0$ there. Since the minimum occurs at the left-hand endpoint instead, you've just shown that $-f'(x)\not=0$ in the interior of the interval. That is, the derivative can't change sign there. So either the derivative starts out negative and is nondecreasing (convex function), or the derivative starts out positive and is nondecreasing. The former would imply that the minimum was at the RH endpoint. The latter gives you the result you want.

This argument is not very rigorous, but I think it's getting at what needs to happen. The other case is exactly analogous.
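For what it's worth, the sign chase above can be condensed into a few displayed implications (my own paraphrase of the argument, not a rigorous proof):

```latex
\begin{align*}
f \text{ concave, twice differentiable}
  &\;\Longrightarrow\; (-f)'' = -f'' \ge 0
  \;\Longrightarrow\; (-f)' \text{ nondecreasing on } [a,b],\\
x^* = a \text{ the unique minimizer of } -f
  &\;\Longrightarrow\; (-f)' \ne 0 \text{ on } (a,b)
  \;\Longrightarrow\; (-f)' \text{ keeps one sign there},\\
(-f)' < 0 \text{ throughout}
  &\;\Longrightarrow\; \text{minimum at } b \text{ (contradiction)},
  \text{ so } (-f)'(a) > 0, \text{ i.e. } f'(a) < 0.
\end{align*}
```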

#### malamenm

##### New member
Oh wow, thank you, that is much simpler and more straightforward than the approach I tried.

kind regards
