[SOLVED] Nonlinear programming problem, mathematical programming

malamenm

New member
Apr 4, 2013
5
Hello,

I need to

minimize $\{-f(x) \mid a \le x \le b\}$

where $f(x)$ is a concave and twice-differentiable function. In addition, $a$ and $b$ are positive constants such that $a < b$. Assume that $-f(x)$ exists in the given interval $[a, b]$.

Show that

if the optimal solution is at $x^* = a$, then $\delta f(a) < 0$ must hold, and
if the optimal solution is at $x^* = b$, then $\delta f(b) > 0$ must hold.

Any help is much appreciated
 

Ackbach

Indicium Physicus
Staff member
Jan 26, 2012
4,197
I have a number of questions here:

1. Does $f:\mathbb{R} \to \mathbb{R}$? That is, is $f$ a function from the reals to the reals?

2. What do you mean by $\delta f$? Do you mean the derivative $df/dx$?

It sounds to me like your question is asking you to prove that if the minimum is at the left-hand endpoint, then the derivative must be positive, and if the minimum is at the right-hand endpoint, then the derivative must be negative. Is this what you're asked to do?
 

malamenm

New member
Apr 4, 2013
5
Hi, yes, I mean the derivative $df/dx$. I am assuming that it is a real-to-real function, but the only information provided is that it is a nonlinear programming problem.

You are completely right: I am trying to prove that if the minimum is at the left-hand endpoint, then the derivative must be positive, and if the minimum is at the right-hand endpoint, then the derivative must be negative, but I don't really know if my methodology is right. Here is what I have so far:



The problem can be re-formulated:

π‘šπ‘–π‘›π‘–π‘šπ‘§π‘’ βˆ’π‘“(π‘₯) π‘Ž ≀π‘₯ ≀𝑏 ≑ π‘šπ‘Žπ‘₯π‘–π‘šπ‘–π‘§π‘’ 𝑓(π‘₯) π‘Ž ≀π‘₯ ≀𝑏

$f(x)$ is concave and twice differentiable, which implies it is also continuous on $[a,b]$.

Proof by contradiction. If the (unique) optimal solution is at $x^* = a$, then $f(x^*) > f(x)$ for all $x \in (a, b]$. Now instead assume $\nabla f(a) \ge 0$. This means there exists $\delta > 0$ such that, within some neighbourhood of $a$, $f(a + \delta) \ge f(a) = f(x^*)$ (as $f$ is continuous on $[a,b]$). But this directly contradicts $f(x^*) > f(x)$ for all $x \in (a, b]$. Therefore $\nabla f(a) < 0$ must hold.

Similarly, if the (unique) optimal solution is at $x^* = b$, then $f(x^*) > f(x)$ for all $x \in [a, b)$. Now instead assume $\nabla f(b) \le 0$. This means there exists $\delta > 0$ such that, within some neighbourhood of $b$, $f(b - \delta) \ge f(b) = f(x^*)$ (as $f$ is continuous on $[a,b]$). But this directly contradicts $f(x^*) > f(x)$ for all $x \in [a, b)$. Therefore $\nabla f(b) > 0$ must hold.



Could you please advise whether that is close to the actual proof and whether it makes sense?

Thank you
 

Ackbach

Indicium Physicus
Staff member
Jan 26, 2012
4,197
You might be making this more complicated than it needs to be. Take the first case. You know that $-f(x)$ is convex; since $f$ is twice differentiable, that is equivalent to $-f''(x)\ge 0$ everywhere. Then, if a point inside the interval were the minimum, it would have to be a critical point, by Fermat's Theorem, and hence $-f'(x)=0$ there. Since the minimum occurs at the left-hand endpoint instead, you've just shown that $-f'(x)\not=0$ on the interior of the interval; as $-f'$ is continuous, it can't change sign there. So, either the derivative starts out negative and is nondecreasing (convex function), or the derivative starts out positive and is nondecreasing. The former would imply that the minimum was the RH endpoint. The latter gives you the result you want.

This argument is not very rigorous, but I think it's getting at what needs to happen. The other case is exactly analogous.
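For what it's worth, the weak form of the endpoint condition in the first case follows directly from the difference quotient, without using convexity at all (here $g = -f$ denotes the objective, a notation not used above; the strict inequality in the problem statement additionally rules out the degenerate case $f'(a) = 0$). Since the minimum of $g$ over $[a,b]$ is at $x^* = a$, every $x \in (a, b]$ satisfies $g(x) \ge g(a)$, so each quotient below has a nonnegative numerator and a positive denominator:

```latex
\[
  g'(a) \;=\; \lim_{x \to a^{+}} \frac{g(x) - g(a)}{x - a} \;\ge\; 0,
  \qquad\text{i.e.}\qquad f'(a) \le 0 .
\]
```

The second case is the mirror image, with $x \to b^{-}$ and the inequality reversed.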
 

malamenm

New member
Apr 4, 2013
5
Oh wow, thank you! That is much simpler and more straightforward than the approach I tried. :)

Kind regards
 