Constrained Optimization Proof

In summary, the conversation discusses a proof that, for a constrained quadratic optimization problem, a local minimum must be a global minimum. The original poster shows that φ'(0) = 0 and φ''(0) ≥ 0 along a feasible line, combines them into φ'(0)+φ''(0) ≥ 0, and then tries to reach a contradiction by showing that f(x1)-f(x0) is also greater than or equal to 0. The conversation also touches on the difficulty of proving convexity of f restricted to the feasible set when the matrix A is not invertible.
  • #1
kingwinner

Homework Statement


[Attached image optim4.JPG: the problem statement. From the attempt below: minimize f(x) = (1/2)x o Qx - b o x subject to Ax = c, and show that a local minimum point is a global minimum point.]



Homework Equations


Constrained optimization


The Attempt at a Solution


("o" means dot product)
Let M = {x | Ax = c} and f(x) = (1/2)x o Qx - b o x.
Suppose x0 ∈ M is a local min point of f on M.
Suppose, on the contrary, that x0 is NOT a global min point. Then there must exist a point x1 ∈ M s.t. f(x1) < f(x0).

Let x(s) = (1-s)x0 + s x1, s any real number. Note that Ax(s) = (1-s)Ax0 + s Ax1 = (1-s)c + sc = c, so x(s) ∈ M for every s.
x(0)=x0, x(1)=x1
Let φ(s)=f(x(s)), s any real number. Since every x(s) is feasible and x(s) → x0 as s → 0, φ has a local min at s=0.
=> φ'(0)=0 and φ''(0)≥0 (1st and 2nd order necessary conditions for a local min)

By chain rule, 0 = φ'(0) = x1 o Qx0 - x0 o Qx0 - b o (x1-x0)
and 0 ≤ φ''(0) = x1 o Qx1 - x1 o Qx0 - x0 o Qx1 + x0 o Qx0
and I'm stuck here...how can I arrive at a contradiction from here? I tried adding them to get φ'(0)+φ''(0)≥0, but it doesn't seem to help.

Any help would be much appreciated!
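
As a sanity check, the chain-rule formulas above can be verified numerically. This is a minimal sketch with made-up Q, b, x0, x1 (any symmetric Q will do); since φ is a quadratic in s, central differences reproduce φ'(0) and φ''(0) up to rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                      # symmetric Q, as the formulas assume
b, x0, x1 = (rng.standard_normal(n) for _ in range(3))

f = lambda x: 0.5 * x @ Q @ x - b @ x
phi = lambda s: f((1 - s) * x0 + s * x1)

# The chain-rule expressions from the attempt above
phi1 = x1 @ Q @ x0 - x0 @ Q @ x0 - b @ (x1 - x0)
phi2 = x1 @ Q @ x1 - x1 @ Q @ x0 - x0 @ Q @ x1 + x0 @ Q @ x0

# Central differences at s = 0 (exact up to rounding: phi is quadratic in s)
h = 1e-4
fd1 = (phi(h) - phi(-h)) / (2 * h)
fd2 = (phi(h) - 2 * phi(0) + phi(-h)) / h**2
print(np.isclose(phi1, fd1), np.isclose(phi2, fd2))   # True True
```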
 
  • #2
Am I on the right track by adding them to get φ'(0)+φ''(0) ≥ 0? If not, can someone give me some hints on what to do next? It does not seem to lead to a contradiction.

Thanks.
 
  • #3
By the way, I would like to clarify that A and Q are matrices and x is a point in R^d (in case you are confused by the question).
 
  • #4
kingwinner said:
By the way, I would like to clarify that A and Q are matrices and x is a point in R^d (in case you are confused by the question).

The fact that you are restricting yourself to an affine feasible set is crucial. You need to think about what happens when you project x down onto the feasible set. For example, assuming the matrix A is m x n with m < n and full row rank, the feasible set is an (n-m)-dimensional hyperplane. Within that hyperplane you can choose a basis of (n-m) vectors, then write the objective as a quadratic in the (n-m) coefficients of the expansion of x in that basis. You then have an unconstrained problem in fewer dimensions, and you are assuming you have a local minimum. I won't say any more, but just give a simple example to show what is happening.

Suppose we want to minimize f(x1,x2) = 2x1^2 - x2^2 (a non-convex function) over the line L: x1 - x2 = 0. A basis for feasible points is the vector (1,1), and any feasible x has the form (t,t), t in R. For such points the objective is f_L(t) = t^2, which certainly is a convex function.
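
A quick numerical check of this toy example (illustration only):

```python
import numpy as np

# Minimize f(x1, x2) = 2*x1^2 - x2^2 over the line L: x1 - x2 = 0.
f = lambda x1, x2: 2 * x1**2 - x2**2

# Feasible points have the form (t, t), and there f reduces to
# f_L(t) = 2*t^2 - t^2 = t^2, a convex parabola with minimum at t = 0.
t = np.linspace(-2.0, 2.0, 5)
print(f(t, t))       # [4. 1. 0. 1. 4.], i.e. t**2

# Off the line f is indefinite: along the x2-axis it is -x2^2, unbounded below.
print(f(0.0, t))     # [-4. -1. -0. -1. -4.]
```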

RGV
 
  • #5
I'm sorry, but I don't understand the point you're making in relation to this proof.

So I've shown that
0 = φ'(0) = x1 o Qx0 - x0 o Qx0 - b o (x1-x0)
and 0 ≤ φ''(0) = x1 o Qx1 - x1 o Qx0 - x0 o Qx1 + x0 o Qx0

=> φ'(0)+φ''(0)≥0

I think from here I need to show that f(x1)-f(x0)≥0. This would be a contradiction.
But the problem is that the expression for φ'(0)+φ''(0) is a mess and doesn't seem to simplify to f(x1)-f(x0). What else can I do from here?

Thanks.
 
  • #6
kingwinner said:
I'm sorry, but I don't understand the point you're making in relation to this proof.

So I've shown that
0 = φ'(0) = x1 o Qx0 - x0 o Qx0 - b o (x1-x0)
and 0 ≤ φ''(0) = x1 o Qx1 - x1 o Qx0 - x0 o Qx1 + x0 o Qx0

=> φ'(0)+φ''(0)≥0

I think from here I need to show that f(x1)-f(x0)≥0. This would be a contradiction.
But the problem is that the expression for φ'(0)+φ''(0) is a mess and doesn't seem to simplify to f(x1)-f(x0). What else can I do from here?

Thanks.

I was ignoring your proof and going back to first principles. The point is: the constraint is absolutely crucial. Just look at the little example I gave.

RGV
 
  • #7
OK, surely without the constraint the statement can be false. But we are given that the constraint IS satisfied in this situation, and asked to PROVE that local min => global min under these assumptions.

So I struggled for another day...still can't get the proof to work.

Would someone be nice enough to help me out with the proof? Any input/comments/hints are appreciated!
 
  • #8
0 = φ'(0) = x1 o Qx0 - x0 o Qx0 - b o (x1-x0)
and 0 ≤ φ''(0) = x1 o Qx1 - x1 o Qx0 - x0 o Qx1 + x0 o Qx0

=> φ'(0)+φ''(0)≥0

I think from here I need to show that f(x1)-f(x0)≥0. (which is a contradiction)


Will a Taylor expansion work? But the problem is that I think this will introduce a remainder term...how can I get rid of the remainder?
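
For the record, the remainder worry evaporates for this particular f: φ(s) = f(x(s)) is itself a quadratic polynomial in s, so its second-order Taylor expansion has zero remainder. A sketch of how this would close the argument (note the factor 1/2, which is why φ'(0)+φ''(0) alone does not simplify to f(x1)-f(x0)):

```latex
% phi is a quadratic polynomial in s, so its Taylor expansion terminates:
\varphi(s) = \varphi(0) + \varphi'(0)\,s + \tfrac{1}{2}\varphi''(0)\,s^{2}
  \qquad \text{for all } s \in \mathbb{R} \text{ (no remainder term).}
% Evaluate at s = 1 and use phi'(0) = 0, phi''(0) >= 0:
f(x_{1}) - f(x_{0}) = \varphi(1) - \varphi(0)
  = \varphi'(0) + \tfrac{1}{2}\varphi''(0) \;\ge\; 0,
% contradicting the assumption f(x_1) < f(x_0).
```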
 
  • #9
kingwinner said:
0 = φ'(0) = x1 o Qx0 - x0 o Qx0 - b o (x1-x0)
and 0 ≤ φ''(0) = x1 o Qx1 - x1 o Qx0 - x0 o Qx1 + x0 o Qx0

=> φ'(0)+φ''(0)≥0

I think from here I need to show that f(x1)-f(x0)≥0. (which is a contradiction)


Will a Taylor expansion work? But the problem is that I think this will introduce a remainder term...how can I get rid of the remainder?

I think that I have given all the help/hints allowed by Forum rules.

RGV
 
  • #10
Let M={x|Ax=c} and f(x)=(1/2)x o Qx - b o x.
I think you're suggesting there is an easier way to do the proof and I should probably discard my method? So I think you're suggesting that f: M->R is a convex function. But how can this be rigorously proved?

Now A is not necessarily invertible, in fact A is not even necessarily a square matrix, so we can't solve for x in Ax=c by taking an inverse and plugging the result into f(x).

Thanks.
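
Although A need not be invertible, the affine feasible set M = {x | Ax = c} can still be parametrized without inverting anything: take one particular solution x_p plus a basis N for the null space of A, which is exactly the reduction RGV describes. A minimal sketch with made-up data (here Q is built positive semidefinite so the reduced problem has a minimizer; the dimensions and matrices are hypothetical, for illustration only):

```python
import numpy as np
from scipy.linalg import lstsq, null_space

rng = np.random.default_rng(1)
m, n = 2, 5                        # A is 2 x 5: not square, not invertible
A = rng.standard_normal((m, n))
c = rng.standard_normal(m)
Q = rng.standard_normal((n, n))
Q = Q.T @ Q                        # symmetric PSD, so a minimizer exists
b = rng.standard_normal(n)

# Every feasible x can be written x = xp + N @ t, where A @ xp = c and
# the columns of N span the null space of A (dimension n - m here).
xp = lstsq(A, c)[0]                # one particular solution of Ax = c
N = null_space(A)                  # n x (n - m) basis matrix

# Substituting turns f into an UNCONSTRAINED quadratic in t:
#   g(t) = (1/2) t.(N'QN)t + (N'(Q xp - b)).t + const
H = N.T @ Q @ N
g = N.T @ (Q @ xp - b)
t_star = np.linalg.solve(H, -g)    # stationary point of the reduced problem
x_star = xp + N @ t_star

print(np.allclose(A @ x_star, c))  # True: x_star satisfies the constraint
```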
 

Related to Constrained Optimization Proof

1. What is constrained optimization?

Constrained optimization is a mathematical technique used to find the optimal solution to a problem, subject to a set of constraints or limitations.

2. How is constrained optimization different from unconstrained optimization?

Constrained optimization involves finding the optimal solution within a restricted set of values, while unconstrained optimization does not have any limitations on the variables.

3. What are the types of constraints in constrained optimization?

The types of constraints in constrained optimization include equality constraints, where the variables must satisfy a specific equation, and inequality constraints, where the variables must satisfy a specific inequality (e.g. greater than or less than).

4. How is the optimal solution determined in constrained optimization?

The optimal solution in constrained optimization is determined by finding the values of the variables that satisfy all of the constraints and maximize or minimize the objective function.

5. What is the role of Lagrange multipliers in constrained optimization?

Lagrange multipliers are used in constrained optimization to convert the constrained problem into an unconstrained problem, making it easier to find the optimal solution. They help to incorporate the constraints into the objective function by introducing additional variables called multipliers.
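
For the equality-constrained quadratic discussed in this thread, the Lagrangian recipe is concrete: stationarity of L(x, λ) = (1/2)x o Qx - b o x + λ o (Ax - c) in both x and λ gives one linear system (the KKT system). A minimal sketch with made-up data, assuming Q is symmetric positive definite and A has full row rank so the system is nonsingular:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 5
A = rng.standard_normal((m, n))
c = rng.standard_normal(m)
Q = rng.standard_normal((n, n))
Q = Q.T @ Q + np.eye(n)                  # symmetric positive definite
b = rng.standard_normal(n)

# grad_x L = Qx - b + A' lam = 0 and grad_lam L = Ax - c = 0, i.e.
#   [Q  A'] [x  ]   [b]
#   [A  0 ] [lam] = [c]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([b, c]))
x, lam = sol[:n], sol[n:]

print(np.allclose(A @ x, c))             # True: feasibility
print(np.allclose(Q @ x + A.T @ lam, b)) # True: stationarity
```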
