What is the optimal solution for constrained nonlinear programming problems?

In summary: at the proposed optimum the binding constraint holds with equality (for problem 1, ##2x_1 + x_2 = 4 \leq 4##), so the candidate satisfies all the conditions. The difficulty is getting there without guess and check. One way is to plot the feasible set (graph the three constraints and see where they overlap) and use the plot to make an educated guess. In this case the optimal point lies at the intersection of two of the constraints, which is a common occurrence in these problems: with a good geometric understanding of the problem, you can often make an educated guess at the optimal solution without solving it explicitly.
  • #1
squenshl

Homework Statement


1. Maximise ##x_1^2+(x_2-5)^2## subject to ##x_1 \geq 0, x_2 \geq 0## and ##2x_1+x_2 \leq 4##.
2. Minimise ##x_1^2+(x_2-5)^2## subject to ##x_1 \geq 0, x_2 \geq 0## and ##2x_1+x_2 \leq 4##.
3. Maximise ##2x_2^2-x_1## subject to ##x_1 \geq 0, x_2 \geq 0## and ##x_1^2+x_2^2 \leq 1##.

Homework Equations

The Attempt at a Solution


1. After applying Lagrange multipliers I get ##x_1 = \lambda## and ##x_2 = \frac{\lambda+10}{2}##, which in turn gives ##\lambda = -\frac{2}{5}##, so ##x_1 = -\frac{2}{5}## and ##x_2 = \frac{24}{5}##. But ##x_1, \lambda < 0##, which violates ##x_1 \geq 0## and the Kuhn–Tucker conditions, and I'm not sure what to do next.
2. After applying Lagrange multipliers I get ##x_1 = -\lambda## and ##x_2 = \frac{10-\lambda}{2}##, which in turn gives ##\lambda = \frac{2}{5}##, so ##x_1 = -\frac{2}{5}## and ##x_2 = \frac{24}{5}##. But ##x_1 < 0##, which violates ##x_1 \geq 0## and the Kuhn–Tucker conditions, and I'm not sure what to do next.
3. After applying Lagrange multipliers I get ##x_1 = -\frac{1}{2\lambda}## and ##x_2 = 0##, which in turn gives ##\lambda = 2##, so ##x_1 = -\frac{1}{4}##. But ##x_1 < 0##, which violates ##x_1 \geq 0## and the Kuhn–Tucker conditions, and I'm not sure what to do next.
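As a quick numerical sanity check on problem 3 (my own side calculation, not part of the assigned method), scanning a grid over the quarter disk suggests the maximum sits where two constraints are active at once:

```python
# Grid scan of problem 3: maximise 2*x2^2 - x1 over the quarter disk
# x1 >= 0, x2 >= 0, x1^2 + x2^2 <= 1.  Resolution n is an ad-hoc choice.
best = None
n = 200
for i in range(n + 1):
    for j in range(n + 1):
        x1, x2 = i / n, j / n
        if x1**2 + x2**2 <= 1:          # stay inside the disk
            val = 2 * x2**2 - x1
            if best is None or val > best[0]:
                best = (val, x1, x2)
print(best)   # (2.0, 0.0, 1.0): both x1 >= 0 and the disk constraint bind
```

This is only a hint about where the optimum might be; the KT analysis still has to confirm it.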
 
  • #2

In (1), you may have ##x_1 = 0## at the optimal solution; the only way to be sure is to try it out and see if you satisfy all the Kuhn-Tucker conditions. If you write the Lagrangian as
[tex] L = x_1^2 + (x_2 - 5)^2 + \lambda (4 - 2 x_1 - x_2) [/tex]
then you need
[tex] \begin{array}{l} \partial L /\partial x_1 \leq 0 , \: x_1 \geq 0 \; \text{and at least one of these } \: = 0\\
\partial L /\partial x_2 \leq 0 , \: x_2 \geq 0 \; \text{and at least one of these } \: = 0\\
\lambda \geq 0 , \; 2x_1 + x_2 \leq 4 , \; \text{and at least one of these is an equality}
\end{array}
[/tex]
Can you satisfy these conditions when you assume ##x_1 = 0## and ##x_2 > 0##? If not, guess again and start over.
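If it helps, these sign-and-complementarity checks can be automated. Here is a small sketch; the helper name `kkt_ok` and its tolerance are my own hypothetical choices:

```python
def kkt_ok(x1, x2, lam, tol=1e-9):
    """Check the conditions above for the max problem with
    L = x1^2 + (x2-5)^2 + lam*(4 - 2*x1 - x2)."""
    dL1 = 2 * x1 - 2 * lam        # dL/dx1
    dL2 = 2 * (x2 - 5) - lam      # dL/dx2
    slack = 4 - 2 * x1 - x2       # slack in 2*x1 + x2 <= 4
    conds = [
        dL1 <= tol and x1 >= -tol and min(abs(dL1), x1) <= tol,
        dL2 <= tol and x2 >= -tol and min(abs(dL2), x2) <= tol,
        lam >= -tol and slack >= -tol and min(lam, slack) <= tol,
    ]
    return all(conds)

# The guess x1 = 0 with the constraint tight forces x2 = 4 and
# lam = -2 (from dL/dx2 = 0), which violates lam >= 0:
print(kkt_ok(0.0, 4.0, -2.0))   # False
```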

Similarly for the other cases that are giving you trouble.

Note: inequality-constrained nonlinear optimization problems are inherently much harder than equality-constrained ones. Often it is necessary to make several "guesses" about the nature of the optimum (variable = 0 or derivative = 0?, multiplier = 0 or constraint is tight?). Many NLP solvers solve a controlled sequence of equality-constrained problems (at least to sufficient accuracy to know correct signs of multipliers and variables, not necessarily to full accuracy), and then change one or more of the binding constraints, to see if the new solution works out, etc.
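A toy version of that guessing loop for problem 1: enumerate the candidate points the guesses would produce (vertices, edge stationary points, the interior stationary point), discard the infeasible ones, and compare objective values. This is illustrative only; real solvers are organised very differently:

```python
# Candidate points for maximising f over the triangle
# x1 >= 0, x2 >= 0, 2*x1 + x2 <= 4.
def f(x1, x2):
    return x1**2 + (x2 - 5)**2

candidates = [
    (0.0, 0.0), (2.0, 0.0), (0.0, 4.0),   # vertices (two constraints tight)
    (0.0, 5.0),                           # dL/dx2 = 0 on the edge x1 = 0
    (-2/5, 24/5),                         # stationary point on 2*x1 + x2 = 4
]
feasible = [(x1, x2) for (x1, x2) in candidates
            if x1 >= 0 and x2 >= 0 and 2 * x1 + x2 <= 4]
best = max(feasible, key=lambda p: f(*p))
print(best, f(*best))   # (2.0, 0.0) 29.0
```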
 
Last edited:
  • #3
Great, thanks.

So if ##x_1 = 0##, then ##\lambda = 0## because ##x_1 = \lambda##. So ##x_2 = 5##. This satisfies the conditions above but doesn't satisfy the other Kuhn-Tucker condition of ##\frac{\partial L}{\partial \lambda} \geq 0## as ##\frac{\partial L}{\partial \lambda} = 4-0-5 = -1.##
 
  • #4

You cannot assume ##\lambda = x_1## because you cannot assume ##\partial L/ \partial x_1 = 0##. If you do assume ##x_2 > 0## then you have two conditions to solve: ##\partial L / \partial x_2 = 0## and ##2 x_1 + x_2 = 4##, to find the "unknowns" ##x_2, \lambda##. You will now have a candidate set of values for ##x_1 (= 0), x_2## and ##\lambda##, so you can check if all the KT conditions work out for that candidate set.
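Concretely, for problem 1 with ##x_1 = 0## the tight constraint gives ##x_2 = 4##, and then
[tex] \frac{\partial L}{\partial x_2} = 2(x_2 - 5) - \lambda = 2(4 - 5) - \lambda = -2 - \lambda = 0 \;\Longrightarrow\; \lambda = -2 < 0, [/tex]
so that candidate fails the ##\lambda \geq 0## condition and a different guess is needed.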

Anyway, I edited my previous post, to alter/expand on the suggestion. You should re-read it.
 
Last edited:
  • #5
Great, thanks a lot!
P.S. I just saw your edited post.

For ##x_1 = 0## we get ##x_2 \leq 4##, but all of these values result in a negative ##\lambda##, violating the KT conditions.
 
Last edited:
  • #6

So, try again! Like I said, these problems can require several guesses.

However, sometimes it makes sense to think about what the problem means before you even start to try solving it. Can you think of a simple geometric interpretation of the problem?
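In case it helps later, here is one such interpretation worked numerically for the minimisation version (problem 2): ##x_1^2 + (x_2-5)^2## is the squared distance from the point ##(0,5)##, so problem 2 asks for the feasible point closest to ##(0,5)##:

```python
# The unconstrained projection of (0, 5) onto the line 2*x1 + x2 = 4
# lands at (-2/5, 24/5), which has x1 < 0 and is infeasible, so the
# closest feasible point slides along the boundary to the vertex (0, 4).
def f(x1, x2):
    return x1**2 + (x2 - 5)**2

proj = (-2/5, 24/5)          # foot of perpendicular from (0, 5)
print(f(*proj))              # ~0.2, but infeasible since x1 < 0
print(f(0.0, 4.0))           # 1.0: best feasible value
```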
 
Last edited:
  • #7
Great.

The answer should be ##x_1 = 2##, ##x_2 = 0## and ##\lambda = 2##.
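That candidate can be checked directly against the conditions quoted in post #2 (a small verification sketch of my own):

```python
# Verify (x1, x2, lam) = (2, 0, 2) for problem 1 against the
# stationarity and complementarity conditions.
x1, x2, lam = 2.0, 0.0, 2.0
dL1 = 2 * x1 - 2 * lam        # = 0: fine, since x1 > 0 needs dL/dx1 = 0
dL2 = 2 * (x2 - 5) - lam      # = -12 <= 0: allowed, since x2 = 0
slack = 4 - 2 * x1 - x2       # = 0: constraint tight, so lam > 0 is allowed
print(dL1, dL2, slack)        # 0.0 -12.0 0.0
```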
 

FAQ: What is the optimal solution for constrained nonlinear programming problems?

1. What is nonlinear programming?

Nonlinear programming is a mathematical optimization technique used to solve problems with nonlinear constraints or objective functions. It involves finding the optimal values for a set of decision variables that satisfy a set of constraints while minimizing or maximizing an objective function.

2. How is nonlinear programming different from linear programming?

Nonlinear programming differs from linear programming in that it allows for nonlinear constraints and objective functions, while linear programming only allows for linear constraints and objective functions. This means that nonlinear programming can handle more complex and realistic problems that cannot be solved with linear programming.

3. What are some common applications of nonlinear programming?

Nonlinear programming has many applications in various fields such as engineering, economics, finance, and operations research. Some common applications include portfolio optimization, scheduling and resource allocation, and parameter estimation.

4. What are the main challenges in solving nonlinear programming problems?

Solving nonlinear programming problems can be challenging due to the complexity of the mathematical models and the large number of decision variables and constraints involved. Additionally, finding the global optimal solution can be difficult as the objective function may have multiple local optimal solutions.

5. What are some techniques used to solve nonlinear programming problems?

There are multiple techniques used to solve nonlinear programming problems, including gradient-based methods, interior point methods, and genetic algorithms. These methods use different approaches to iteratively find the optimal solution by updating the decision variables based on the current solution. The choice of technique depends on the specific problem and its complexity.
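As an illustration of the gradient-based family, here is a minimal projected-gradient ascent sketch applied to problem 3 above. The step size and iteration count are ad-hoc choices, not tuned:

```python
import math

# Maximise f(x) = 2*x2^2 - x1 over x1 >= 0, x2 >= 0, x1^2 + x2^2 <= 1
# by taking gradient steps and projecting back onto the feasible set.
def project(x1, x2):
    x1, x2 = max(x1, 0.0), max(x2, 0.0)   # clamp to the first quadrant
    r = math.hypot(x1, x2)
    if r > 1.0:                           # pull back onto the unit disk
        x1, x2 = x1 / r, x2 / r
    return x1, x2

x1, x2, step = 0.5, 0.5, 0.1
for _ in range(200):
    g1, g2 = -1.0, 4.0 * x2               # gradient of f
    x1, x2 = project(x1 + step * g1, x2 + step * g2)

print(round(x1, 6), round(x2, 6))         # converges to the corner (0, 1)
```

The iterates are driven toward smaller ##x_1## and larger ##x_2## until both the quadrant and disk constraints bind, matching the active-constraint picture discussed in the thread.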
