Lagrange multipliers with vectors and matrices

In summary, the gradient of F(w,λ) = wCw^T − λ(wu^T − 1) with respect to w can be computed cleanly using Einstein (index) notation, giving ∇F = 2wC − λu; the first-order necessary condition is that this gradient vanishes, 2wC − λu = 0. Index notation gives a systematic way to carry out such matrix-calculus derivations and generalises to more complicated settings.
  • #1
IniquiTrance
My textbook is using Lagrange multipliers in a way I'm not familiar with.

[itex]F(w,\lambda) = wCw^T - \lambda(wu^T - 1)[/itex]

Why is the first-order necessary condition

[itex]2wC - \lambda u = 0[/itex]?

Is it because

[itex]\nabla F = 2wC - \lambda u[/itex]?

Why does [itex]\nabla F[/itex] equal this?

Many thanks!

Edit: C is a covariance matrix
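
(A note on conventions, as my reading of the notation rather than something the textbook states: take [itex]w[/itex] and [itex]u[/itex] to be [itex]1 \times n[/itex] row vectors and [itex]C[/itex] a symmetric [itex]n \times n[/itex] matrix, so that [itex]wCw^T[/itex] and [itex]wu^T[/itex] are scalars. The gradient is then taken componentwise with respect to [itex]w[/itex],

[tex]
(\nabla F)_i = \frac{\partial F}{\partial w_i}, \qquad i = 1, \dots, n,
[/tex]

so [itex]\nabla F[/itex] is again a [itex]1 \times n[/itex] row vector and [itex]\nabla F = 0[/itex] is [itex]n[/itex] scalar equations.)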
 
  • #2
The easiest way to think about these things is to use Einstein notation (http://en.wikipedia.org/wiki/Einstein_notation).

[itex]F = w_m C_{mn} w_n - \lambda(w_m u_m - 1)[/itex]

where repeated indices are summed over their range.

Then, using [itex]\partial_{w_i} w_j = \delta_{ij}[/itex], where [itex]\delta_{ij}[/itex] is the Kronecker delta (http://en.wikipedia.org/wiki/Kronecker_delta),

we can calculate [itex]\nabla F[/itex]:

[tex]
(\nabla F)_i = \partial_{w_i} F
= \delta_{im} C_{mn} w_n + w_m C_{mn} \delta_{in} - \lambda(\delta_{im} u_m - 0)
= C_{in} w_n + w_m C_{mi} - \lambda u_i
= 2 w_m C_{mi} - \lambda u_i
= (2wC - \lambda u)_i
[/tex]

where we have used the fact that C and δ are symmetric matrices.

This calculation can also be done purely symbolically, but I find it easier (and make fewer errors with transposes etc.) using index notation. It also generalises to more complicated situations where you have covariant and contravariant indices, or different classes of indices (such as holomorphic and antiholomorphic in complex cases), as in DeWitt notation (http://en.wikipedia.org/wiki/DeWitt_notation).
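
As a quick numerical sanity check (a minimal sketch, not from the thread: it assumes a small random symmetric [itex]C[/itex] and stores the row vectors [itex]w[/itex] and [itex]u[/itex] as 1-D arrays), the formula [itex]2wC - \lambda u[/itex] can be compared against a finite-difference gradient of [itex]F[/itex]:

[code]
import numpy as np

# Minimal numerical check of the index-notation result above (small random example).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
C = A @ A.T                      # symmetric "covariance" matrix
w = rng.standard_normal(n)       # row vector w, stored as a 1-D array
u = rng.standard_normal(n)       # row vector u
lam = 0.7                        # the multiplier lambda

def F(w):
    # F(w, lambda) = w C w^T - lambda (w u^T - 1)
    return w @ C @ w - lam * (w @ u - 1.0)

analytic = 2 * w @ C - lam * u   # the claimed gradient 2wC - lambda*u

# Central finite differences, one component of w at a time.
eps = 1e-6
numeric = np.empty(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    numeric[i] = (F(w + e) - F(w - e)) / (2 * eps)

print(abs(analytic - numeric).max())  # should be ~1e-8 or smaller
[/code]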
 

Related to Lagrange multipliers with vectors and matrices

1. What are Lagrange multipliers used for?

Lagrange multipliers are used in optimization problems to find the maximum or minimum value of a function subject to constraints. They allow us to incorporate these constraints into the optimization process.

2. How do Lagrange multipliers work?

Lagrange multipliers use gradients to find the critical points of a function subject to constraints. The gradient of the objective function is set equal to a scalar multiple (the Lagrange multiplier) of the gradient of the constraint function, and this system of equations, together with the constraint itself, can be solved to find candidate optima.
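
For instance (a standard two-variable example, added here for illustration): to maximise [itex]f(x,y) = xy[/itex] subject to [itex]g(x,y) = x + y - 1 = 0[/itex], set [itex]\nabla f = \lambda \nabla g[/itex]:

[tex]
(y, x) = \lambda (1, 1) \;\Rightarrow\; x = y = \lambda, \qquad x + y = 1 \;\Rightarrow\; x = y = \tfrac{1}{2},
[/tex]

which is the constrained maximum, with multiplier [itex]\lambda = \tfrac{1}{2}[/itex].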

3. Can Lagrange multipliers be used with vectors and matrices?

Yes. Lagrange multipliers can be used with scalar-valued functions of vector or matrix arguments, as in this thread. The gradients then become vectors or matrices themselves, and the products in the optimality conditions are inner products taken over all components.
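
As a concrete matrix-valued illustration, here is the problem from this thread solved end to end (a sketch that assumes [itex]C[/itex] is invertible, which is not guaranteed for a general covariance matrix): the conditions [itex]2wC = \lambda u[/itex] and [itex]wu^T = 1[/itex] give [itex]w^* = uC^{-1} / (uC^{-1}u^T)[/itex], which can be checked numerically.

[code]
import numpy as np

# Sketch: minimise w C w^T subject to w u^T = 1 (assumes C symmetric positive definite).
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)      # symmetric positive definite covariance matrix
u = rng.standard_normal(n)

# First-order conditions: 2wC = lambda*u and w u^T = 1  =>  w* = u C^{-1} / (u C^{-1} u^T).
uCinv = np.linalg.solve(C, u)    # C^{-1} u; equals u C^{-1} as a vector since C is symmetric
w_star = uCinv / (u @ uCinv)
lam = 2.0 / (u @ uCinv)          # multiplier making 2 w* C = lambda * u

print(w_star @ u)                           # ~1: the constraint holds
print(abs(2 * w_star @ C - lam * u).max())  # ~0: the first-order condition holds
[/code]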

4. What are the benefits of using Lagrange multipliers with vectors and matrices?

Using Lagrange multipliers with vectors and matrices allows us to optimize functions with multiple variables and constraints. It also allows for more complex and realistic optimization problems to be solved, such as those involving machine learning or engineering applications.

5. Are there any limitations to using Lagrange multipliers with vectors and matrices?

One limitation is that the optimization process can become computationally intensive when dealing with large matrices or high-dimensional problems. Additionally, there may not always be a feasible solution to the optimization problem, in which case the use of Lagrange multipliers may not be effective.
