Taylor expansion, of gradient of a function, in multiple dimensions

In summary, the Taylor expansion for a multidimensional function can be written as [itex]f(\overline{X} + \overline{P}) = f(\overline{X}) + \nabla f(\overline{X}+t\overline{P})\cdot\overline{P}[/itex], where t lies in (0,1). The last term is a dot product between the gradient vector and the vector [itex]\overline{P}[/itex]. The integral that appears in the related expansion of the gradient arises in the derivation, which can be found in the lecture slides provided. This formula is the Mean Value Theorem applied to a function of several variables along the segment from [itex]\overline{X}[/itex] to [itex]\overline{X}+\overline{P}[/itex].
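As a sketch of where this mean value form comes from (assuming f is continuously differentiable), one can reduce to a single-variable problem along the segment from [itex]\overline{X}[/itex] to [itex]\overline{X}+\overline{P}[/itex]:

```latex
\begin{align*}
g(t) &= f(\overline{X} + t\overline{P}), \qquad t \in [0,1] \\
g'(t) &= \nabla f(\overline{X} + t\overline{P}) \cdot \overline{P}
  && \text{(chain rule)} \\
g(1) &= g(0) + g'(t) \quad \text{for some } t \in (0,1)
  && \text{(one-dimensional mean value theorem)} \\
f(\overline{X} + \overline{P}) &= f(\overline{X}) + \nabla f(\overline{X} + t\overline{P}) \cdot \overline{P}
\end{align*}
```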
  • #1
Whenry
Hello all,

I understand that the Taylor expansion for a multidimensional function can be written as

[itex]f(\overline{X} + \overline{P}) = f(\overline{X}) + \nabla f(\overline{X}+t\overline{P})\cdot\overline{P}[/itex]

where t is in (0,1).

Although I haven't seen that form before, it makes sense.

But I don't understand the integral in the following Taylor expansion,

[itex]\nabla f(\overline{X} + \overline{P}) = \nabla f(\overline{X}) + \int^{1}_{0} \nabla^{2} f(\overline{X}+t\overline{P})\,\overline{P}\,dt[/itex]

Could someone help me understand the derivation?

Thank you,

Will
 
  • #2
Whenry said:
Hello all,

I understand that the Taylor expansion for a multidimensional function can be written as

[itex]f(\overline{X} + \overline{P}) = f(\overline{X}) + \nabla f(\overline{X}+t\overline{P})\cdot\overline{P}[/itex]

where t is in (0,1).

I don't understand it. What kind of multiplication is going on in the last term? It appears to be two vectors multiplied together. Is it a dot product?


But I don't understand the integral in the following Taylor expansion,

[itex]\nabla f(\overline{X} + \overline{P}) = \nabla f(\overline{X}) + \int^{1}_{0} \nabla^{2} f(\overline{X}+t\overline{P})\,\overline{P}\,dt[/itex]

I don't either, but this is an interesting formula and I would like to know where you saw it. Is this from a subject like fluid dynamics? Can you give a link to a page?
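A sketch of one standard derivation (assuming f is twice continuously differentiable): apply the fundamental theorem of calculus componentwise to the gradient along the segment. This also answers the multiplication question, since [itex]\nabla^{2} f[/itex] is the Hessian matrix and the product with [itex]\overline{P}[/itex] is a matrix-vector product:

```latex
\begin{align*}
h(t) &= \nabla f(\overline{X} + t\overline{P}), \qquad t \in [0,1] \\
h'(t) &= \nabla^{2} f(\overline{X} + t\overline{P})\,\overline{P}
  && \text{(chain rule; } \nabla^{2} f \text{ is the Hessian)} \\
h(1) &= h(0) + \int_{0}^{1} h'(t)\,dt
  && \text{(fundamental theorem of calculus, per component)} \\
\nabla f(\overline{X} + \overline{P}) &= \nabla f(\overline{X}) + \int_{0}^{1} \nabla^{2} f(\overline{X} + t\overline{P})\,\overline{P}\,dt
\end{align*}
```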
 

Related to Taylor expansion, of gradient of a function, in multiple dimensions

1. What is a Taylor expansion?

A Taylor expansion is a mathematical technique for approximating a function near a point by a sum of polynomial terms built from the function's derivatives at that point. For sufficiently smooth functions, retaining higher-degree terms gives successively better local approximations, and for analytic functions the infinite series converges to the function itself.

2. What is the gradient of a function?

The gradient of a function is a vector that represents the rate of change of the function at a particular point. It is defined as a vector containing the partial derivatives of the function with respect to each of its independent variables.

3. How is Taylor expansion used to find the gradient of a function in multiple dimensions?

In the multidimensional Taylor expansion, each dimension is treated as a separate independent variable and the gradient appears as the coefficient of the first-order term: [itex]f(\overline{X}+\overline{P}) \approx f(\overline{X}) + \nabla f(\overline{X})\cdot\overline{P}[/itex]. The gradient at a particular point is found by taking the partial derivatives of the function with respect to each of these variables and combining them into a vector.
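The first-order relationship above can be checked numerically. The following sketch uses a hypothetical test function f(x, y) = x² + 3xy (my own choice, not from the thread) and verifies that the linear model matches f(X + P) up to an O(|P|²) error:

```python
import numpy as np

# Hypothetical test function (illustration only): f(x, y) = x^2 + 3xy
def f(v):
    x, y = v
    return x**2 + 3*x*y

def grad_f(v):
    # Analytic gradient: (2x + 3y, 3x)
    x, y = v
    return np.array([2*x + 3*y, 3*x])

X = np.array([1.0, 2.0])
P = np.array([1e-4, -2e-4])   # small step, so second-order terms are tiny

# First-order Taylor approximation: f(X + P) ≈ f(X) + ∇f(X) · P
exact = f(X + P)
approx = f(X) + grad_f(X) @ P

# The discrepancy is the second-order remainder, of size O(|P|^2).
print(abs(exact - approx))
```

Shrinking P by a factor of 10 should shrink the printed error by roughly a factor of 100, which is a quick empirical check that the remainder really is second order.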

4. What are the benefits of using Taylor expansion to find the gradient of a function?

Truncating a Taylor expansion gives a local model of the function whose accuracy is controlled by the number of terms retained: the first-order (gradient) term gives a linear model, and the second-order (Hessian) term gives a quadratic model. This can be especially useful in higher dimensions, and it provides a general framework, with an explicit remainder term, for working with the gradient of a function of several variables.

5. Are there any limitations to using Taylor expansion for finding the gradient of a function?

One limitation of the Taylor expansion is that it requires the function to be sufficiently smooth: the derivatives used in the expansion must exist, and even then the infinite series need not converge to the function unless it is analytic. Additionally, the accuracy of a truncated expansion depends on the smoothness of the function, the distance from the expansion point, and the number of terms used. In some cases the expansion may not accurately represent the behavior of the function, leading to errors in the gradient calculation.
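The integral form of the expansion discussed earlier in the thread, [itex]\nabla f(\overline{X}+\overline{P}) = \nabla f(\overline{X}) + \int^{1}_{0} \nabla^{2} f(\overline{X}+t\overline{P})\,\overline{P}\,dt[/itex], can also be verified numerically. This sketch uses a hypothetical function f(x, y) = x³ + xy² (chosen for illustration) and approximates the integral with a composite midpoint rule:

```python
import numpy as np

# Hypothetical example function (illustration only): f(x, y) = x^3 + x*y^2
def grad_f(v):
    x, y = v
    # ∇f = (3x^2 + y^2, 2xy)
    return np.array([3*x**2 + y**2, 2*x*y])

def hess_f(v):
    x, y = v
    # ∇²f = [[6x, 2y], [2y, 2x]]
    return np.array([[6*x, 2*y], [2*y, 2*x]])

X = np.array([1.0, 2.0])
P = np.array([0.5, -0.3])

# Approximate ∫₀¹ ∇²f(X + tP) P dt with the composite midpoint rule.
n = 1000
ts = (np.arange(n) + 0.5) / n
integral = sum(hess_f(X + t*P) @ P for t in ts) / n

lhs = grad_f(X + P)               # left side of the identity
rhs = grad_f(X) + integral        # right side of the identity
print(np.max(np.abs(lhs - rhs)))  # should be close to 0
```

For this cubic f the Hessian entries are linear in t, so the midpoint rule integrates them exactly and the two sides agree to machine precision; for general smooth functions the agreement improves as n grows.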
