Taylor Series: Equivalence of Two Forms Explained

In summary: the two forms of the Taylor series are equivalent because the first equation is the Taylor series of a function of h, while the second is the Taylor series of a function of x expanded about 0. By setting g(h) = x + h, the first equation can be derived from the second. In several variables the expansion can be written compactly with multi-index notation, or with the shift operator, which satisfies e^{\mathbf{r}\cdot\hat{\nabla}} f(0) = f(\mathbf{r}).
  • #1
IMGOOD
I don't get how these two forms of the taylor series are equivalent:

[tex]f(x+h)= \sum_{k=0}^{\infty} \frac{f^{(k)}(x)}{k!} h^k [/tex]

[tex]f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}x^k[/tex]

The second one makes sense, but I just can't derive the first form from the second. I know it's something very simple, but I keep confusing myself!
 
  • #2
Rename x as x' in the second equation. Then the second equation is obtained from the first by setting

x = 0
h = x'
 
  • #3
Or, let g(h)=x+h. Then, f(x+h)=(f o g)(h).

The first equation is the Taylor (Maclaurin) series of (f o g)(h). Applying the chain rule, with g'(h) = 1:

[tex](f\circ g)^{(k)}(0) = (f^{(k)}\circ g)(0)\cdot 1=f^{(k)}(x+0)=f^{(k)}(x)[/tex]

[tex]\Rightarrow f(x+h)=\sum_{k=0}^{\infty} \frac{f^{(k)}(x)}{k!}h^k [/tex]
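As a quick sanity check (not from the thread), here is a small sketch of this identity using sympy, assuming it is available. Expanding f(x+h) as a series in h about h = 0 reproduces the sum of f^(k)(x) h^k / k! term by term; sin is just an arbitrary example function.

```python
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)  # any smooth function works; sin is just an example

# Left side: Taylor-expand sin(x + h) as a series in h about 0, up to h^4.
lhs = sp.series(sp.sin(x + h), h, 0, 5).removeO()

# Right side: sum_{k=0}^{4} f^(k)(x) / k! * h^k, exactly the first form above.
rhs = sum(sp.diff(f, x, k) / sp.factorial(k) * h**k for k in range(5))

assert sp.simplify(lhs - rhs) == 0
```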
 
  • #4
quasar987 said:
[tex](f\circ g)^{(k)}(0) = (f^{(k)}\circ g)(0)\cdot 1=f^{(k)}(x+0)=f^{(k)}(x)[/tex]
Your approach makes sense but could you explain the above line in a little more detail. Specifically, I don't get how you got [tex]f^{(k)}(x+0)[/tex] in the above equation.
 
  • #5
[tex](f^{(k)}\circ g)(0)\cdot 1=f^{(k)}(g(0))=f^{(k)}(x+0)=f^{(k)}(x)[/tex].

Sorry for stepping in, I was bored.
 
  • #6
Thanks!...
 
  • #7
quasar987 said:
Or, let g(h)=x+h. Then, f(x+h)=(f o g)(h).

The first equation is the Taylor (Maclaurin) series of (f o g)(h).

[tex](f\circ g)^{(k)}(0) = (f^{(k)}\circ g)(0)\cdot 1=f^{(k)}(x+0)=f^{(k)}(x)[/tex]

[tex]\Rightarrow f(x+h)=\sum_{k=0}^{\infty} \frac{f^{(k)}(x)}{k!}h^k [/tex]

Actually, I am still kinda confused. I know now how you got [tex]f^{(k)}(x)[/tex] but how did you get [tex]h^k[/tex]?
 
  • #8
Because we're computing the Taylor series of a function of h. Recall that I set g(h) = x + h, a function of h; x is considered constant. And this h-dependence is passed on to f(x+h): f(x+h) = (f o g)(h).
 
  • #9
Taylor series are never written consistently: some authors expand the series in x about zero, others expand in h (or a) about x, and still others evaluate the function at x, with the series expanded around a point a and the powers being of (x-a). So you can see lots of things like:

[tex]f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}x^k[/tex]

[tex]f(x+h)= \sum_{k=0}^{\infty} \frac{f^{(k)}(x)}{k!} h^k [/tex]

[tex]f(x)= \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (x-a)^k [/tex]

Now personally, I prefer to expand in x about the point a, as there is no guarantee that you will be able to expand about zero: the function may not even be defined there, or may be singular. Secondly, it's nice to be able to just write f(x) and not worry too much about the "arbitrary but fixed" point a (I detest this phrase). You can keep the regular f(x) notation and then change a at will.
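To illustrate the point about expanding away from zero (my own sketch, assuming sympy is available): log(x) is singular at 0, so a Maclaurin series does not exist, but the expansion about a = 1 works fine and tracks the function near that point.

```python
import sympy as sp

x = sp.symbols('x')
a = 1  # expansion point; a = 0 would fail since log is singular there

# Truncated Taylor series of log(x) about a, through the (x - a)^5 term.
approx = sum(sp.diff(sp.log(x), x, k).subs(x, a) / sp.factorial(k)
             * (x - a)**k for k in range(6))

# Near x = 1 the truncated series tracks log(x) closely.
err = abs(float(approx.subs(x, 1.1)) - float(sp.log(1.1)))
assert err < 1e-6
```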

By the way, anyone interested in Taylor series in higher dimensions should look into the rather nice multi-index notation for multi-variable analysis. It enables you to write things like:

[tex]f(\mathbf{x})=\sum_{|\alpha| \ge 0} \frac{\mathbf{D}^{\alpha}f(\mathbf{a})}{\alpha!} (\mathbf{x}-\mathbf{a})^{\alpha}[/tex]

Here [tex]\mathbf{x}[/tex] is a vector of n variables [tex](x_1,x_2,\ldots,x_n)[/tex], and [tex]\alpha[/tex] is a multi-index: an n-tuple of non-negative integers, with [tex]|\alpha|=\alpha_1+\cdots+\alpha_n[/tex], [tex]\alpha!=\alpha_1!\cdots\alpha_n![/tex] and [tex]\mathbf{x}^{\alpha}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}[/tex]. It is very useful for compacting the often awkward Taylor expansions in n dimensions. It takes a bit of getting used to but is worth it.

My personal favourite is its compression of the inherently forgettable, but undeniably useful, multinomial expansion:

[tex](x_1 + x_2 + \cdots + x_n )^k = k! \sum_{|\alpha|=k} \frac{\mathbf{x}^{\alpha}}{\alpha!}[/tex]

which is a good deal more memorable than the usual expansion.
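As a brute-force numerical check of this multi-index identity (my addition, not from the thread), one can enumerate all multi-indices alpha with |alpha| = k for small n and k and compare both sides at arbitrary sample values:

```python
from itertools import product
from math import factorial, prod, isclose

xs = [1.3, -0.7, 2.1]   # sample values for x_1, x_2, x_3 (n = 3)
k = 4

lhs = sum(xs) ** k

# Enumerate all multi-indices alpha = (a1, a2, a3) with |alpha| = k,
# summing x^alpha / alpha! and scaling by k!.
rhs = factorial(k) * sum(
    prod(x**a for x, a in zip(xs, alpha)) / prod(factorial(a) for a in alpha)
    for alpha in product(range(k + 1), repeat=3)
    if sum(alpha) == k
)

assert isclose(lhs, rhs)
```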
 
  • #10
For Taylor series in multiple dimensions, the easiest form to use is the shift operator

[itex] e^{\mathbf{r}\cdot\hat{\nabla}} [/itex]

which gives

[itex] e^{\mathbf{r}\cdot\hat{\nabla}} f(0)=f(\mathbf{r})[/itex]
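A small sketch of this shift-operator identity (my own check, assuming sympy is available): for a polynomial f, the series for exp(r . grad) terminates, and applying the truncated operator to f at the origin recovers f(r) exactly.

```python
import sympy as sp

x, y, rx, ry = sp.symbols('x y r_x r_y')
f = x**2 * y + 3 * x - y**2  # example polynomial of total degree 3

# Accumulate sum_{k=0}^{3} (r . grad)^k f / k!, evaluated at the origin.
g = f
result = sp.Integer(0)
for k in range(4):
    result += g.subs({x: 0, y: 0}) / sp.factorial(k)
    g = rx * sp.diff(g, x) + ry * sp.diff(g, y)  # apply (r . grad) once more

# The result is exactly f evaluated at r = (r_x, r_y).
assert sp.expand(result - f.subs({x: rx, y: ry})) == 0
```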
 

