Power series solution to differential equation

In summary, the thread is about finding the power series solution of a differential equation around the point x=0. The attempted solution substitutes the power series for y into the equation and solves for the coefficients. The discussion covers what "around the point x=0" means, why every coefficient of the resulting series must vanish, and how rewriting the equation as (1-x)^2 y'' - 2y = 0 avoids having to multiply out two power series. The main question is why equating a power series to zero forces all of its coefficients to be zero.
  • #1
V0ODO0CH1LD

Homework Statement



Find the power series solution of the differential equation
[tex] y''-\frac{2}{(1-x)^2}y=0 [/tex]
around the point ##x=0##.

Homework Equations


[tex] y=\sum_{n=0}^\infty{}c_nx^n [/tex]
[tex] y'=\sum_{n=0}^\infty{}c_{n+1}(n+1)x^n [/tex]
[tex] y''=\sum_{n=0}^\infty{}c_{n+2}(n+2)(n+1)x^n [/tex]

The Attempt at a Solution



If I substitute the power series for ##y## in the differential equation (and mess around with it a bit) I get:
[tex] \sum_{n=0}^\infty{}\left[c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n\right]x^n=0. [/tex]
Okay, so I replaced the problem of solving a differential equation with the problem of finding the coefficients of an infinite power series that satisfy the equation above, right? So what is the condition that the coefficients have to satisfy in order that the equation above is true for (at least) every ##x\in\mathbb{R}\setminus\{1\}##? Usually the next step here is to say that
[tex] c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n=0\,\Longrightarrow\, c_{n+2}=\frac{2}{(n+2)(n+1)(1-x)^2}c_n, [/tex]
but why is this the "general" condition that the ##c_i##'s have to satisfy? Are the terms inside the brackets always bigger than or equal to zero?

Also, what is the meaning of "around the point ##x=0##"? I assume I should take the limit as ##x## goes to zero at some point, but when?
 
  • #2
V0ODO0CH1LD said:

Find the power series solution of the differential equation
[tex] y''-\frac{2}{(1-x)^2}y=0 [/tex]
around the point ##x=0##.

Try rewriting the DE as ##(1-x)^2 \, y'' - 2 y = 0##, or ##(1 - 2x + x^2) y'' - 2 y = 0##.
 
  • #3
Ray Vickson said:
Try rewriting the DE as ##(1-x)^2 \, y'' - 2 y = 0##, or ##(1 - 2x + x^2) y'' - 2 y = 0##.

Why?
 
  • #4
V0ODO0CH1LD said:
Usually the next step here is to say that
[tex] c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n=0\,\Longrightarrow\, c_{n+2}=\frac{2}{(n+2)(n+1)(1-x)^2}c_n, [/tex]
but why is this the "general" condition that the ##c_i##'s have to satisfy?
##x## shouldn't appear in the recurrence relation.
 
  • #5
V0ODO0CH1LD said:
Why?

To avoid having to multiply out the two power series for y(x) and 1/(1-x)^2.
 
  • #6
V0ODO0CH1LD said:
Why?

To expand on my previous answer: if you keep the ##y/(1-x)^2## form you must evaluate the product of the two infinite series ##y(x) =\sum_{n=0}^{\infty} c_n x^n## and ##1/(1-x)^2 = \sum_{n=0}^{\infty} (n+1) x^n##. You must do that in order to have ##y'' - 2y/(1-x)^2## expressed as an infinite series in ##x##, whose coefficients would then all be equated to zero. The way you did it was not valid because your LHS was not an infinite series in ##x.##
 
  • #7
Ray Vickson said:
To expand on my previous answer: if you keep the ##y/(1-x)^2## form you must evaluate the product of the two infinite series ##y(x) =\sum_{n=0}^{\infty} c_n x^n## and ##1/(1-x)^2 = \sum_{n=0}^{\infty} (n+1) x^n##. You must do that in order to have ##y'' - 2y/(1-x)^2## expressed as an infinite series in ##x##, whose coefficients would then all be equated to zero. The way you did it was not valid because your LHS was not an infinite series in ##x.##

I get that it wouldn't be an infinite power series in ##x##, but why isn't it an infinite series in ##x##?

Also, I know that the whole ##1/(1-x)^2## thing shouldn't be there, but it will disappear when I take the limit as ##x## approaches ##0## of it. I just don't know when I am supposed to do it.

My main question however is: even if the original equation didn't contain ##1/(1-x)^2## why is it that equating a power series to zero means that either ##x=0## (this I get) or that every coefficient of the series equals zero? I feel like if the coefficients are all zero the series is zero for all ##x## but the other way around doesn't cover all cases, does it?
 
  • #8
V0ODO0CH1LD said:
My main question however is: even if the original equation didn't contain ##1/(1-x)^2## why is it that equating a power series to zero means that either ##x=0## (this I get) or that every coefficient of the series equals zero? I feel like if the coefficients are all zero the series is zero for all ##x## but the other way around doesn't cover all cases, does it?

Because the powers [itex]x^k[/itex] are linearly independent.
 
  • #9
V0ODO0CH1LD said:
I get that it wouldn't be an infinite power series in ##x##, but why isn't it an infinite series in ##x##?

Also, I know that the whole ##1/(1-x)^2## thing shouldn't be there, but it will disappear when I take the limit as ##x## approaches ##0## of it. I just don't know when I am supposed to do it.

My main question however is: even if the original equation didn't contain ##1/(1-x)^2## why is it that equating a power series to zero means that either ##x=0## (this I get) or that every coefficient of the series equals zero? I feel like if the coefficients are all zero the series is zero for all ##x## but the other way around doesn't cover all cases, does it?

Your infinite series ##\sum t_n## has terms of the form
[tex] t_n = c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n[/tex]
Just having ##\sum t_n \equiv 0## does NOT mean you can say that ##t_n = 0## for all ##x##, or even for most ##x##. In fact, for any given values of the ##c_k## your ##t_n## will be non-zero except at (maybe) two values of ##x##; these would be where
[tex] (1-x)^2 = \frac{2 c_n}{(n+1)(n+2) c_{n+2}} [/tex]
If the right-hand-side here is ##> 0## there are two values of ##x## that make ##t_n = 0##; if the right-hand-side = 0 there is 1 value of ##x## that works. If the right-hand-side is ##< 0## we cannot ever have ##t_n = 0##.

No, the only way you can guarantee the truth of the statement that "a sum of zero for all x implies that all terms vanish" is to have linearly-independent terms, so your terms must all be of the form ##r_n x^n## for constants ##r_n##. Since you want this to = 0 for ALL x, it absolutely requires that you have ##r_n = 0## for all ##n##.

Here is a little exercise for you: assuming that the (nicely convergent) infinite series ##S(x) = \sum_n r_n x^n## is zero for all ##x##, PROVE that this implies ##r_n = 0## for all ##n##. I'll show you how to start. First, ##0 = S(0) = r_0,## so we have ##r_0 = 0##. Therefore, we have ##S(x) = r_1 x + r_2 x^2 + \cdots##. Since this is supposed to be identically equal to 0 we must have ##S'(x) = 0## for all ##x##. Therefore, we have ##S'(0) = r_1 = 0##. And so it goes.
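This can be checked numerically (a sketch, assuming the closed form ##y = 1/(1-x)##, which one can verify solves the ODE by substitution since ##y'' = 2/(1-x)^3 = 2y/(1-x)^2##): at a fixed ##x## the bracketed terms ##t_n## are individually far from zero, yet the series ##\sum_n t_n x^n## still vanishes.

```python
# For y = 1/(1-x), the power-series coefficients are c_n = 1 for |x| < 1.
# The bracketed terms t_n = c_{n+2}(n+2)(n+1) - 2 c_n/(1-x)^2 are then
# individually nonzero at a generic x, yet sum_n t_n x^n vanishes,
# because the ODE is satisfied.
x = 0.5
t = [(n + 2) * (n + 1) - 2 / (1 - x) ** 2 for n in range(60)]
print(t[:3])                                  # [-6.0, -2.0, 4.0]
series = sum(tn * x**n for n, tn in enumerate(t))
print(abs(series) < 1e-9)                     # True: the sum is ~0
```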
 
  • #10
V0ODO0CH1LD said:
I get that it wouldn't be an infinite power series in ##x##, but why isn't it an infinite series in ##x##?
I'm not sure what your point is here.

Also, I know that the whole ##1/(1-x)^2## thing shouldn't be there, but it will disappear when I take the limit as ##x## approaches ##0## of it. I just don't know when I am supposed to do it.
You generally don't take limits in this kind of problem where you're simply trying to find a power-series solution to a differential equation.

My main question however is: even if the original equation didn't contain ##1/(1-x)^2## why is it that equating a power series to zero means that either ##x=0## (this I get) or that every coefficient of the series equals zero? I feel like if the coefficients are all zero the series is zero for all ##x## but the other way around doesn't cover all cases, does it?
Right, and since the differential equation has to hold for all ##x##, you want all of the coefficients to vanish.
 
  • #11
Okay, I got why I can't have ##1/(1-x)^2## in there, but how do I get rid of it so that what I'm left with is a power series with constant coefficients?
 
  • #12
Follow Ray's hint in post #2: rewrite the equation in the form ##(1-x)^2 y'' - 2 y = 0##.

ehild
 
  • #13
V0ODO0CH1LD said:
Okay, I got why I can't have ##1/(1-x)^2## in there, but how do I get rid of it so that what I'm left with is a power series with constant coefficients?

Use the suggestion in post #2:

[tex](1 - x)^2y'' -2y = y'' - 2xy'' + x^2y'' -2y \\
= \sum_{n=0}^{\infty} a_n n(n-1)x^{n-2} - 2x\sum_{n=0}^\infty a_nn(n-1)x^{n-2}
+ x^2 \sum_{n=0}^\infty a_n n(n-1)x^{n-2} - 2\sum_{n=0}^\infty a_nx^n \\
= \sum_{n=0}^{\infty} a_n n(n-1)x^{n-2} - 2\sum_{n=0}^\infty a_nn(n-1)x^{n-1}
+ \sum_{n=0}^\infty a_n n(n-1)x^{n} - 2\sum_{n=0}^\infty a_nx^n[/tex]

Alternatively you can expand [itex](1 - x)^{-2}[/itex] in binomial series, but you'll need to treat the cases [itex]|x| < 1[/itex] and [itex]|x| > 1[/itex] separately.
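Collecting the coefficient of ##x^n## across the four sums above gives the recurrence ##a_{n+2}(n+2)(n+1) - 2n(n+1)a_{n+1} + [n(n-1)-2]a_n = 0## (my reduction, worth re-deriving). A minimal numerical sketch, assuming the closed forms ##1/(1-x)## and ##(1-x)^2##, which one can check solve the ODE by substitution:

```python
# Recurrence from collecting the x^n coefficient in the expansion above:
#   a_{n+2}(n+2)(n+1) - 2 n (n+1) a_{n+1} + (n(n-1) - 2) a_n = 0,
# i.e.  a_{n+2} = (2n a_{n+1} - (n - 2) a_n) / (n + 2),
# using n(n-1) - 2 = (n - 2)(n + 1).
def series_coeffs(a0, a1, n_terms):
    """First n_terms power-series coefficients given a_0 and a_1."""
    a = [float(a0), float(a1)]
    for n in range(n_terms - 2):
        a.append((2 * n * a[n + 1] - (n - 2) * a[n]) / (n + 2))
    return a

# y = 1/(1-x) solves the ODE and has a_n = 1 for every n:
print(series_coeffs(1, 1, 8))        # all ones
# y = (1-x)^2 solves it too, with coefficients 1, -2, 1, 0, 0, ...
print(series_coeffs(1, -2, 8))
```

Both known solutions are reproduced exactly, which is a good sign the recurrence was collected correctly.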
 
  • #14
V0ODO0CH1LD said:
Okay, I got why I can't have ##1/(1-x)^2## in there, but how do I get rid of it so that what I'm left with is a power series with constant coefficients?

I already told you that in posts #2 and #6.
 
  • #15
Ray Vickson said:
I already told you that in posts #2 and #6.

Sorry, it didn't seem obvious that the ##x##'s could be "absorbed" into the summation that way.

But thanks!
 
  • #16
V0ODO0CH1LD said:
Sorry, it didn't seem obvious that the ##x##'s could be "absorbed" into the summation that way.

But thanks!

The product ##\sum_n u_n x^n \, \times \, \sum_n v_n x^n## can be expressed as ##\sum_n w_n x^n##, where the ##w##-sequence is the convolution of the ##u## and ##v## sequences; that is,
[tex] w_n = \sum_{k=0}^n u_k v_{n-k} [/tex]
see, eg., http://en.wikipedia.org/wiki/Power_series .

So the product of
[tex] y(x) = \sum_{n=0}^{\infty} c_n x^n [/tex]
and
[tex] 1/(1-x)^2 = (1-x)^{-2} = \sum_{n=0}^{\infty} (n+1) x^n [/tex]
is
[tex] \frac{y(x)}{(1-x)^2} = \sum_{n=0}^{\infty} d_n x^n,\\
\text{where}\\
d_n = \sum_{k=0}^n (k+1) c_{n-k} = c_n + 2c_{n-1} + 3 c_{n-2}+ \cdots + (n+1) c_0 [/tex]
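The convolution formula can be sanity-checked with a few lines of code (a sketch of mine, taking ##y = 1/(1-x)## so that ##c_n = 1##, in which case ##y/(1-x)^2 = 1/(1-x)^3## with coefficients ##(n+1)(n+2)/2##):

```python
# Cauchy product of two power series: w_n = sum_{k=0}^n u_k v_{n-k}.
def convolve(u, v):
    return [sum(u[k] * v[m - k] for k in range(m + 1))
            for m in range(min(len(u), len(v)))]

N = 10
c = [1] * N                      # y = 1/(1-x)   ->  c_n = 1
v = [n + 1 for n in range(N)]    # 1/(1-x)^2     ->  v_n = n + 1
d = convolve(c, v)

# y/(1-x)^2 = 1/(1-x)^3 has coefficients (n+1)(n+2)/2, and indeed:
print(d)          # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
```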
 
  • #17
Thanks!

But what if ##p(x)## and ##q(x)## in
[tex] y''+py'+qy=0 [/tex]
can't be expanded around ##x=0##? I know there's the Frobenius method, but couldn't I just expand ##y## around some other ##c\in\mathbb{R}##? So that the assumed solution would look like
[tex] \sum_{n=0}^\infty{}a_n(x-c)^{n} [/tex]
and then also expand ##p(x)## and ##q(x)## around ##c## so I can use the convolution formula and factor out the ##(x-c)^n## to get the recurrence relation?

I know that if ##x=0## is a regular singular point of the differential equation there's the Frobenius method (which I know how to use but I don't understand why it works).

Instead of assuming a solution in a "regular" power series we assume a solution of the form
[tex] \sum_{n=0}^\infty{}a_nx^{n+r} [/tex]
but what is this ##r##? What does it represent? Is it somehow compensating for the fact that a singularity happens for some ##a_i##'s? Why does it work?

EDIT: Also, is the Frobenius method a generalization of the power series solution method (i.e. does it only work if the point in question is a regular singular point, or does it also work if the point is ordinary)?
 

Related to Power series solution to differential equation

1. What is a power series solution to a differential equation?

A power series solution to a differential equation is a method of solving the equation by representing the solution as an infinite sum of terms, each containing higher powers of the independent variable. The coefficients are determined directly from the equation, and keeping more terms yields the solution to any desired accuracy within the series' interval of convergence.

2. When is it appropriate to use a power series solution?

A power series solution is typically used when the differential equation cannot be solved using standard methods such as separation of variables or substitution. It is also useful when the solution is known to be an analytic function, which means it can be represented by a power series.

3. How do you find the coefficients in a power series solution?

The coefficients in a power series solution can be found by substituting the power series into the differential equation and solving for each coefficient. This can often be done by using recurrence relations or by comparing coefficients on both sides of the equation.
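For instance (a minimal sketch using the simpler equation y' = y rather than the equation from the thread above): substituting y = Σ c_n x^n and matching the coefficients of x^n gives (n+1)c_{n+1} = c_n, whose solution with c_0 = 1 is c_n = 1/n!, the Taylor series of e^x.

```python
import math

# For y' = y, matching x^n coefficients gives c_{n+1} = c_n / (n + 1).
c = [1.0]                         # initial condition y(0) = 1
for n in range(12):
    c.append(c[n] / (n + 1))

print(c[:4])                      # [1.0, 1.0, 0.5, 0.16666...]
x = 0.3
partial = sum(cn * x**n for n, cn in enumerate(c))
print(abs(partial - math.exp(x)) < 1e-10)   # True: the series matches e^x
```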

4. Can a power series solution always be used to solve a differential equation?

No, a power series solution may not always be applicable. In some cases, the differential equation may have singular points or the solution may not be an analytic function, making it impossible to represent as a power series. In these cases, other methods must be used.

5. Are there any limitations to using a power series solution?

One limitation of using a power series solution is that any truncated series only approximates the true solution; the accuracy depends on the number of terms kept. Additionally, the series is valid only within its radius of convergence, which is limited by the distance from the expansion point to the nearest singularity of the equation's coefficients.
