Picard method of successive approximation

In summary, Picard's method is an iterative approach to solving a differential equation. Substituting each guess into an equivalent integral equation produces a better approximation, and the resulting sequence converges to a fixed point, which is the solution of the differential equation. The process is reminiscent of Newton's method, and its convergence is the substance of the existence and uniqueness theorem, which requires f(t, y) to be Lipschitz in y and continuous in t. However, the method can fail when f does not satisfy these requirements, for example when the initial value problem has more than one solution.
  • #1
matematikuvol
[tex]y(t) = y_0 + \int_{t_0}^{t} f(s, y(s))\, ds.[/tex]

Picard’s method starts with the definition of what it means to be a solution: if you guess that a function φ(t) is a solution, then you can check your guess by substituting it into the right-hand side of the integral equation above and comparing it to the left-hand side, which is simply φ(t) itself. The new idea is that the process of checking each guess produces a new guess which, even if it is not the correct solution, is a better approximation than the one you started with. In this way we obtain an iterative solution, with each new approximation computed from the previous one by the right-hand side of the integral equation. This should be reminiscent of Newton’s method. In fact, the proof that Picard’s method produces a convergent sequence is similar to the proof for Newton’s method.
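
To make the iteration concrete, here is a minimal sympy sketch. The IVP y' = y, y(0) = 1 and the helper name picard_iterates are my own choices for illustration, not from the passage above; each new iterate comes from substituting the previous one into the right-hand side of the integral equation.

[code]
# Picard iteration via symbolic integration (a sketch, assuming the
# example IVP y' = y, y(0) = 1; picard_iterates is a made-up helper).
import sympy as sp

t, s = sp.symbols("t s")

def picard_iterates(f, y0, t0, n):
    """Return the first n Picard iterates, starting from the constant guess y0."""
    y = sp.sympify(y0)  # initial guess: the constant function y0
    out = [y]
    for _ in range(n):
        # y_{k+1}(t) = y0 + integral from t0 to t of f(s, y_k(s)) ds
        y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, t0, t))
        out.append(sp.expand(y))
    return out

# With f(t, y) = y the exact solution is e^t, and the iterates are the
# partial sums of its Taylor series: 1, 1 + t, 1 + t + t**2/2, ...
for k, yk in enumerate(picard_iterates(lambda s_, y_: y_, 1, 0, 4)):
    print(f"y_{k}(t) = {yk}")
[/code]

Each printed iterate is visibly closer to e^t than the last, which previews the question below.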

How can I be sure that the nth approximation is better than the (n-1)th one? Is there an easy way to see this? Is there a case where this fails?
 
  • #2
The point is that you can establish a uniformly convergent sequence [tex]U_n[/tex] of functions that converges to a function [tex]U[/tex] (a differentiable function, actually). Since the sequence converges uniformly, a uniform limit can be passed through the integral, and f is continuous, so the limit passes through its arguments as well. Hence

[tex]U_n(t) = y_0 + \int_{t_0}^{t} f(U_{n-1}(s))\, ds \;\longrightarrow\; U(t) = y_0 + \int_{t_0}^{t} f(U(s))\, ds.[/tex]

Now the limit function U is clearly a solution to the differential equation, just by virtue of checking. That is how the approximations work. (And that they eventually converge to the solution is what it means for them to be "better" approximations each time: pick a large n, and [tex]\| U_n - U \|[/tex] is smaller than any given epsilon.)

For the existence and uniqueness theorem to work, f must be Lipschitz in X and continuous in t, where f is of the form f(t, X). (I dropped the "time variable" t above, but it doesn't matter.)

For example, a differential equation that doesn't satisfy the above requirements is [tex]X' = X^{1/3}[/tex] with the initial condition X(0) = 0. This system is not Lipschitz in X at 0, and two solutions can be found for the system around 0: one by separation of variables, and the other by just setting X equal to the zero function.
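
A quick symbolic check of this example (my own sketch; the second solution is the standard one from separation of variables):

[code]
# Verify that X' = X**(1/3), X(0) = 0 has two solutions: the zero
# function and X(t) = (2t/3)**(3/2) from separation of variables.
import sympy as sp

t = sp.symbols("t", nonnegative=True)

x_zero = sp.Integer(0)
x_sep = (sp.Rational(2, 3) * t) ** sp.Rational(3, 2)

for x in (x_zero, x_sep):
    residual = sp.simplify(sp.diff(x, t) - x ** sp.Rational(1, 3))
    print("residual:", residual, "| X(0) =", x.subs(t, 0))

# Both residuals print 0: both functions satisfy X' = X**(1/3) with
# X(0) = 0, so uniqueness fails where f is not Lipschitz.
[/code]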
 
  • #3
And what is [tex]X[/tex]?
 
  • #4
matematikuvol said:
And what is [tex]X[/tex]?

We can consider a differential equation or system as a function f with a domain D in [tex]\mathbb{R}^{n+1}[/tex], so that

[tex]f(t, y_1(t), y_2(t), \dots, y_n(t)) = (y_1'(t), y_2'(t), \dots, y_n'(t)) = X'[/tex]

where [tex]X = (y_1(t), y_2(t), \dots, y_n(t))[/tex].
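
For instance (my own illustration, not from the post above), the second-order equation y'' = -y becomes a first-order system in this notation by setting X = (y_1, y_2) = (y, y'):

[code]
# Rewrite y'' = -y as the first-order system X' = f(t, X),
# where X = (y1, y2) = (y, y').
import numpy as np

def f(t, X):
    y1, y2 = X
    return np.array([y2, -y1])  # (y1', y2') = (y2, -y1)

# One explicit Euler step from X(0) = (0, 1), just to show the shapes.
X = np.array([0.0, 1.0])
X = X + 0.01 * f(0.0, X)
print(X)  # approximately [0.01, 1.0]
[/code]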
 
  • #5
I'm not sure that each one is actually better than the last. Often, sequences converge, but they alternate between getting better and worse as they do so.

Out of laziness, I might be a little vague here, but if nothing else, this will convey the flavor of how I think about it.

I like to think of this as a fixed point problem.

You have a vector field that you want to integrate. You start out with some guess as to what the solution is, maybe a constant flow. Then you integrate all the vectors along that guess (sort of add them up). If the curve obtained by that construction is the curve itself, i.e., a fixed point, it solves the ODE because the tangent vector at each point agrees with the vector field.

Under the condition that the vector field is locally Lipschitz, this process yields a contraction mapping on a space of curves. That is, it brings curves closer together by some fixed factor C < 1 each time, where the distance between two curves is the maximum distance between them over all time t (the sup norm). So if two successive iterates start out at most some distance d apart, the next pair is at most Cd apart, the pair after that at most C^2 d, then C^3 d, and so on. Add all these up and you get a convergent geometric series, and the only way that can happen is for the curves to converge to a fixed point (fixed point because it's the result of iterating a mapping from an appropriate set of curves to itself). And as we said, a fixed point solves the ODE.
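
Here is a small numerical sketch of those shrinking distances (my own example: f(t, y) = y on [0, 0.5], with the sup norm taken over a grid and the integral done by the trapezoid rule):

[code]
# Iterate the Picard map for y' = y, y(0) = 1 on [0, 0.5] and watch the
# sup-norm distance between successive iterates shrink.
import numpy as np

t = np.linspace(0.0, 0.5, 1001)

def picard_map(y):
    # y_new(t) = 1 + integral from 0 to t of y(s) ds, since f(s, y) = y;
    # the integral is computed with the trapezoid rule on the grid.
    segments = 0.5 * (y[1:] + y[:-1]) * np.diff(t)
    return 1.0 + np.concatenate(([0.0], np.cumsum(segments)))

prev = np.ones_like(t)  # initial guess: the constant function 1
for n in range(6):
    cur = picard_map(prev)
    print(f"sup |y_{n + 1} - y_{n}| = {np.max(np.abs(cur - prev)):.2e}")
    prev = cur

# The printed distances fall at least geometrically (here even faster,
# like C^n / n!), which is what the contraction argument promises.
[/code]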
 

Related to Picard method of successive approximation

1. What is the Picard method of successive approximation?

The Picard method of successive approximation is a mathematical technique used to solve ordinary differential equations. It works by rewriting the differential equation as an equivalent integral equation and solving that equation iteratively, each iteration producing a better approximation of the solution. The method is particularly useful for nonlinear equations that cannot be solved in closed form by traditional methods.

2. How does the Picard method work?

The Picard method starts with an initial guess for the solution and substitutes it into the integral equation to generate a better approximation. Each approximation is used to generate the next one, and the process is repeated until the sequence converges, to the desired accuracy, to a fixed point: the solution.
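
For a concrete illustration (my own example, not part of the original answer), take y' = y with y(0) = 1. The iteration is

[tex]y_0(t) = 1, \qquad y_{n+1}(t) = 1 + \int_0^t y_n(s)\, ds,[/tex]

which gives [tex]y_1(t) = 1 + t[/tex], [tex]y_2(t) = 1 + t + \tfrac{t^2}{2}[/tex], and in general the nth partial sum of the Taylor series for [tex]e^t[/tex], the exact solution.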

3. What are the advantages of using the Picard method?

The Picard method is advantageous because it can be used to solve a wide range of nonlinear equations, including those that are difficult or impossible to solve using other methods. It is also relatively simple and straightforward to implement, making it a popular choice for solving many types of differential equations.

4. Are there any limitations to the Picard method?

While the Picard method is a powerful tool for solving nonlinear equations, it does have some limitations. Convergence is only guaranteed when the right-hand side satisfies a Lipschitz condition, and then only on a sufficiently small interval around the initial point. Additionally, the method may require a large number of iterations to achieve the desired level of accuracy, and the integrals at each step can quickly become difficult to evaluate in closed form.

5. How is the Picard method different from other numerical methods?

The Picard method differs from many other numerical methods in that it produces a sequence of approximating functions rather than a single numeric answer. This iterative approach is flexible and applies directly to nonlinear equations, for which the solution techniques developed for linear equations often do not help.
