Euler's Method


alane1994

Active member
Here is my problem; I have been trying this for a couple of hours. I have sought help from a professor, and we still couldn't get it. Here is the question in full.
Consider the initial value problem below to answer the following.

a) Find the approximations to $$y(0.2)$$ and $$y(0.4)$$ using Euler's method with time steps of $$\Delta t=0.2, 0.1, 0.05, \text{ and } 0.025$$.
b) Using the exact solution given, compute the errors in the Euler approximations at $$t=0.2$$ and $$t=0.4$$.
c) Which time step results in the most accurate approximation? Explain your observations.
d) In general, how does halving the time step affect the error at $$t=0.2$$ and $$t=0.4$$?

$$y^{\prime}(t)=-2y$$, $$y(0)=1$$, $$y(t)=e^{-2t}$$
I am approaching the point of crying because nothing I do seems to work... I would post my work so far, but I have about 4 pages of it, and typing all of that out would be a waste of time. Any and all help would be appreciated...

I am confused as to how to start, and I don't expect work to be done for me... I just need liberal amounts of guidance to get me on my way.


Jameson

Staff member
Using the variables defined in this problem, you have a point $(t, y(t))$. The derivative is expressed in terms of $y$, but for calculations you'll need to write it in terms of $t$, I believe. So I would rewrite the derivative as $y'(t)=-2e^{-2t}$. This is the derivative you get by differentiating $y(t)=e^{-2t}$, so everything makes sense so far.

Let's say $s_0=y(0)$, $s_1=s_0+(\Delta t)\, y'(0)$ and more generally $s_{n+1}=s_n+(\Delta t)\, y'(t_n)$, where $t_n=n\Delta t$. I hope that makes sense. These aren't normally the variables I see used in books.

Let's get from $t=0$ to $t=0.2$ with $\Delta t = 0.2$.

$s_0=y(0)=1$

$s_1=s_0+(0.2)\,y'(0)=1+(0.2)(-2)=0.6$

I believe that's how it's done, but it's been a while. Where are you having problems?
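To make the stepping concrete, here is a minimal Python sketch of the same iteration (my own illustration, not from any textbook; it takes the slope $f(t,y)=-2y$ straight from the ODE, and the function name and step counts are mine):

```python
def euler(f, t0, y0, dt, n_steps):
    """Approximate y(t0 + n_steps * dt) for y' = f(t, y) by Euler's method."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + dt * f(t, y)  # follow the tangent line for one step
        t = t + dt
    return y

# IVP from the problem: y'(t) = -2y, y(0) = 1
f = lambda t, y: -2 * y

print(euler(f, 0.0, 1.0, 0.2, 1))  # one step of size 0.2, approx y(0.2)
print(euler(f, 0.0, 1.0, 0.2, 2))  # two steps, approx y(0.4)
```

One step gives $1 + 0.2(-2) = 0.6$, and a second step gives $0.6 + 0.2(-1.2) = 0.36$.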

MarkFL

Staff member
Here are some excerpts taken from notes I wrote on the topic when I was a student:

The Fundamental Theorem of Calculus (FTOC) provides the vital connecting link between the two tools of calculus: differentiation and integration. This link offers a way to demonstrate that the methods for approximating definite integrals and the numerical methods for approximating solutions to first-order initial value problems are mathematically equivalent. To illustrate this, let's begin with the so-called derivative form of the FTOC:

The existence of an indefinite integral of $\displaystyle f(x)$, i.e., of a function $\displaystyle F(x)$ whose derivative is $\displaystyle f(x)$, is given by:

(1) $\displaystyle \frac{d}{dx}\int_a^x f(u)\,du=f(x)$ or $\displaystyle F(x)+C=\int_a^x f(u)\,du$, C a constant.

It should be remarked that the variable $u$ in the symbol $\displaystyle f(u)\,du$ given in (1) is only a "dummy variable," for the variable of integration, whatever it may be called, is "integrated out" and disappears. Having established this, let's go on to define:

$\displaystyle F(x_1)+C=\int_a^{x_1} f(u)\,du$ and $\displaystyle F(x_2)+C=\int_a^{x_2} f(u)\,du$ where $\displaystyle x_1<x_2$

Then we have:

$\displaystyle \left(F(x_1)+C \right)-\left(F(x_2)+C \right)=\left(\int_a^{x_1} f(u)\,du \right)-\left(\int_a^{x_2} f(u)\,du \right)$

$\displaystyle F(x_1)-F(x_2)=\left(\int_a^{x_1} f(u)\,du \right)-\left(\int_a^{x_1} f(u)\,du+\int_{x_1}^{x_2} f(u)\,du \right)$

(2) $\displaystyle \int_{x_1}^{x_2} f(u)\,du=F(x_2)-F(x_1)$

This is the anti-derivative form of the FTOC, and from this we may also conclude:

$\displaystyle \int_a^b f(u)\,du=-\int_b^a f(u)\,du$

We have tacitly assumed that $\displaystyle f(x)$ is continuous on $\displaystyle [a,b]$.

While the existence of indefinite and definite integrals of continuous functions is established, the technique of finding them may be far from simple. In fact, in many cases the integrals of elementary functions cannot be expressed in terms of elementary functions themselves. For example, consider the simple functions:

$\displaystyle f(x)=\sqrt{x^3+1}$ and $\displaystyle f(x)=e^{x^2}$

While there are substitutions and techniques to transform many integrals into forms found in a finite table of integrals, the evaluation of a definite integral may prove exceedingly difficult if not impossible. It therefore becomes useful to develop methods for approximating definite integrals, as they find wide application in physical problems. Besides its use for plane areas, the definite integral is used for volumes, arc lengths, surface areas, rectilinear motion, centers of mass, moments of inertia, electrostatic and gravitational potentials, liquid pressure, biology, economics, etc.
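As a quick numerical illustration (my own example, not part of the original notes), a left Riemann sum can approximate one of these definite integrals even though the integrand has no elementary antiderivative:

```python
import math

def left_riemann(f, a, b, n):
    """Approximate the integral of f over [a, b] with n left-endpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

# e^{x^2} has no elementary antiderivative, yet the definite
# integral over [0, 1] is easily approximated numerically.
approx = left_riemann(lambda x: math.exp(x * x), 0.0, 1.0, 100000)
print(approx)  # close to 1.4626
```
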

Approximate Integration

Suppose we have some approximation or numeric scheme A such that:

(4) $\displaystyle A\approx\int_a^b f(x,y)\,dx$, where $\displaystyle \frac{dy}{dx}=f(x,y)$

Since the slope of $y$ is assumed to be continuous on the interval $\displaystyle [a,b]$, given some initial value for $y$, we may suppose that an explicit relationship between $x$ and $y$ can be found within the rectangle with corners $\displaystyle (a,y(a))$ and $\displaystyle (b,y(b))$, by the existence and uniqueness theorem.

Now, also suppose we have divided the interval $\displaystyle [a,b]$ into n sub-intervals of equal width where:

$\displaystyle a\le x_k<x_{k+1}\le b$ and $\displaystyle x_0=a,\,x_n=b,\,0\le k\le n-1$.

Since by the additive property we have:

$\displaystyle \int_a^b f(x,y)\,dx=\sum_{k=0}^{n-1}\left[\int_{x_k}^{x_{k+1}} f(x,y)\,dx \right]$ we may define:

(5) $\displaystyle A=\sum_{k=0}^{n-1}A_k$ where $\displaystyle A_k\approx\int_{x_k}^{x_{k+1}} f(x,y)\,dx$

If the approximation converges, then we must have:

$\displaystyle \lim_{n\to\infty}\left[\sum_{k=0}^{n-1}A_k \right]=\int_a^b f(x,y)\,dx$

By defining $\displaystyle y_k\equiv y(x_k)$ and applying (2) we have:

$\displaystyle A_n\approx y_{n+1}-y_n$

Solving for $\displaystyle y_{n+1}$ gives rise to the numeric scheme:

(6) $\displaystyle y_{n+1}\approx y_n+A_n$

Thus, the approximating summation for the definite integral has become an approximation to the first order IVP:

(6a) $\displaystyle \frac{dy}{dx}=f(x,y)$, $\displaystyle y(x_0)=y_0$

We now have a means of approximating the solution $\displaystyle y(x)$ at $\displaystyle x=x_n$ where $\displaystyle a\le x_n\le b$.

Riemann Sums and the Approximation Method of Euler

In my opinion, one of the most straightforward approximation methods available for definite integrals is the Riemann sum with regular partitions, which approximates the definite integral with a series of rectangles of equal width whose heights are the function's values at the left endpoints $\displaystyle x_n$. But, as usual, the tradeoff for the simplicity of the method is that it does not converge very rapidly.

First, approximate the following definite integral using a Riemann sum with a single rectangle:

(10) $\displaystyle \int_{x_n}^{x_{n+1}} f(x,y)\,dx\approx\Delta x\cdot f(x_n,y_n)$, where $\displaystyle \Delta x=x_{n+1}-x_n$

which leads to the numeric scheme:

(11) $\displaystyle y_{n+1}\approx y_n+\Delta x\cdot f(x_n,y_n)$, $\displaystyle y_0=y(x_0)$

where $\displaystyle A_n=\Delta x\cdot f(x_n,y_n)$, which is base $\displaystyle \Delta x$ times height $\displaystyle f(x_n,y_n)$. This is an example of an explicit scheme, i.e., one that can be solved directly for $\displaystyle y_{n+1}$.

Approximation Method of Euler (tangent line method)

When we use the direction field method to sketch a particular solution to an IVP, we try to visualize the intermediate directions between the isoclines we have drawn. If we follow a finite number of these directions, the sketch becomes a polygonal curve or chain of line segments. This polygonal curve is, visually speaking, an approximation to the solution. We can construct values $\displaystyle y_n$ that approximate the solution values $\displaystyle y(x_n)$ as follows:

One method we may use to demonstrate the derivation of Euler's method is through the use of the differential to obtain a linear approximation (the tangent line). Another method would be to use the point-slope formula or Taylor formula of order 1. At the point $\displaystyle (x_n,y_n)$, the slope of the solution is given by:

$\displaystyle \frac{dy}{dx}=f(x_n,y_n)$

Recall that the differential of the dependent variable gives the linear approximation $\displaystyle \Delta y\approx\Delta x\,\frac{dy}{dx}$.

Using $\displaystyle \Delta y=y_{n+1}-y_{n}$, this yields the recursive formula:

(12) $\displaystyle y_{n+1}\approx y_n+\Delta x\cdot f(x_n,y_n)$

Thus, the solution at $\displaystyle x=x_{n+1}$ may be approximated by (12). This is identical to (11), showing that the Riemann sum and the approximation method of Euler are equivalent.
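To see the equivalence numerically, take a slope that depends only on $x$; then Euler's method started from $y(a)=0$ accumulates exactly the same base-times-height terms as the left Riemann sum (a small sketch of my own; the function names are mine):

```python
def left_riemann(f, a, b, n):
    """Left Riemann sum for the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

def euler_xonly(f, x0, y0, dx, n):
    """Euler's method for y' = f(x), where the slope is independent of y."""
    x, y = x0, y0
    for _ in range(n):
        y += dx * f(x)  # the same term A_n = dx * f(x_n) as in the Riemann sum
        x += dx
    return y

g = lambda x: x * x
print(euler_xonly(g, 0.0, 0.0, 0.1, 10))  # Euler estimate of the integral of x^2 on [0, 1]
print(left_riemann(g, 0.0, 1.0, 10))      # left Riemann sum of the same integral
```

Both compute $\sum_{k=0}^{9}(0.1k)^2\cdot 0.1 = 0.285$, up to floating-point rounding.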

In a similar fashion, the trapezoidal method and the improved Euler's method are analogs, as are the mid-point rule and the second-order Runge-Kutta method, and Simpson's rule and the fourth-order Runge-Kutta method, but I will leave these for later.

alane1994

Active member
CORRECT

 $$\Delta{t}$$   Approximation to y(0.2)   Approximation to y(0.4)
 0.2             0.60000                   0.36000
 0.1             0.64000                   0.40960
 0.05            0.65610                   0.43047
 0.025           0.66342                   0.44013

 $$\Delta{t}$$   Approximations to y(0.2)   Approximations to y(0.4)
 0.2             0.80000                    0.64000
 0.1             0.81000                    0.65610
 0.05            0.81451                    0.66342
 0.025           0.81665                    0.66692

I cannot tell where I went wrong... I am so confused right now, and a little angry that I have to do this for Calculus I...

My Calculus class is a little messed up... We have covered the trapezoidal rule, Simpson's rule, and other things that I have been told are typically in Calculus II...


CaptainBlack

Well-known member
alane1994 said:
I cannot tell where I went wrong... I am so confused right now.

Consider $$\Delta t=0.2$$, then $$y(0.2) = y(0)+\Delta t\times y'(0) = 1-0.2 \times 2=0.6$$ and $$y(0.4)=y(0.2)+\Delta t \times y'(0.2)=0.6-0.2\times 1.2=0.36$$

CB
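For what it's worth, a short loop built only from the slope $f(t,y)=-2y$ (the exact solution is never used) reproduces the values in the first table above; this is my own check, not part of the original posts:

```python
f = lambda t, y: -2 * y  # slope from the IVP y'(t) = -2y, y(0) = 1

for dt in (0.2, 0.1, 0.05, 0.025):
    y, t, approx = 1.0, 0.0, {}
    while t < 0.4 - 1e-12:
        y += dt * f(t, y)  # one Euler step
        t += dt
        for target in (0.2, 0.4):
            if abs(t - target) < 1e-9:
                approx[target] = y  # record the approximations at t = 0.2 and 0.4
    print(f"dt={dt}: y(0.2) ~ {approx[0.2]:.5f}, y(0.4) ~ {approx[0.4]:.5f}")
```

The printed values agree with the table labeled CORRECT (0.60000, 0.36000 for $\Delta t = 0.2$, and so on), which suggests that table is the intended answer key.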

alane1994

Active member
CaptainBlack said:
Consider $$\Delta t=0.2$$, then $$y(0.2) = y(0)+\Delta t\times y'(0) = 1-0.2 \times 2=0.6$$ and $$y(0.4)=y(0.2)+\Delta t \times y'(0.2)=0.6-0.2\times 1.2=0.36$$

CB
How do you calculate $$y^{\prime}(0)$$?
I am confused as to how it equals $-2$.

alane1994

Active member
OK, I have a similar problem to try.
Instead of y(0.2) & y(0.4), it is y(0.4) & y(0.8).
 $$\Delta{t}$$   Approximations of y(0.4)   Approximations of y(0.8)
 0.4             0.20000                    ?????
 0.2
 0.1
 0.05

So... for the first blank I have 0.2. Is that correct? And how would you set up the second blank? I have this started.

$$y(0.8)=y(0.4)+(\Delta{t} \times y^{\prime}(0.4))$$
$$y(0.8)=0.2-(0.4 \times ....$$

I am unsure how to calculate $$y^{\prime}(0.4)$$

CaptainBlack

Well-known member
alane1994 said:
How do you calculate $$y^{\prime}(0)$$?
I am confused as to how it equals $-2$.
It comes from the statement of the problem; the ODE we are solving is $$y'(t)=-2y(t)$$ and the initial condition $$y(0)=1$$, so $$y'(0)=-2y(0)=-2 \times 1=-2$$

CB

CaptainBlack

Well-known member
Jameson said:
Ok, then $y'(t)$ can be expressed as $y'(t)=-2y$ or $y'(t)=-2e^{-2t}$. Use the second one to plug in values. You want to find $y'(0.4)$. Plugging that in, you get $$\displaystyle y'(0.4)=-2e^{-0.8}$$. You'll want a decimal approximation of that.

Make sense?
You should not be using the $$y(t)=e^{-2t}$$ information at all; it is the solution to the initial value problem and is only there for you to compare the numerical solution against.

CB
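Putting that point into code (a sketch of my own): the Euler update uses only the slope $f(t,y)=-2y$ from the ODE, while the exact solution $e^{-2t}$ enters only afterward, to measure the errors asked for in part (b):

```python
import math

def euler_value(dt, t_end):
    """Euler approximation of y(t_end) for y' = -2y, y(0) = 1, with step dt."""
    y = 1.0
    for _ in range(round(t_end / dt)):
        y += dt * (-2.0 * y)  # only the ODE's slope is used here
    return y

for dt in (0.2, 0.1):
    for t_end in (0.2, 0.4):
        exact = math.exp(-2.0 * t_end)  # exact solution, used solely for the error
        err = abs(euler_value(dt, t_end) - exact)
        print(f"dt={dt}, t={t_end}: error ~ {err:.5f}")
```

Halving the step from 0.2 to 0.1 roughly halves each error, consistent with Euler's method being first-order accurate, which is the observation part (d) is after.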

Jameson

CaptainBlack said:
You should not be using the $$y(t)=e^{-2t}$$ information at all, it is the solution to the initial value problem and is only there for you to use to compare the numerical solution with.
Fair enough, and a good point. I am used to seeing these problems with the derivative given explicitly, but you're right that assuming we can use $y(t)$ for the computations is too much. Thanks for clearing that up.