# Numerical analysis homework

#### Thorra

##### Member
Sorry, but this is the only subject I could not pass even if I gave it my all every day and night of the semester. And I will still surely fail this subject, but as a last resort I will try to post my problem here, hoping to get a solution and maybe an explanation. Sorry if some of the phrasing is confusing; I'm merely translating from my native language.

1. The problem statement, all variables and given/known data
Differential operator $L$ is approximated with a positively defined difference operator $\Lambda > 0$ that has a full special-function ($\lambda$) and special-value set with $0 < \lambda_{min} < \lambda < \lambda_{max}$. Explore the numerical stability, with respect to the initial condition $s$ and the right-hand-side function $w$, of the following difference scheme:
$$\frac{y^{n+1}_{i}-y^{n}_{i}}{\tau} - k\Lambda\frac{y^{n+1}_{i}-y^{n}_{i}}{2} = w^{n}_{i}; \qquad y^{0}_{i}=s_{i}$$
where $k$ is a given constant and $w$ is a given grid function.

2. Relevant equations
Any basic explanation as to what is what will do, as I am extremely clueless in this entire ordeal.
I will take any help I can get if anybody is willing.

3. The attempt at a solution
I haven't had one yet and based on previous experience in this subject, all my attempts would be very, very futile and very, very wrong.

#### Klaas van Aarsen

##### MHB Seeker
Staff member
Sorry, but this is the only subject I could not pass even if I gave it my all every day and night of the semester. And I will still surely fail this subject, but as a last resort I will try to post my problem here, hoping to get a solution and maybe an explanation. Sorry if some of the phrasing is confusing; I'm merely translating from my native language.

1. The problem statement, all variables and given/known data
Differential operator $L$ is approximated with a positively defined difference operator $\Lambda > 0$ that has a full special-function ($\lambda$) and special-value set with $0 < \lambda_{min} < \lambda < \lambda_{max}$. Explore the numerical stability, with respect to the initial condition $s$ and the right-hand-side function $w$, of the following difference scheme:
$$\frac{y^{n+1}_{i}-y^{n}_{i}}{\tau} - k\Lambda\frac{y^{n+1}_{i}-y^{n}_{i}}{2} = w^{n}_{i}; \qquad y^{0}_{i}=s_{i}$$
where $k$ is a given constant and $w$ is a given grid function.

2. Relevant equations
Any basic explanation as to what is what will do, as I am extremely clueless in this entire ordeal.
I will take any help I can get if anybody is willing.

3. The attempt at a solution
I haven't had one yet and based on previous experience in this subject, all my attempts would be very, very futile and very, very wrong.
Welcome to MHB, Thorra! To be honest, as yet I can't make too much sense of your formulas.

You refer to $y_i^n$. Would you mind explaining what it stands for?
Apparently we're talking about y values on a grid of x values, each x value being identified by $x_i$.
But... is $y_i^n$ a vector?
What does the $n$ stand for? An iteration number? A step along a solution of a differential equation? The nth derivative?
The same question for $w_i^n$. If those are values on a grid, I don't understand how $n$ is involved. Or is it a typo?

You refer to $\Lambda$ as a difference operator.
From the context it seems it is a matrix.
Can it be that it should read "positive definite" instead of "positively defined"?
And that your $\lambda$ are eigenvalues?
Either way, in your difference scheme, $\Lambda$ does not seem to have the function of a matrix.
Can you clarify?

What is $\tau$? Is it a step size?

#### Thorra

##### Member
Hey! Thanks for correcting my horrible mangling of the formulas' syntax. I copied it from physicsforums, but found no such luck here and left.

You refer to $y_i^n$. Would you mind explaining what it stands for?
I'm sorry to say I'm not even sure. In previous exercises, "k" was in "n"'s place, and it indicated the time layer: n+1 would be the function's value at one point in time, while just n would be the function's value one step before.

But... is $y_i^n$ a vector?
Well... the professor drew some matrices on the board on occasion, but not that often, so I don't know, I'm sorry.

What does the $n$ stand for? An iteration number? A step along a solution of a differential equation? The nth derivative?
The "n" is, I guess, an iteration value indeed, as I said above in a different way.
The same question for $w_i^n$. If those are values on a grid, I don't understand how $n$ is involved. Or is it a typo?
I think $w_i^n$ means the initial data. But again, I'm really not that sure. This is why I am so hopelessly lost: I have no real material to learn from, and I didn't catch every detail in the lectures.

You refer to $\Lambda$ as a difference operator.
From the context it seems it is a matrix.
Can it be that it should read "positive definite" instead of "positively defined"?
Could be! In my notes I noted down that $\Lambda y=-y_{\overline{x}x}$.
I'm really sorry about the wrong syntax again, but I cannot find where I can do such advanced text editing here.

And that your $\lambda$ are eigenvalues?
Either way, in your difference scheme, $\Lambda$ does not seem to have the function of a matrix.
Can you clarify?
You're right, they're eigenvalues! I didn't know the appropriate word and yeah they're not functions, sorry.
What is $\tau$? Is it a step size?
Yes, it is the time step!

#### Klaas van Aarsen

##### MHB Seeker
Staff member
Hey! Thanks for correcting my horrible mangling of the formulas' syntax. I copied it from physicsforums, but found no such luck here and left.
Actually, I didn't. Someone else on staff here did. Okay.
Here's my interpretation.

You have a real valued function $y(t,x)$, where t is the time, and x is the x coordinate.
And I think you have the partial differential equation:
$$\frac{\partial y}{\partial t} - k \mathcal L y = w(t,x)$$
where w(t,x) is a known function, the initial condition $y(0,x)=s(x)$ is a known function, $k$ is a constant, and $\mathcal L$ is a differential operator with respect to x, which might for instance be $$\displaystyle \mathcal L = \frac{\partial^2}{\partial x^2}$$.
So it might for instance be:
$$\frac{\partial y}{\partial t} - k \frac{\partial^2 y}{\partial x^2} = w(t,x)$$

The range for x has been divided into a partition $x_0, x_1, ..., x_i, ..., x_M$.
We might as well assume that it has been divided into steps of size $h$, and that there are $M$ steps in total.
In other words: $x_i = x_0 + ih$.

The range for t has also been divided into discrete time steps of size $\tau$.
So $t_n = n \tau$.

Now we define $y_i^n = y(n \tau, x_i)$.
That is, the y coordinate at time $t = n\tau$ and at $x = x_i$.
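As an aside, the sampling $y_i^n = y(n\tau, x_i)$ can be sketched in a few lines of Python; the sample function, step sizes, and $x_0$ below are assumed placeholders, not data from the problem:

```python
import math

# Assumed grid parameters (placeholders for illustration only).
tau = 0.1   # time step
h = 0.5     # space step
x0 = 0.0    # left end of the x range

def y(t, x):
    # Any smooth sample function stands in for the unknown solution.
    return math.exp(-t) * math.sin(x)

def sample(n, i):
    # y_i^n = y(n * tau, x_0 + i * h)
    return y(n * tau, x0 + i * h)

print(sample(0, 1))  # y(0, 0.5) = sin(0.5)
```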

Could be! In my notes I noted down that $\Lambda y=-y_{\overline{x}x}$.
I'm really sorry about the wrong syntax again, but I cannot find where I can do such advanced text editing here.
I think that means that you're using something like:
$$\left.\frac{\partial^2 y}{\partial x^2}\right|_{x=x_i} \approx \frac{y_{i-1} - 2y_i + y_{i+1}}{h^2}$$
The corresponding matrix expression would be:
$$\frac{\partial^2 y}{\partial x^2} \approx \Lambda y = \frac{1}{h^2} \begin{bmatrix} \ddots \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 & \\ && 1 & -2 & 1 & \\ &&&& \ddots \\ \end{bmatrix} \begin{bmatrix} \vdots \\ y_{i-1} \\ y_i \\ y_{i+1} \\ \vdots \end{bmatrix}$$
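As a quick sanity check (a sketch with an assumed test function, not part of the problem), one row of this matrix applied to samples of $\sin(x)$ does reproduce the second derivative up to $O(h^2)$:

```python
import math

# Apply the [1, -2, 1] / h^2 stencil to y(x) = sin(x) and compare
# against the exact second derivative -sin(x).
h = 1e-3
x = 1.0

d2 = (math.sin(x - h) - 2 * math.sin(x) + math.sin(x + h)) / h**2
exact = -math.sin(x)

print(abs(d2 - exact))  # O(h^2): roughly h^2/12 * |sin(x)| here
```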

Does it sound about right so far?

#### Thorra

##### Member
Yeah, that sounds right!

Edit: Well, at least I've seen that kind of stuff around. So hey!


#### Klaas van Aarsen

##### MHB Seeker
Staff member
Yeah, that sounds right!

Edit: Well, at least I've seen that kind of stuff around. So hey!
Good! So... since you are supposed to learn this kind of stuff, can you explain how they would get from:
$$\frac{\partial y}{\partial t} - k \frac{\partial^2 y}{\partial x^2} = w(t,x); \qquad y(0, x)=s(x)$$
to the following?
$$\frac{y^{n+1}_{i}-y^{n}_{i}}{\tau} - k\Lambda \frac{y^{n+1}_{i}-y^{n}_{i}}{2}=w^{n}_{i}; \qquad y^{0}_{i}=s_{i}$$

And... suppose we have an error $\Delta y_i^n$ in $y_i^n$, can you say something about the error in $\Lambda y_i^n$?
Because that is what numerical stability is about.

#### Thorra

##### Member
So... since you are supposed to learn this kind of stuff, can you explain how they would get from:
...
to the following?
...
Well, I think it's the process of discretizing an "ideal" mathematical function thing (or, well, differential equation). The partial derivative in t employs the change of time (with the not-infinitely-small step tau) over the change of the function (the not-infinitely-small y2-y1). The upper indexes in the discrete function represent the time step, the lower indexes the other (x) step. Not sure about the $\Lambda$, but it looks like an operator that makes a derivative into a discretisized (is that even a word?) derivative that takes within itself one of the orders of precision... if that makes any sense. I mean, a 2nd order derivative became a first order derivative (in discrete form) times a $\Lambda$.
The $w(t,x)$ becoming indexed with n and i steps is the same process. So I guess only the lambda is really not that clear to me.

And... suppose we have an error $\Delta y_i^n$ in $y_i^n$, can you say something about the error in $\Lambda y_i^n$?
Because that is what numerical stability is about.
Hm.. The error is gonna be either bigger or smaller?
Edit: And I wanted to add: if the error gets bigger, then the system is unstable; if smaller, then stable... figuratively speaking, unless I'm barking up the wrong tree.


#### Klaas van Aarsen

##### MHB Seeker
Staff member
Well, I think it's the process of discretizing an "ideal" mathematical function thing (or, well, differential equation). The partial derivative in t employs the change of time (with the not-infinitely-small step tau) over the change of the function (the not-infinitely-small y2-y1). The upper indexes in the discrete function represent the time step, the lower indexes the other (x) step. Not sure about the $\Lambda$, but it looks like an operator that makes a derivative into a discretisized (is that even a word?) derivative that takes within itself one of the orders of precision... if that makes any sense. I mean, a 2nd order derivative became a first order derivative (in discrete form) times a $\Lambda$.
The $w(t,x)$ becoming indexed with n and i steps is the same process. So I guess only the lambda is really not that clear to me.
Sounds good! Actually, I'm just realizing that there may be a typo in the formula.
Can it be that it should be:
$$k \Lambda \frac{y_i^{n+1} + y_i^n}{2}$$

The quotient
$$\frac{\partial y}{\partial t} \approx \frac{y^{n+1} - y^n}{\tau}$$
is most accurate at time $t = n\tau + \frac 1 2 \tau$, and at that same time $t$ we have:
$$y \approx \frac{y^{n+1} + y^n}{2}$$
It seems to me that this average was intended (with a plus sign instead of a minus sign).

Then, to explain what $\Lambda$ is, from that average vector we can determine a derivative like $\partial^2/\partial x^2$.
That is because:
\begin{array}{rcl}
\frac{\partial }{\partial x}y(t, x_i - h/2) &\approx& \frac{y(t, x_i) - y(t, x_{i-1})}{h} \\
\frac{\partial }{\partial x}y(t, x_i + h/2) &\approx& \frac{y(t, x_{i+1}) - y(t, x_i)}{h} \\
\frac{\partial^2 }{\partial x^2}y(t, x_i) &\approx& \frac{\frac{y_{i+1}(t) - y_i(t)}{h} - \frac{y_{i}(t) - y_{i-1}(t)}{h}}{h} \\
&=& \frac{y_{i-1}(t) - 2 y_{i}(t) + y_{i+1}(t)}{h^2}
\end{array}

Hm.. The error is gonna be either bigger or smaller?
Edit: And I wanted to add, if the error gets bigger, then the system is unstable, if smaller- then stable... figuratively speaking, unless I'm barking up the wrong tree.
True enough. More specifically, if $\Lambda$ is symmetric and positive definite it is given that when applying it to a vector, the norm is between its lowest and highest eigenvalue times the norm of that vector.
That is, if the lowest and highest eigenvalue are $0 < \lambda_{min} \le \lambda_{max}$, then
$$\lambda_{min} ||\Delta y|| \le ||\Lambda \Delta y|| \le \lambda_{max} ||\Delta y||$$
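This inequality can be spot-checked numerically; the matrix below is an assumed $2\times 2$ toy example with eigenvalues 1 and 3, not the scheme's actual $\Lambda$:

```python
import math
import random

# Toy symmetric positive definite matrix; eigenvalues are 1 and 3.
A = [[2.0, -1.0],
     [-1.0, 2.0]]
lam_min, lam_max = 1.0, 3.0

def matvec(M, v):
    return [sum(m * c for m, c in zip(row, v)) for row in M]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    # lam_min * ||v|| <= ||A v|| <= lam_max * ||v||
    assert lam_min * norm(v) - 1e-12 <= norm(matvec(A, v)) <= lam_max * norm(v) + 1e-12
print("eigenvalue bounds on ||A v|| hold")
```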

The numerical stability of a system is usually determined by calculating how an error propagates through the algorithm in a worst case scenario. If the worst case error becomes bigger, it is called unstable, and if the error becomes smaller, it is called stable.

Anyway, you have a difference equation containing $y_i^{n+1}$ and $y_i^{n}$.
If I'm not mistaken, you're supposed to "solve" $y_i^{n+1}$ from that to find the next iteration. And then you have to check how any errors propagate.
As the first step, can you solve the equation for $y_i^{n+1}$?

#### Thorra

##### Member
Sounds good! Actually, I'm just realizing that there may be a typo in the formula.
Can it be that it should be:
$$k \Lambda \frac{y_i^{n+1} + y_i^n}{2}$$

The quotient
$$\frac{\partial y}{\partial t} \approx \frac{y^{n+1} - y^n}{\tau}$$
is most accurate at time $t = n\tau + \frac 1 2 \tau$, and at that same time $t$ we have:
$$y \approx \frac{y^{n+1} + y^n}{2}$$
It seems to me that this average was intended (with a plus sign instead of a minus sign).
You're right, there are two horrible typos in that formula. :/
The expression is actually:
$$\frac{y^{n+1}_i - y^n_i}{\tau}+k\Lambda\frac{y^{n+1}_i + y^n_i}{2}=w^n_i$$
(Finally starting to learn this writing! Only through following your example, though. Is there a more official reference table somewhere?)

Then, to explain what $\Lambda$ is, from that average vector we can determine a derivative like $\partial^2/\partial x^2$.
That is because:
\begin{array}{rcl}
\frac{\partial }{\partial x}y(t, x_i - h/2) &\approx& \frac{y(t, x_i) - y(t, x_{i-1})}{h} \\
\frac{\partial }{\partial x}y(t, x_i + h/2) &\approx& \frac{y(t, x_{i+1}) - y(t, x_i)}{h} \\
\frac{\partial^2 }{\partial x^2}y(t, x_i) &\approx& \frac{\frac{y_{i+1}(t) - y_i(t)}{h} - \frac{y_{i}(t) - y_{i-1}(t)}{h}}{h} \\
&=& \frac{y_{i-1}(t) - 2 y_{i}(t) + y_{i+1}(t)}{h^2}
\end{array}
I think I get it, but why half a step $h/2$ forward and back, why not a full step? The right side of the equation makes me think it's taken with a full step.

More specifically, if $\Lambda$ is symmetric and positive definite it is given that when applying it to a vector, the norm is between its lowest and highest eigenvalue times the norm of that vector.
That is, if the lowest and highest eigenvalue are $0 < \lambda_{min} \le \lambda_{max}$, then
$$\lambda_{min} ||\Delta y|| \le ||\Lambda \Delta y|| \le \lambda_{max} ||\Delta y||$$
So wait... is $\Lambda$ like a central eigenvalue or something? Why does it get to be inside the norm bars $||\cdot||$ instead of outside like the small lambdas $\lambda$?

Anyway, you have a difference equation containing $y_i^{n+1}$ and $y_i^{n}$.
If I'm not mistaken, you're supposed to "solve" $y_i^{n+1}$ from that to find the next iteration. And then you have to check how any errors propagate.
As the first step, can you solve the equation for $y_i^{n+1}$?
I dunno... $y_i^{n+1}= y_i^n + \tau \frac{dy}{dt}$ ?

Or you mean like approximate it with the Taylor series?
$y(x_i^{n+1})=y(x_i^n) + \tau y'(x_i^n) + \frac{\tau^2}{2} y''(x_i^n) +O(\tau^3)$
Sorry if I am way off.

P.S. Thank you for the time and effort you're dedicating for helping me. I know I'm not that good at learning this stuff, so thanks and stuff. This forum is awesome.


#### Klaas van Aarsen

##### MHB Seeker
Staff member
You're right, there are two horrible typos in that formula. :/
The expression is actually:
$$\frac{y^{n+1}_i - y^n_i}{\tau}+k\Lambda\frac{y^{n+1}_i + y^n_i}{2}=w^n_i$$
Aha!

(Finally starting to learn this writing! Only through following your example, though. Is there a more official reference table somewhere?)
We have a sub forum for it which includes a sticky post with a reference table:
LaTeX Tips and Tutorials

I think I get it, but why half a step $h/2$ forward and back, why not a full step? The right side of the equation makes me think it's taken with a full step.
Well, if you have f(x) and f(x+h), an approximation for the derivative at x is obviously
$$\frac{f(x+h) - f(x)}{h}$$
However, we get a more accurate approximation by taking the f values symmetrically around the x value:
$$\frac{f(x+h/2) - f(x-h/2)}{h}$$
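A tiny numerical comparison (using $\sin$ as an assumed test function) makes the accuracy difference visible:

```python
import math

x, h = 1.0, 0.01
exact = math.cos(x)  # exact derivative of sin at x

one_sided = (math.sin(x + h) - math.sin(x)) / h             # O(h) error
centered = (math.sin(x + h / 2) - math.sin(x - h / 2)) / h  # O(h^2) error

print(abs(one_sided - exact), abs(centered - exact))
```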

So wait... is $\Lambda$ like a central eigenvalue or something? Why does it get to be inside the norm bars $||\cdot||$ instead of outside like the small lambdas $\lambda$?
No. $\Lambda$ is a matrix. It might also have been written as $A$.
Each matrix has so called eigenvalues, usually denoted as $\lambda$, which are scalars.
That means that there is a vector $\mathbf v$, such that $A \mathbf v=\lambda \mathbf v$.
If A is a symmetric positive-definite matrix, all eigenvalues are positive real numbers.
That means that for any vector $\mathbf w$ we can say something about the norm of the vector $||A \mathbf w||$.
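For a concrete instance of $A \mathbf v = \lambda \mathbf v$, here is an assumed toy symmetric matrix with a known eigenpair:

```python
# The assumed toy matrix [[2, -1], [-1, 2]] has eigenvector (1, -1)
# with eigenvalue 3: multiplying by A scales that vector by 3.
A = [[2.0, -1.0],
     [-1.0, 2.0]]
v = [1.0, -1.0]
lam = 3.0

Av = [sum(a * c for a, c in zip(row, v)) for row in A]
print(Av, [lam * c for c in v])  # both print [3.0, -3.0]
```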

I dunno... $y_i^{n+1}= y_i^n + \tau \frac{dy}{dt}$ ?

Or you mean like approximate it with the Taylor series?
$y(x_i^{n+1})=y(x_i^n) + \tau y'(x_i^n) + \frac{\tau^2}{2} y''(x_i^n) +O(\tau^3)$
Sorry if I am way off.
Not quite.

We have
$$\frac{y^{n+1}_i - y^n_i}{\tau}+k\Lambda\frac{y^{n+1}_i + y^n_i}{2}=w^n_i$$
Since $\Lambda$ is a matrix, we can rewrite this as:
\begin{array}{rcl}
\frac{1}{\tau} (y^{n+1}_i - y^n_i) + \frac k 2 \Lambda (y^{n+1}_i + y^n_i) &=& w^n_i \\
\frac{1}{\tau} y^{n+1}_i - \frac{1}{\tau}y^n_i + \frac k 2 \Lambda y^{n+1}_i + \frac k 2 \Lambda y^n_i &=& w^n_i \\
\frac{1}{\tau} y^{n+1}_i + \frac k 2 \Lambda y^{n+1}_i &=& w^n_i + \frac{1}{\tau}y^n_i - \frac k 2 \Lambda y^n_i \\
\Big(\frac{1}{\tau} I + \frac k 2 \Lambda\Big) y^{n+1}_i &=& w^n_i + \Big(\frac{1}{\tau} I - \frac k 2 \Lambda\Big) y^n_i \\
y^{n+1}_i &=& \Big(\frac{1}{\tau} I + \frac k 2 \Lambda\Big)^{-1} \left(w^n_i + \Big(\frac{1}{\tau} I - \frac k 2 \Lambda\Big) y^n_i\right)
\end{array}
where $I$ is the identity matrix.
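Restricted to a single eigenmode of $\Lambda$ (so $\Lambda v = \lambda v$ and every matrix collapses to a scalar), one step of this update can be sketched as follows; all parameter values are assumed for illustration:

```python
# Assumed sample parameters: time step, constant k, one eigenvalue of Lambda.
tau, k, lam = 0.1, 1.0, 2.0

a = 1.0 / tau + 0.5 * k * lam  # scalar version of (1/tau) I + (k/2) Lambda
b = 1.0 / tau - 0.5 * k * lam  # scalar version of (1/tau) I - (k/2) Lambda

y = 1.0  # initial mode amplitude (the role of s)
w = 0.0  # zero forcing, so we can watch the mode evolve on its own
for _ in range(5):
    y = (w + b * y) / a  # per-mode form of y^{n+1} = A^{-1}(w^n + B y^n)
print(y)  # shrinks toward 0 here, since |b/a| < 1
```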

P.S. Thank you for the time and effort you're dedicating for helping me. I know I'm not that good at learning this stuff, so thanks and stuff. This forum is awesome.
Thanks!

#### Thorra

##### Member
Ah! That is quite clever. So, wait, that's all I have to do for this? Can't be that simple... Or is this just the beginning? (See how supremely far above me this subject is? Ugh...)

I don't know if you'll reply, but I'll make another thread featuring a hopefully simpler set of tasks (and my takes on most of them) that I have to learn so I can write that stuff out next Thursday. Yeah... I know I should do this stuff by myself, but by-myself earned me roughly... 4%. I'm not that horrible at other subjects: usually I can get 40% just from attending lectures, and maybe 50-80% if I actually apply myself (relevant only to this semester; last semester was much easier). Anyway, won't go more off-topic.

#### Klaas van Aarsen

##### MHB Seeker
Staff member
Ah! That is quite clever. So, wait, that's all I have to do for this? Can't be that simple... Or is this just the beginning? (See how supremely far above me this subject is? Ugh...)
Almost all... we still haven't touched the subject of numerical stability, which is what you were supposed to discuss.
That is, how would an error in $w_i^n$ or in $y_i^n$ propagate in the worst case scenario.
That depends on the norm of the matrices involved, which is determined by their largest absolute eigenvalue.

I don't know if you'll reply, but I'll make another thread featuring a hopefully simpler set of tasks (and my takes on most of them) that I have to learn so I can write that stuff out next Thursday. Yeah... I know I should do this stuff by myself, but by-myself earned me roughly... 4%. I'm not that horrible at other subjects: usually I can get 40% just from attending lectures, and maybe 50-80% if I actually apply myself (relevant only to this semester; last semester was much easier). Anyway, won't go more off-topic.
It does appear that you were biting off a rather large chunk all at once.

#### Klaas van Aarsen

##### MHB Seeker
Staff member
$$y^{n+1}_i = \Big(\frac{1}{\tau} I + \frac k 2 \Lambda\Big)^{-1} \left(w^n_i + \Big(\frac{1}{\tau} I - \frac k 2 \Lambda\Big) y^n_i\right)$$
To continue the argument, it follows that:
$$y^{n+1}_i = \Big(\frac{1}{\tau} I + \frac k 2 \Lambda\Big)^{-1} w^n_i + \Big(\frac{1}{\tau} I + \frac k 2 \Lambda\Big)^{-1} \Big(\frac{1}{\tau} I - \frac k 2 \Lambda\Big) y^n_i$$

Let's define:
\begin{array}{rcl}
A &=& \frac{1}{\tau} I + \frac k 2 \Lambda \\
B &=& \frac{1}{\tau} I - \frac k 2 \Lambda
\end{array}

So we have:
$$y^{n+1}_i = A^{-1} w^n_i + A^{-1} B y^n_i$$

If $A$ had an eigenvalue of (almost) zero, its inverse would have an (almost) infinitely large eigenvalue, and $||A^{-1}|| \to \infty$.
As a result any error in $w^n_i$ would be blown out of proportion.
So we really want $A$ to be, say, positive definite.
To do so, we need to pick $\tau$ small enough to get A positive definite (lowest eigenvalue significantly positive), so that its inverse has a reasonably bounded norm.

The same argument holds for $A^{-1} B$, which must have a norm small enough so that errors in $y^n_i$ do not blow up out of proportion.
That is, $\tau$ small enough to get $A^{-1}$ reasonably bounded, where $B$ should already be reasonably bounded.
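Concretely, per eigenmode the error in $y^n_i$ is multiplied each step by the scalar $g(\lambda) = \frac{1/\tau - k\lambda/2}{1/\tau + k\lambda/2}$ (the eigenvalue of $A^{-1}B$ for that mode); a quick scan over assumed sample values of $\tau$ and $\lambda$ suggests $|g| < 1$ whenever $k\lambda > 0$, i.e. such errors are damped:

```python
# Scan the per-mode amplification factor g over assumed sample parameters.
k = 1.0
worst = 0.0
for tau in (0.01, 0.1, 1.0, 10.0):
    for lam in (0.5, 2.0, 50.0):
        g = (1.0 / tau - 0.5 * k * lam) / (1.0 / tau + 0.5 * k * lam)
        worst = max(worst, abs(g))
        assert abs(g) < 1.0  # this mode's error shrinks each step
print(worst)  # the largest |g| seen; still below 1
```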