Numerical Methods!

Thorra

Member
Dec 7, 2013
40
Okay, I'm in a bit of a pickle here. Got the exam on Thursday and (surprise) I am utterly clueless.

I cannot grasp a lot of concepts, but here are some I'd like to at least get an idea of:

Factorization method. I only gathered that it is a special case of the Gaussian elimination method, and that you take a 2nd order difference and turn it into three 1st order differences (or a 4th order difference into five 2nd order differences, I guess). But I simply could not follow the math. It's a lot of playing with coefficients and this "recurrence equation" that goes like this: $$y_i=\alpha_{i+1}y_{i+1}+\beta_{i+1}$$
and they're put into some mysterious $$A_iy_{i-1}-C_i y_i + B_i y_{i+1}=-F_i; i=1,2,...,N-1$$
And later on some big importance is given to this $$|C_i|\geqslant|A_i|+|B_i|$$


And I really don't know anything about stability. It's also the first thread I've made here. But yeah, I can't find any materials that would actually explain to me what stability is and how the various elements like those eigenvalues $\lambda$ and eigenfunctions and whatnot come into play with it.

I guess that's all I can muster to write now.
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,780
Okay, I'm in a bit of a pickle here. Got the exam on Thursday and (surprise) I am utterly clueless.

I cannot grasp a lot of concepts, but here are some I'd like to at least get an idea of:
Well, we'll just start with the first. ;)

Factorization method. I only gathered that it is a special case of the Gaussian elimination method, and that you take a 2nd order difference and turn it into three 1st order differences (or a 4th order difference into five 2nd order differences, I guess). But I simply could not follow the math.
No clue what you mean here.
Generally, factorization means writing an expression as a product of factors.
For instance $a^2-b^2=(a-b)(a+b)$.
There you go, we have factorized $a^2-b^2$.

It's a lot of playing with coefficients and this "recurrence equation" that goes like this: $$y_i=\alpha_{i+1}y_{i+1}+\beta_{i+1}$$
and they're put into some mysterious $$A_iy_{i-1}-C_i y_i + B_i y_{i+1}=-F_i; i=1,2,...,N-1$$
Those look like 2 different recurrence equations that have nothing in common.
A recurrence equation is just an equation that has stuff like $y_i$ and $y_{i+1}$ in it.


And later on some big importance is given to this $$|C_i|\geqslant|A_i|+|B_i|$$
Yes. This is leading to numerical stability.
It is related to the second recurrence equation you gave.
For the recurrence equation to be numerically stable, that condition should hold.
Presumably your text book gives some more explanation why that is the case.
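I'm guessing a bit here, but if your "factorization method" is the usual tridiagonal sweep, a minimal Python sketch would look something like this (the function name, the array indexing, and the boundary handling are my own illustrative choices, not necessarily your textbook's):

```python
# Sketch of a tridiagonal sweep for
#   A_i*y_{i-1} - C_i*y_i + B_i*y_{i+1} = -F_i,   i = 1, ..., N-1,
# with the boundary values y_0 and y_N given.
# Substituting y_{i-1} = alpha_i*y_i + beta_i into the equation gives the
# forward recurrences for alpha and beta used below.

def tridiagonal_sweep(A, B, C, F, y0, yN):
    N = len(C)                       # arrays are indexed 1..N-1; index 0 is unused
    alpha = [0.0] * (N + 1)
    beta = [0.0] * (N + 1)
    alpha[1], beta[1] = 0.0, y0      # encodes y_0 = alpha_1*y_1 + beta_1

    # Forward pass: compute the sweep coefficients.
    for i in range(1, N):
        denom = C[i] - A[i] * alpha[i]
        alpha[i + 1] = B[i] / denom
        beta[i + 1] = (F[i] + A[i] * beta[i]) / denom

    # Backward pass: y_i = alpha_{i+1}*y_{i+1} + beta_{i+1}, starting from y_N.
    y = [0.0] * (N + 1)
    y[N] = yN
    for i in range(N - 1, -1, -1):
        y[i] = alpha[i + 1] * y[i + 1] + beta[i + 1]
    return y
```

If that is indeed the method you mean, the condition $|C_i|\geqslant|A_i|+|B_i|$ guarantees (by induction, starting from $\alpha_1=0$) that all the $|\alpha_i|\leqslant 1$, which is what keeps the backward pass from amplifying errors.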


And I really don't know anything about stability. It's the first thread I made here, too. But yeah I can't find any materials that would actually explain to me what stability is and how the various elements like those eigenvalues $\lambda$ and eigenfunctions and whatnot come into play with it.
From WolframMathWorld:

Numerical stability refers to how a malformed input affects the execution of an algorithm. In a numerically stable algorithm, errors in the input lessen in significance as the algorithm executes, having little effect on the final output.
On the other hand, in a numerically unstable algorithm, errors in the input cause a considerably larger error in the final output.

So suppose we have a recurrence equation of the form:
$$y_{i+1} = \frac 1 2 y_i$$
There you go: a numerically stable algorithm! (Angel)
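If you want to see that in numbers, here's a small Python sketch (the numbers are made up): start with a slightly wrong $y_0$ and watch the error.

```python
# Sketch: propagate a small input error through y_{i+1} = 0.5*y_i (stable)
# and through y_{i+1} = 2*y_i (unstable). Values are purely illustrative.

def run(factor, y0, steps=20):
    y = y0
    for _ in range(steps):
        y = factor * y
    return y

input_error = 1e-6
for factor in (0.5, 2.0):
    error = abs(run(factor, 1.0 + input_error) - run(factor, 1.0))
    print(factor, error)   # 0.5 -> about 1e-12, 2.0 -> about 1
```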
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,780
Very with playing with coefficients and this "recurrence equation" that goes like this. $$y_i=\alpha_{i+1}y_{i+1}+\beta_{i+1}$$
This would probably be intended to explain numerical stability.
If $|\alpha_{i+1}| < 1$ the recurrence equation is unstable (assuming we use a forward algorithm running from $y_i$ to $y_{i+1}$): running forward means computing $y_{i+1} = (y_i - \beta_{i+1})/\alpha_{i+1}$, so an error in $y_i$ gets divided by $\alpha_{i+1}$ and the error in $y_{i+1}$ would be greater than the error in $y_i$.
In particular $\beta_{i+1}$ has no influence on the stability.
If $|\alpha_{i+1}| > 1$ the recurrence equation is stable (in a forward algorithm).


In the (forward) algorithm
$$y_{i+1}=A_{i+1}y_{i}+\beta_{i+1}$$
where $y_i$ would be a vector, and $A_{i+1}$ is a matrix, the same principle holds.
If $||A_{i+1}|| > 1$ the algorithm is unstable and if $||A_{i+1}|| < 1$ the recurrence equation is stable.
Here $|| \cdot ||$ is the matrix norm.
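To make that concrete, here's a small numpy sketch (the two matrices are just made-up examples): an error $e_i$ in $y_i$ obeys $e_{i+1}=A_{i+1}e_i$, so $||e_{i+1}||\leqslant||A_{i+1}||\,||e_i||$.

```python
import numpy as np

# Sketch: how an input error propagates through y_{i+1} = A*y_i + b.
# The error itself follows e_{i+1} = A*e_i. Matrices below are illustrative.

def error_after(A, steps=20):
    e = np.array([1e-6, 1e-6])             # small initial error
    for _ in range(steps):
        e = A @ e
    return np.linalg.norm(e)

A_stable = np.array([[0.4, 0.1],
                     [0.0, 0.3]])          # matrix norm < 1
A_unstable = np.array([[1.5, 0.0],
                       [0.2, 1.1]])        # matrix norm > 1

for A in (A_stable, A_unstable):
    print(np.linalg.norm(A, 2), error_after(A))
# The first error shrinks towards zero, the second one grows.
```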
 

Thorra

Member
Dec 7, 2013
40
No clue what you mean here.
Generally, factorization means writing an expression as a product of factors.
For instance $a^2-b^2=(a-b)(a+b)$.
There you go, we have factorized $a^2-b^2$.
Well, I'm sure knowing this as the base idea won't hurt me.

Those look like 2 different recurrence equations that have nothing in common.
A recurrence equation is just an equation that has stuff like $y_i$ and $y_{i+1}$ in it.
Ah


Yes. This is leading to numerical stability.
It is related to the second recurrence equation you gave.
For the recurrence equation to be numerically stable, that condition should hold.
Presumably your text book gives some more explanation why that is the case.
Aah
From WolframMathWorld:

Numerical stability refers to how a malformed input affects the execution of an algorithm. In a numerically stable algorithm, errors in the input lessen in significance as the algorithm executes, having little effect on the final output.
On the other hand, in a numerically unstable algorithm, errors in the input cause a considerably larger error in the final output.
Yeeeah?
So suppose we have a recurrence equation of the form:
$$y_{i+1} = \frac 1 2 y_i$$
There you go: a numerically stable algorithm! (Angel)
Say what?
$i$ is going from N to 1, right? So $y_i=2y_{i+1}$ - it gets bigger, so the error probably gets bigger as well. Or am I supposed to understand it like this: the error is constant, and the bigger $y_i$ gets, the smaller a part of the result the error will be?

This would probably be intended to explain numerical stability.
If $|\alpha_{i+1}| < 1$ the recurrence equation is unstable (assuming we use a forward algorithm running from $y_i$ to $y_{i+1}$), since the error in $y_{i+1}$ would be greater than the error in $y_i$.
In particular $\beta_{i+1}$ has no influence on the stability.
If $|\alpha_{i+1}| > 1$ the recurrence equation is stable (in a forward algorithm).
It does go together with this.

In the (forward) algorithm
$$y_{i+1}=A_{i+1}y_{i}+\beta_{i+1}$$
where $y_i$ would be a vector, and $A_{i+1}$ is a matrix, the same principle holds.
If $||A_{i+1}|| > 1$ the algorithm is unstable and if $||A_{i+1}|| < 1$ the recurrence equation is stable.
Here $|| \cdot ||$ is the matrix norm.
By matrix norm you mean that A includes both the coefficients and the divisions by steps?
I find the $A_{i+1}y_{i}$ bit strange. Why have a coefficient from a further spot directly affect your current spot rather than the current spot's coefficient?
 

Thorra

Member
Dec 7, 2013
40
On a different note: do you know the differences and advantages between one-step and multi-step methods for the Cauchy problem? And the difference between closed schemes $\phi(t_i,h_i,\omega_i,\omega_{i+1})$ and open schemes $\phi(t_i,h_i,\omega_i)$, besides just the fact that $\omega_{i+1}$ is easier to determine in an open scheme?

Edit: I meant to use that symbol that is often used in place of $f$ when it's about an approximate form. I thought it's called phi. Oh well.
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,780
Say what?
$i$ is going from N to 1, right? So $y_i=2y_{i+1}$ - it gets bigger, so the error probably gets bigger as well. Or am I supposed to understand it like this: the error is constant, and the bigger $y_i$ gets, the smaller a part of the result the error will be?
In a forward algorithm, $i$ would be going from 1 to N.
Or that is at least what I intended.


By matrix norm you mean that A includes both the coefficients and the divisions by steps?
Huh?
The matrix norm $||A||$ is defined as the greatest value \(\displaystyle \frac{||Ax||}{||x||}\) can attain, where $||\cdot||$ is the regular vector norm.

Formally:
$$||A|| \overset{def}{=} \max_{x \neq 0} \frac{||Ax||}{||x||} = \max_{||x||=1} ||Ax||$$
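If it helps, here's a small numpy sketch of that definition (the matrix and the random sampling are just for illustration; `np.linalg.norm(A, 2)` computes the induced 2-norm directly):

```python
import numpy as np

# The induced matrix norm: the largest factor by which A can stretch a vector.
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])

# Approximate max ||Ax|| over unit vectors x by sampling random directions.
rng = np.random.default_rng(0)
xs = rng.normal(size=(10000, 2))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)    # make every ||x|| = 1
approx = max(np.linalg.norm(A @ x) for x in xs)

exact = np.linalg.norm(A, 2)    # the induced 2-norm (largest singular value)
print(approx, exact)            # approx is close to, and never above, exact
```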

I find the $A_{i+1}y_{i}$ bit strange. Why have a coefficient from a further spot directly affect your current spot rather than the current spot's coefficient?
I haven't given any thought to whether it is realistic or not.
The matrix $A_{i+1}$ is merely intended to be a matrix that can be accurately calculated and does not depend on the solution $y$.


On a different note: do you know the differences and advantages between one-step and multi-step methods for the Cauchy problem?
From wiki:
Multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge-Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used.

What do you think the advantages and disadvantages are?


And the difference between closed schemes $\phi(t_i,h_i,\omega_i,\omega_{i+1})$ and open schemes $\phi(t_i,h_i,\omega_i)$, besides just the fact that $\omega_{i+1}$ is easier to determine in an open scheme?
Say what?
Give some context perhaps?


Edit: I meant to use that symbol that is often used in place of $f$ when it's about an approximate form. I thought it's called phi. Oh well.
An approximation of $f$ is often denoted as $\tilde f$.
Is that what you mean?
 

Thorra

Member
Dec 7, 2013
40
From wiki:
Multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge-Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used.

What do you think the advantages and disadvantages are?
Sounds like multi-step methods are better. They take into account the previous points in case one point happens to be a bit off. What possible disadvantage? Maybe if the function changes rapidly at one point, the multistep method goes off track because of the previous points trying to keep it in its usual place? I'm just using my imagination, really.


Say what?
Give some context perhaps?
I remember the context being that we need to find $\omega_{i+1}$, and if it was in a closed scheme it was part of the function itself, and thus we'd have a harder time trying to determine it.


An approximation of $f$ is often denoted as $\tilde f$.
Is that what you mean?
Oh, I just meant this guy: $\varphi$. I'm pretty sure it's used as the difference approximation counterpart to $f$ ($f$ going into the analytic solution).
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,780
Sounds like multi-step methods are better. They take into account the previous points in case one point happens to be a bit off. What possible disadvantage? Maybe if the function changes rapidly at one point, the multistep method goes off track because of the previous points trying to keep it in its usual place? I'm just using my imagination, really.
Off the top of my head:
advantages:
- multistep methods are more accurate (higher order)
- give an opportunity to cancel out forward and backward errors
disadvantages:
- multistep methods take more evaluations
- multistep methods may use steps that are not "on the grid"

To illustrate, Runge-Kutta is the favored method to solve differential equations.
It is a 4-step method that has order $\mathcal O(h^4)$, which is much better than for instance the 1-step Euler method with order $\mathcal O(h)$.

In addition, the 1-step Euler method gives errors that usually only increase in the same direction, while Runge-Kutta will give errors that can cancel each other out.

In principle we could also use methods with more steps per iteration that would indeed achieve a higher order of accuracy, but the required number of additional computations is worse than simply using Runge-Kutta with a smaller step size to achieve the same accuracy.

The main disadvantage of Runge-Kutta is that its steps are not "on the grid". If you have to solve a differential equation of which only specific values are given at specific distances, you won't be able to use it.
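If you want to see those orders for yourself, here's a rough Python sketch comparing Euler with the classical Runge-Kutta method on the test problem $y'=-y,\ y(0)=1$ (my own choice of example, not from your course):

```python
import math

# Sketch: Euler (order 1) versus classical Runge-Kutta (order 4)
# on y' = -y, y(0) = 1, whose exact solution is y(t) = exp(-t).

def euler_step(f, t, y, h):
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(step, h, T=1.0):
    f = lambda t, y: -y
    y, t = 1.0, 0.0
    while t < T - 1e-12:
        y = step(f, t, y, h)
        t += h
    return y

exact = math.exp(-1.0)
for h in (0.1, 0.05):
    print(h, abs(solve(euler_step, h) - exact), abs(solve(rk4_step, h) - exact))
# Halving h roughly halves the Euler error, but divides the Runge-Kutta
# error by about 16, matching O(h) versus O(h^4).
```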
 

Thorra

Member
Dec 7, 2013
40
Hey, I know it's been a while, but could you explain in a bit more detail?
advantages:
- give an opportunity to cancel out forward and backward errors
disadvantages:
- multistep methods take more evaluations
- multistep methods may use steps that are not "on the grid"
The main disadvantage of Runge-Kutta is that its steps are not "on the grid". If you have to solve a differential equation of which only specific values are given at specific distances, you won't be able to use it.
Though Runge-Kutta also has a 2-step version, right? With order of accuracy $\mathcal O(h^2)$.

Btw, could you explain the difference between local and global truncation errors (LTE and GTE)? As I understand it, the LTE is the $\mathcal O(h^n)$ that we see at the end of any Taylor series expansion. But in the end result we use the GTE, right? Like when we did difference approximation errors, were those GTEs? I just keep reading that the GTE is lower by 1 order of approximation than the LTE. And I don't know if that is only supposed to mean something for the single/multistep methods, or if it also applies to the later material with difference approximations and so forth. I have some really chaotic learning materials, so it's problematic.

Thanks.
 