Are Gauss-Seidel and Jacobi Methods Guaranteed to Converge on Certain Matrices?

  • Thread starter: evinda
  • Tags: Convergence
In summary, whether the Gauss-Seidel and Jacobi methods converge depends on the properties of the matrix being solved. The 4-1 band matrix converges because it is symmetric positive-definite and strictly diagonally dominant. The 2-(-1) band matrix is an edge case: the spectral radius of its iteration matrix approaches 1 as the dimension grows, so convergence becomes very slow and rounding errors can make the methods diverge. The Hilbert matrix, although symmetric positive-definite, is ill-conditioned, and for the Jacobi method the spectral radius of its iteration matrix is even larger than 1, so the two matrices fail to converge for different reasons.
  • #1
evinda
Hello :)
Could you tell me why both the Gauss-Seidel and the Jacobi method converge when we apply them to the tridiagonal matrix with 4 on the main diagonal and 1 on the first diagonals above and below it, but do not converge when we apply them to the Hilbert matrix, or to the tridiagonal matrix with 2 on the main diagonal and -1 on the two neighbouring diagonals?
 
  • #2
Does the fact that the methods do not converge for the Hilbert matrix have to do with this matrix being ill-conditioned? And what about the other two matrices?
 
  • #3
evinda said:
Hello :)
Could you tell me why both the Gauss-Seidel and the Jacobi method converge when we apply them to the tridiagonal matrix with 4 on the main diagonal and 1 on the first diagonals above and below it, but do not converge when we apply them to the Hilbert matrix, or to the tridiagonal matrix with 2 on the main diagonal and -1 on the two neighbouring diagonals?

evinda said:
Does the fact that the methods do not converge for the Hilbert matrix have to do with this matrix being ill-conditioned? And what about the other two matrices?

Hi! (Smile)

From the Gauss-Seidel method on wiki we can see that Gauss-Seidel is known to converge if either:
  • A is symmetric positive-definite,
  • A is strictly or irreducibly diagonally dominant.
The Gauss–Seidel method sometimes converges even if these conditions are not satisfied.

Note that it will still only converge if the rounding errors made by the computer stay within small enough bounds. Your 4-1 band matrix is easily seen to be strictly diagonally dominant, so it converges.

Your 2-(-1) band matrix is not strictly diagonally dominant, so it is an edge case.
Combined with large enough matrices and rounding errors, it will probably diverge.
I checked that with a 4x4 matrix A and some choices for b and the initial guess, Gauss-Seidel was actually convergent.

Your Hilbert matrix is symmetric positive-definite, so it will converge if the matrix is small enough.
Thanks to Opalg we already know that large Hilbert matrices have a determinant near zero, making them ill-conditioned, which means the rounding errors will become too large if the matrix is large enough.
Another check with a 4x4 Hilbert matrix and some choices for b and the initial guess showed that Gauss-Seidel was convergent.
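As a quick sanity check, here is a small sketch (in Python/NumPy rather than MATLAB; the 4x4 sizes and helper names are just for illustration) that computes the spectral radius of the Gauss-Seidel iteration matrix $-(D+L)^{-1}U$ for 4x4 versions of all three matrices. All three come out below 1, which is consistent with Gauss-Seidel converging in those small test cases:

```python
import numpy as np

def gs_spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix -(D+L)^{-1} U."""
    DL = np.tril(A)        # D + L: lower triangle including the diagonal
    U = np.triu(A, 1)      # strictly upper triangle
    T = -np.linalg.solve(DL, U)
    return max(abs(np.linalg.eigvals(T)))

n = 4
band41 = 4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # the 4-1 band matrix
band2m1 = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # the 2-(-1) band matrix
hilbert = np.array([[1.0/(i + j + 1) for j in range(n)] for i in range(n)])

for name, A in [("4-1 band", band41), ("2-(-1) band", band2m1), ("Hilbert", hilbert)]:
    print(name, gs_spectral_radius(A))
```

Remember that the iteration converges (in exact arithmetic) if and only if the spectral radius of the iteration matrix is smaller than 1, and the closer it is to 1, the slower the convergence.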
 
  • #4
I like Serena said:
Hi! (Smile)

From the Gauss-Seidel method on wiki we can see that Gauss-Seidel is known to converge if either:
  • A is symmetric positive-definite,
  • A is strictly or irreducibly diagonally dominant.
The Gauss–Seidel method sometimes converges even if these conditions are not satisfied.

Note that it will still only converge if the rounding errors made by the computer stay within small enough bounds. Your 4-1 band matrix is easily seen to be strictly diagonally dominant, so it converges.

Your 2-(-1) band matrix is not strictly diagonally dominant, so it is an edge case.
Combined with large enough matrices and rounding errors, it will probably diverge.
I checked that with a 4x4 matrix A and some choices for b and the initial guess, Gauss-Seidel was actually convergent.

Your Hilbert matrix is symmetric positive-definite, so it will converge if the matrix is small enough.
Thanks to Opalg we already know that large Hilbert matrices have a determinant near zero, making them ill-conditioned, which means the rounding errors will become too large if the matrix is large enough.
Another check with a 4x4 Hilbert matrix and some choices for b and the initial guess showed that Gauss-Seidel was convergent.

Is the 4-1 band matrix also symmetric positive-definite or not?
Also, do these two methods converge/diverge for the same reasons or for different ones?
 
  • #5
evinda said:
Is the 4-1 band matrix also symmetric positive-definite or not?
Also, do these two methods converge/diverge for the same reasons or for different ones?

Yes, the 4-1 band matrix is positive definite, since it is symmetric, strictly diagonally dominant, and has positive real entries on its diagonal (see wiki).

I'd say that the other two matrices diverge for different reasons.
The Hilbert matrix is supposed to converge, but big matrices won't due to being ill-conditioned.
The 2-(-1) band matrix is an edge case for Gauss-Seidel to begin with. It's conditioned well enough, but with rounding errors it may well fail to converge.
 
  • #6
I like Serena said:
Yes, the 4-1 band matrix is positive definite, since it is symmetric, strictly diagonally dominant, and has positive real entries on its diagonal (see wiki).

I'd say that the other two matrices diverge for different reasons.
The Hilbert matrix is supposed to converge, but big matrices won't due to being ill-conditioned.
The 2-(-1) band matrix is an edge case for Gauss-Seidel to begin with. It's conditioned well enough, but with rounding errors it may well fail to converge.

So the Jacobi method won't converge for the 2-(-1) band matrix because it is not strictly diagonally dominant? And why could the Gauss-Seidel method converge for it? :confused:
 
  • #7
Also, is the Hilbert matrix strictly diagonally dominant, so that the Jacobi method would converge, but it doesn't because the matrix is ill-conditioned? Or isn't it strictly diagonally dominant? :confused:
 
  • #8
evinda said:
So the Jacobi method won't converge for the 2-(-1) band matrix because it is not strictly diagonally dominant? And why could the Gauss-Seidel method converge for it? :confused:

It is more accurate to say that both the Jacobi and the Gauss-Seidel method might converge for the 2-(-1) band matrix, but based on the strictly-diagonally-dominant criterion, they are not known to.

Furthermore, the 2-(-1) band matrix is not symmetric positive-definite, so Gauss-Seidel is not known to be convergent based on that criterion either.

However, I believe the iteration matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1, making the Jacobi method convergent.
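To back this up, here is a small sketch (Python/NumPy; the dimension n = 10 is an arbitrary choice) that computes the spectral radius of the Jacobi iteration matrix for the 2-(-1) band matrix and compares it with the closed form $\cos(\pi/(n+1))$, which is a standard result for this particular tridiagonal matrix:

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix -D^{-1} R, with R = A - D."""
    D = np.diag(np.diag(A))
    R = A - D
    T = -np.linalg.solve(D, R)
    return max(abs(np.linalg.eigvals(T)))

n = 10
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # the 2-(-1) band matrix

rho = jacobi_spectral_radius(A)
print(rho)                     # numerically computed spectral radius
print(np.cos(np.pi/(n + 1)))  # known closed form for this matrix
```

Since $\cos(\pi/(n+1)) < 1$ for every finite n, the Jacobi method converges in exact arithmetic for any size of this matrix, but the margin shrinks as n grows.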
 
  • #9
I like Serena said:
It is more accurate to say that both the Jacobi and the Gauss-Seidel method might converge for the 2-(-1) band matrix, but based on the strictly-diagonally-dominant criterion, they are not known to.

Furthermore, the 2-(-1) band matrix is not symmetric positive-definite, so Gauss-Seidel is not known to be convergent based on that criterion either.

However, I believe the iteration matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1, making the Jacobi method convergent.

Yes, I also noticed that the iteration matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1 for all the dimensions I checked, but I find as a result that the methods do not converge. Why does this happen? :confused:
 
  • #10
evinda said:
Also, is the Hilbert matrix strictly diagonally dominant, so that the Jacobi method would converge, but it doesn't because the matrix is ill-conditioned? Or isn't it strictly diagonally dominant? :confused:

The Hilbert matrix is not strictly diagonally dominant.
You may want to look up the definition if you want to use it in your work.
evinda said:
Yes, I also noticed that the iteration matrix of the 2-(-1) band matrix ($D^{-1}R$) has a spectral radius smaller than 1 for all the dimensions I checked, but I find as a result that the methods do not converge. Why does this happen? :confused:

How large is your matrix?
Is it perhaps convergent for matrices, say, up to 10x10?
 
  • #11
I like Serena said:
The Hilbert matrix is not strictly diagonally dominant.
You may want to look up the definition if you want to use it in your work.

So doesn't the Jacobi method converge because the matrix is not strictly diagonally dominant and because it is also ill-conditioned? Or is it for another reason?

How large is your matrix?
Is it perhaps convergent for matrices, say, up to 10x10?

No, I have checked it for 50 <= dimension <= 1000, and I found the spectral radius p:
0.9962 <= p <= 1. But the methods do not converge :confused:
 
  • #12
evinda said:
No, I have checked it for 50 <= dimension <= 1000, and I found the spectral radius p:
0.9962 <= p <= 1. But the methods do not converge :confused:

Note that the spectral radius approaches 1 when the matrix becomes large enough.
At that point the Jacobi method will converge very slowly.
Then even small rounding errors will throw the method off to diverge.

Perhaps you want to try it for dimensions up to 10?
You might want to do the same thing for the Hilbert matrix.
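For example, a small sketch like the following (Python/NumPy; the dimensions are chosen arbitrarily) shows the trend: for the 2-(-1) band matrix the Jacobi spectral radius stays below 1 but creeps towards it as the dimension grows, while for the Hilbert matrix it exceeds 1 already at modest sizes:

```python
import numpy as np

def jacobi_rho(A):
    """Spectral radius of the Jacobi iteration matrix -D^{-1}(A - D)."""
    D = np.diag(np.diag(A))
    T = -np.linalg.solve(D, A - D)
    return max(abs(np.linalg.eigvals(T)))

rho_band, rho_hilb = {}, {}
for n in (5, 10, 50):
    band = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)                   # 2-(-1) band
    hilb = np.array([[1.0/(i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert
    rho_band[n] = jacobi_rho(band)
    rho_hilb[n] = jacobi_rho(hilb)
    print(n, rho_band[n], rho_hilb[n])
```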
 
  • #13
I like Serena said:
Note that the spectral radius approaches 1 when the matrix becomes large enough.
At that point the Jacobi method will converge very slowly.
Then even small rounding errors will throw the method off to diverge.

Perhaps you want to try it for dimensions up to 10?
You might want to do the same thing for the Hilbert matrix.

I think I am not able to do this, because I haven't been taught how to do this in MATLAB :confused: So if I am asked why the methods do not converge, is the right answer that they would converge because the spectral radius is smaller than or equal to one, but they don't because of the rounding errors? Or because of the precision of the digits? :confused:
 
  • #14
evinda said:
I think I am not able to do this, because I haven't been taught how to do this in MATLAB

Huh? I do not understand. :confused:
If you can do it for 50 and 1000, it seems to me you should also be able to do it for 10.
So if I am asked why the methods do not converge, is the right answer that they would converge because the spectral radius is smaller than or equal to one, but they don't because of the rounding errors? Or because of the precision of the digits? :confused:

Yes.
But note that for the Jacobi method, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1.

Btw, the lack of precision of the digits is the reason for the rounding errors.
If we had infinitely many significant digits, there wouldn't be any rounding errors.
 
  • #15
Here's my result for n=5 for the Hilbert matrix.
(This is the output of MathCAD instead of Matlab. ;)) View attachment 1748
 

Attachments

  • Hilbert_matrix.png
  • #16
I like Serena said:
Huh? I do not understand. :confused:
If you can do it for 50 and 1000, it seems to me you should also be able to do it for 10.
Yes.
But note that for the Jacobi method, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1.

Btw, the lack of precision of the digits is the reason for the rounding errors.
If we had infinitely many significant digits, there wouldn't be any rounding errors.

Oh, sorry! I thought you meant that I should change the precision of the digits.. :eek:

So, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1, and the 2-(-1) band matrix because of the rounding errors?? :eek:


I like Serena said:
Here's my result for n=5 for the Hilbert matrix.
(This is the output of MathCAD instead of Matlab. ;)) View attachment 1748

I found the max eigenvalue 3.4441 using the Jacobi method, and 1.0000 using the Gauss-Seidel method..
 
  • #17
evinda said:
Oh,sorry!I thought you meant that I should change the precision of the digits.. :eek:

That explains it. (Drunk)

So, the Hilbert matrix diverges because the spectral radius of the iteration matrix is larger than 1, and the 2-(-1) band matrix because of the rounding errors?? :eek:

That would be for the Jacobi method yes.

Now that I have made the calculations for the Hilbert matrix with the Gauss-Seidel method, I can see that the iteration matrix for the Hilbert matrix has a spectral radius approaching 1, meaning it also diverges due to rounding errors.

I found the max eigenvalue 3.4441 using the Jacobi method, and 1.0000 using the Gauss-Seidel method..

Looks like the same result then! (Cool)
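The comparison for the 5x5 Hilbert matrix can be reproduced with a short sketch (Python/NumPy instead of MathCAD, using the standard splitting A = D + L + U). Since the Hilbert matrix is symmetric positive-definite, the Gauss-Seidel spectral radius must be strictly below 1, though only barely, while the Jacobi one comes out above 1:

```python
import numpy as np

n = 5
H = np.array([[1.0/(i + j + 1) for j in range(n)] for i in range(n)])  # 5x5 Hilbert

D = np.diag(np.diag(H))   # diagonal part
L = np.tril(H, -1)        # strictly lower part
U = np.triu(H, 1)         # strictly upper part

# Spectral radii of the Jacobi and Gauss-Seidel iteration matrices.
rho_jacobi = max(abs(np.linalg.eigvals(-np.linalg.solve(D, L + U))))
rho_gs = max(abs(np.linalg.eigvals(-np.linalg.solve(D + L, U))))
print(rho_jacobi, rho_gs)
```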
 
  • #18
I like Serena said:
That explains it. (Drunk)
That would be for the Jacobi method yes.

Now that I have made the calculations for the Hilbert matrix with the Gauss-Seidel method, I can see that the iteration matrix for the Hilbert matrix has a spectral radius approaching 1, meaning it also diverges due to rounding errors.
Looks like the same result then! (Cool)

I also found that the spectral radius is approaching 1 with Gauss-Seidel, and >= 42.6769 with the Jacobi method.. Have you found the same result? :)
 
  • #19
evinda said:
I also found that the spectral radius is approaching 1 with Gauss-Seidel, and >= 42.6769 with the Jacobi method.. Have you found the same result? :)

That's what I have for a dimension of 50 for the Hilbert matrix.
With dimension 500, I get 1 and 435.661 respectively. :eek:
 
  • #20
Btw, it still only means that the methods might diverge.
It still depends on the actual b vector and the initial guess what will happen.
The best we can say is that they are not guaranteed to converge.
Only the 4-1 band matrix is guaranteed to converge.

For the 4-1 band matrix I get a spectral radius of 0.249 for Gauss-Seidel and 0.499 for Jacobi. ;)
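Those two numbers can be checked with a sketch like this (Python/NumPy; n = 50 is an arbitrary "large enough" dimension). I believe the classical relation $\rho_{GS} = \rho_J^2$ for consistently ordered tridiagonal matrices should also show up numerically here:

```python
import numpy as np

n = 50
A = 4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # the 4-1 band matrix

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, -1)        # strictly lower part
U = np.triu(A, 1)         # strictly upper part

# Spectral radii of the Jacobi and Gauss-Seidel iteration matrices.
rho_jacobi = max(abs(np.linalg.eigvals(-np.linalg.solve(D, L + U))))
rho_gs = max(abs(np.linalg.eigvals(-np.linalg.solve(D + L, U))))
print(rho_jacobi, rho_gs)   # roughly 0.499 and 0.249 for large n
```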
 
  • #21
I like Serena said:
That's what I have for a dimension of 50 for the Hilbert matrix.
With dimension 500, I get 1 and 435.661 respectively. :eek:

I get the same result! :)


I like Serena said:
Btw, it still only means that the methods might diverge.
It still depends on the actual b vector and the initial guess what will happen.
The best we can say is that they are not guaranteed to converge.
Only the 4-1 band matrix is guaranteed to converge.

For the 4-1 band matrix I get a spectral radius of 0.249 for Gauss-Seidel and 0.499 for Jacobi. ;)

I understand! Thank you very much! :D
 

