Verification sequence of eigenvalue problem

In summary, the thread discusses the usual procedure for verifying the results of a generalized eigenvalue problem. The LAPACK library is used to solve the problem, but negative eigenvalues raise suspicion that the results might be wrong. The solution must satisfy the equation K*X = λ*M*X, where λ is an eigenvalue and X is the corresponding eigenvector. The thread also clears up confusion about which quantities in that equation are scalars, matrices, or vectors, and explains how to write the equation correctly in full matrix form.
  • #1
Ronankeating
Hi all,

What is the normal procedure to verify that I got the correct results (eigenvalues and eigenvectors) from an eigenvalue problem?

I'm using the LAPACK library to solve the eigenvalue problem summarized below. I have two matrices, K and M, and I get negative eigenvalues, which makes me suspect the results could be wrong. To verify them, the computed solutions (eigenvalues and eigenvectors) must satisfy the equation K*X = λ*M*X, where λ is the eigenvalue and X is the eigenvector. I computed λ*M, expecting to recover the K matrix, but the result (labelled "Verification phase" below) is not even close to the entries of K.

What am I doing wrong here?


K MATRIX
0.2400 0.3900 0.4200 -0.1600
0.3900 -0.1100 0.7900 0.6300
0.4200 0.7900 -0.2500 0.4800
-0.1600 0.6300 0.4800 -0.0300
M MATRIX
4.1600 -3.1200 0.5600 -0.1000
-3.1200 5.0300 -0.8300 1.0900
0.5600 -0.8300 0.7600 0.3400
-0.1000 1.0900 0.3400 1.1800

EIGENVECTORS ARE:
-0.6901E-01 0.3080E+00 -0.4469E+00 -0.5528E+00
-0.5740E+00 0.5329E+00 -0.3708E-01 -0.6766E+00
-0.1543E+01 -0.3496E+00 0.5048E-01 -0.9276E+00
0.1400E+01 -0.6211E+00 0.4743E+00 0.2510E+00

EIGENVALUES
-2.2254 -0.4548 0.1001 1.1270

VERIFICATION PHASE (λ*M )
-9.2579 6.9434 -1.2463 0.2225
1.4188 -2.2874 0.3774 -0.4957
0.0560 -0.0831 0.0761 0.0340
-0.1127 1.2285 0.3832 1.3299
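
A minimal way to reproduce and check results like these (a sketch, assuming the K and M above and using SciPy's generalized symmetric eigensolver in place of the original LAPACK call, with M assumed positive definite as the symmetric-definite LAPACK routines require):

```python
import numpy as np
from scipy.linalg import eigh

# K and M from the post above (K symmetric, M assumed symmetric positive definite).
K = np.array([[ 0.24,  0.39,  0.42, -0.16],
              [ 0.39, -0.11,  0.79,  0.63],
              [ 0.42,  0.79, -0.25,  0.48],
              [-0.16,  0.63,  0.48, -0.03]])
M = np.array([[ 4.16, -3.12,  0.56, -0.10],
              [-3.12,  5.03, -0.83,  1.09],
              [ 0.56, -0.83,  0.76,  0.34],
              [-0.10,  1.09,  0.34,  1.18]])

# Solve the generalized eigenproblem K x = lambda M x.
eigvals, eigvecs = eigh(K, M)
print(eigvals)  # should be close to -2.2254, -0.4548, 0.1001, 1.1270

# Verification: for each pair, the residual K x - lambda M x should be ~0.
for lam, x in zip(eigvals, eigvecs.T):
    r = np.linalg.norm(K @ x - lam * (M @ x))
    print(f"lambda = {lam: .4f}   ||K x - lambda M x|| = {r:.2e}")
```

Note that negative eigenvalues by themselves are not a red flag here: with M positive definite and K indefinite, the pencil can legitimately have negative eigenvalues; the residuals are the actual check.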
 
  • #2
It looks OK to me. For the first eigenpair
$$ Kx = K\begin{bmatrix}-0.0690 \\ -0.5740 \\ -1.5430 \\ 1.4000 \end{bmatrix} =
\begin{bmatrix} -1.1125 \\ -0.3007 \\ 0.5753 \\ -1.1332 \end{bmatrix}$$
$$ Mx = M\begin{bmatrix}-0.0690 \\ -0.5740 \\ -1.5430 \\ 1.4000 \end{bmatrix} =
\begin{bmatrix} 0.4997 \\ 0.1348 \\ -0.2589 \\ 0.5086 \end{bmatrix}$$
And that is consistent with Kx = -2.2254 Mx.
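
The same consistency check can be done with the elementwise ratios of the two vectors above; a small sketch assuming NumPy and the numbers quoted in this post:

```python
import numpy as np

# K x and M x as computed in this post for the first eigenpair.
Kx = np.array([-1.1125, -0.3007,  0.5753, -1.1332])
Mx = np.array([ 0.4997,  0.1348, -0.2589,  0.5086])

# For a true eigenpair, every component ratio (K x)_i / (M x)_i equals lambda.
print(Kx / Mx)  # entries come out around -2.22 to -2.23, consistent with
                # lambda = -2.2254 given the four-digit rounding above
```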
 
  • #3
Just to add: I think you got confused about which quantities are scalars, matrices, or vectors in your equation K*X = λ*M*X.

If you write it in that form, X is a column vector corresponding to the scalar λ, so you can't "cancel the vector X from each side". Of course there are really N separate equations, one for each λ and the corresponding X.

You can write it as one big matrix equation, but then you have to put the eigenvalues on the right-hand side, ##KX = MX \Lambda##, where ##\Lambda## is a diagonal matrix containing the λ's. ##\Lambda MX## multiplies the rows of MX by the eigenvalues, but ##MX\Lambda## multiplies the columns of MX, which is what you want.

And you can't "cancel out X" from ##KX = MX \Lambda## either, but you can write ##X^TKX = X^TMX \Lambda##, and you should find that ##X^TKX## and ##X^TMX## are both diagonal matrices. In fact the eigenvectors are often scaled so that ##X^TMX = I## and ##X^TKX = \Lambda##.
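
A minimal sketch of that full-matrix check, assuming the K and M from post #1 and SciPy's generalized eigensolver, whose default scaling gives ##X^TMX = I##:

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 0.24,  0.39,  0.42, -0.16],
              [ 0.39, -0.11,  0.79,  0.63],
              [ 0.42,  0.79, -0.25,  0.48],
              [-0.16,  0.63,  0.48, -0.03]])
M = np.array([[ 4.16, -3.12,  0.56, -0.10],
              [-3.12,  5.03, -0.83,  1.09],
              [ 0.56, -0.83,  0.76,  0.34],
              [-0.10,  1.09,  0.34,  1.18]])

w, X = eigh(K, M)   # columns of X are the eigenvectors
Lam = np.diag(w)    # diagonal matrix of eigenvalues

# The eigenvalue matrix goes on the right: K X = M X Lambda.
print(np.allclose(K @ X, M @ X @ Lam))        # True
# With the usual scaling, X^T M X = I and X^T K X = Lambda.
print(np.allclose(X.T @ M @ X, np.eye(4)))    # True
print(np.allclose(X.T @ K @ X, Lam))          # True
```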
 

Related to Verification sequence of eigenvalue problem

1. What is a verification sequence for an eigenvalue problem?

A verification sequence for an eigenvalue problem is a series of steps that are used to check the accuracy of the computed eigenvalues and eigenvectors. It involves comparing the computed values to known analytical solutions or using numerical methods to verify their accuracy.

2. Why is it important to verify the eigenvalues and eigenvectors in an eigenvalue problem?

Verification is important because it ensures that the computed eigenvalues and eigenvectors are accurate and reliable. This is crucial in many applications, such as quantum mechanics and structural analysis, where the results must be precise for the problem to be solved correctly.

3. What are some common methods for verifying eigenvalues and eigenvectors?

Some common methods for verifying eigenvalues and eigenvectors include comparing the computed values to known analytical solutions, using perturbation theory to estimate the accuracy, and performing a residual analysis to check for convergence.

4. How do you know if the computed eigenvalues and eigenvectors are accurate?

One way to determine the accuracy of the computed eigenvalues and eigenvectors is by comparing them to known analytical solutions. If the values are within a certain tolerance of the analytical values, then they can be considered accurate. Performing a residual analysis can also help determine the accuracy by checking for convergence of the solution.
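
For instance, a solver-independent relative residual test can be written as follows (a sketch; the function name and tolerance are illustrative, not from any particular library):

```python
import numpy as np

def eigenpair_ok(K, M, lam, x, tol=1e-8):
    """Check K x = lam M x to a relative tolerance (illustrative helper)."""
    residual = np.linalg.norm(K @ x - lam * (M @ x))
    scale = np.linalg.norm(K @ x) + abs(lam) * np.linalg.norm(M @ x)
    return residual <= tol * max(scale, np.finfo(float).tiny)

# Usage: eigenpair_ok(K, M, eigvals[i], eigvecs[:, i]) for each computed pair.
```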

5. Can the verification sequence be used for all types of eigenvalue problems?

Yes, the verification sequence can be used for all types of eigenvalue problems, including both symmetric and non-symmetric problems. However, the specific methods used for verification may vary depending on the type of problem being solved.
