Regarding Eigenvalues of a Matrix

In summary, if a matrix A is not invertible, then the equation Ax= b has either no solution or an infinite number of solutions; in particular, Ax= 0 then has an infinite number of solutions. If A is invertible, then Ax= b has the unique solution x= A^{-1}b.
  • #1
the_amateur
Given a square matrix A, the condition that characterizes an eigenvalue, λ, is the existence of a nonzero vector x such that A x = λ x; this equation can be rewritten as follows:

[itex]Ax=\lambda x[/itex]

[itex]Ax-\lambda x= 0[/itex]

[itex](A-\lambda I )x= 0[/itex] -------------------- 1

After the above process we find the determinant of [itex]A-\lambda I [/itex] and then equate it to 0.

det([itex]A-\lambda I [/itex]) = 0 -------------------- 2

Then from the above characteristic equation we find the eigenvalues.
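For concreteness, here is a sketch of that recipe carried out numerically for a made-up 2x2 matrix (the matrix is purely illustrative):

```python
# Illustrative sketch: for a 2x2 matrix A = [[a, b], [c, d]],
# det(A - lam*I) expands to lam^2 - (a + d)*lam + (a*d - b*c) = 0,
# a quadratic whose roots are the eigenvalues.
import math

def eigenvalues_2x2(a, b, c, d):
    trace = a + d                      # a + d
    det = a * d - b * c                # determinant of A
    disc = trace * trace - 4 * det    # discriminant of the quadratic
    r = math.sqrt(disc)               # assumes real eigenvalues (disc >= 0)
    return (trace - r) / 2, (trace + r) / 2

# A = [[2, 1], [1, 2]]: det(A - lam*I) = lam^2 - 4*lam + 3 = (lam - 1)(lam - 3)
print(eigenvalues_2x2(2, 1, 1, 2))    # (1.0, 3.0)
```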

My question is: how does equation 1 imply condition 2 above?
 
  • #2
Well, if the nullspace of the transformation,

[tex]A-\lambda I[/tex]

is not the zero subspace, as you stated, then certainly the determinant of the transformation will be zero. Recall that this is a property of singular linear transformations: a singular transformation is non-invertible, and a non-invertible transformation has determinant zero.
 
  • #3
If a matrix, A, is NOT invertible, then the equation Ax= b either has no solution or an infinite number of solutions (the set of solutions will be a subspace). Obviously, the equation Ax= 0 has the "trivial solution", x= 0, so if A is not invertible, the equation has an infinite number of solutions.

Conversely, if A IS invertible, then Ax= b has the unique solution, [itex]x= A^{-1}b[/itex]. Obviously if A is invertible, then Ax= 0 has the unique solution x= 0.

So [itex](A- \lambda I)x= 0[/itex] has only the trivial solution x= 0 if [itex]A- \lambda I[/itex] has non-zero determinant (and so has an inverse) and has an infinite number of solutions if [itex]A- \lambda I[/itex] has determinant 0 (and so is not invertible).
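A quick numeric check of this dichotomy (the matrix and eigenvalue are illustrative, chosen so the arithmetic is easy):

```python
# Sketch: for the illustrative matrix A = [[2, 1], [1, 2]] and its eigenvalue
# lam = 3, check that A - lam*I is singular and that a nonzero x solves
# (A - lam*I)x = 0, exactly as the argument above predicts.
A = [[2, 1], [1, 2]]
lam = 3

M = [[A[0][0] - lam, A[0][1]],
     [A[1][0],       A[1][1] - lam]]    # A - lam*I = [[-1, 1], [1, -1]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)                               # 0, so A - lam*I is not invertible

x = [1, 1]                               # a nonzero candidate solution
Mx = [M[0][0] * x[0] + M[0][1] * x[1],
      M[1][0] * x[0] + M[1][1] * x[1]]
print(Mx)                                # [0, 0]: the trivial solution is not unique
```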
 
  • #4
HallsofIvy said:
If a matrix, A, is NOT invertible, then the equation Ax= b either has no solution or an infinite number of solutions (the set of solutions will be a subspace). Obviously, the equation Ax= 0 has the "trivial solution", x= 0, so if A is not invertible, the equation has an infinite number of solutions.

Conversely, if A IS invertible, then Ax= b has the unique solution, [itex]x= A^{-1}b[/itex]. Obviously if A is invertible, then Ax= 0 has the unique solution x= 0.

So [itex](A- \lambda I)x= 0[/itex] has only the trivial solution x= 0 if A has non-zero determinant (and so has an inverse) and has an infinite number of solutions if A has determinant 0 (and so is not invertible).

Shouldn't that last bit read
has only the trivial solution x= 0 if [itex](A- \lambda I)[/itex] has non-zero determinant (and so has an inverse) and has an infinite number of solutions if [itex](A- \lambda I)[/itex] has determinant 0 (and so is not invertible)?
 
  • #5
Yes, thanks. I confused myself by using "A" in general and then using [itex]A- \lambda I[/itex] for the specific response about eigenvalues! I will edit my post.
 
  • #6
HallsofIvy said:
Yes, thanks. I confused myself by using "A" in general and then using [itex]A- \lambda I[/itex] for the specific response about eigenvalues! I will edit my post.

If it had been anyone else it might not have mattered, but being you the student would go crazy for not seeing how to get your answer which he would not dream could be wrong. :biggrin:
 

Related to Regarding Eigenvalues of a Matrix

1. What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are concepts used in linear algebra to describe the behavior of a matrix. An eigenvector of a square matrix is a nonzero vector that the matrix maps to a scalar multiple of itself; the corresponding scalar is the eigenvalue, which measures how the transformation stretches or shrinks that eigenvector.

2. How are eigenvalues and eigenvectors calculated?

To calculate the eigenvalues of a matrix, we solve the characteristic equation det(A - λI) = 0, where A is the matrix, λ is the eigenvalue, and I is the identity matrix. Each eigenvector is then found by solving (A - λI)x = 0 for the corresponding eigenvalue λ. The resulting eigenvalues and eigenvectors can then be used to understand the properties and behavior of the matrix.
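As a sketch of this two-step recipe (the matrix is made up for illustration, and the code assumes real eigenvalues):

```python
# Sketch for an illustrative matrix A = [[4, 2], [1, 3]]:
# 1) solve det(A - lam*I) = lam^2 - trace*lam + det = 0 for the eigenvalues,
# 2) back-substitute each lam into (A - lam*I)x = 0 to get an eigenvector.
import math

A = [[4, 2], [1, 3]]
trace = A[0][0] + A[1][1]                    # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 10
r = math.sqrt(trace * trace - 4 * det)       # assumes real eigenvalues
lams = [(trace - r) / 2, (trace + r) / 2]    # [2.0, 5.0]

# The first row of (A - lam*I)x = 0 reads (4 - lam)*x1 + 2*x2 = 0,
# so x = (2, lam - 4) is an eigenvector for each lam (up to scaling).
vecs = [(A[0][1], lam - A[0][0]) for lam in lams]
print(lams, vecs)                            # [2.0, 5.0] [(2, -2.0), (2, 1.0)]
```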

3. Why are eigenvalues and eigenvectors important?

Eigenvalues and eigenvectors are important because they provide insight into the behavior of a matrix. They can be used to understand the stability of a system, identify patterns in data, and solve differential equations. They also have applications in physics, engineering, and computer science.

4. What is the relationship between eigenvalues and eigenvectors?

The relationship between eigenvalues and eigenvectors is that each eigenvector is associated with a specific eigenvalue. When a matrix is multiplied by its corresponding eigenvector, the resulting vector is a scalar multiple of the original eigenvector, where the scalar is the eigenvalue. This relationship is often visualized with the geometric interpretation of eigenvalues and eigenvectors as stretching or compressing transformations.

5. Can a matrix have complex eigenvalues and eigenvectors?

Yes, a matrix can have complex eigenvalues and eigenvectors. In fact, complex eigenvalues and eigenvectors often occur when dealing with real-world problems, such as in quantum mechanics and signal processing. These complex values can provide a more accurate and complete understanding of the behavior of a matrix.
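A standard sketch of this (the matrix is illustrative): the 90-degree rotation matrix is real but has purely complex eigenvalues.

```python
# Sketch: the rotation matrix A = [[0, -1], [1, 0]] has characteristic
# equation det(A - lam*I) = lam^2 + 1 = 0, so its eigenvalues are the
# complex pair lam = -i and lam = +i, even though every entry of A is real.
import cmath

a, b, c, d = 0, -1, 1, 0
trace = a + d                             # 0
det = a * d - b * c                       # 1
r = cmath.sqrt(trace * trace - 4 * det)   # sqrt(-4) = 2j
lams = ((trace - r) / 2, (trace + r) / 2)
print(lams)                               # (-1j, 1j)
```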
