Proof that if T is Hermitian, eigenvectors form an orthonormal basis

In summary, the theorem states that if a linear map is Hermitian or skew-Hermitian, then its eigenvectors can be chosen to form an orthonormal basis for the vector space. The proof is by induction: the base case is n = 1, and the inductive hypothesis supplies an orthonormal eigenbasis for (n-1)-dimensional spaces. The motivation is to diagonalize a Hermitian matrix, and it is likely that this property was discovered for real symmetric matrices early on.
  • #1
Hall
Actual statement:
Assume ##\dim V = n## and let ##T: V \to V## be Hermitian or skew-Hermitian. Then there exist ##n## eigenvectors ##u_1, \cdots, u_n## of ##T## which form an orthonormal basis for ##V##. Hence, the matrix of ##T## relative to this basis is the diagonal matrix ##\Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_n)##, where ##\lambda_k## is the eigenvalue belonging to ##u_k##.
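(A small concrete example, not from Apostol's text: for ##A = \begin{pmatrix} 2 & i \\ -i & 2 \end{pmatrix}##, the eigenvalues are ##\lambda_1 = 3## and ##\lambda_2 = 1##, with orthonormal eigenvectors ##u_1 = \tfrac{1}{\sqrt{2}}(i, 1)## and ##u_2 = \tfrac{1}{\sqrt{2}}(-i, 1)##, so the matrix of ##A## relative to ##(u_1, u_2)## is ##\mathrm{diag}(3, 1)##.)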
Proof (due to Mr. Tom Apostol): We will do the proof by induction on ##n##.

Base Case: n=1. When ##n=1##, the matrix of ##T## has just one entry, so the characteristic polynomial ##\det(\lambda I - A) = 0## has exactly one root. The normalized eigenvector corresponding to this eigenvalue acts as an orthonormal basis for ##V##.

Hypothesis: Let us assume that if ##\dim V = n-1##, then there exist ##n-1## eigenvectors ##u_1, u_2, \cdots, u_{n-1}## of ##T: V \to V## which are orthonormal and act as a basis for ##V##.

Induction step: ##\dim V = n##.
Take any eigenvalue of ##T## and call it ##\lambda_1##; the corresponding eigenvector, normalized to have norm 1, will be called ##u_1##.

##S = \text{span} ( u_1)##
##S^{\perp}## = space of all elements of ##V## which are orthogonal to ##S##

As ##u_1## is a basis for ##S##, and ##S## is a subspace of ##V##, ##u_1## can be extended to a basis for ##V##. Let this basis be ##(u_1, v_2, v_3, \cdots, v_n)##; we may assume, without loss of generality, that it is orthonormal (otherwise we can convert it to one with the Gram-Schmidt process, keeping ##u_1## as the first basis element).

Take any ##x## in ##S^{\perp}## and write
##x = x_1 u_1 + x_2 v_2 + \cdots + x_n v_n##
##\langle x, u_1 \rangle = \langle x_1 u_1, u_1 \rangle + 0##
##\langle x, u_1 \rangle = x_1##
As ##x \in S^{\perp}## and ##u_1 \in S##, their inner product is zero, so we must have ##x_1 = 0##.
That means ##x = \sum_{i=2}^{n} x_i v_i##. Conversely, each ##v_i## with ##i \geq 2## is orthogonal to ##u_1## and hence lies in ##S^{\perp}##, so ##S^{\perp} = \text{span}(v_2, \cdots, v_n)## and the dimension of ##S^{\perp}## is ##n-1##.

Now we have to prove that if ##T## is applied to an element of ##S^{\perp}##, the result lands in ##S^{\perp}## again, i.e. that ##S^{\perp}## is invariant under ##T##; only then can we use our hypothesis.

##\langle T(x), u_1 \rangle= \langle x, T(u_1) \rangle## as T is Hermitian.
##\langle T(x) , u_1 \rangle= \langle x, \lambda_1 u_1 \rangle##
##\langle T(x), u_1 \rangle = \bar{\lambda_1} \langle x, u_1 \rangle = 0##
This shows that ##T(x) \in S^{\perp}##, so ##S^{\perp}## is invariant under ##T##.

By the hypothesis applied to the restriction of ##T## to ##S^{\perp}## (which is still Hermitian, because the inner product on ##S^{\perp}## is just the restriction of the one on ##V##), ##S^{\perp}## has a basis of ##n-1## orthonormal eigenvectors. Adding ##u_1## to that set preserves orthonormality, and since any set of nonzero orthogonal vectors is linearly independent and we now have ##n## of them, they form an orthonormal basis of ##V## consisting of eigenvectors of ##T##.

This completes the proof.
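As a quick numerical sanity check of the theorem (my own illustration, not part of Apostol's proof; the matrix ##T## below is just a randomly generated example, and NumPy's ##eigh## routine is its eigensolver for Hermitian matrices):

Python:
import numpy as np

rng = np.random.default_rng(0)

# Build a random Hermitian matrix: (B + B*)/2 always equals its own conjugate transpose.
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (B + B.conj().T) / 2

# np.linalg.eigh returns real eigenvalues and eigenvectors as the columns of U.
eigenvalues, U = np.linalg.eigh(T)

# The columns of U are an orthonormal basis: U* U = I.
print(np.allclose(U.conj().T @ U, np.eye(n)))                  # True
# The matrix of T relative to this basis is diagonal: U* T U = diag(lambda_1, ..., lambda_n).
print(np.allclose(U.conj().T @ T @ U, np.diag(eigenvalues)))   # True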

I'm not able to absorb this theorem, for two reasons: first, I don't see the motivation behind it, I mean what exactly do we want to achieve by it? (I know this type of question is senseless in mathematics.) And second, the proof involves a few things which seem quite sour to me, like the involvement of ##S^{\perp}## and proving that ##T## maps it into itself.
 
  • #2
Intuitively, the idea is: if we find an n-1 dimensional space ##W## such that ##T## maps ##W## to ##W##, then ##T## is a Hermitian map on ##W## and we get ##n-1## orthogonal eigenvectors by the inductive hypothesis. So we just need to find a ##W## where this works, and where the last orthogonal dimension to fill out ##V## is also an eigenvector. So a natural place to start is pick an eigenvector, and hope the orthogonal space works.
 
  • #3
Office_Shredder said:
Intuitively, the idea is: if we find an n-1 dimensional space ##W## such that ##T## maps ##W## to ##W##, then ##T## is a Hermitian map on ##W## and we get ##n-1## orthogonal eigenvectors by the inductive hypothesis. So we just need to find a ##W## where this works, and where the last orthogonal dimension to fill out ##V## is also an eigenvector. So a natural place to start is pick an eigenvector, and hope the orthogonal space works.
Actually, I was wondering when in history this theorem was proven for the first time, and whether it was really proven like that.

Some searches tell me that Mr. Cauchy first embarked on this, but I doubt that his aim was really diagonalizing the Hermitian matrix, because the lives of Cauchy and Hermite intersected for only a decade or a little more (Hermite was born in 1822; however intelligent, his contributions to maths couldn't have come before about 1842, and Cauchy died in 1857).
 
  • #4
I doubt you can even point to something that would obviously be the first proof. I think it's highly likely that people figured out that real symmetric 2x2 matrices were diagonalizable very early on for example.
 

1. What is a Hermitian matrix?

A Hermitian matrix is a square matrix that is equal to its own conjugate transpose. This means that the elements on the main diagonal are real numbers, and the elements above and below the diagonal are complex conjugates of each other.
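For example, ##\begin{pmatrix} 1 & 2+3i \\ 2-3i & 5 \end{pmatrix}## is Hermitian: the diagonal entries ##1## and ##5## are real, and the off-diagonal entries ##2+3i## and ##2-3i## are complex conjugates of each other.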

2. How do you show that a matrix T is Hermitian?

To show that a matrix T is Hermitian, you need to prove that T is equal to its own conjugate transpose. This can be done by taking the transpose of T, conjugating all of the elements, and then comparing it to the original matrix T. If they are equal, then T is Hermitian.
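A minimal sketch of that check in NumPy (the matrix T here is just an assumed example; np.allclose is used to tolerate floating-point round-off):

Python:
import numpy as np

# Example matrix (assumed purely for illustration): real diagonal entries,
# off-diagonal entries that are complex conjugates of each other.
T = np.array([[2.0, 1j],
              [-1j, 2.0]])

# T is Hermitian exactly when it equals its own conjugate transpose.
print(np.allclose(T, T.conj().T))  # prints True for this matrix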

3. What does it mean for eigenvectors to form an orthonormal basis?

An orthonormal basis is a set of vectors that are all orthogonal (perpendicular) to each other and have a length of 1. When eigenvectors form an orthonormal basis, it means that they are all perpendicular to each other and have a length of 1, making them a complete set of basis vectors for the vector space.
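In symbols, a basis ##u_1, \cdots, u_n## is orthonormal when ##\langle u_i, u_j \rangle = \delta_{ij}##, i.e. ##\langle u_i, u_j \rangle = 0## for ##i \neq j## and ##\|u_i\| = 1## for every ##i##.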

4. Why is it important for eigenvectors to form an orthonormal basis?

When eigenvectors form an orthonormal basis, it means that they can be used to diagonalize the matrix T. This makes it easier to solve problems involving T, as the diagonal form of the matrix is much simpler to work with. Additionally, having an orthonormal basis allows for easier calculations involving inner products and projections.
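For instance, once ##T = U \Lambda U^{*}## with ##\Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_n)## and ##U## unitary (its columns are the orthonormal eigenvectors), powers become trivial: ##T^k = U \Lambda^k U^{*}##, and ##\Lambda^k## is just ##\mathrm{diag}(\lambda_1^k, \cdots, \lambda_n^k)##.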

5. How can you use the eigenvectors of a Hermitian matrix to prove that they form an orthonormal basis?

To prove that the eigenvectors of a Hermitian matrix can be chosen to form an orthonormal basis, one uses the facts that the eigenvalues of a Hermitian matrix are real and that eigenvectors belonging to distinct eigenvalues are orthogonal. Each eigenvector can then be normalized to have length 1, and within the eigenspace of a repeated eigenvalue an orthonormal set can be chosen (for example by Gram-Schmidt).
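A short version of the standard argument, using the same inner-product convention as in the proof above (linear in the first slot, conjugate-linear in the second): suppose ##Tu = \lambda u## and ##Tv = \mu v## with ##\lambda \neq \mu##. Since ##T## is Hermitian, its eigenvalues are real, and
##\lambda \langle u, v \rangle = \langle Tu, v \rangle = \langle u, Tv \rangle = \bar{\mu} \langle u, v \rangle = \mu \langle u, v \rangle,##
so ##(\lambda - \mu)\langle u, v \rangle = 0## and hence ##\langle u, v \rangle = 0##. Normalizing each eigenvector then gives unit length; for a repeated eigenvalue one falls back on the induction argument above.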
