Prove matrix has all real eigenvalues

  • Thread starter
  • Admin
  • #1

Jameson

Administrator
Staff member
Jan 26, 2012
4,043
Problem: Let $A$ be an $n \times n$ matrix with real entries. Prove that if $A$ is symmetric, that is, $A = A^T$, then all eigenvalues of $A$ are real.

Attempt: I'm definitely not seeing how to approach this problem. I know that to calculate the eigenvalues of a matrix I need to solve $\det(A-\lambda I)=0$, and I have experience calculating them, but I've never seen any discussion of whether the values will be real or complex. Any ideas to get started?
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,707
Problem: Let $A$ be an $n \times n$ matrix (with real entries). Prove that if $A$ is symmetric, that is, $A = A^T$, then all eigenvalues of $A$ are real.
Think of $A$ as acting on the complex inner-product space $\mathbb{C}^n$ (whose inner product satisfies $\langle y,x\rangle = \overline{\langle x,y\rangle}$). If $\lambda$ is an eigenvalue of $A$, with eigenvector $x$, then $$\lambda\langle x,x\rangle = \langle Ax,x\rangle = \langle x,A^{\mathrm{\scriptsize T}}x\rangle = \langle x,Ax\rangle = \overline{\langle Ax,x\rangle} = \overline{\lambda}\langle x,x\rangle,$$ and so $\overline{\lambda} = \lambda$.
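As a quick numerical sanity check of the theorem (a minimal NumPy sketch; the random symmetric matrix is just an arbitrary example, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random real symmetric matrix: (M + M^T)/2 is always symmetric.
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2

# eigvals makes no symmetry assumption, so any imaginary parts it reports
# would be genuine (up to floating-point noise).
eigenvalues = np.linalg.eigvals(A)
print(np.allclose(eigenvalues.imag, 0))  # True: all eigenvalues are real
```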
 
  • Thread starter
  • Admin
  • #3

Jameson

Administrator
Staff member
Jan 26, 2012
4,043
Thank you very much for the quick reply, Opalg! I don't believe this is the intended method for the problem, though, since complex analysis and complex-variable theory aren't prerequisites for the course. I'll take some time to read over your solution and digest it, but I'm still looking for another potential method.
 

Fernando Revilla

Well-known member
MHB Math Helper
Jan 29, 2012
661
Problem: Let $A$ be an $n \times n$ matrix. Prove that if $A$ is symmetric, that is, $A = A^T$, then all eigenvalues of $A$ are real.
I suppose there is a typo. In order to be precise: "Let $A$ be a real $n \times n$ matrix ...". Otherwise, the result is false. Choose, for example, $A=\operatorname{diag}(1,i).$ :)
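To see the counterexample numerically (a minimal NumPy sketch):

```python
import numpy as np

# diag(1, i) is symmetric (A equals its plain transpose) but not real.
A = np.array([[1, 0],
              [0, 1j]])

print(np.array_equal(A, A.T))  # True: A is symmetric
print(np.linalg.eigvals(A))    # [1.+0.j 0.+1.j] -- the eigenvalue i is not real
```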
 
  • Thread starter
  • Admin
  • #5

Jameson

Administrator
Staff member
Jan 26, 2012
4,043
Yes, you are quite right, and my apologies for the error. :eek: I've fixed the OP now.
 

Chris L T521

Well-known member
Staff member
Jan 26, 2012
995
The way that Opalg approached the problem is the most common way that I've seen it done.

So, if you don't want to use complex numbers or complex vector spaces...

The other way to do this would be to change the statement slightly and prove the following:

An $n\times n$ matrix $A$ has all real eigenvalues and $n$ orthogonal real eigenvectors if and only if $A$ is real symmetric.

The proof of the revised statement would require a lot more (and I do mean a lot more) work than the approach Opalg presented.
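To get a feel for what the revised statement asserts, here is a minimal NumPy sketch (the random symmetric matrix is an arbitrary example): the eigenvalues come out real and the eigenvectors orthonormal.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                 # real symmetric

# eigh is the symmetric/Hermitian eigensolver: it returns real eigenvalues
# and an orthonormal set of eigenvectors (the columns of Q).
eigenvalues, Q = np.linalg.eigh(A)

print(np.allclose(Q.T @ Q, np.eye(4)))                 # True: columns of Q are orthonormal
print(np.allclose(A, Q @ np.diag(eigenvalues) @ Q.T))  # True: A = Q D Q^T
```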
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
If you're talking about roots of real polynomials, it's really hard to avoid a discussion of complex numbers.
 
  • Thread starter
  • Admin
  • #8

Jameson

Administrator
Staff member
Jan 26, 2012
4,043
My mistake, everybody, and thank you for the reassurance. :) It seems I have some reading to do. I took linear algebra last semester, but we didn't cover this method at all, so it's completely new material. I'll try to work through Opalg's solution.
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,780
Hi Jameson!

What you are asking for is the proof of a general theorem called the "spectral theorem for real symmetric matrices".

Its proof is far from trivial, as you can see if you google it, although Opalg's proof is quite elegant.
His proof is a bit concise, though; I had to puzzle a bit to understand why the steps he takes are valid.

The first step is to realize that the characteristic polynomial $\det(A-\lambda I)$ has degree $n$ in $\lambda$, so the equation $\det(A-\lambda I)=0$ has $n$ roots (counted with multiplicity) if we allow complex numbers.
What's left to prove is that these eigenvalues are actually real numbers.
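For instance, a minimal SymPy sketch with an arbitrary $2\times 2$ symmetric example; the characteristic polynomial has degree $2$ and both of its roots turn out real:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1],
               [1, 3]])                # a small symmetric example

# The characteristic polynomial det(A - lambda*I) has degree n = 2.
p = sp.expand((A - lam * sp.eye(2)).det())
print(p)                               # lambda**2 - 5*lambda + 5

# Its n roots over the complex numbers; here both are real.
print(sp.solve(sp.Eq(p, 0), lam))      # [5/2 - sqrt(5)/2, 5/2 + sqrt(5)/2]
```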
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
If you start with complex numbers to begin with, you can actually say a bit more:

Every Hermitian matrix has real eigenvalues (Opalg's proof goes through as before).

As ILikeSerena has indicated, this is a consequence of the spectral theorem for normal matrices: every normal matrix is unitarily diagonalizable, and every unitarily diagonalizable matrix is normal

(normal matrices are those for which \(\displaystyle AA^H = A^HA\)).
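A minimal NumPy sketch of both claims, on an arbitrary $2\times 2$ Hermitian example:

```python
import numpy as np

# A Hermitian matrix: complex entries, equal to its own conjugate transpose.
A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])
AH = A.conj().T

print(np.allclose(A, AH))            # True: A is Hermitian
print(np.linalg.eigvals(A))          # [1, 4] (up to rounding): real eigenvalues

# Every Hermitian matrix is in particular normal: A A^H = A^H A.
print(np.allclose(A @ AH, AH @ A))   # True
```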

This is another instance of a statement about real numbers that makes more sense when you consider the wider context of complex numbers.

As far as I understand it (which is poorly), physicists prefer to work with complex vector spaces because you can always restrict to the special case of real numbers, and complex numbers are in a sense "more complete". There's not much more difficulty involved in doing so: many of the results in linear algebra hold for an arbitrary field (there are some special cases where a field of characteristic 2 is inappropriate), and the usual definition of an inner product on a vector space over \(\displaystyle \Bbb C\) resolves to a real-valued inner product when the scalars and coordinates are real.

What I'm trying to get across here is the idea that the "algebra" part of linear algebra is based on the classical operations of addition, subtraction, multiplication and division (operations with matrices are "built up" from operations of these kinds on individual entries), and a field is precisely the kind of algebraic object we can do those things IN.

For 99% of the examples one actually encounters in practice, the algebraic closure of the rational numbers would suffice (we rarely work with the full spectrum of transcendental numbers; a few important ones keep popping up). It is common practice when studying inner product spaces to assume the underlying field is a subfield of \(\displaystyle \Bbb C\), and it is perhaps unfortunate that in introductory courses so much emphasis is given to \(\displaystyle \Bbb R^n\), when you can do more without any extra effort.
 

Jameson

Administrator
Staff member
Jan 26, 2012
4,043
Think of $A$ as acting on the complex inner-product space $\mathbb{C}^n$ (whose inner product satisfies $\langle y,x\rangle = \overline{\langle x,y\rangle}$). If $\lambda$ is an eigenvalue of $A$, with eigenvector $x$, then $$\lambda\langle x,x\rangle = \langle Ax,x\rangle = \langle x,A^{\mathrm{\scriptsize T}}x\rangle = \langle x,Ax\rangle = \overline{\langle Ax,x\rangle} = \overline{\lambda}\langle x,x\rangle,$$ and so $\overline{\lambda} = \lambda$.
OK, I've done some reading on inner-product spaces and feel comfortable with the basic axioms and some other conclusions from them. The steps I don't follow are $\langle Ax,x\rangle = \langle x,A^{\mathrm{\scriptsize T}}x\rangle = \langle x,Ax\rangle$. The steps before and after these are clear, but these two equalities are not. Can someone explain?
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
In some developments, one actually DEFINES the transpose of a square matrix \(\displaystyle A \) as the matrix \(\displaystyle B\) such that:

\(\displaystyle \langle Ax,y \rangle = \langle x,By \rangle, \forall x,y \)

(this assumes a REAL inner product space).

However, we can follow this entry-by-entry:

\(\displaystyle \langle Ax,x \rangle = \sum_i\left(\sum_j a_{ij}x_j\right)x_i\)

\(\displaystyle = a_{11}x_1x_1 + a_{12}x_2x_1 + \cdots + a_{1n}x_nx_1 + a_{21}x_1x_2 + a_{22}x_2x_2 + \cdots + a_{2n}x_nx_2 +\)

\(\displaystyle \cdots + a_{n1}x_1x_n + a_{n2}x_2x_n + \cdots + a_{nn}x_nx_n\)

\(\displaystyle = x_1(a_{11}x_1 + a_{21}x_2 + \cdots + a_{n1}x_n) + x_2(a_{12}x_1 + a_{22}x_2 + \cdots + a_{n2}x_n) +\)

\(\displaystyle \cdots + x_n(a_{1n}x_1 + a_{2n}x_2 + \cdots + a_{nn}x_n)\) <---plucking out the \(\displaystyle x_j\) in the "middle position"

\(\displaystyle = \sum_j x_j\left(\sum_i a_{ij}x_i \right) = \langle x,A^Tx \rangle\).

The equality \(\displaystyle \langle x,A^Tx \rangle = \langle x,Ax \rangle \) then follows because \(\displaystyle A\) is symmetric, and thus \(\displaystyle A^T = A\).

(Note: this assumes the \(\displaystyle x_j\) are the coordinates of \(\displaystyle x\) in some basis (the standard basis works well), and that the \(\displaystyle a_{ij}\) are the entries of \(\displaystyle A\) in that same basis).

(Note #2: I hope it's clear that when we rearrange the order of the \(\displaystyle x_i\)'s and the \(\displaystyle x_j\)'s in the complex case, the sesquilinearity of the complex inner product requires we use \(\displaystyle \overline{a_{ij}}\) instead, leading to:

\(\displaystyle \langle Ax,x \rangle = \langle x,A^Hx \rangle\)

This reduces to the symmetric case for the reals, since real numbers are self-conjugate).
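Both notes can be checked numerically. A minimal NumPy sketch, with arbitrary random matrices and vectors, using the convention \(\displaystyle \langle u,v \rangle = \sum_k u_k\overline{v_k}\):

```python
import numpy as np

rng = np.random.default_rng(2)

# Real case: <Ax, x> = <x, A^T x> holds for ANY real square A, symmetric or not.
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
print(np.isclose((A @ x) @ x, x @ (A.T @ x)))             # True

# Complex case (Note #2): the conjugation forces the conjugate transpose A^H.
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
BH = B.conj().T
# np.vdot(v, u) computes sum_k conj(v_k) * u_k, i.e. <u, v> in this convention.
print(np.isclose(np.vdot(z, B @ z), np.vdot(BH @ z, z)))  # True: <Bz,z> = <z,B^H z>
```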
 

Klaas van Aarsen

MHB Seeker
Staff member
Mar 5, 2012
8,780
Alternatively:

The standard inner product is given by $\langle x,y \rangle = y^\dagger x$, where $\dagger$ denotes the conjugate transpose.
Since $A$ is a real matrix, its conjugate transpose is the same as its transpose.
And since $A$ is symmetric, its transpose is the same as $A$: $A^\dagger = A^T = A$.

So:
$$\langle Ax,x \rangle = x^\dagger(Ax) = (x^\dagger A)x = (A^\dagger x)^\dagger x = \langle x, A^\dagger x \rangle = \langle x, A^T x \rangle = \langle x, A x \rangle$$

Oh, and you may already know that in general $(MN)^T = N^TM^T$ and likewise $(MN)^\dagger = N^\dagger M^\dagger$, which is the reason that we have $(x^\dagger A) = (A^\dagger x)^\dagger$ here.
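A minimal NumPy sketch of that step, with an arbitrary random real symmetric $A$ and a complex vector $x$:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2                       # real symmetric, so A^dagger = A^T = A
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# (x^dagger A) equals (A^dagger x)^dagger because (MN)^dagger = N^dagger M^dagger.
lhs = x.conj() @ A                      # the row vector x^dagger A
rhs = (A.conj().T @ x).conj()           # (A^dagger x)^dagger, as a row vector
print(np.allclose(lhs, rhs))            # True
```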
 