# [SOLVED] Spectrum of self-adjoint operator is real

#### Boromir

##### Banned
Let $T$ be a self-adjoint operator in a commutative $B^{*}$-subalgebra $A$ of $B(H)$. Prove that ${\sigma}_{A}(T)$ is contained in the real numbers.

What I have done: I can prove that ${\sigma}(T)$ is contained in the reals by first showing that, for normal $T$, if there exists a positive scalar $t$ with $\|Tx\| > t\|x\|$ for all $x$, then $T$ is invertible. It is then quite easy to show that any non-real number ${\lambda}$ is in the resolvent set, but I would need it to be in the resolvent set relative to $A$.
I've figured that if I can show that a spectrum whose boundary is real must itself be real, then I can prove what I want, but I have no idea whether this result is true. Thanks
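As a numerical sanity check of the first claim (my own example, not from the thread): for a Hermitian matrix $T$ and a non-real $\lambda = \mu + i\nu$, one has $\|(T-\lambda I)x\|^2 = \|(T-\mu I)x\|^2 + \nu^2\|x\|^2 \geq \nu^2\|x\|^2$, so $T - \lambda I$ is bounded below and hence invertible in finite dimensions. The matrix and $\lambda$ below are arbitrary choices for illustration.

```python
# Sanity check (not a proof): for Hermitian T and non-real lambda = mu + i*nu,
# ||(T - lambda I)x|| >= |nu| * ||x|| for every vector x.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (M + M.conj().T) / 2              # Hermitian: T == T*

lam = 0.7 + 0.3j                      # nu = Im(lam) = 0.3 != 0
A = T - lam * np.eye(4)

for _ in range(1000):
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    # lower bound by |Im(lambda)| times ||x||, up to rounding error
    assert np.linalg.norm(A @ x) >= abs(lam.imag) * np.linalg.norm(x) - 1e-12
```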

#### Opalg

##### MHB Oldtimer
Staff member
> **Boromir said:** Let $T$ be a self-adjoint operator in a commutative $B^{*}$-subalgebra $A$ of $B(H)$. Prove that ${\sigma}_{A}(T)$ is contained in the real numbers. [...]
You need to show that if $\lambda = \mu + i\nu$ is a nonreal complex number, so that $\nu\ne0$, then $\lambda \notin \sigma_A(T).$

Step 1: $\mu + i\nu \in \sigma_A(T) \;\Longleftrightarrow\; i\nu \in\sigma_A(T - \mu I)$; and $T-\mu I$ is also selfadjoint. So replacing $T$ by $T-\mu I$, we may as well assume that $\mu=0.$

Step 2: $i\nu \in \sigma_A(T) \;\Longleftrightarrow\; i \in\sigma_A(\nu^{-1}T)$; and $\nu^{-1}T$ is also selfadjoint. So replacing $T$ by $\nu^{-1}T$, we may as well assume that $\nu=1.$

That reduces the problem to showing that $i\notin \sigma_A(T).$ Suppose (to get a contradiction) that $i\in \sigma_A(T)$, and let $T_n = T + inI.$ Then $i\in \sigma_A(T) \;\Longrightarrow\; i(n+1) \in \sigma_A(T_n).$ You need to know at this stage that the spectral radius is always less than or equal to the norm, so $|i(n+1)| \leqslant \|T_n\|.$ Then $$n^2+2n+1 = |i(n+1)|^2 \leqslant \|T_n\|^2 = \|T_n^*T_n\| = \|(T-inI)(T+inI)\| = \|T^2+n^2I\| \leqslant \|T\|^2 + n^2.$$ Therefore $2n+1 \leqslant \|T\|^2.$ But that cannot hold for all $n$, so by taking $n$ large enough we get a contradiction.

[That is essentially the proof given in Kadison and Ringrose, Proposition 4.1.1.]
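The key inequality can be checked numerically on a Hermitian matrix (an assumed toy example, not part of the thread): with $T_n = T + inI$, the C\*-identity gives $\|T_n\|^2 = \|T_n^*T_n\| = \|T^2 + n^2I\| \leqslant \|T\|^2 + n^2$, while $(n+1)^2 = n^2 + 2n + 1$ eventually exceeds that bound.

```python
# Illustration: ||T + i*n*I||^2 <= ||T||^2 + n^2 for Hermitian T,
# while (n+1)^2 outgrows ||T||^2 + n^2 once 2n + 1 > ||T||^2.
import numpy as np

T = np.array([[2.0, 1.0], [1.0, -1.0]])       # small Hermitian example

def op_norm(A):
    return np.linalg.norm(A, 2)               # operator (spectral) norm

for n in range(1, 6):
    Tn = T + 1j * n * np.eye(2)
    assert op_norm(Tn) ** 2 <= op_norm(T) ** 2 + n ** 2 + 1e-10

# once n is large enough, (n+1)^2 > ||T||^2 + n^2, giving the contradiction
n_big = int(op_norm(T) ** 2) + 1
assert (n_big + 1) ** 2 > op_norm(T) ** 2 + n_big ** 2
```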

#### Boromir

##### Banned
> **Opalg said:** You need to show that if $\lambda = \mu + i\nu$ is a nonreal complex number, so that $\nu\ne0$, then $\lambda \notin \sigma_A(T).$ [...] [That is essentially the proof given in Kadison and Ringrose, Proposition 4.1.1.]
I see a problem in the 'spectral radius is less than or equal to the norm' step, because the spectral radius refers to the spectrum relative to $B(H)$, which is smaller than the one relative to $A$. More precisely, just because $i(n+1)$ is in the spectrum relative to $A$, it need not be in the spectrum relative to $B(H)$, so that inequality does not necessarily hold.

#### Opalg

##### MHB Oldtimer
Staff member
> **Boromir said:** I see a problem in the 'spectral radius is less than or equal to the norm' step [...] so that inequality does not necessarily hold.
The spectral radius is always independent of the containing algebra because of the formula $$\displaystyle r(T) = \lim_{n\to\infty}\|T^n\|^{1/n}$$, which shows that the spectral radius depends only on the norm, not on the individual points in the spectrum. That formula applies in any Banach algebra, not just in $B(H).$ In the Kadison–Ringrose proof the spectrum in question is always the spectrum in $A$, not in $B(H).$
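The formula can be illustrated numerically (an arbitrary example, chosen for this sketch): for a matrix, $\lim_{n\to\infty}\|T^n\|^{1/n}$ converges to the largest eigenvalue modulus, and since it involves only norms of powers of $T$, it cannot depend on which subalgebra $T$ is viewed in.

```python
# Gelfand's formula r(T) = lim ||T^n||^(1/n): the limit depends only on
# norms of powers of T, so it is the same in any containing algebra.
import numpy as np

T = np.array([[0.5, 2.0], [0.0, 0.3]])        # non-normal example
spectral_radius = max(abs(np.linalg.eigvals(T)))   # here: 0.5

P = np.eye(2)
for n in range(1, 201):
    P = P @ T
    gelfand = np.linalg.norm(P, 2) ** (1.0 / n)

# after many iterations the Gelfand estimate approaches max |eigenvalue|
assert abs(gelfand - spectral_radius) < 1e-2
```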