Dirac Equation and Pauli Matrices

  • #1
HeavyMetal
I have been reading through Mark Srednicki's QFT book because it seems to be well regarded here at Physics Forums. He discusses the Dirac Equation very early on, and then demonstrates that the squared Hamiltonian, acting on momentum eigenstates, reproduces the relativistic momentum-energy relation.

He uses the Hamiltonian [itex]H_{ab}=cP_{j}(\alpha^{\ j})_{ab}+mc^2(\beta)_{ab}[/itex]
So it is easy to see that [itex](H^2)_{ab}=c^2P_{j}P_{k}(\alpha^{\ j}\alpha^{k})_{ab}+mc^{3}P_{j}(\alpha^{\ j}\beta+\beta\alpha^{\ j})_{ab}+m^{2}c^{4}(\beta^2)_{ab}[/itex]

He then explains that if we choose suitable matrices satisfying a few conditions, we obtain the correct momentum-energy relation. These matrices satisfy:
[itex]\{\alpha^{\ j},\alpha^{k}\}_{ab}=2\delta^{\ jk}\delta_{ab}[/itex]
[itex]\{\alpha^{\ j},\beta\}_{ab}=0[/itex]
[itex](\beta^2)_{ab}=\delta_{ab}[/itex]
where the curly braces denote the anticommutator.

Ultimately, after some arithmetic we find that:
[itex](H^2)_{ab}=\textbf{P}^{2}c^{2}\delta_{ab}+m^{2}c^{4}\delta_{ab}=(\textbf{P}^{2}c^{2}+m^{2}c^{4})\delta_{ab}[/itex]

I'm having trouble seeing the steps that it takes to get there. Substituting in the relations above, I can see that the middle term drops out because it is zero, i.e. [itex]mc^{3}P_{j}(\alpha^{\ j}\beta+\beta\alpha^{\ j})_{ab}=mc^{3}P_{j}\cdot 0=0[/itex].
I can also see that the last term is included because [itex]m^{2}c^{4}(\beta^2)_{ab}=m^{2}c^{4}\delta_{ab}[/itex].

I cannot, however, see why [itex]c^{2}P_{j}P_{k}(\alpha^{\ j}\alpha^{k})_{ab}=\textbf{P}^{2}c^{2}\delta_{ab}[/itex].

I can see that [itex]c^{2}P_{j}P_{k}(\alpha^{\ j}\alpha^{k})_{ab}=c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{\ j},\alpha^{k}\}_{ab}=c^2\textbf{P}^2\delta^{\ jk}\delta_{ab}[/itex]. I am confused as to where the [itex]\delta^{\ jk}[/itex] goes!

Is it simply the fact that [itex]\delta^{\ jk}[/itex] and [itex]\delta_{ab}[/itex] are Kronecker deltas? I reread the previous sections, but I don't think it was mentioned. If this is the case, then I understand where that [itex]\delta^{\ jk}[/itex] goes.
More importantly, I tried to work out the three equivalencies above by using Pauli matrices, but I quickly get lost! When [itex]\{\alpha^{\ j},\alpha^{k}\}_{ab}[/itex] is written, does it refer to a 2 x 2 matrix (a x b), in which each entry is a 2 x 2 Pauli matrix (j x k) and therefore is ultimately a 4 x 4 matrix?

Thanks in advance,
HeavyMetal \m/
 
  • #2
HeavyMetal said:
I can see that [itex]c^{2}P_{j}P_{k}(\alpha^{\ j}\alpha^{k})_{ab}=c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{\ j},\alpha^{k}\}_{ab}=c^2\textbf{P}^2\delta^{\ jk}\delta_{ab}[/itex]. I am confused as to where the [itex]\delta^{\ jk}[/itex] goes!

Is it simply the fact that [itex]\delta^{\ jk}[/itex] and [itex]\delta_{ab}[/itex] are Kronecker deltas?

Right, so it goes

[itex]c^{2}P_{j}P_{k}(\alpha^{\ j}\alpha^{k})_{ab}=c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{\ j},\alpha^{k}\}_{ab}=c^2 P_{j}P_{k}\delta^{\ jk}\delta_{ab} = c^2\textbf{P}^2\delta_{ab}[/itex]
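
The only trick in the first equality is that ##P_j P_k## is symmetric under ##j \leftrightarrow k## (the momentum components commute), so you can symmetrize by relabeling the dummy indices:

[tex]P_{j}P_{k}\,\alpha^{j}\alpha^{k}=\tfrac{1}{2}\left(P_{j}P_{k}\,\alpha^{j}\alpha^{k}+P_{k}P_{j}\,\alpha^{k}\alpha^{j}\right)=\tfrac{1}{2}P_{j}P_{k}\left(\alpha^{j}\alpha^{k}+\alpha^{k}\alpha^{j}\right)=\tfrac{1}{2}P_{j}P_{k}\{\alpha^{j},\alpha^{k}\}[/tex]

where the second term in the parentheses is just the first with the summed indices renamed.
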
HeavyMetal said:
When [itex]\{\alpha^{\ j},\alpha^{k}\}_{ab}[/itex] is written, does it refer to a 2 x 2 matrix (a x b), in which each entry is a 2 x 2 Pauli matrix (j x k) and therefore is ultimately a 4 x 4 matrix?

[itex]\{\alpha^{\ j},\alpha^{k}\}_{ab}[/itex] says: compute the matrix [itex]\alpha^j \alpha^k + \alpha^k \alpha^j[/itex], then take the [itex](a, b)[/itex] component of the final matrix. As Srednicki explains just below this, [itex]\alpha^j[/itex] and [itex]\alpha^k[/itex] are 4x4 matrices. So ##a## and ##b## both range from 1 to 4. Meanwhile ##j## and ##k## range from 1 to 3.

Some things to help understand the index notation:

##\delta_{ab}## gives the entries of the identity matrix.

[itex]\{\alpha^{\ j},\alpha^{k}\}_{ab} = 2 \delta^{jk} \delta_{ab}[/itex] means: the square of any alpha matrix is the identity matrix (this is the ##j=k## case), and the anticommutator of any alpha matrix with a *different* alpha matrix is the zero matrix (this is the ##j \neq k## case).
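
If you want to see all of this concretely, here is a minimal numpy sketch. It assumes one standard choice, the Dirac representation, in which each ##\alpha^j## has the 2x2 Pauli matrix ##\sigma^j## in its off-diagonal blocks and ##\beta## is diag(1, 1, -1, -1); Srednicki only requires the anticommutation relations, so any other matrices satisfying them would do just as well, and the numbers for ##\mathbf{P}##, ##m##, ##c## below are arbitrary:

```python
import numpy as np

# The three 2x2 Pauli matrices sigma^1, sigma^2, sigma^3
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Dirac representation: alpha^j = [[0, sigma^j], [sigma^j, 0]], beta = diag(I, -I).
# This is just one convenient choice; only the anticommutation relations matter.
alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]
beta = np.block([[I2, Z2], [Z2, -I2]])
I4 = np.eye(4, dtype=complex)

def anticomm(A, B):
    """Anticommutator {A, B} = AB + BA."""
    return A @ B + B @ A

# {alpha^j, alpha^k}_ab = 2 delta^{jk} delta_ab
for j in range(3):
    for k in range(3):
        assert np.allclose(anticomm(alpha[j], alpha[k]), 2 * (j == k) * I4)

# {alpha^j, beta}_ab = 0 and (beta^2)_ab = delta_ab
for j in range(3):
    assert np.allclose(anticomm(alpha[j], beta), 0)
assert np.allclose(beta @ beta, I4)

# Given only these relations, H^2 must equal (P^2 c^2 + m^2 c^4) times the identity.
c, m = 1.0, 0.5                     # arbitrary illustrative values
P = np.array([0.3, -1.2, 2.0])      # arbitrary momentum components P_j

H = c * sum(P[j] * alpha[j] for j in range(3)) + m * c**2 * beta
assert np.allclose(H @ H, (np.dot(P, P) * c**2 + m**2 * c**4) * I4)

print("alpha^j and beta are 4x4, so a, b run over 1..4 while j, k run over 1..3.")
```

The assertions all pass, and the last one shows how the three relations alone force ##H^2 = (\mathbf{P}^2c^2 + m^2c^4)## times the identity. It also answers your size question: each ##\alpha^j## is a single 4x4 matrix, built out of 2x2 Pauli blocks.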

As you can see, your proficiency in QFT is limited by how well you can see through a fog of indices :P This is just a warmup; eventually you are dealing with equations that have half a dozen indices on each term, half of which are suppressed.
 
  • #3
[itex]P_{j}P_{k}\delta^{\ jk} =P_{j}P^{j}[/itex]

It's Einstein's summation convention; you can see that it works by writing out the sums explicitly. You don't get a scalar unless you contract the indices. When an equation has unpaired (free) indices, they stay there to indicate that you actually have a system of equations, one for each value of those indices.
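
For instance, with arbitrary numbers (just a quick numpy check of the contraction, nothing deep):

```python
import numpy as np

P = np.array([1.0, 2.0, 3.0])   # arbitrary momentum components P_j
delta = np.eye(3)               # Kronecker delta  delta^{jk}

# Repeated indices j and k are summed over (Einstein convention):
contracted = np.einsum('j,k,jk->', P, P, delta)

print(contracted)        # 14.0
print(np.dot(P, P))      # 14.0 -- the same thing: P_j P^j = |P|^2
```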
 
  • #4
The_Duck said:
Right, so it goes

[itex]c^{2}P_{j}P_{k}\frac{1}{2}\{\alpha^{\ j},\alpha^{k}\}_{ab}=c^2 P_{j}P_{k}\delta^{\ jk}\delta_{ab}[/itex]

It was this intermediate step that I missed. I made the mistake of trusting my ability to do the last two steps in one. Very careless error :D

The_Duck said:
[itex]\{\alpha^{\ j},\alpha^{k}\}_{ab}[/itex] says: compute the matrix product [itex]\alpha^j \alpha^k + \alpha^k \alpha^j[/itex], then take the [itex](a, b)[/itex] component of the final matrix. As Srednicki explains just below this, [itex]\alpha_j[/itex] and [itex]\alpha_k[/itex] are 4x4 matrices. So ##a## and ##b## both range from 1 to 4. Meanwhile ##i## and ##j## range from 1 to 3.

I see why I was confused. I was thinking that the alpha matrices were 2 x 2 Pauli matrices. So what I think you're saying is that both [itex]\alpha^j[/itex] and [itex]\alpha^k[/itex] correspond to the 1, 2, and 3 Pauli gamma matrices?

The_Duck said:
[itex]\{\alpha^{\ j},\alpha^{k}\}_{ab} = 2 \delta^{jk} \delta_{ab}[/itex] means: the square of any alpha matrix is the identity matrix (this is the ##j=k## case), and the anticommutator of any alpha matrix with a *different* alpha matrix is the zero matrix (this is the ##j \neq k## case).

As you can see, your proficiency in QFT is limited by how well you can see through a fog of indices :P This is just a warmup; eventually you are dealing with equations that have half a dozen indices on each term, half of which are suppressed.

This one really hit home for me. I just finished chapter 2 of Bernard Schutz's "A First Course in General Relativity," which is the very chapter that introduces Einstein notation! It is still very new to me; I am a chemistry major, and you must understand that we do not do much mathematical physics as undergraduates. Even in physical chemistry we use it as a tool -- a means to an end -- and are rarely allowed to bask in the beautiful depth of theory. As a matter of fact, my school doesn't offer any physics courses above physics 1 and 2! So I will take your advice and practice recognizing suppressed indices. I take it that you're referring to the omission of repeated identical contravariant and covariant indices, correct? Thanks for your response :D

ddd123 said:
[itex]P_{j}P_{k}\delta^{\ jk} =P_{j}P^{j}[/itex]

You don't get a scalar unless you contract the indices.

I am aware of the Einstein summation convention, but the obvious benefit was completely obscured until you pointed this out. Thank you!
 
  • #5
HeavyMetal said:
This one really hit home for me. I just finished chapter 2 of Bernard Schutz's "A First Course in General Relativity," which is the very chapter that introduces Einstein notation! It is still very new to me; I am a chemistry major, and you must understand that we do not do much mathematical physics as undergraduates. ... I am aware of the Einstein summation convention, but the obvious benefit was completely obscured until you pointed this out. Thank you!

I had problems too, even coming from a pure physics background. I strongly suggest these short notes: http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf. They contain all the applied "benefits" and explain the mysteries plainly. You'd go insane without reading something similar.
 
  • #6
HeavyMetal said:
I see why I was confused. I was thinking that the alpha matrices were 2 x 2 Pauli matrices. So what I think you're saying is that both [itex]\alpha^j[/itex] and [itex]\alpha^k[/itex] correspond to the 1, 2, and 3 Pauli gamma matrices?

Right.

HeavyMetal said:
This one really hit home for me. I just finished chapter 2 of Bernard Schutz's "A First Course in General Relativity," which is the very chapter that introduces Einstein notation! It is still very new to me; I am a chemistry major, and you must understand that we do not do much mathematical physics as undergraduates. Even in physical chemistry we use it as a tool -- a means to an end -- and are rarely allowed to bask in the beautiful depth of theory. As a matter of fact, my school doesn't offer any physics courses above physics 1 and 2! So I will take your advice and practice recognizing suppressed indices. I take it that you're referring to the omission of repeated identical contravariant and covariant indices, correct?

Sure, or as another example the alpha matrix anticommutation relation would often be written

[tex]\{\alpha^j, \alpha^k\} = 2\delta^{jk}[/tex]

where the ##a## and ##b## indices have been suppressed, leaving it up to you to remember that ##\alpha^j## and ##\alpha^k## are matrices, and that when we say that a matrix equals a number we mean that it equals that number times the identity matrix.
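
Written with every index restored (the matrix product brings in a summed dummy index, say ##n##), the same relation reads

[tex]\{\alpha^{j},\alpha^{k}\}_{ab}=\sum_{n=1}^{4}\left[(\alpha^{j})_{an}(\alpha^{k})_{nb}+(\alpha^{k})_{an}(\alpha^{j})_{nb}\right]=2\,\delta^{jk}\,\delta_{ab}[/tex]

so the compact form above just suppresses ##a## and ##b## (and the identity matrix that the ##\delta_{ab}## builds).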

Don't worry if you get tripped up at first, QFT is slow going for everyone.
 
  • #7
ddd123 said:
I had problems too, even coming from a pure physics background. I strongly suggest these short notes: http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf. They contain all the applied "benefits" and explain the mysteries plainly. You'd go insane without reading something similar.
Yep, going to have to read that tensor.pdf. I've been studying relativistic QM and the Dirac equation. I'm getting tensor overload.

Thanks for the link.
 

Related to Dirac Equation and Pauli Matrices

1. What is the Dirac equation?

The Dirac equation is a mathematical equation that describes how particles with spin of ½, such as electrons, behave in the presence of an electromagnetic field. It was developed by physicist Paul Dirac in 1928 as a relativistic version of the Schrödinger equation.

2. What are Pauli matrices?

Pauli matrices are a set of three 2x2 matrices named after physicist Wolfgang Pauli. They are used in the Dirac equation to represent the spin of a particle, and are also important in quantum mechanics for describing the behavior of spin ½ particles.

3. How are the Dirac equation and Pauli matrices related?

The Dirac equation incorporates the use of Pauli matrices to describe the spin of a particle. The matrices are used in the equation to represent the spin of the particle along different axes, allowing for a more complete description of its behavior in an electromagnetic field.

4. What is the significance of the Dirac equation and Pauli matrices in physics?

The Dirac equation and Pauli matrices have significant implications in the field of quantum mechanics and particle physics. They helped to reconcile the theory of relativity with quantum mechanics and have been integral in the development of important theories such as quantum electrodynamics.

5. How are the Dirac equation and Pauli matrices used in practical applications?

The Dirac equation and Pauli matrices have been used in various practical applications such as the prediction of the existence of antimatter, the development of nuclear energy, and in the creation of technologies such as magnetic resonance imaging (MRI) machines. They are also used in the study of particle behavior in particle accelerators.
