Commutator between Casimirs and generators for Lorentz group

  • #1
julian
The generators ##\{ L^1, L^2 , L^3 , K^1 , K^2 , K^3 \}## of the Lorentz group satisfy the Lie algebra:

\begin{array}{l}
[L^i , L^j] = \epsilon^{ij}_{\;\; k} L^k \\
[L^i , K^j] = \epsilon^{ij}_{\;\; k} K^k \\
[K^i , K^j] = \epsilon^{ij}_{\;\; k} L^k
\end{array}

It has the Casimirs

[tex]
C_1 = \sum_i (K^i K^i - L^i L^i) , \qquad C_2 = \sum_i K^i L^i
[/tex]
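As a quick numerical sanity check (a sketch, not part of the derivation below): the relations exactly as written above are reproduced by ##4 \times 4## matrices built from two commuting copies of ##su(2)##, taking ##L^j = A^j + B^j## and ##K^j = A^j - B^j##. This particular representation is an illustrative choice; any faithful matrix representation of the relations would do.

```python
import numpy as np

I2 = np.eye(2)
# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

# Two commuting su(2) copies: A^j acts on the first tensor factor, B^j on the second.
A = [np.kron(-0.5j * si, I2) for si in s]
B = [np.kron(I2, -0.5j * si) for si in s]

# L = A + B, K = A - B reproduces the commutation relations as written in the post.
L = [a + b for a, b in zip(A, B)]
K = [a - b for a, b in zip(A, B)]

def comm(X, Y):
    return X @ Y - Y @ X

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        rhsL = sum(eps[i, j, k] * L[k] for k in range(3))
        rhsK = sum(eps[i, j, k] * K[k] for k in range(3))
        assert np.allclose(comm(L[i], L[j]), rhsL)  # [L^i, L^j] = eps^{ij}_k L^k
        assert np.allclose(comm(L[i], K[j]), rhsK)  # [L^i, K^j] = eps^{ij}_k K^k
        assert np.allclose(comm(K[i], K[j]), rhsL)  # [K^i, K^j] = eps^{ij}_k L^k
print("all three relations check out")
```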

I wish to prove that the Casimirs commute with all of the generators of the Lie algebra. It is easy to prove that ##[C_1 , L^j] = 0##, ##[C_2 , L^j] = 0## and ##[C_2 , K^j] = 0##. However, I'm having more trouble proving ##[C_1 , K^j] = 0##. What I obtain, for example for ##j=1##, is:

[tex]
[C_1 , K^1] = 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+
[/tex]

where ##[\cdot , \cdot]_+## is the anti-commutator. I'm not sure how to prove directly that this vanishes. However, there may be an indirect way of proving ##[C_1 , K^1] = 0##. First note:

\begin{array}{l}
[C_1 , C_2] = \sum_i [K^i K^i - L^i L^i , C_2] \\
= \sum_i (K^i [K^i , C_2] + [K^i , C_2] K^i - L^i [L^i , C_2] - [L^i , C_2] L^i) \\
= 0
\end{array}

where we have used ##[C_2 , L^j] = 0## and ##[C_2 , K^j] = 0##. Next write

\begin{array}{l}
0 = [C_1 , C_2] \\
= \sum_i [C_1 , K^i L^i] \\
= \sum_i ([C_1 , K^i] L^i + K^i [C_1 , L^i]) \\
= \sum_i [C_1 , K^i] L^i .
\end{array}

where we have used ##[C_1 , L^j] = 0##.

Is it possible to use this to prove ##[C_1 , K^1] = 0##?

I would prefer to first prove that the Casimirs commute with all the generators and then conclude that the two Casimirs commute with each other, but if this is what I have to resort to...
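The commutators claimed to vanish here, as well as the final sum, can be spot-checked numerically in any matrix representation of the relations as written. A sketch, using an illustrative representation built from a spin-1/2 and a spin-1 copy of two commuting ##su(2)##'s, with ##L^j = A^j + B^j## and ##K^j = A^j - B^j##:

```python
import numpy as np

# Spin-1/2 and spin-1 angular momentum matrices (hermitian)
s12 = [np.array([[0, 1], [1, 0]]) / 2,
       np.array([[0, -1j], [1j, 0]]) / 2,
       np.array([[1, 0], [0, -1]]) / 2]
r2 = 1 / np.sqrt(2)
s1 = [np.array([[0, r2, 0], [r2, 0, r2], [0, r2, 0]]),
      np.array([[0, -1j * r2, 0], [1j * r2, 0, -1j * r2], [0, 1j * r2, 0]]),
      np.diag([1.0, 0.0, -1.0])]

I2, I3 = np.eye(2), np.eye(3)
# Two commuting copies (anti-hermitian): A acts on the first factor, B on the second;
# L = A + B, K = A - B then satisfies the commutation relations exactly as written.
A = [np.kron(-1j * j, I3) for j in s12]
B = [np.kron(I2, -1j * j) for j in s1]
L = [a + b for a, b in zip(A, B)]
K = [a - b for a, b in zip(A, B)]

def comm(X, Y):
    return X @ Y - Y @ X

C1 = sum(K[i] @ K[i] - L[i] @ L[i] for i in range(3))
C2 = sum(K[i] @ L[i] for i in range(3))

for j in range(3):
    assert np.allclose(comm(C1, L[j]), 0)  # [C1, L^j] = 0
    assert np.allclose(comm(C2, L[j]), 0)  # [C2, L^j] = 0
    assert np.allclose(comm(C2, K[j]), 0)  # [C2, K^j] = 0

# The conclusion of the indirect argument: the sum over i vanishes
S = sum(comm(C1, K[i]) @ L[i] for i in range(3))
assert np.allclose(S, 0)
print("all vanishing-commutator checks passed")
```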
 
  • #2
The indirect method certainly isn't wrong; you basically prove that ##C_1## commutes with the ##K^i## and with ##C_2## at the same time.
From the final expression you know that ##\sum_i[C_1,K^i]L^i=0## but the ##L^i## are linearly independent so the commutators all vanish.

I haven't looked at it in detail but maybe you can use the Jacobi identity in some way to directly prove that ##[C_1,K^j]=0##.
 
  • #3
JorisL said:
The indirect method certainly isn't wrong; you basically prove that ##C_1## commutes with the ##K^i## and with ##C_2## at the same time.
From the final expression you know that ##\sum_i[C_1,K^i]L^i=0## but the ##L^i## are linearly independent so the commutators all vanish.

I haven't looked at it in detail but maybe you can use the Jacobi identity in some way to directly prove that ##[C_1,K^j]=0##.

Thanks JorisL. I was hoping to make some kind of linear-independence argument. It would be easy if the ``coefficients'' in front of the ##L^i##'s were numbers: then we would have an equation like

[itex]
\alpha_1 L^1 + \alpha_2 L^2 + \alpha_3 L^3 = 0
[/itex]

then I could do

\begin{array}{l}
0 = \alpha_1 [L^1 , L^2] + \alpha_2 [L^2 , L^2] + \alpha_3 [L^3 , L^2] \\
= \alpha_1 L^3 - \alpha_3 L^1
\end{array}

and then do

\begin{array}{l}
0 = \alpha_1 [L^3 , L^1] - \alpha_3 [L^1 , L^1] \\
= \alpha_1 L^2
\end{array}

implying ##\alpha_1 = 0##.

However, an issue with the sum ##\sum_i[C_1,K^i]L^i=0## is that the ``coefficients'' aren't numbers but combinations of generators, so I'm not sure how to proceed in the above way.
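For the scalar-coefficient case, linear independence can also be checked directly in a matrix representation by flattening the generators into vectors and computing a rank. A minimal sketch, using the illustrative choice ##L^j = -i\sigma_j/2## (which satisfies ##[L^i, L^j] = \epsilon^{ij}_{\;\; k} L^k##):

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
L = [-0.5j * si for si in s]  # satisfies [L^i, L^j] = eps^{ij}_k L^k

# Stack each matrix as a row vector; full rank <=> linear independence,
# so sum_i alpha_i L^i = 0 with *scalar* alpha_i forces every alpha_i = 0.
M = np.array([Li.ravel() for Li in L])
assert np.linalg.matrix_rank(M) == 3
```

This is exactly the step that fails when the coefficients are themselves operators: the rank argument only applies to scalar coefficients.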

I have also been trying to use the Jacobi identities to prove the result (i.e. ##[C_1,K^1]=0##) directly. I think the relevant identities to consider would be

\begin{array}{l}
[[L^i,L^j],K^k] + [[L^j,K^k],L^i] + [[K^k,L^i],L^j] = 0 \\
[[K^i,K^j],K^k] + [[K^j,K^k],K^i] + [[K^k,K^i],K^j] = 0
\end{array}

But the only interesting result I seem to be getting from this is the identity that follows from setting ##k=2, i=1, j=2## in the first of these Jacobi identities; it gives

[itex]
L^3 K^2 - K^2 L^3 - K^3 L^2 + L^2 K^3 = 0
[/itex]

(Actually this identity follows easily from ##[K^2 , L^3] = - \epsilon^{32}_{\;\; 1} K^1 = \epsilon^{23}_{\;\; 1} K^1 = - [K^3 , L^2]##.) Using this identity allows the simplification:

\begin{array}{l}
[C_1 , K^1] = 2 [L^2 , K^3]_+ - 2 [L^3 , K^2]_+ \\
= 4 (K^3 L^2 - L^3 K^2)
\end{array}

but I'm not sure how to proceed from here.
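Both the Jacobi-derived identity and this simplification are consequences of the stated commutation relations alone, so they can be spot-checked numerically in any matrix representation of those relations. A sketch, using as an illustrative choice two commuting copies of ##su(2)## with ##L^j = A^j + B^j##, ##K^j = A^j - B^j## (which reproduces the relations as written; indices 1,2,3 become 0,1,2 in the code):

```python
import numpy as np

I2 = np.eye(2)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

# Two commuting su(2) copies; L = A + B, K = A - B satisfy the relations as written.
A = [np.kron(-0.5j * si, I2) for si in s]
B = [np.kron(I2, -0.5j * si) for si in s]
L = [a + b for a, b in zip(A, B)]
K = [a - b for a, b in zip(A, B)]

def comm(X, Y):
    return X @ Y - Y @ X

def acomm(X, Y):
    return X @ Y + Y @ X

C1 = sum(K[i] @ K[i] - L[i] @ L[i] for i in range(3))

# The identity obtained from the Jacobi identity:
# L^3 K^2 - K^2 L^3 - K^3 L^2 + L^2 K^3 = 0
ident = L[2] @ K[1] - K[1] @ L[2] - K[2] @ L[1] + L[1] @ K[2]
assert np.allclose(ident, 0)

# [C1, K^1] = 2 [L^2, K^3]_+ - 2 [L^3, K^2]_+ = 4 (K^3 L^2 - L^3 K^2)
lhs = comm(C1, K[0])
assert np.allclose(lhs, 2 * acomm(L[1], K[2]) - 2 * acomm(L[2], K[1]))
assert np.allclose(lhs, 4 * (K[2] @ L[1] - L[2] @ K[1]))
print("identity and simplification confirmed")
```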
 
  • #4
In this case I would think of an algebra as a vector space (addition) equipped with a multiplication.
If I'm not mistaken you could think of ##\sum_i[C_1,K^i]L^i## as an "algebra-valued vector", that is, the components of the vector are elements of the algebra itself.
If that's correct the conclusion follows immediately.

Hailing @Samy_A, @lavinia and @micromass to ensure I'm not giving any wrong information. (I hope not because I used this kind of argument before)
 
  • #5
JorisL said:
In this case I would think of an algebra as a vector space (addition) equipped with a multiplication.
If I'm not mistaken you could think of ##\sum_i[C_1,K^i]L^i## as an "algebra-valued vector", that is, the components of the vector are elements of the algebra itself.
If that's correct the conclusion follows immediately.

Hailing @Samy_A, @lavinia and @micromass to ensure I'm not giving any wrong information. (I hope not because I used this kind of argument before)
In general, that doesn't seem to work.
Let's take the algebra of 2×2 real matrices.
The matrices ##A=\begin{pmatrix}
1 & 0 \\
0 & 1 \end{pmatrix}
## and ##B=\begin{pmatrix}
0& 1 \\
1& 0 \end{pmatrix}
## are linearly independent, yet it is easy to find non-zero matrices ##C, D##, such that ##CA+DB=\begin{pmatrix}
0 & 0 \\
0 & 0 \end{pmatrix}##.
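The counterexample is easy to verify numerically. A minimal sketch, choosing ##C = B## and ##D = -A## (one concrete choice of non-zero matrix "coefficients"):

```python
import numpy as np

A = np.eye(2)                        # identity matrix
B = np.array([[0., 1.], [1., 0.]])   # linearly independent of A

# Non-zero matrix "coefficients" C, D with C A + D B = 0:
C, D = B, -np.eye(2)
assert np.allclose(C @ A + D @ B, 0)
assert np.any(C != 0) and np.any(D != 0)
print("CA + DB = 0 with nonzero C, D")
```

So linear independence of ##A, B## does not force operator-valued coefficients to vanish, exactly as stated.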
 
  • #6
By the way, these are the Casimirs for an infinite-dimensional unitary representation of the Lorentz group (in particular the unitary irreducible representations of the principal series, if that means anything to anyone). Unitarity here implies that the generators are anti-hermitian:

[tex]
(L^i)^\dagger = - L^i , \qquad (K^i)^\dagger = - K^i .
[/tex]

This doesn't seem to help as it implies ## [C_1 , K^1]^\dagger = [C_1 , K^1] ## and ## \big( 2 [L^2,K^3]_+ - 2 [L^3 , K^2]_+ \big)^\dagger = 2 [L^2,K^3]_+ - 2 [L^3 , K^2]_+ ##.

Could it have to do with the particular matrix representation itself? I'm guessing the representation itself could be constructed starting from the fact that the Casimirs commute with each other, commute with all the ##L^i##'s, and that ##\sum_i L^i L^i## and ##L^3## commute with each other - and using that commuting operators have simultaneous eigenstates.

Let me explain how I understand Lie algebra matrix representations.

A representation of a Lie algebra is defined as a mapping of the algebra to linear operators on a vector space, i.e. operators (matrices) ##\hat{D} (\hat{L}_i)## are assigned to the elements ##\hat{L}_i## of the Lie algebra (generators of the Lie group).

These operators have to satisfy linearity,

[tex]
\hat{D} (\alpha \hat{L}_i + \beta \hat{L}_j) = \alpha \hat{D} (\hat{L}_i) + \beta \hat{D} (\hat{L}_j) \qquad Eq.1
[/tex]

and must be homomorphic to the Lie algebra

[tex]
\hat{D} ([\hat{L}_i , \hat{L}_j]) = [\hat{D} (\hat{L}_i) , \hat{D} (\hat{L}_j)] \qquad Eq.2
[/tex]

In general, a representation in a vector space with basis ##\{ |\phi_k> \}## is obtained by assigning to every operator ##\hat{L}_i##, by means of

[tex]
\hat{L}_i |\phi_n> = \sum_m |\phi_m> D (\hat{L}_i)_{mn}
[/tex]

a matrix. From this it follows that

\begin{array}{l}
\hat{L}_i \hat{L}_j |\phi_n> = \hat{L}_i \sum_m |\phi_m> D (\hat{L}_j)_{mn} \\
= \sum_m \big( \hat{L}_i |\phi_m> \big) D (\hat{L}_j)_{mn} \\
= \sum_m \big( \sum_p |\phi_p> D (\hat{L}_i)_{pm} \big) D (\hat{L}_j)_{mn} \\
= \sum_p |\phi_p> \Big( \sum_m D (\hat{L}_i)_{pm} D (\hat{L}_j)_{mn} \Big) \\
= \sum_p |\phi_p> D (\hat{L}_i \hat{L}_j)_{pn}
\end{array}

From which it follows that

[tex]
\sum_m D (\hat{L}_i)_{pm} D (\hat{L}_j)_{mn} = D (\hat{L}_i \hat{L}_j)_{pn} \qquad Eq.3
[/tex]

Hence, the matrix obtained by simple matrix multiplication of ##D (\hat{L}_i)## and ##D (\hat{L}_j)## is equal to the matrix ##D (\hat{L}_i \hat{L}_j)##, assigned to the operator ##\hat{L}_i \hat{L}_j##. If the basis is orthonormalised then the matrix representation is given directly by the scalar product

\begin{array}{l}
<\phi_m| \hat{L}_i |\phi_n> = \sum_p <\phi_m | \phi_p> D (\hat{L}_i)_{pn} \\
= D (\hat{L}_i)_{mn} .
\end{array}

Eq 1. and Eq 2. are automatically satisfied:

\begin{array}{l}
D (\alpha \hat{L}_i + \beta \hat{L}_j)_{mn} = <\phi_m | \alpha \hat{L}_i + \beta \hat{L}_j |\phi_n> \\
= \alpha <\phi_m | \hat{L}_i |\phi_n> + \beta <\phi_m| \hat{L}_j |\phi_n> \\
= \alpha D (\hat{L}_i)_{mn} + \beta D (\hat{L}_j)_{mn}
\end{array}

so Eq. 1 is satisfied. Using linearity we obtain:

\begin{array}{l}
D ([\hat{L}_i , \hat{L}_j]) = D (\hat{L}_i \hat{L}_j - \hat{L}_j \hat{L}_i) \\
= D (\hat{L}_i \hat{L}_j) - D (\hat{L}_j \hat{L}_i) \\
\end{array}

and using Eq 3,

\begin{array}{l}
D ([\hat{L}_i , \hat{L}_j]) = D (\hat{L}_i) D (\hat{L}_j) - D (\hat{L}_j) D (\hat{L}_i) \\
= [ D(\hat{L}_i) , D(\hat{L}_j)] .
\end{array}

Thereby obtaining a representation.
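This construction can be illustrated numerically: take operators already given as matrices, pick an orthonormal basis ##\{ |\phi_n> \}##, read off ##D(\hat{L}_i)_{mn} = <\phi_m| \hat{L}_i |\phi_n>##, and check Eq. 2 and Eq. 3. A sketch, using the ##su(2)## operators ##\hat{L}_j = -i\sigma_j/2## and a random orthonormal basis as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Lops = [-0.5j * si for si in s]  # anti-hermitian su(2) operators

# Random orthonormal basis {|phi_n>}: the columns of a unitary Q (via QR)
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

def D(X):
    # D(X)_{mn} = <phi_m| X |phi_n>
    return Q.conj().T @ X @ Q

for i in range(3):
    for j in range(3):
        # Eq. 3: matrix of a product = product of the matrices (orthonormal basis)
        assert np.allclose(D(Lops[i] @ Lops[j]), D(Lops[i]) @ D(Lops[j]))
        # Eq. 2: D([L^i, L^j]) = [D(L^i), D(L^j)]
        lhs = D(Lops[i] @ Lops[j] - Lops[j] @ Lops[i])
        rhs = D(Lops[i]) @ D(Lops[j]) - D(Lops[j]) @ D(Lops[i])
        assert np.allclose(lhs, rhs)
print("Eq. 2 and Eq. 3 hold in this basis")
```

The key point the code exercises is that ##Q Q^\dagger = 1## for an orthonormal basis, which is exactly what makes Eq. 3, and hence Eq. 2, work.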

Any thoughts on this?
 

