Matrix Lie groups and their Lie algebras

  • #1
Xico Sim
Hi!

I'm studying Lie algebras and Lie groups. I'm using Brian Hall's book, which focuses on matrix Lie groups for a start, and I'm loving it. However, I'm really having a hard time connecting what he does with what physicists do (which I never really understood)... Here goes one of my questions, very basic but very important. It is about how one finds a Lie algebra given its Lie group. I believe I understood the method presented in the book, but I am unable to relate it precisely to what I see done in physics textbooks.

Thanks!
 

Attachments

  • Infinitesimal.pdf
    80.9 KB
  • #2
They simply cut the Taylor expansion of ##\exp## after the linear term and write the error as ##O(\varepsilon^2)=O(t^2)## for small ##t=\varepsilon## around ##t=0##. In the case of ##SL(2)## this is especially convenient, for
$$\exp{\left(t\cdot \begin{bmatrix}0&0\\1&0\end{bmatrix}\right)} = \begin{bmatrix}1&0\\t&1\end{bmatrix}=1+t\cdot X$$
and similar for the other two basis vectors ##\begin{bmatrix}0&1\\0&0\end{bmatrix}\, , \,\begin{bmatrix}1&0\\0&-1\end{bmatrix}## of ##\mathfrak{sl}(2)##.

In general one has to consider paths within the group through the identity element and calculate their tangent vectors at ##t=0##.
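For concreteness, here is a minimal numerical sketch (Python with NumPy/SciPy; an illustration added here, not from the original post) of both points: the exponential series terminates at the linear term for the nilpotent basis matrix, and ##X## is recovered as the tangent vector of the path ##t\mapsto\exp(tX)## at ##t=0##.

Code:
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 0.0],
              [1.0, 0.0]])   # nilpotent basis element of sl(2): X^2 = 0

t = 0.7
# For nilpotent X the exponential series terminates: exp(tX) = I + tX exactly
assert np.allclose(expm(t * X), np.eye(2) + t * X)

# Tangent vector of the path t -> exp(tX) at t = 0, via a central difference
h = 1e-6
tangent = (expm(h * X) - expm(-h * X)) / (2 * h)
print(np.round(tangent, 6))   # recovers X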
 
  • #3
fresh_42 said:
They simply cut the Taylor expansion of ##\exp## after the linear term and write the error as ##O(\varepsilon^2)=O(t^2)## for small ##t=\varepsilon## around ##t=0##.
I understand that. What I want to know is why they do it. Also, why do the characteristics one finds for ##X## in such an approximated formula for ##U## (e.g. one finds that ##X## is Hermitian in the ##\mathfrak{su}(2)## case) transfer to all elements of ##\mathfrak{g}##? Finally, I think I've seen people refer to such an ##X## (an ##X## such that ##U \approx I+i\epsilon X##) as a generator (i.e. a basis element) of ##\mathfrak{g}##. Why is that true?

Thanks for your patience.
 
  • #4
Xico Sim said:
I understand that. What I want to know is why they do it.
I'm not sure whether I have the ultimate answer to this, or whether there is a specific one at all.
The easy answer would be: sloppiness, and the fact that a linear approximation is good enough for calculations. One could also say that locally there is no difference, and differentials (or tangent vectors) are simply a linear approximation, so let's write them linearly. It also makes sense formally, as ##U^{-1}## becomes ##-X## and conjugation as well as transposition are linear and can be pulled into the sum.
Also, why do the characteristics one finds for ##X## in such an approximated formula for ##U## (e.g. one finds that ##X## is Hermitian in the ##\mathfrak{su}(2)## case) transfer to all elements of ##\mathfrak{g}##?
I don't think that this approximation is used to translate ##\text{unitary}## into ##\text{skew-Hermitian}##. If at all, then it is
$$-X=-\exp(U)=\exp(U^{-1})=\exp(\overline{U}^\tau)=\overline{\exp(U)}^\tau=\overline{X}^\tau$$
Finally, I think I've seen people refer to such an ##X## (an ##X## such that ##U \approx I+i\epsilon X##) as a generator (i.e. a basis element) of ##\mathfrak{g}##. Why is that true?
The usage of the word "generator" by physicists has to be taken with a great deal of generosity. Mathematically, the word generator makes no sense in the context of vector spaces; it's reserved for group elements in a presentation of the group. It's highly confusing, and by no means does it automatically refer to the basis vectors. Locally they span the tangent space ##\mathfrak{g}## and can thus be called infinitesimal generators. But sometimes something different is meant: the basis transformations of a representation of ##\mathfrak{g}## or even of ##G##, which isn't much better either. The representations are usually the goal, and they are why tangent spaces are considered here, e.g. for the ladder operators. (O.k., it's only half the truth, but I think the linear approximation for calculations is the other half.) At the latest when the Baker–Campbell–Hausdorff formula enters, even physicists take more care.
 
  • #5
fresh_42 said:
It also makes sense formally, as ##U^{-1}## becomes ##-X## and conjugation as well as transposition are linear and can be pulled into the sum.

I don't see how... maybe I'm missing something trivial? Suppose ##U^{-1} = -X##. Then,
$$U^{-1}U \approx -X(I + i \epsilon X) = -X - i \epsilon X^2 \ne I$$
Which is absurd.

fresh_42 said:
I don't think that this approximation is used to translate ##\text{unitary}## into ##\text{skew-Hermitian}##. If at all, then it is
$$-X=-\exp(U)=\exp(U^{-1})=\exp(\overline{U}^\tau)=\overline{\exp(U)}^\tau=\overline{X}^\tau$$

Two questions about this part:
  1. You wrote ##X=\exp(U)##. But ##X \in \mathfrak{g}## and ##U \in G##, so I would expect it to be ##U=\exp(X)##, and even this is questionable, since in general we only know that ##U=e^{X_{1}}\cdots e^{X_{m}}## for some ##X_i## (and also assuming ##G## connected).
  2. I would say that in fact ##X=X^{\dagger}##, because of the following: let ##X## be an element of ##\mathfrak{su}(2)##. We have ##X\in\mathfrak{su}(2) \iff \forall t\in\mathbb{R},\; e^{itX}\in SU(2)##. In particular, ##(e^{itX})^{\dagger}e^{itX}=\mathbb{I}_{2} \Rightarrow e^{-itX^{\dagger}}e^{itX}=\mathbb{I}_{2} \Rightarrow X^{\dagger}=X##.
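A quick numerical check of this computation (a sketch in Python, added for illustration, assuming the physicists' convention ##U=e^{itX}## with Hermitian, traceless ##X##):

Code:
import numpy as np
from scipy.linalg import expm

# A Hermitian, traceless 2x2 matrix (a real combination of Pauli matrices)
X = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -1.0]])
assert np.allclose(X, X.conj().T) and abs(np.trace(X)) < 1e-12

for t in (0.3, 1.0, 2.5):
    U = expm(1j * t * X)
    assert np.allclose(U.conj().T @ U, np.eye(2))    # unitary
    assert np.isclose(np.linalg.det(U), 1.0)         # det 1, since tr X = 0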
 
  • #6
Xico Sim said:
I don't see how... maybe I'm missing something trivial? Suppose ##U^{-1} = -X##. Then,
$$U^{-1}U \approx -X(I + i \epsilon X) = -X - i \epsilon X^2 \ne I$$
Which is absurd.
Right. I assume physicists think a lot more in terms of infinitesimals, which they identify with tangents, or better with small changes in the direction of the tangent. And the power ##-1## on the function becomes a minus sign in the differential, an infinitesimal generator of change. In the same sense the determinant (##\cdot##) becomes the trace (##+##) and ##1## becomes ##0##. Analytically there is still the error margin ##O(t^2)##. This only vanishes in special cases like the ones in my first post, where ##X^2=0##, or when everything is on the diagonal, where we get the ordinary exponential function. And although locally, i.e. in a small neighborhood of the identity matrix, unitary matrices behave like their tangent matrices, one cannot identify them, only approximate them, which is often sufficient since the approximation shows the main rate of change. You could as well ask why we bother with Taylor series at all, if we could simply take ##1+f\,'(0)\,t## as a linear approximation. Sometimes it is good enough, but you won't try to prove ##f(t)=1+f\,'(0)\,t##.
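To see the ##O(t^2)## margin concretely, here is a small sketch (my own addition, in Python) comparing ##e^{tX}## with ##I+tX## for a non-nilpotent ##X##; the gap shrinks quadratically in ##t##, whereas for the nilpotent matrix from post #2 it vanishes exactly.

Code:
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # X^2 = I, so the exponential series does not terminate

for t in (0.1, 0.01, 0.001):
    err = np.linalg.norm(expm(t * X) - (np.eye(2) + t * X))
    print(t, err)            # error is roughly (t^2/2)*||X^2||: quadratic in t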
You wrote ##X=\exp(U)##. But ##X \in \mathfrak{g}## and ##U \in G##, so I would expect it to be ##U=\exp(X)##, and even this is questionable, since in general we only know that ##U=e^{X_{1}}\cdots e^{X_{m}}## for some ##X_i## (and also assuming ##G## connected).
Yes, that was a mistake. It should have been ##U^{-1}=(\exp(X))^{-1}=\exp(-X)=\overline{\exp(X)}^\tau=\overline{U}^\tau=U^\dagger## (using ##-X=\overline{X}^\tau## for skew-Hermitian ##X##).
I would say that in fact ##X=X^{\dagger}##, because of the following: let ##X## be an element of ##\mathfrak{su}(2)##. We have ##X\in\mathfrak{su}(2) \iff \forall t\in\mathbb{R},\; e^{itX}\in SU(2)##. In particular, ##(e^{itX})^{\dagger}e^{itX}=\mathbb{I}_{2} \Rightarrow e^{-itX^{\dagger}}e^{itX}=\mathbb{I}_{2} \Rightarrow X^{\dagger}=X##.
Nope. ##\mathfrak{su}(n)## are skew-Hermitian matrices (see above). I don't see your ##U:=e^{itX}## in ##SU(n)## because ##U^{-1}=e^{-itX}=\overline{e^{itX}}=\overline{U} \neq \overline{U}^\tau=U^\dagger##.

The Lie algebra of SU(n), denoted by su(n), can be identified with the set of traceless antihermitian n×n complex matrices, with the regular commutator as Lie bracket. Particle physicists often use a different, equivalent representation: the set of traceless hermitian n×n complex matrices with Lie bracket given by −i times the commutator.
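Both conventions are easy to test side by side; the following sketch (an added illustration, not from the thread) checks that for traceless Hermitian ##A,B## the commutator ##[A,B]## is anti-Hermitian, while ##-i[A,B]## is Hermitian again, so the physicists' bracket closes.

Code:
import numpy as np

rng = np.random.default_rng(0)

def traceless_hermitian(n):
    # random Hermitian n x n matrix, projected to trace zero
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (M + M.conj().T) / 2
    return H - (np.trace(H) / n) * np.eye(n)

A, B = traceless_hermitian(3), traceless_hermitian(3)
comm = A @ B - B @ A

# [A,B] of Hermitian matrices is anti-Hermitian ...
assert np.allclose(comm.conj().T, -comm)
# ... but -i[A,B] is Hermitian again: the physicists' bracket closes
assert np.allclose((-1j * comm).conj().T, -1j * comm)
# and iA, iB are the corresponding anti-Hermitian (mathematicians') elements
assert np.allclose((1j * A).conj().T, -(1j * A))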
 
  • #7
fresh_42 said:
Yes, that was a mistake. It should have been ##U^{-1}=(\exp(X))^{-1}=\exp(-X)=\overline{\exp(X)}^\tau=\overline{U}^\tau=U^\dagger## (using ##-X=\overline{X}^\tau## for skew-Hermitian ##X##).

Sorry, but I still have the same problem I had before:
Xico Sim said:
  ...to be ##U=\exp(X)##, and even this is questionable, since in general we only know that ##U=e^{X_{1}}\cdots e^{X_{m}}## for some ##X_i## (and also assuming ##G## connected).
fresh_42 said:
Nope. ##\mathfrak{su}(n)## are skew-Hermitian matrices (see above). I don't see your ##U:=e^{itX}## in ##SU(n)## because ##U^{-1}=e^{-itX}=\overline{e^{itX}}=\overline{U} \neq \overline{U}^\tau=U^\dagger##.

Hm, I think your second equal sign is unjustified. Let me try to compute that:
##U^{-1}=e^{-itX}=e^{-itX^{\dagger}}=e^{(itX)^{\dagger}}=(e^{itX})^{\dagger}=U^\dagger##
As expected. (I used the assumption that ##X## is Hermitian in my second equal sign.)

fresh_42 said:
Particle physicists often use a different, equivalent representation: the set of traceless hermitian n×n complex matrices with Lie bracket given by −i times the commutator.
Exactly: this is in accordance with what I'm saying: "...the set of traceless hermitian n×n complex matrices..." (using the physicists' convention).
 
  • #8
Xico Sim said:
Sorry, but I still have the same problem I had before:
Why? You don't need this representation as a product. Given any skew-Hermitian matrix, send it via the (locally diffeomorphic) map ##\exp## into its group and get a unitary matrix as a result; for a compact connected group such as ##SU(n)##, ##\exp## is even surjective. The other way around, take a path through ##\mathbb{1} \in G## and differentiate. We may talk about transporting the structure (algebra ##\rightarrow## group), but then we have to talk about the Baker–Campbell–Hausdorff formula.
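The forward direction is quick to verify numerically (a sketch added for illustration, using a random skew-Hermitian ##X##):

Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
X = (M - M.conj().T) / 2        # skew-Hermitian: X^dagger = -X

U = expm(X)
assert np.allclose(U.conj().T @ U, np.eye(3))   # exp(X) is unitary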
Hm, I think your second equal sign is unjustified. Let me try to compute that:
##U^{-1}=e^{-itX}=e^{-itX^{\dagger}}=e^{(itX)^{\dagger}}=(e^{itX})^{\dagger}=U^\dagger##
As expected. (I used the assumption that ##X## is Hermitian in my second equal sign.)
Only if ##X^\dagger =X.##
Exactly: this is in accordance with what I'm saying: "...the set of traceless hermitian n×n complex matrices..." (using the physicists' convention).
But in this case it should be stated, as it is mathematically completely unusual. In mathematics, ##[A,B]=AB-BA## when ##A,B## are linear transformations. For Hermitian ##A,B## this results in ##[A,B]^\dagger = (AB-BA)^\dagger = (AB)^\dagger- (BA)^\dagger=B^\dagger A^\dagger - A^\dagger B^\dagger = BA-AB = -[A,B]##, so if the brackets were again Hermitian, every bracket would satisfy ##[A,B]=-[A,B]##, i.e. the Lie algebra would be Abelian, which is (in your words) absurd unless ##\operatorname{char}\mathbb{F} =2##.
So if you change this convention, someone should say so.
 

Related to Matrix Lie groups and their Lie algebras

1. What is a Matrix Lie group?

A Matrix Lie group is a type of mathematical group made up of invertible square matrices that satisfy certain properties: closure under multiplication, associativity, an identity element, and inverse elements. Matrix Lie groups are used in many areas of mathematics, including geometry, physics, and computer graphics.

2. What is the Lie Algebra of a Matrix Lie group?

The Lie Algebra of a Matrix Lie group is the set of all tangent vectors at the identity element of the group. It is a vector space equipped with an additional operation called the Lie bracket. The Lie Algebra describes the local behavior of the Matrix Lie group and is useful in understanding the group's structure and properties.

3. How are Matrix Lie groups and Lie Algebras related?

Matrix Lie groups and Lie Algebras are closely related, as the Lie Algebra is a way to study the local behavior of the Matrix Lie group. The Lie Algebra is a linear approximation of the Matrix Lie group and provides a way to understand the group's structure and properties without having to work with the entire group. In some cases, the Lie Algebra can be used to reconstruct the Matrix Lie group.

4. What are some applications of Matrix Lie groups and Lie Algebras?

Matrix Lie groups and Lie Algebras have many applications in mathematics and other fields. They are commonly used in physics to study symmetries and conservation laws in physical systems. In computer graphics, Matrix Lie groups are used to represent and manipulate the movement of 3D objects. They are also used in signal processing, control theory, and robotics.

5. Are there any real-world examples of Matrix Lie groups and Lie Algebras?

Yes, there are many real-world examples of Matrix Lie groups and Lie Algebras. One example is the special orthogonal group in 3D space, which represents all possible rotations of a 3D object. Another example is the general linear group, which represents all invertible linear transformations. These groups and their Lie Algebras have various applications in fields such as robotics, computer graphics, and physics.
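As a concrete instance of the rotation example (a sketch added for illustration): the rotations about the z-axis form a path in SO(3) whose tangent at the identity is a skew-symmetric matrix.

Code:
import numpy as np
from scipy.linalg import expm

# Skew-symmetric generator of rotations about the z-axis
K = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

theta = 0.5
R = expm(theta * K)             # rotation by theta about the z-axis
assert np.allclose(R.T @ R, np.eye(3))          # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)        # determinant 1: in SO(3)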
