Why they call them Lie groups.

In summary: there is no meaningful sense in which [Jx, Jy] is related to Jz; it's just a trick of the i.
  • #1
AdrianMay
Why do people try against all odds to make SU(2) isometric with SO(3) when it's clear from the definition that it's actually isometric with SO(4)? Either way you've got 4 variables and the same constraint between them.

It's interesting to see all the dodgy tricks that go into this deception. First, the definition of Lie groups has got that i sneaked inside the exponential for no apparent reason. For SO(2) or SO(3), the generators are defined as what you'd expect them to be, only divided by i to get rid of it again, and the system is just the same as if the i had never been there.

Almost the same. The difference is that you can say that the commutator [Jx,Jy] = iJz. This is also a deception. You can twiddle a globe around in your fingers all day long (or use Google Earth if you don't trust your fingers) and you'll never find a meaningful sense in which [Jx, Jy] is related to Jz. It's just a trick of the i, but we never needed to enter the complex domain in order to describe rotations.

Nevertheless, this fictitious commutation relation is then used as the definition of ... of what? Of whatever we're trying to shoehorn SU(2) into. Even this is not enough though. We also have to rob SU(2) of one of its generators. It's the one that, due to the i obfuscation trick, looks like the identity. Without that trick, it's got i on its diagonal, which is not the identity at all. Some authors dismiss it as just an equal phase shift of all the wavefunctions, which is supposedly not significant.

Like hell it's not significant. If you phase shift a square well solution you break the boundary conditions. The scalar and vector potentials act only on this phase, so declaring that missing generator insignificant is tantamount to denying the existence of electromagnetism.

It doesn't stop there either. SU(2), like SO(4), actually has 6 generators (because you can rotate about the planes wx, wy, wz, xy, xz and yz, which is a lot simpler than the Pauli matrices), so we've robbed it of an entire 50% of its generators.

Even after dragging this poor group into a space it doesn't need and stealing half its generators, rather than fitting the square hole we prepared for it, it expresses its derision by refusing to return home after 360 degrees. Instead of recognising this as the reductio ad absurdum we deserved, we invent a name for it (double cover) and still insist that our fictitious commutation relation is closer to the essence of rotation than the fact that there are 360 degrees in a circle.

The only guy who found a use for all this spaghetti is Dirac, but even then it wasn't enough. He had to pop out into four dimensions, which brings me full circle.

Adrian.
 
  • #2
Isometric? Do you mean isomorphic? No one says they are. SO(3) is isomorphic to SU(2)/Z2. Also, SU(2) is homeomorphic to S3 (a 3-sphere), which is a 3-dimensional manifold. That ensures that there are 3 generators, not 2 or 6.
 
  • #3
I mean "related in any sense whatsoever".
As for the relationship with SO(4) I mean "exactly the same".
 
  • #4
Obviously not true. SU(2) is not isomorphic to SO(4). SU(2)xSU(2) is a double cover of SO(4).
 
  • #5
Actually I figured out the catch: if I compare (w,x,y,z) with (Re(x), Im(x), Re(y), Im(y)) then I'm stuck for an equivalent of the generator that rotates between e.g. w and y. But I think my other points still stand. There are 4 generators in SU(2). It's not helpful to obfuscate the true transformations with the i in the Lie group definition, nor is it justifiable to just discard the transformation that advances the phase of each component independently. You could just as easily write the generators as:

0 i
i 0

0 -1
1 0

i 0
0 0

0 0
0 i

where I've de-obfuscated the i. Wanting [Jx,Jy]= <maybe i>Jz is just silly in a system with 4 generators. The fact that it doesn't work because it wants 720 degrees to get home again is a symptom of the fact that we hid this other generator. The -1 in the result of turning 360 degrees, is just 180 degrees worth of the hidden generator.
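
Just to check I'm not fooling myself, here's a quick numerical sketch (Python/numpy, nothing more; the four matrices are exactly the ones written above) of what the commutators of these de-obfuscated generators give:

[code]
# Quick check of the commutators of the four matrices above (sketch).
import numpy as np

A = np.array([[0, 1j], [1j, 0]])   # first matrix above
B = np.array([[0, -1], [1, 0]])    # second matrix above
C = np.array([[1j, 0], [0, 0]])    # third matrix above
D = np.array([[0, 0], [0, 1j]])    # fourth matrix above

def comm(X, Y):
    return X @ Y - Y @ X           # matrix commutator

print(np.allclose(comm(A, B), 2 * (C - D)))   # True: [A, B] = 2(C - D)
print(np.allclose(comm(A, C), -B))            # True: [A, C] = -B
[/code]

So the commutator of the first two lands back in the span of the four.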
 
  • #6
It seems to me that:

1+i.dt 0
0 1+i.dt

for small dt is a perfectly good rotation in SU(2). It preserves lengths, distances between spinors, and everything. Can it be reached using the three standard generators? If not, I think it has to be conceded that the group has 4. Maybe it can, but I can't see how, even by rotating through 361 degrees, or is it 721?
 
  • #7
The matrix you have given is neither special nor unitary, so it certainly is not a representation of an element of SU(2).
 
  • #8
AdrianMay said:
You can twiddle a globe around in your fingers all day long (or use google Earth if you don't trust your fingers) and you'll never find a meaningful sense in which [Jx, Jy] is related to Jz.

I think this is not true; correct me if I go wrong in the following. I'll ignore all the i's because I have no idea where they're supposed to go. Say Jx, Jy, Jz are the generators of rotations around the x, y, and z axes, by which I mean we can write an infinitesimal rotation around say the x-axis by a small angle [tex]\alpha[/tex] as (to second order)

[tex]I + \alpha J_x + \frac{1}{2} \alpha^2 J_x^2 + ...[/tex]

and a tiny rotation around the y-axis by angle [tex]\beta[/tex] by

[tex]I + \beta J_y + \frac{1}{2} \beta^2 J_y^2 + ...[/tex]

where I is the identity. Suppose you do the x rotation, then the y rotation, then the inverse of the x rotation, then the inverse of the y rotation. To second order the result is the rotation

[tex](I - \beta J_y + \frac{1}{2} \beta^2 J_y^2) (I - \alpha J_x + \frac{1}{2} \alpha^2 J_x^2) (I + \beta J_y + \frac{1}{2} \beta^2 J_y^2) (I + \alpha J_x + \frac{1}{2} \alpha^2 J_x^2)[/tex]

[tex]= I - \alpha \beta (J_x J_y - J_y J_x) = I - \alpha \beta [J_x, J_y][/tex]

keeping only terms up to second order in the small angles alpha and beta. So fiddling with a globe you can find a meaning for the commutator of Jx and Jy: it's the generator of the rotation you get when you do this composition of four infinitesimal x and y rotations. And if you play around with rotation matrices you can see that this composition of four rotations should produce a rotation by angle [tex]\alpha \beta[/tex] around the z axis. So, up to signs and factors of i, [Jx, Jy] = Jz, and the physical meaning of this is contained above.

I have some sense that the i's start appearing when you realize that you want the generator of rotations to /also/ be the angular momentum operator, which you want to have real eigenvalues, hence be hermitian. If you don't include the i's I think Jz turns out to be antihermitian.
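
If it helps, the same argument can be checked numerically with ordinary 3x3 rotation matrices. This is just a sketch in Python/numpy; the overall sign of the resulting z rotation depends on the orientation conventions used.

[code]
# Numerical check of the composition of four small rotations (sketch).
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0        ],
                     [-np.sin(b), 0, np.cos(b)]])

def Rz(c):
    return np.array([[np.cos(c), -np.sin(c), 0],
                     [np.sin(c),  np.cos(c), 0],
                     [0,          0,         1]])

a, b = 1e-3, 1e-3
# x rotation, then y, then inverse x, then inverse y (leftmost factor acts last):
composite = Ry(-b) @ Rx(-a) @ Ry(b) @ Rx(a)
# It agrees with a z rotation by a*b (with a minus sign in these conventions)
# up to third-order terms in the small angles:
print(np.abs(composite - Rz(-a * b)).max())   # ~1e-9, i.e. third-order small
[/code]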
 
  • #9
Indeed, when I tried that again in Google Earth I found that if you go left, up, right then down you end up where you started but tilted about the axis your nose is on. I got that wrong in my head cos I was turning 90 degrees all the time, which is not very infinitesimal.

I can also see what ccesare means about that matrix not being unitary: the determinant is
1-dt. But that's funny because for small dt it isn't going to change the length of either vector, not even by dt, because the dt is perpendicular to what it's being added to. That's the same approximation that justifies using

0 -1
1 0

as the generator of SO(2). And I seem to remember that the definition of the group is that the sum of the squares of the magnitudes of the components of the spinor stays at 1. That seems like a contradiction I can't see how to resolve.

As for making things hermitian by sneaking the i offstage, isn't that a bit dodgy? Hermitianness has got a real physical meaning, and I'd have thought it was the actual transformations we're interested in (or their diffs). They come out the same whether you put the i before the theta or in the generator.

Adrian.
 
  • #10
The determinant of the matrix you gave above is (1+2*i*dt-dt^2). Additionally, that linear transformation does not preserve the lengths of vectors. The infinitesimal dt has no direction associated to it; it is just a small (real) number. I don't understand your statement about its orthogonality to something else.

The choice of where to put the 'i,' whether in the generators or in the exponential for an arbitrary SU(2) rotation, is entirely up to you. I am very confused about all of your complaints.
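
If it helps, here is the same calculation done symbolically (just a sketch, Python with sympy):

[code]
# Symbolic check of the determinant and unitarity of the proposed matrix (sketch).
import sympy as sp

dt = sp.symbols('dt', real=True)
U = sp.Matrix([[1 + sp.I*dt, 0],
               [0, 1 + sp.I*dt]])

print(sp.expand(U.det()))     # (1 + I*dt)**2 = 1 + 2*I*dt - dt**2, not 1, so not "special"
print(sp.simplify(U.H * U))   # diag(1 + dt**2, 1 + dt**2), not the identity, so not unitary
[/code]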
 
  • #11
ccesare said:
The determinant of the matrix you gave above is (1+2*i*dt-dt^2). Additionally, that linear transformation does not preserve the lengths of vectors. The infinitesimal dt has no direction associated to it; it is just a small (real) number. I don't understand your statement about its orthogonality to something else.

The choice of where to put the 'i,' whether in the generators or in the exponential for an arbitrary SU(2) rotation, is entirely up to you. I am very confused about all of your complaints.

Yes, that's the right determinant, but I neglected the dt^2, as I may. I say it's orthogonal because if I apply my transformation to a spinor (x,y) I get (x+i.dt.x, y+i.dt.y), and my point is that the length of x+i.dt.x is pretty much the same as that of x, because the extra bit is orthogonal to x. The same trick is used to justify the generator of SO(2).

I also think it's immaterial where the i goes, except that people say things about the hermitianness and commutation relations of the obfuscated versions, which I think is irrelevant to reality.
 
  • #12
The two vectors you list have different lengths. Thus, the transformation does not preserve the length of vectors. Since you already agreed that your transformation is not even the representation of an SU(2) element, I am failing to see your point.
 
  • #13
Also, choosing the generators to be Hermitian does make some sense, since their expectation values can then be interpreted in a useful way in quantum mechanics.
 
  • #14
They have the same lengths to first order, just like with the generator of SO(2). How is the third Pauli matrix any better? It has determinant=1 but it makes (x+i.dt.x, y-i.dt.y), which is the same deal as regards the lengths.

I didn't agree that my operator wasn't part of SU(2). I agreed that it has non-1 determinant. But I thought we defined rotations as transformations that preserve lengths (and don't reflect), and I think my operator does so to the same accuracy as any other Lie generator.

It would be nice if the real operators were hermitian, but if we're at liberty to just sneak bits of the operators off to one side and ask about what's left, we could arrange for any operators to have any properties just by choosing to ignore the awkward bits. What would that have to do with physical reality though?
 
  • #15
To first order in what? dt? That is precisely what changes their lengths. None of the Pauli matrices have determinant zero, since they are elements of SU(2) with a physically irrelevant phase factored out. Representations of elements of SU(2) must be special, i.e. determinant one, and unitary, i.e. the conjugate transpose is equal to the inverse. Your presented transformation satisfies neither of these, and so by definition is not an element of SU(2).
 
  • #16
Sorry, I meant 1, not zero. I guess you read my post before I corrected it.

So SU(2) is defined that way. Fair enough. But is that definition supposed to be the same as saying that it preserves lengths without reflecting? That's what I read, but it doesn't seem to be the case. As I say, dt doesn't change the length either in my transformation or in the third Pauli matrix which is almost the same. The length we're talking about is
Re(x)^2 + Im(x)^2 + Re(y)^2 + Im(y)^2, (square rooted if you like), right?

I already touched on this "physically irrelevant phase". How can it be physically irrelevant when it would break a square well solution by breaking the condition of having zero psi at the edges of the well? And is it not the case that the scalar and vector potentials act purely on this "irrelevant" phase?
 
  • #17
The length of x+i.dt.x, let's just call it 1+i.dt, is sqrt(1+dt^2). So I can ignore it to first order. And if I can't make that approximation, how come SO(2) can? That's what they do when they expand

[tex]\exp\left( dt \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \right)[/tex]

We're just adding a titchy bit at right angles on the Argand diagram or vector plane, and it stays on the orbit.

The length of the spinor is defined as sqrt[ mod(x)^2 + mod(y)^2 ], right? So we can consider the changes of lengths to x and y separately, and the third Pauli matrix doesn't do any better than mine just by turning y the other way.

So I'm pretty convinced that my transformation preserves lengths as well as the third Pauli matrix, but maybe the answer is that your definition of SU(2) implies more than just preserving lengths and not reflecting. Is that the deal?
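
For comparison, here's the SO(2) case done numerically (a sketch, Python/numpy plus scipy's matrix exponential): the full exponential preserves lengths exactly, while the first-order truncation scales them by sqrt(1+t^2), which is exactly the approximation I'm leaning on.

[code]
# SO(2): exact exponential versus first-order truncation (sketch).
import numpy as np
from scipy.linalg import expm

K = np.array([[0., -1.],
              [1.,  0.]])          # the SO(2) generator quoted above
t = 0.1
R_exact = expm(t * K)              # [[cos t, -sin t], [sin t, cos t]]
R_trunc = np.eye(2) + t * K        # first-order truncation

v = np.array([1.0, 0.0])
print(np.linalg.norm(R_exact @ v))   # 1.0 (up to round-off)
print(np.linalg.norm(R_trunc @ v))   # sqrt(1 + t**2) ≈ 1.005
[/code]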
 
  • #18
There is a difference between the transformation that you give and the infinitesimal transformation generated by the third Pauli matrix, namely that your transformation is not unitary, while the transformation generated by the third Pauli matrix is. Recall that the transformations need to form a group, i.e. be closed under matrix multiplication. Unitarity is not just about preserving lengths but also about preserving angles (or inner products). Taking products of the two transformations mentioned will demonstrate quickly that your transformation breaks the group structure. Sakurai has a nice discussion about these things in Chapter 3, so I would suggest taking a look at that.

I don't really follow your examples of a phase "breaking" the square well. Can you elaborate on that?
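
A minimal numerical illustration of that last point (a sketch in Python/numpy; dt = 0.01 and the 500 repetitions are arbitrary choices): repeatedly multiplying your transformation takes you further and further from anything length-preserving, while the exact exponential generated by the third Pauli matrix stays put.

[code]
# Repeated products: exact SU(2) rotation vs. the proposed transformation (sketch).
import numpy as np

dt = 0.01
U_exact = np.array([[np.exp(1j*dt), 0],
                    [0, np.exp(-1j*dt)]])          # exp(i*dt*sigma3): unitary, det = 1
U_prop  = (1 + 1j*dt) * np.eye(2, dtype=complex)   # the proposed transformation

a = np.array([1.0, 0.0], dtype=complex)
b = a.copy()
for _ in range(500):
    a = U_exact @ a
    b = U_prop @ b

print(np.linalg.norm(a))   # 1.0: stays normalized
print(np.linalg.norm(b))   # (1 + dt**2)**250 ≈ 1.025 and growing
[/code]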
 
  • #19
OK, I'm downloading Sakurai now. But I thought I'd thought about the angles and decided that if I also preserve the length between any two points away from the origin, then I've preserved angles. That much is true, right? So the question is whether the third Pauli matrix is any better than mine at preserving lengths between two non-zero points. I doubt that turning them in opposite directions is better than turning them the same way, but I'll try it and see.

What is the angle between two spinors anyway? With two complex numbers I just divide them and normalise the answer, right? How do I do the equivalent for spinors?

About the phase: I mean that you solve an electron in a box by wanting the probability of finding the electron at the edges or outside to be zero, and that's why the length of the box has to be a multiple of the wavelength/2. If you apply a phase shift to that solution, it's no longer zero at the edges and the solution no longer works, so the phase is significant. I also read, but I'm a bit sketchy on this, that the scalar potential shifts the phase in proportion to the time that the electron is exposed to it along some path, and the vector potential shifts it according to the path integral. In other words, the phase changes by

Integral[(dt,dx,dy,dz).(SP, VPx, VPy, VPz)]

along all the possible 4-paths. So if you declare these phase shifts to be irrelevant, you've abolished electromagnetism. I think the Dirac equation is supposed to be general enough to handle these cases, isn't it?
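
Written out in the usual conventions (with the charge q and hbar put back in; take the signs with a pinch of salt, since they depend on metric and charge conventions), the phase shift I mean is

[tex]\Delta\varphi = \frac{q}{\hbar}\int \mathbf{A}\cdot d\mathbf{x} \;-\; \frac{q}{\hbar}\int \phi \, dt[/tex]

along the path, with phi the scalar potential and A the vector potential.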
 
  • #20
Well I just multiplied out a lot of brackets (attached) and I think I proved that both Pauli's 3rd and my transformation preserve the lengths BETWEEN ANY TWO spinors. That being the case, I see no meaningful sense in which they could be messing up angles.

What's more, you can see in the attachment that the terms depending on whether it's mine or Pauli's 3rd all cancel out. That's just what I expected cos I can't see what difference it makes to rotate the two spinors in different directions. It was also no surprise that both transformations multiply lengths by sqrt(1+dt^2).

So I think we're down to these possibilities:

1) The group really does need the other generator but it may or may not be physically significant. This is what I read in some book or other.
2) You can make my transformation using the other three, maybe around 721 degrees or something.
3) It's actually got 6 as I still suspect. I've just got to find planes at funny angles that I can rotate around without the problem of separating the real and imaginary parts.

My determinant was mostly real with a touch of imaginary and still magnitude 1 to first order. What does an imaginary determinant mean anyway?
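
In case anyone wants to cross-check the attached algebra without reading my handwriting, here's a quick numerical version (a sketch in Python/numpy; dt = 0.01 and the random spinor are arbitrary choices):

[code]
# Numerical cross-check of the length-scaling claim above (sketch).
import numpy as np

dt = 0.01
U_mine  = np.array([[1 + 1j*dt, 0], [0, 1 + 1j*dt]])   # my transformation
U_pauli = np.array([[1 + 1j*dt, 0], [0, 1 - 1j*dt]])   # 1 + i*dt*(third Pauli matrix)

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)     # an arbitrary spinor

for U in (U_mine, U_pauli):
    print(np.linalg.norm(U @ psi) / np.linalg.norm(psi))   # both give sqrt(1 + dt**2)
    print(np.linalg.det(U))   # (0.9999+0.02j) for mine, (1.0001+0j) for Pauli's
[/code]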
 

Attachments
  • spinors1.jpg
  • spinors2.jpg

