Group Theory Basics: Where Can I Learn More?

In summary, Group Theory is a fundamental mathematical concept that has various applications in physics, particularly in the study of symmetry and patterns. It involves the study of groups, which are sets of elements that follow certain rules and properties when combined. Some good resources for understanding Group Theory include the books "Groups and Symmetry" by M.A. Armstrong, "An Introduction to the Theory of Groups" by J. Rotman, and "Group Theory: An Intuitive Approach" by R. Mirman. Online resources are also available, such as the website http://www.cns.gatech.edu/GroupTheory/index.html, which provides a free introductory book on Group Theory. The idea of having an entry-level workshop on groups has also been proposed.
  • #106
Originally posted by Lonewolf
I'm still around. Can someone explain tangent bundles, please. Marsden defines them as the disjoint union of tangent vectors to a manifold M at the points m in M. Am I right in thinking that this gives us a set containing every tangent vector to the manifold, or did I miss something?

Such a good question! The thread would die without a questioner like that---Hurkyl and I, chroot and/or Lethe etc. wouldn't like to just talk to ourselves. I want to let Hurkyl answer this because he will do it in a clear, orderly, reliable manner.

But look, it is a necessary and basic construction! The tangent vectors on a manifold are the most important vectorspace-type things in sight!
And yet each tangent space at each separate point is different. So at the outset all one has is a flaky disjoint collection of vectorspaces. One HAS to glue it all together into a coherent structure and give it a topology and, if possible, something even better, namely a differential structure.

Imagine a surface with a postagestampsized tangent plane at every point but all totally unrelated to each other. How flaky and awful! But now imagine that with a little thought you can merge all those tangentplanes into a coherent thing-----a dualnatured thing because it is both a linearspace (in each "fiber") and a manifold. Now I am imagining each tangentspace as a one-dimensional token (in reality n-dimensional) and sort of like a hair growing out of the point x in the manifold. All the hairs together making a sort of mat.

And these things generalize----not just tangentspace bundles but higher tensor bundles and Lie algebra bundles. Socalled fiber bundles (general idea). It is a great machine.

A vectorfield is a "section" of the tangent bundle. The graph of a function is a geometrical object and the "graph" of a vectorfield lives in the tangent bundle. A choice of one vector "over" each point x in the manifold. Great way to visualize things.

The problem is how to be rigorous about it! Hurkyl is good at this. You get this great picture but how do you objectify gluing and interrelating all the tangent spaces into a coherent bundle and giving them usable structure.

It turns out to be ridiculously easy. To make a differentiable manifold out of anything you merely need to specify local coordinate charts. The vectorspace has obvious coordinates and around every point in the manifold you have a coordinatized patch so if it is, like, 3D, you have 3 coordinates for the manifold and 3 for the vectorspace. So you have 6 coordinates of a local chart in the bundle.

Charts have to fit together at the overlaps and the teacher wastes some time and chalk showing that the 6D charts for the bundle are smoothly transformable one to the other on overlapping territory------why? because, surprise, the original 3D manifold charts were smoothly compatible on overlaps.

You will see a bit of magic. An innocent looking LOCAL condition
taking almost no time to mention will unexpectedly suffice to make it all coherent. All that is needed is that, at every point, right around that point, the tangent bundle looks like a cartesian product of a patch of manifold and a fixed vectorspace.
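To make those doubled-up coordinates concrete, here is a small Python sketch (my own illustration, not from any text mentioned in the thread; the chart pair and the function name are invented): on the punctured plane, a tangent-bundle chart transition applies the Jacobian of the point-level coordinate change to the vector part, which is why smoothness of the base charts gives smoothness of the bundle charts for free.

```python
import numpy as np

# Hypothetical example: two overlapping charts on the punctured plane,
# Cartesian (x, y) and polar (r, theta).  A bundle chart carries
# (point coordinates, vector coordinates); the transition acts on the
# vector part by the Jacobian of the point-level transition map.

def cart_to_polar_bundle(x, y, vx, vy):
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    # Jacobian of the point-level map (x, y) -> (r, theta)
    J = np.array([[x / r,      y / r],
                  [-y / r**2,  x / r**2]])
    vr, vtheta = J @ np.array([vx, vy])
    return r, theta, vr, vtheta

r, th, vr, vt = cart_to_polar_bundle(1.0, 0.0, 0.0, 1.0)
# at the point (1, 0), the vector (0, 1) is purely angular: vr = 0, vtheta = 1
```

Since the Jacobian entries are smooth wherever the point-level transition is smooth, the 4-coordinate (here 2+2) bundle transitions inherit smoothness automatically, just as described above.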

what should I do, edit this? delete it? it is attitudinal prep before someone else writes out the definition (if they ever do) of a fiber bundle-----tangent bundle just a special case of fiber bundle
once that is done, can erase this----I don't want to bother editing it since just provisional. Glad yr still around LW
 
  • #107
prep for Lorentz group reps

Lethe, chroot, Hurkyl
does everybody know the dodge used to represent the Lorentz group on infinite dimensional function spaces?

I sort of suspect you all do.

Let's identify at least the proper Lorentz group, the connected component of the identity or whatever, with SL(2,C)
and just look at SL(2,C)

to transform a function f(x) using a 2x2 complex matrix all you need to do is process the x with the matrix first before you feed it to the function

in a generalized sense it is SL(2,C) "acting by translation".

x ----> (ax + b)/(cx + d)

f[x] ----> f[(ax + b)/(cx + d)]

Lately when I see representations of SL(2,C) they mostly involve this action on functions by generalized "translation". So I assume it's familiar to y'all.

And the vectorspace of the action is infinite dimensional. There isn't just a discrete set of reps labeled by integers or half-integers, rather a whole slew labeled by the real line (and maybe another parameter as well).

something to prove here, namely that composing
two maps of the form
x ----> (ax + b)/(cx + d)
results in one of that form
and if you do things in the right order it really gives a group representation
maybe someone should state the definition so we could check details like that?
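Pending someone stating the formal definition, here is a quick numerical sanity check in Python (my own sketch; the helper name `mobius` is invented) that composing two maps of that form gives another one of that form, with matrix A @ B:

```python
import numpy as np

rng = np.random.default_rng(0)

def mobius(A, x):
    """Fractional-linear action x -> (a x + b)/(c x + d) of a 2x2 matrix A."""
    (a, b), (c, d) = A
    return (a * x + b) / (c * x + d)

# two generic complex 2x2 matrices (normalizing to det = 1 is optional here,
# since the action only sees each matrix up to an overall scale)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = 0.3 + 0.7j

# composing two fractional-linear maps gives another fractional-linear map,
# and its matrix is the product A @ B -- the representation property
lhs = mobius(A, mobius(B, x))
rhs = mobius(A @ B, x)
# lhs and rhs agree up to roundoff
```

Running this with many random matrices keeps giving lhs ≈ rhs, which is the algebraic identity one would prove by multiplying out the fractions; checking the order of composition is exactly the "details like that" to verify against the definition.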
 
  • #108


Doh, and I thought I had weaseled out of explaining fiber bundles!

Disclaimer: the terminology "fiber bundle" is somewhat new to me... although I've known the idea behind them for quite a while. It is possible I have some subtle detail wrong about them. (really, studying this whole subject with rigor is new to me... I just had the good fortune to have gotten partway through Brian Hall's text and another one on differential geometry before this discussion started! It's fortunate that much or even all of differential geometry is ideas we've had since our calculus days... just phrased in nifty ways that make them precise, easy to use, and generalizable)


Let's start with something mundane. Everyone remember studying calculus of a single variable? :smile:

In this context we were primarily concerned with functions whose domain and range are the real numbers. Real numbers have some very nice properties (i.e. they form a differentiable manifold), and we were primarily concerned with how the nice properties of the domain and range interacted with our single variable functions. For example, a continuous function is (essentially) one that preserves the property of nearness in the domain and range. A differentiable function relates the differential structure of the domain to that of the range.

So, our studies began by considering certain functions that map R to R. The "total space" here was simply R*R: the set that contains all possible graphs of such functions.


Eventually our studies became more sophisticated. We no longer considered x an independent variable and y a dependent variable, but we treated x and y both as fully fledged variables. It became interesting to study the structure of R*R as a differentiable manifold in its own right (though, of course, we didn't call it that!).


These same ideas can be applied to any 2 differentiable manifolds; all of our ideas from calculus still apply (but may be a little trickier) when we're considering functions from, say, R2 to SO(3)! And just like in the single variable case, it pays to also be able to consider the total space as a structure on its own merit.


A fiber bundle is the abstraction that covers the above notions. We have a manifold M (analogous to a domain) and a fiber F (analogous to a range). We consider a "total space" E which looks locally like M*F. More precisely, that means we have a projection mapping π that projects E onto M... so that every point of M has a neighborhood U for which π-1(U) is isomorphic (or isometric or diffeomorphic or whatever) to U*F. That is, the set of all points in E that map onto U must have the same structure as U*F.

Why do we only ask this to hold locally? Why not just take the total space to be equal to U*F? (that would be called a trivial bundle... and incidentally, the local isomorphisms from E to U*F are called trivializations) Well, it helps to consider what appears to be everyone's favorite first nontrivial fiber bundle... the mobius strip!

Recall the procedure for making a mobius strip: you take a long thin rectangle of paper, you loop it around into a cylinder, but then before pasting the ends together so it really is a cylinder, you twist the strip one time, and you get a mobius strip.

But what does this have to do with fiber bundles?

Well, we can describe the strip as being:

S = [0, 10]*[-1, 1]

"Looping" the strip means topologically identifying the two ends of the strip {0}*[-1, 1] and {10}*[-1, 1]. The "domain" (aka base manifold) here is [0, 10] with 0 and 10 identified; this is just the circle S1. The fiber is [-1, 1]. This pasting gives us the cylinder S1*[-1, 1]... a trivial fiber bundle.

But wait! There are two (topologically distinct) ways we can map [-1, 1] onto [-1, 1]; instead of using the identity map f(x)=x, use the map f(x)=-x. This corresponds to the "twist" we use when making a mobius strip. This fiber bundle is (obviously) different from the cylinder. Describing the mobius strip as S1*[-1, 1] does not work anymore...

Presume it does work. Then define f from S1 to [-1, 1] as:

f(x) = 1

This is a constant function, and clearly continuous... but it is not a continuous function on the mobius strip! (please ignore the minor details that would take too much effort to patch this into a perfectly rigorous demonstration)
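Here is a tiny Python model of that demonstration (my own toy encoding of the construction above; the function names are invented). Represent a section by a function on the unglued strip [0, 10] and ask whether it respects the twisted gluing (0, t) ~ (10, -t):

```python
# A section of the mobius bundle, modeled on the unglued strip [0, 10],
# descends to a continuous section only if it respects the twisted
# gluing (0, t) ~ (10, -t), i.e. s(0) == -s(10).

def respects_mobius_gluing(s):
    return s(0) == -s(10)

constant_one = lambda x: 1.0         # the "constant section" s(x) = 1
crossing = lambda x: x / 5.0 - 1.0   # s(0) = -1, s(10) = +1; hits 0 at x = 5

# constant_one fails the seam condition, so it is NOT a continuous
# section of the mobius strip, matching the argument above.  crossing
# passes -- and notice it vanishes somewhere, as every continuous
# section of the mobius bundle must.
```

The seam test makes the "twist" visible: any section that passes it must change sign somewhere along the strip, so it has a zero, which is exactly what distinguishes the mobius bundle from the cylinder.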


A "section" is simply a fancy name for a graph of a function! But recall that the total space E cannot always be decomposed into M*F... so "section" only coincides with "graph" locally. More precisely, a section S is simply a surface in the total space E such that for every point x on the manifold M, there is a unique point y on S such that π(y) = x.



For an n-dimensional manifold M, the tangent bundle is a special fiber bundle where the fiber is Rn; for any point x on M, π-1(x) = TMx. In other words, the total space E is simply the collection of the tangent spaces for every point on the manifold. The associated projection map π takes each tangent space onto the point on the manifold to which it is tangent. (alternatively, π-1 takes each point on the manifold to its associated tangent space)


Question: is the tangent bundle a trivial bundle? In other words, is the tangent bundle TM diffeomorphic to M*Rn? (I don't know the answer to this one... I presume it is yes, but I haven't tried to prove or disprove it yet)



Anyways, to summarize:

A fiber bundle is a generalization of the cross product; it permits the resulting structure to "twist" as it goes around the base manifold so that the net result does not globally have the same structure as M*F (though it does locally). A section S is the corresponding generalization of a graph of a function; to each point x on the manifold corresponds a unique point y of S, and the projection onto M of y is simply x. A tangent bundle is simply one where the fibers are the tangent spaces of M.

Lonewolf, you were correct (I believe) in your summary of what a tangent bundle is... but the important thing is that bundles also have properties related to the structures of M and F, be it merely topological, differential, metric, or even a linear structure in the case of vector spaces.


Sorry this isn't as clear as my other ones, I hadn't prepared any nice examples and demonstrations of the concepts. :frown:
 
  • #109
I'm not familiar with that construction (as I mentioned, I'm still somewhat new to this subject)... but I have seen that action before, but in the context of complex analysis as the mobius transformation (and, before that, I learned about the roughly equivalent notion of inversions in the Euclidean plane).
 
  • #110
this is a quality treatment

I'll try to think about your question. Lethe probably could reassure us on this point, or else knows a counterexample.
<<Question: is the tangent bundle a trivial bundle? In other words, is the tangent bundle TM diffeomorphic to M*Rn? (I don't know the answer to this one... I presume it is yes, but I haven't tried to prove or disprove it yet)
>>
Thanx for doing bundles!


Hey Hurkyl, I found a GREAT site about knots and the Jones
polynomial. Anyone including a high school kid could quickly learn from this site how to calculate the Jones polynomial of a trefoil knot. It is an AMS website and it is classy.

http://www.ams.org/new-in-math/cover/knots2.html [Broken]
 
  • #111
Originally posted by Hurkyl

For example, a continuous function is (essentially) one that preserves the property of nearness in the domain and range.
essentially? this is the exact definition of a continuous mapping: one that takes close points to close points. you just need to find a precise way to formulate a notion of closeness.


Question: is the tangent bundle a trivial bundle? In other words, is the tangent bundle TM diffeomorphic to M*Rn? (I don't know the answer to this one... I presume it is yes, but I haven't tried to prove or disprove it yet)
obviously. any vector space is easily isomorphic and diffeomorphic to Rn. which gives you a vector bundle morphism.
 
  • #112
Originally posted by lethe
essentially? this is the exact definition of a continuous mapping: one that takes close points to close points. you just need to find a precise way to formulate a notion of closeness.


obviously. any vector space is easily isomorphic and diffeomorphic to Rn. which gives you a vector bundle morphism.

Merci monsieur Lethe for both comments! I believe Hurkyl is going to point to the next direction for the thread to go, but do you have any suggestions? I feel all of us can be proud of this congenial and useful thread and would be glad to hear of ideas for things that the thread might do. Tho at the moment it is Hurkyl's choice.

BTW Lethe, in that "Knot and Jones" thread I could not see how to introduce the orientation idea without being able to draw better pictures and without sounding pedantic by introducing
lots of words like "righthanded, lefthanded". So what I calculated is only correct up to orientation. Just a first look. I expect you know something about the Jones polynomial and would be happy if you want to edit or emend what I wrote. Feel free if you can improve it.
 
  • #113
essentially? this is the exact definition of a continuous mapping: one that takes close points to close points. you just need to find a precise way to formulate a notion of closeness.

Posting late at night saps the urge to prove things that seem obvious. :smile: Preserving nearness is the same as preserving limits (I guess commuting with limits is the right phrase), which is the very definition of a continuous function.



obviously. any vector space is easily isomorphic and diffeomorphic to Rn. which gives you a vector bundle morphism.

I'm not convinced... we can modify the mobius strip construction so the fiber is R, so then the mobius strip is a vector bundle over S1, but is clearly not topologically equivalent to S1*R.
 
  • #114
Originally posted by Hurkyl

I'm not convinced... we can modify the mobius strip construction so the fiber is R, so then the mobius strip is a vector bundle over S1, but is clearly not topologically equivalent to S1*R.

ehh... insert the word locally about three places in my post. locally trivial, locally diffeomorphic, etc. then we should be in business
 
  • #115
ehh... insert the word locally about three places in my post. locally trivial, locally diffeomorphic, etc. then we should be in business

I knew it was locally trivial; I was wondering about globally... I presume from the last part of jeff's post that it is not globally true in general.
 
  • #116
Originally posted by Hurkyl
I presume from the last part of jeff's post that it is not globally true in general.

You are correct sir. For example, no spheres save for S1, S3, and S7 are parallelizable.
 
  • #117
Aha, I see!

IIRC, any smooth tangent vector field on the sphere S^2 must contain a zero vector. However, it is trivial to find a smooth section of S^2*R^2 that is nonzero everywhere... therefore T(S^2) cannot be (globally) diffeomorphic to S^2*R^2 and thus is not parallelizable.
 
  • #118


Originally posted by jeff
Triviality and principal bundles:

A bundle E is trivial iff its associated principal bundle P(E) - obtained from E by replacing its fibre with its structure group - has a "cross-section", that is, a continuous map s: B → E satisfying πs(x) = x, x ∈ B. In the case of the mobius strip, if θ is a local coordinate on S1, then we must have s(θ) = s(θ+2π) (for purposes of illustration, we ignore the fact that because each coordinate chart covers less than 2π radians, local coordinates on S1 should not be allowed to exceed this). However, G = Z2 means that P(E) is a double cover of S1, so we can't have s(θ) = s(θ+2π) unless s jumps discontinuously between corresponding points on the two "branches".

Hrm... Z2 is also the structure group of the cylinder, right? This proof needs to also take into account the twist in the construction of the mobius strip so that P(E) is a connected double cover, right? It seems that you would need to use this fact to prove P(E) is a double cover, so you might as well use this fact by itself to show that s(θ) != s(θ+2π)
 
  • #119
tangent bundle on sphere not trivial

Originally posted by Hurkyl


Question: is the tangent bundle a trivial bundle?


the simplest counterexample is probably the 2D sphere S2 because you can't comb the hair on a billiard ball

It is a famous diff/geom theorem that any vectorfield on the sphere must be zero at at least one point


but if the tangent bundle on the sphere were isomorphic (as bundle) to the cartesian product of the sphere with the plane R2

then one could define a vectorfield or "section" of the bundle by giving every point the same vector (1,0)
and it would map to a never-vanishing vectorfield on the sphere
contradiction
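As a numerical illustration of why the naive "same vector everywhere" idea fails on the sphere itself (my own sketch in Python; the function name is invented): projecting the constant ambient vector e1 = (1, 0, 0) onto each tangent plane gives a smooth tangent field, but it vanishes exactly where e1 is normal to the sphere, at (±1, 0, 0), just as the hairy-ball theorem demands.

```python
import numpy as np

def tangential_part(p, v):
    """Project ambient vector v onto the tangent plane of S^2 at unit point p."""
    return v - np.dot(v, p) * p

e1 = np.array([1.0, 0.0, 0.0])
parallel_pt = np.array([1.0, 0.0, 0.0])   # point where e1 is normal to S^2
generic_pt = np.array([0.0, 0.0, 1.0])    # a generic point

# the projected field vanishes where p is parallel to e1 ...
zero_vec = tangential_part(parallel_pt, e1)
# ... but not at a generic point
nonzero_vec = tangential_part(generic_pt, e1)
```

If the tangent bundle of S2 really were trivial, the constant section argument above would hand us a field with no such zeros; the projection trick is the closest one can get to "the same vector everywhere", and it is forced to die at two antipodal points.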


I actually put this 2D sphere counterexample in my first post
replying to Hurkyl's where he originally asked the question
but erased it I guess before anyone read it---not feeling completely confident about the definitions

this is where it matters what the definition of a vector bundle morphism is------I theeenk. It ought to act like a linear map upstairs and a diffeo downstairs

haven't read jeff's weighty contribution, maybe it says something about this?
 
  • #120
Originally posted by jeff
You are correct sir. For example, no spheres save for S1, S3, and S7 are parallelizable.

Ahah! so jeff did make this point! Regret to say I just got back
and have not been keeping up.
 
  • #121
Doh! Diffeomorphism is not the right word for a vector bundle morphism. Bad Hurkyl!
 
  • #122
Originally posted by Hurkyl
Doh! Diffeomorphism is not the right word for a vector bundle morphism. Bad Hurkyl!

Good Hurkyl!
I liked your treatment of tangent bundles
and was in doubt myself about the definitions which
is why i erased mention of sphere in my initial reply
Am not too concerned with semantics in any case
morphism schmorphism

I trust your judgement about what is a relaxed
not-overly-technical level of discussion and what
would be useful to discuss. Where shall we go next?

Or do we wait till Lonewolf asks another question?
 
  • #123
Well, the problem was that I was actually thinking diffeomorphism; I wasn't just using the word because it has "morphism" in it!


As for where to go next, I'm wondering if everyone wanted to stick primarily to lie groups, or if we want expand our goal to study differential geometry in more detail as well.


Anyways, now that we know what the tangent bundle is, I can submit the next homework problem my coworker suggested! (and finally get back to lie groups! :wink:)


Suppose M is a differential manifold and f is a morphism of M into itself. The differential structure of M allows us to define a function (*f) on T(M) that acts as a morphism (*f)x from Tx(M) to Tf(x)(M) for every x in M.

Informally, f(x + dx) = f(x) + (*f)x(dx)

More precisely, for any x on M, define (*f)x as follows:

For any v in Tx(M), choose a smooth curve γ through x whose tangent vector at x is v. Then, define (*f)x(v) to be the tangent vector to f(γ) at f(x). (proof that this is well-defined is left to the reader! I've always wanted to say that!)

(Of course, you could do it much more easily by using coordinate charts... but I've been making a conscious effort to avoid using coordinate charts whenever possible because, IMHO, they obscure the geometric meaning behind everything)

An invariant tangent vector field (with respect to a group G of automorphisms of M) is one that is unchanged after applying elements of G. IOW, for a vector field V and a group element g, (*g)(V) = V. Alternatively, (*g)x(V(x)) = V(g(x)).


Now, suppose M is a lie group. Since M is a group, we are given a natural class of automorphisms; those of M acting on itself by left multiplication (also by right multiplication)! For an element g of M, define:

Lg : M -> M : h -> gh

That is, Lg is the "left multiplication by g" operator.

Define Rg similarly to be the right multiplication operator.


Finally, let E be the identity element of M.


Problem 1: Prove that there is a one to one correspondence between TE(M) and the set of all tangent vector fields invariant under left multiplications. (called left invariant vector fields)

Problem 2: Prove that right multiplication maps left invariant vector fields to left invariant vector fields.

(there is an exercise 3 that goes with this problem set, but we haven't talked about Adjoint mappings)
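Before attacking the problems abstractly, a matrix-group sanity check may help (this assumes M is a matrix Lie group, which the problems do not require; the names are my own). For matrices, left multiplication Lg(h) = g h is linear in h, so its pushforward (*Lg) is again v -> g v, and the candidate left invariant field built from a tangent vector v at the identity is X_v(g) = g v:

```python
import numpy as np

rng = np.random.default_rng(1)

# Matrix-group sketch: L_g(h) = g h is linear, so (*L_g) v = g v,
# and the left invariant field generated by v in T_E(M) is X_v(g) = g v.

def X(v, g):
    return g @ v

g = rng.normal(size=(3, 3))   # generic invertible matrix, playing a group element
h = rng.normal(size=(3, 3))   # another group element
v = rng.normal(size=(3, 3))   # tangent vector at the identity

# Left invariance: X_v(g h) == (*L_g) X_v(h) == g @ X_v(h)
lhs = X(v, g @ h)
rhs = g @ X(v, h)
# equal by associativity of matrix multiplication
```

In this concrete setting, left invariance is literally associativity, (g h) v = g (h v), which previews why the general proof below leans on the chain rule plus associativity.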
 
  • #124
Originally posted by Hurkyl
...T(S^2) cannot be (globally) diffeomorphic to S^2*R^2 and thus is not parallelizable.

It's S2 that's not parallelizable, implying the stronger statement that T(S2) and S2xR2 aren't homeomorphic.

Originally posted by Hurkyl
Hrm... Z2 is also the structure group of the cylinder, right?

No, trivial bundles have trivial structure groups, that's the whole point.

Originally posted by Hurkyl
This proof needs to also take into account the twist in the construction of the mobius strip so that P(E) is a connected double cover, right? It seems that you would need to use this fact to prove P(E) is a double cover, so you might as well use this fact by itself to show that s(&theta;) != s(&theta;+2&pi;)

Was it not obvious that P(E)'s connectedness was used? Sorry. What I meant was going once round the open set S1 traces a closed arc in P(E) beginning at one point of a fibre and ending at the other point of the same fibre so that s-1 maps a closed set to an open set and so isn't continuous.
 
  • #125
Problem 1: Prove that there is a one to one correspondence between TE(M) and the set of all tangent vector fields invariant under left multiplications. (called left invariant vector fields)
---------------------
Both TE(M) and the set of LIVFs are vector spaces and this 1-1 corresp will turn out to be a linear isomorphism (so they are essentially the same as vector spaces)

you already told us about how a mapping f : M ---> M has a lift *f up to the tangent space level Tx(M) ---> Tf(x)(M), which we now apply to a manifold which is a group G.

and you described the right and left multiplication maps Rg and Lg : G ---> G

So we can use the lifts of those maps, like for example *(Lg)

Now as to Problem 1, for any v in TE(M) let's define a LIVF denoted by Xv
g ---> *(Lg) v

this is a vector field which at a point g in G has a vector which is the image of v by the lift of the left multiplication map that goes from the group identity element to g.

I just need to show that this vector field is left invariant so I study
Xv(h) where h is in G. If I left multiply by g, I get
Xv(gh) and to show left invariance
I have to show this is the same as *(Lg)Xv(h)
This is just your definition of left invariance, shifting around on the group level has to have the same effect as lift-mapping upstairs in the tangent spaces.

But by how Xv was defined in the first place
Xv(gh) = *(Lgh) v
= *(Lg) *(Lh) v ...[[[by chain rule]]]
= *(Lg) Xv(h) ...[[[by Xv definition]]]

I think it's clear that the correspondence here is linear----adding vectors v and v' in the tangent space at the group identity will correspond to adding left invariant vector fields Xv and Xv' just by the linear way the fields were defined.

All I really have left to do is exhibit the inverse of this map. Given a LIVF, say call it X(g), how do I go back to a tangent vector at the identity. Well it is obvious. Just take X(e), the field's value AT the identity.

footnote, there is that chain rule thing. Lifts preserve the composition of mappings, and specializing that to the case of left multiplication mappings we have that the original mappings
compose groupishly---I'm showing composition of maps denoted by the little o symbol.

Since for any k in G, associativity gives us (gh)k = g(hk) we have
Lgh = Lg o Lh
and that extends to the tangent spaces because of the chain rule
*(Lgh) = *(Lg) o *(Lh)

---------------------------
Problem 2: Prove that right multiplication maps left invariant vector fields to left invariant vector fields.
---------------------------

Well suppose we have vectorfield X(g) which is left invariant and we
apply Rh to it in some sensible way that produces a new vector field Y(g)
A sensible way to define Y(g) might be

Y(g) = *(Rh) X(g h-1)

So let us check if this is left invariant by an action Lk

Is it true that Y(kg) = *(Lk) Y(g)?

Well Y(kg) = *(Rh) X(kg h-1)
= *(Rh) *(Lk) X(g h-1)...[[[by left invariance of X]]]
= *(Lk) *(Rh) X(g h-1) ...[[[commutativity]]]
= *(Lk) Y(g)

which was to be proved.

footnote for any two elements of the group k and h
right and left multiplication by them commute
Lk Rh = Rh Lk
I guess that is obvious k(gh) = (kg)h assoc. and this
commuting business goes upstairs to the lifted right and left multiplications maps
*(Lk) *(Rh) = *(Rh) *(Lk)
so there :wink:
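The same matrix-group shortcut as before (again an assumption, not required by the problems: for matrix groups the pushforwards are (*Lk) w = k w and (*Rh) w = w h; the names are my own) lets one spot-check this Problem 2 computation numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

# Matrix-group sketch of Problem 2.  Start from the left invariant field
# X(g) = g v and right-translate it as in the proof above:
#     Y(g) = (*R_h) X(g h^{-1}) = g h^{-1} v h

n = 3
v = rng.normal(size=(n, n))
h = rng.normal(size=(n, n))
k = rng.normal(size=(n, n))
g = rng.normal(size=(n, n))
h_inv = np.linalg.inv(h)

def Y(g):
    return g @ h_inv @ v @ h

# Left invariance of Y:  Y(k g) == (*L_k) Y(g) == k @ Y(g)
lhs = Y(k @ g)
rhs = k @ Y(g)
# equal by associativity, mirroring the commutativity of L_k and R_h
```

The numerical check is just associativity again, which is exactly the footnote's point that left and right multiplications commute.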


 
  • #126
Originally posted by Hurkyl

As for where to go next, I'm wondering if everyone wanted to stick primarily to lie groups, or if we want expand our goal to study differential geometry in more detail as well.

at the risk of sounding self-serving, let me say: yes, continue this conversation, but don't do it in the group theory thread, do it in my differential forms thread!

no, seriously though, don't worry about keeping your conversation "on topic". just let it go where it goes. i like the dynamic of this board a lot.
 
  • #127
at the risk of sounding self-serving, let me say: yes, continue this conversation, but don't do it in the group theory thread, do it in my differential forms thread!

Actually, your thread is the main reason I didn't want to go deep into differential forms in this one. :smile:



It's S2 that's not parallelizable implying the stronger statement that T(S2) and S2xR2 aren't homeomorphic.

Yah, I was using (and thinking) the wrong word. :frown:


No, trivial bundles have trivial structure groups, that's the whole point.

For the cylinder, the principal bundle is S1*Z2, a trivial (and disconnected) one.


Was it not obvious that P(E)'s connectedness was used?

I know you were using it, I was remarking that it was yet to be proven... and the only method I saw for proving it could have itself proved the fact you were using P(E)'s connectedness to prove.


Edit: fixed typo; I meant to have S1 for the base space of the cylinder
 
  • #128
Originally posted by Hurkyl
For the cylinder, the principle bundle is S2*Z2, a trivial (and disconnected) one.

No, the cylinder's structure group is trivial so its principal bundle is just its base space S1.

Originally posted by Hurkyl
I was remarking that it [P(E) is connected] was yet to be proven

This needs no proof since it's the transition functions that encode topology and P(E) by definition has the same ones as E.
 
  • #129
My typo of writing S2 for S1 aside...


What's the definition of a structure group? I had presumed it was the group that preserved the structure of the fiber (i.e. diffeomorphisms for diff. manifolds, isometries for metric spaces, et cetera)... so if I used the same fiber for the cylinder (instead of orienting the fiber) I should have the same structure group.


Spivak's treatment of the mobius strip goes:


Consider, in particular, the Mobius strip as a 1-dimensional vector bundle π: E → S1 over S1. A frame in a 1-dimensional vector space is just a non-zero vector, so F(E) consists of the Mobius strip with the zero-section deleted. This space is connected (cut a paper Mobius strip along the center if you don't believe it); more generally, a vector bundle π: E → M over a connected space M is orientable if and only if F(E) is disconnected.


F(E) is a principal bundle, so principal bundles aren't always connected spaces. For the cylinder E with the same fiber, F(E) would have to be disconnected.
 
  • #130
Originally posted by Hurkyl
My typo of writing S2 for S1 aside...


What's the definition of a structure group? I had presumed it was the group that preserved the structure of the fiber (i.e. diffeomorphisms for diff. manifolds, isometries for metric spaces, et cetera)... so if I used the same fiber for the cylinder (instead of orienting the fiber) I should have the same structure group.


Spivak's treatment of the mobius strip goes:


Consider, in particular, the Mobius strip as a 1-dimensional vector budle &pi;:E&rarr;S1 over S1. A frame in a 1-dimensional vector space is just a non-zero vector, so F(E) consists of the Mobius strip with the zero-section deleted. This space is connected (cut a paper Mobius strip along the center if you don't believe it); more generally, a vector bundle &pi;:E&rarr;M over a connected space M is orientable if and only if F(E) is disconnected.


F(E) is a principal bundle, so principal bundles aren't always connected spaces. For the cylinder E with the same fiber, F(E) would have to be disconnected.

Spivak is right about Gcylinder = Z2. What I tried to do was avoid this by taking fibres to be unoriented line segments instead of vector spaces, but I realize now that they get flipped anyway. See my "Revised overview of fibre bundles" post below for detailed responses to all of your questions. In particular, I show how the structure group is obtained by explicitly constructing it for my cylinder and Mobius strip examples. I also construct their principal bundles and that of T(S1). I think my treatment should make the significance of the structure group and the transition functions fairly clear.
 
Last edited:
  • #131
If we define AG(H) to be GHG-1 for G and H in a Lie group, we can define:

Ad G = *AG

to be the adjoint map on the Lie algebra.


Problem 3 is to prove that right multiplication by G on left invariant vector fields is the same as applying Ad G to the equivalent Lie algebra element.



I was trying to hold off on introducing another fact about the adjoint map, but I haven't worked out the proof yet (except for when M is a matrix Lie group)...

Ad is a mapping from the Lie group G to the group of linear transformations on its Lie algebra GL(g). From this we can lift a new map ad from the tangent bundle of G to the tangent bundle of GL(g)... in particular, it maps g to gl(g).

The goal is to prove that the adjoint map ad satisfies the axioms of a Lie bracket so that we may define:

[f, g] = (ad f)g

Which justifies our calling the tangent space at the identity (alternatively, the space of all left invariant vector fields) a Lie algebra.
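For matrix groups this can be checked numerically: differentiate Ad(exp(tX)) at t = 0 and compare with the commutator. The following is my own sketch (assuming numpy and scipy are available; the helper name `Ad` is just for illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))   # two arbitrary matrices, viewed as
Y = rng.standard_normal((3, 3))   # elements of the Lie algebra gl(3)

def Ad(g, Z):
    """Adjoint action of a group element g on an algebra element Z."""
    return g @ Z @ np.linalg.inv(g)

# ad X = d/dt at t = 0 of Ad(exp(tX)), here by a central difference
h = 1e-6
ad_X_of_Y = (Ad(expm(h * X), Y) - Ad(expm(-h * X), Y)) / (2 * h)

# ...and it agrees with the matrix commutator [X, Y] = XY - YX
print(np.allclose(ad_X_of_Y, X @ Y - Y @ X, atol=1e-4))  # True
```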
 
  • #132
This Problem 3 Hurkyl mentioned now seems like an urgent and critical part of the program. It's like Lie algebras are gradually emerging out of the unknown. First the tangent space of a manifold appears, and then a group that is a manifold.
And then the tangent space at that group's identity!

And then we discover that TeG (the tangent space at the group identity element) is linearly isomorphic to the set of all Left Invariant Vector Fields living on the group itself.
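For matrix groups this correspondence is very concrete: the left invariant field generated by X in TeG has value gX at the point g. Here is a minimal numerical illustration (my own sketch, assuming numpy/scipy and the matrix-group convention that the pushforward of left translation by h sends a tangent vector v to hv):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 2))        # a tangent vector at the identity
g = expm(rng.standard_normal((2, 2)))  # two invertible matrices, i.e.
h = expm(rng.standard_normal((2, 2)))  # group elements reached by exp

def X_L(a):
    """The left invariant field generated by X: its value at a is a @ X."""
    return a @ X

# Left translation L_h sends g to h @ g; its pushforward sends a tangent
# vector v at g to h @ v.  Left invariance says this pushforward of
# X_L(g) equals the field's own value at the translated point:
print(np.allclose(h @ X_L(g), X_L(h @ g)))  # True
```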

At this point then, Hurkyl says what the goal is:

<<The goal is to prove that the adjoint map ad satisfies the axioms of a Lie bracket so that we may define:

[f, g] = (ad f)g

Which justifies our calling the tangent space at the identity (alternatively, the space of all left invariant vector fields) a Lie algebra. >>

For me this represents the Lie algebra looming up out of nothingness in a kind of natural way as the tangent space at the identity, except it is beginning to grow and morph into an algebraic structure with a kind of "bracket" operation and "adjoint" map that plain old vectors don't ordinarily have. So to keep it growing and morphing we should (according to Hurkyl) do a Problem 3:

<<Problem 3 is to prove that right multiplication by G on left invariant vector fields is the same as applying Ad G to the equivalent Lie algebra element.>>

Everybody who studies basic group theory (not just Lie groups but finite groups) learns that just about the most important thing in groups is the "inner automorphism"

g ---> hgh-1

and indeed this is what is used to define so-called "normal" subgroups, and that ultimately is how you classify all possible
crystals and symmetries and all possible finite groups and all that jazz.
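The finite-group version is easy to play with directly. A small sketch of my own (representing S3 as permutation tuples; the helper names are made up): conjugation g n g-1 keeps the cyclic subgroup A3 inside itself, so A3 is normal, while a subgroup generated by a single swap gets moved around.

```python
from itertools import permutations

# The symmetric group S3 as tuples p with p[i] = image of i
def compose(p, q):
    """(p*q)(i) = p(q(i))"""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = set(permutations(range(3)))
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # cyclic subgroup of order 3
H  = {(0, 1, 2), (1, 0, 2)}              # subgroup generated by one swap

def is_normal(subgroup, group):
    """N is normal iff g n g^-1 stays in N for every g in the group."""
    return all(compose(compose(g, n), inverse(g)) in subgroup
               for g in group for n in subgroup)

print(is_normal(A3, S3))  # True:  A3 is normal in S3
print(is_normal(H, S3))   # False: conjugation moves the swap around
```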

Hurkyl wants us to look at the lift of "inner automorphism"

Oh, he calls the lift of inner automorphism by h the ADJOINT map using h.

Well OK.

and this is going to engender the Lie bracket and cultivate the algebraic structure on Te

So we better get on with it and do Problem 3

I'm busy now but may have a moment later in the afternoon
however anyone who wants should go ahead



Originally posted by Hurkyl
If we define AG(H) to be GHG-1 for G and H in a Lie group, we can define:

Ad G = *AG

to be the adjoint map on the Lie algebra.


Problem 3 is to prove that right multiplication by G on left invariant vector fields is the same as applying Ad G to the equivalent Lie algebra element.



I was trying to hold off on introducing another fact about the adjoint map, but I haven't worked out the proof yet (except for when M is a matrix Lie group)...
The goal is to prove that the adjoint map ad satisfies the axioms of a Lie bracket so that we may define:

[f, g] = (ad f)g

Which justifies our calling the tangent space at the identity (alternatively, the space of all left invariant vector fields) a Lie algebra.
Ad is a mapping from the Lie group G to the group of linear transformations on its Lie algebra GL(g). From this we can lift a new map ad from the tangent bundle of G to the tangent bundle of GL(g)... in particular, it maps g to gl(g).

 
Last edited:
  • #133
Just to get my bearings, the tangent space at a point is essentially equivalence classes of curves thru that point----two curves being equivalent if taking the derivative along them at the point gives the same answer. There is a kind of convergence of views on this, a few posts ago Hurkyl was saying:

<<...For any v in Tx(M), choose a smooth curve &gamma; through x whose tangent vector at x is v. Then, define (*f)x(v) to be the tangent vector to f(&gamma;) at f(x). ..>>

And IIRC Lethe was defining tangent space in the diff forms thread in a comparable way----as the directions of directional derivatives.
And eg Marsden chapter 4 page 123 says much the same.
Anyway whatever the fine print of the definition says I will consider the tangent space to be equiv classes of curves, because I want to be able to pick a representative of the equiv class and take the derivative along that curve.

So then with &phi; some map M--->M it is easy to define the lift Tx&phi; or *&phi;: Tx --->T&phi;(x).

Given v in Tx pick a representative curve &psi; from the equiv class and just compose mappings to get a new curve
&phi;(&psi;) passing thru &phi;(x) and take its equiv class, which will be a vector belonging to the target tangent space T&phi;(x)

Some people say equiv classes of curves and differentiate along them and other people define tangent vectors in other equivalent ways but it all comes to the same thing.

THE POINT IS YOU ALWAYS HAVE A JACOBI LIE BRACKET. If X and Y are tangent vector fields on a manifold then for any smooth function f there is always an obvious
meaning to the derivatives X[f] and Y[f], which are some new smooth functions on the manifold. So one can do it in either order and define [X,Y][f] = XY[f] - YX[f].
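The definition [X,Y][f] = XY[f] - YX[f] can even be checked numerically. A sketch of my own (the fields, the test function, and the finite-difference helper `D` are all made up for illustration): take X = d/dx + x d/dy and Y = y d/dx on R2, whose bracket works out analytically to x d/dx - y d/dy.

```python
import numpy as np

# Two vector fields on R^2 and a test function
X = lambda p: np.array([1.0, p[0]])   # X = d/dx + x d/dy
Y = lambda p: np.array([p[1], 0.0])   # Y = y d/dx
f = lambda p: p[0] ** 2 * p[1]        # f(x, y) = x^2 y

h = 1e-4
def D(field, g):
    """Directional derivative of the scalar function g along a vector field."""
    return lambda p: (g(p + h * field(p)) - g(p - h * field(p))) / (2 * h)

p = np.array([1.0, 2.0])

# The Jacobi Lie bracket acting on f: X(Y f) - Y(X f)
numeric = D(X, D(Y, f))(p) - D(Y, D(X, f))(p)

# Analytic bracket for these fields: [X, Y] = x d/dx - y d/dy, so
# [X, Y]f = x*(2xy) - y*(x^2) = x^2 y, which is 2 at the point (1, 2)
print(numeric)  # close to 2.0
```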

This seems kind of easy and direct, so where does it get hard if it ever does?

It must be when M turns into a group G as well as a manifold. then we have concepts like "Left Invariant Vector Field" and tangent space not just anywhere but at the identity, and inner automorphisms of the group, and lifting that to the "Adjoint" map which is a kind of stirring around or automorphism of the tangent space at the identity, and so on.

And also, don't forget, we can always go back and fetch the primitive old JACOBI LIE BRACKET, which is just switching the order of differentiation w/rt a couple of vector fields, and then we
have something to prove, which is that the bracket of left invariants is left invariant, and that ADJOINT, which is a group-type thing, gives the same as the Jacobi Lie bracket, and allemande left and do-si-do up the middle. Anyway that's how I see it.

So I am going to repeat the first two problems I proved for homework, without proof, just in case they are needed, and then
go on to look at the adjoint map.


Problem 1: Prove that there is a one to one correspondence between Te(M) and the set of all tangent vector fields invariant under left multiplication (called left invariant vector fields).
---------------------------
Problem 2: Prove that right multiplication maps left invariant vector fields to left invariant vector fields.
---------------------------

Problem 3 is to prove that right multiplication by g on left invariant vector fields is the same as applying Ad(g) to the equivalent Lie algebra element, i.e. to the equivalent tangent vector at the identity.

In other words we have a Left Invariant field X defined on G and there is the one-one correspondence to Te given by the laughably obvious X(e), the value of the field at the identity.
And for any g in G there is the adjoint map Ad(g) which is a way of mucking around with the tangent space at the identity.

And we want to see what doing that Ad(g) corresponds to in the world of Left Invariant vectorfields.


The inner aut map G ---> G is just h--->ghg-1 and
the lift of that is clearly *Lg*Rg-1

Problem 3 says to take a L.I. field X and operate with Ad(g) on X(e)

OK

*Lg*Rg-1 X(e)

*LgX'(eg-1) [[[X' is also left invariant]]]

X'(geg-1) = X'(e)

Darn, I have to go, but I think this is problem 3

have to get back to this and check and maybe edit.

This step *Rg-1 X(e)
corresponded to doing right mult by g to the invariant vector field X and getting an invariant field X'

And I calculated the Ad(g) of X(e)
and it turned out to give the same answer.
However, I must check this later since I have to go.
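For matrix groups the bookkeeping above can be sanity-checked numerically. This is my own sketch, under the matrix conventions X_L(a) = aX for the left invariant field and *Rg v = vg for the pushforward of right translation; to match the *Rg-1 in the computation above, I translate by g-1 and check that the result is the left invariant field generated by Ad(g)X = gXg-1.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = rng.standard_normal((2, 2))        # algebra element generating X_L
g = expm(rng.standard_normal((2, 2)))  # group element doing the translating
a = expm(rng.standard_normal((2, 2)))  # arbitrary point of the group
g_inv = np.linalg.inv(g)

X_L = lambda b: b @ X                  # left invariant field from X

# Pushforward of right translation R_{g^-1}: a tangent vector v at the
# point a goes to v @ g_inv, sitting at the point a @ g_inv.
pushed = X_L(a) @ g_inv

# Claim: the translated field is again left invariant, generated by
# Ad(g) X = g X g^-1.  Its value at a @ g_inv would then be:
expected = (a @ g_inv) @ (g @ X @ g_inv)

print(np.allclose(pushed, expected))  # True
```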
 
  • #134
Grr, I forgot why I wanted to bring up the differential geometry in the first place! Anyways, I'm kinda stuck on the adjoint thing, so someone want to introduce representations while I try to develop enough of the geometry to continue that track? (I'm probably going to check out Vol I of Spivak's diff. geom text now too for this thread; so much for my plan to dive right in with curvature)
 
Last edited:
  • #135
Originally posted by Hurkyl
Grr, I forgot why I wanted to bring up the differential geometry in the first place! Anyways, I'm kinda stuck on the adjoint thing, so someone want to introduce representations...

I think this means shifting to Brian Hall page 41 and page 68.

Good thing about Hall is no manifolds, no differential geometry, just plain old matrices! A lot of what they want to make happen in great generality and abstraction is just what happens naturally and concretely with matrices.

If you want, I'll discuss Hall pages 41 and 68, and then we would have the option to continue from there if you so choose. On page 41 Hall says:

"The following very important theorem tells us that a Lie group homomorphism between two Lie groups gives rise in a natural way to a map between the corresponding Lie algebras..." Isomorphic groups have isomorphic algebras...

Is this obvious or did you discuss it earlier and I just forgot? Please tell me, before I start proving it, if this is just repetitive or obvious. Here is the statement (Hall's Theorem 3.18)

Let G and H be matrix Lie groups, with Lie algebras g and h. Let &phi; :G --> H be a Lie group homomorphism. Then there exists a unique real linear map &phi;*: g --> h,
such that for all X in g we have

&phi;(exp(X)) = exp (&phi;*(X)).

Moreover this unique real linear map &phi;* has certain properties which I will list, if this has not been covered yet, and the star operation is compatible with the composition of mappings
(&phi; o &psi;)* = &phi;* o &psi;*

Hurkyl I mention this only because you asked someone to temporarily take the initiative going towards representations. You have the baton the moment you want to resume directing the band.
 
Last edited:
  • #136
This theorem summarizes some things we have already discussed on this thread like the exponential map and like
one parameter subgroups exp(tX)
the way you actually compute &phi;*(X) is to take the
derivative at t = 0 of &phi;(exp(tX))

This is so obvious! You just use &phi;, since it is a group homomorphism, to map a one-parameter subgroup of one into a one-parameter subgroup of the other-----and an element of the algebra is always the infinitesimal move belonging to some one-parameter subgroup

(Hall's Theorem 3.18, restated with some more detail)

Let G and H be matrix Lie groups, with Lie algebras g and h. Let &phi; :G --> H be a Lie group homomorphism. Then there exists a unique real linear map &phi;*: g --> h,
such that for all X in g we have

&phi;(exp(X)) = exp (&phi;*(X)).

Moreover this unique real linear map &phi;* has certain properties:

1. For all X in g and all A in G,
&phi;*(AXA-1) = &phi;(A) &phi;*(X) &phi;(A-1)

2. For all X, Y in g,
&phi;*(XY - YX) = &phi;*(X) &phi;*(Y) - &phi;*(Y)&phi;*(X)

this is to say that &phi;* commutes with taking brackets or the "adjoint" map, whatever, namely
&phi;*([X,Y]) = [&phi;*(X), &phi;*(Y)]


3. The bedrock fact no matter what anybody says, is that the lifted map takes infinitesimal moves into the corresponding infinitesimals, namely,
&phi;*(X) = d/dt|t = 0 &phi;(exp(tX))

and FINALLY that the star operation is compatible with the composition of mappings
(&phi; o &psi;)* = &phi;* o &psi;*
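A concrete instance of the theorem (a standard example, not from Hall's statement itself): take &phi; = det, a Lie group homomorphism GL(n,R) --> GL(1,R). Then &phi;*(X) = tr(X), and the identity &phi;(exp(X)) = exp(&phi;*(X)) is the classic fact det(exp X) = exp(tr X). A quick numerical check, assuming numpy and scipy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3))
A = expm(rng.standard_normal((3, 3)))

# phi(g) = det(g) is a homomorphism GL(n,R) -> GL(1,R); its lift to the
# Lie algebras is phi*(X) = tr(X):
print(np.isclose(np.linalg.det(expm(X)), np.exp(np.trace(X))))  # True

# Property 3: phi*(X) is the derivative at t = 0 of phi(exp(tX))
h = 1e-6
deriv = (np.linalg.det(expm(h * X)) - np.linalg.det(expm(-h * X))) / (2 * h)
print(np.isclose(deriv, np.trace(X), atol=1e-4))  # True

# Property 1 holds trivially here, since tr(A X A^-1) = tr(X)
print(np.isclose(np.trace(A @ X @ np.linalg.inv(A)), np.trace(X)))  # True
```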
 
Last edited:
  • #137
It makes sense, but isn't entirely obvious. The * here seems to be the same * I introduced in the geometrical context... but we haven't proved much about * in that context either.

I don't mind someone else leading; I'm usually more comfortable playing second fiddle anyways!

Besides, you seem to know the first round of details for representations and I don't, so it'd be better for you to lead that part anyways. :smile:
 
  • #138
Originally posted by Hurkyl
It makes sense, but isn't entirely obvious. The * here seems to be the same * I introduced in the geometrical context... but we haven't proved much about * in that context either.

I don't mind someone else leading; I'm usually more comfortable playing second fiddle anyways!

Besides, you seem to know the first round of details for representations and I don't, so it'd be better for you to lead that part anyways. :smile:

You are still stuck with the job of leading. I am only interjecting this because you asked for someone to cover for you for a moment.
Don't try to wiggle out. I am even fonder of second fiddle than you, and you really are more generally competent. I am reckless at times but do not mistake that for confidence:wink:
Also I flatly deny knowing whatever you are trying to insinuate that I know. However what I do think is that this thread has to be fun! If it is not we should stop whenever.

Come to think of it, I should make proving properties 1, 2, and 3 mentioned above into homework. When you assigned some things about tangent mappings as homework, earlier, I filled in the details. Could you deal with those three properties of &phi;* in some fashion? A line or two of proof or a reference to some page in Hall or whatever seems judicious and perspicacious.

I wonder if Lonewolf is still around and has questions?

OH, ABOUT THE ASTERISK! I realize the ambiguity caused by this usage. Brian Hall uses a squiggly tilde over the phi. But I cannot type this. I tried typing various things and they looked too messy and ad hoc. So I finally concluded that I had to use the asterisk, EVEN THOUGH you had already used it in a diff geometry context as notation for something else.
 
Last edited:
  • #139
I think I see why I'm having difficulties; to take the geometric approach means to work out tons of details that are "obvious" yet nontrivial to prove.


(In the following, all derivatives are to be taken at 0)

Anyways, proofs of properties 1-3. Using the fact that &phi;* is linear and properties of the exponential we remember from earlier:

(1)

&phi;*(AXA-1) = (d/dt) exp(t &phi;*(AXA-1))
= (d/dt) exp(&phi;*(tAXA-1))
= (d/dt) &phi;(exp(tAXA-1))
= (d/dt) &phi;(A exp(tX) A-1)
= &phi;(A) (d/dt)&phi;(exp(tX)) &phi;(A-1)
= &phi;(A) (d/dt)exp(&phi;*(tX)) &phi;(A-1)
= &phi;(A) (d/dt)exp(t&phi;*(X)) &phi;(A-1)
= &phi;(A) &phi;*(X) &phi;(A-1)

(3)

&phi;*(X) = (d/dt) exp(t &phi;*(X))
= (d/dt) exp(&phi;*(tX))
= (d/dt) &phi;(exp(tX))

(&phi;&psi;)*(X) = (d/dt) exp(t (&phi;&psi;)*(X))
= (d/dt) exp((&phi;&psi;)*(tX))
= (d/dt) (&phi;&psi;)(exp(tX))
= (d/dt) (&phi;)(&psi;(exp(tX)))
= (d/dt) (&phi;)(exp(&psi;*(tX)))
= (d/dt) exp(&phi;* &psi;* (tX))
= (d/dt) exp(t &phi;* &psi;* (X))
= &phi;* &psi;* (X)

(2)'s a little messier, I'll get it tomorrow unless Lonewolf polishes it off in the meanwhile.

Anyways, there is no ambiguity in the use of *; it's the exact same operator in both contexts.

In the first identity in problem (3), notice that exp(tX) is a curve with tangent vector X @ t = 0, and &phi;*(X) is defined to be the tangent vector @ t = 0 to the image of exp(tX) under &phi;... that's precisely how we defined (*&phi;) in the geometric context!
 
  • #140
As usual you came through in spades; points 1-3 are proven.
Also you indicate here what is quite true, that we have been chewing over the same material----the exponential map, the logarithm of a matrix (which you defined earlier by a limit as I recall), the one parameter subgroup which is, by golly, a curve, and its derivative or tangent vector at the identity----in various different forms. At least I think we have been doing essentially that for a while. Maybe this theorem 3.18 of Hall can give us a place from which to move onwards.

Originally posted by Hurkyl
...
In the first identity in problem (3), notice that exp(tX) is a curve with tangent vector X @ t = 0, and &phi;*(X) is defined to be the tangent vector @ t = 0 to the image of exp(tX) under &phi;... that's precisely how we defined (*&phi;) in the geometric context!

There are still two parts to theorem 3.18 which I did not ask anyone to prove, and I am going to nonchalantly leave them without proof. Anyone who wants can look it up in Hall.

The unproven parts are:
&phi;* exists and is a unique real linear map: g --> h,

and also that (&phi; o &psi;)* = &phi;* o &psi;*

The proof involves stuff we have already been doing lots of. You define phi-star in a by-now-very-familiar way by saying: take X in g that we want to define phi-star of, and make a one parameter subgroup exp(tX), which you can think of as a curve of matrices in G passing thru the identity matrix,
and use phi to MAP THIS WHOLE CURVE into the matrix group H.
And since phi is a smooth group homomorphism, the image is a nice smooth curve passing thru the identity in the matrix group H.
And then, as destiny decrees, you just look at the tangent vector of that curve up in the tangent space of matrices h, and that is some matrix, and you call THAT matrix = &phi;*(X)

Then you have to check that this map is linear between the two vector spaces (of matrices) g --> h, which just means trying it out with a scalar multiple rX and with a sum X+Y, and you have to check that it is the unique linear map that commutes with exponentiation, namely
&phi;(exp(X)) = exp (&phi;*(X)),
each of which little facts Brian Hall proves in one line on page 42 or 43, in case anyone wants to check up.

Now I think we can move on and see where this theorem and the discussion surrounding it have gotten us. In a way all the theorem does is work matrix multiplication into a familiar geometry picture

the geometry picture is two manifolds and a smooth map phi: M--->N that takes point x --->y

and we add just one thing to the picture namely that M and N are now matrix groups and x and y are the group identities (that is identity matrices) and phi is now a homomorphism----it preserves matrix multiplication.

this is just a tiny embellishment of the basic geometry picture and we want to know what happens with the lifted map of the Tangent spaces Tx ---> Ty

It is only natural to ask what happens when the smooth group homomorphism is lifted to the tangentspace level and the answer is this theorem which says that all is as well behaved as one could wish

not only is the thing linear and uniquely defined and consistent with the exponential map and one parameter subgroups (which are curves thru the identity) but we even get a bonus that the
map commutes with a certain "multiplication-like" operation upstairs called the [X,Y].

phi-star doesn't commute with ordinary matrix multiplication, it commutes with bracket. This is how god and nature tell us that we must endow the tangent space at the group identity with an algebraic structure involving the bracket.

We are predestined to do this because IT, the bracket, is what the lift of a group homomorphism preserves and it does not preserve anything else resembling multiplication.

And it is a linear map on tangent spaces so it preserves addition, so it is telling us what a Lie algebra is, namely vectorspace ops plus bracket----and whatever identities the bracket customarily obeys.

well that's one way to look at it. sorry if I have been long-winded.

now we can try a long jump to theorem 5.4 on page 68, which talks about Lie algebra representations, or else in a more relaxed frame of mind we can scope out some of the followup stuff that comes right after this theorem 3.18

oh, theorem 3.34 about the "complexification" of a real Lie algebra seems like a good thing to mention. Sometimes we might need to drag in complex numbers to get some matrix diagonalized or solve some polynomial or for whatever reason, and there is a regular procedure for complexifying things when and if that is needed

well that is certainly enough said about theorem 3.18


(Hall's Theorem 3.18, restated with some more detail)

Let G and H be matrix Lie groups, with Lie algebras g and h. Let &phi; :G --> H be a Lie group homomorphism. Then there exists a unique real linear map &phi;*: g --> h,
such that for all X in g we have

&phi;(exp(X)) = exp (&phi;*(X)).

Moreover this unique real linear map &phi;* has certain properties:

1. For all X in g and all A in G,
&phi;*(AXA-1) = &phi;(A) &phi;*(X) &phi;(A-1)

2. For all X, Y in g,
&phi;*(XY - YX) = &phi;*(X) &phi;*(Y) - &phi;*(Y)&phi;*(X)

this is to say that &phi;* commutes with taking brackets or the "adjoint" map, whatever, namely
&phi;*([X,Y]) = [&phi;*(X), &phi;*(Y)]


3. The lifted map takes infinitesimal moves into the corresponding infinitesimals, namely,
&phi;*(X) = d/dt|t = 0 &phi;(exp(tX))

and FINALLY that the star operation is compatible with the composition of mappings
(&phi; o &psi;)* = &phi;* o &psi;*
 
What is group theory?

Group theory is a branch of mathematics that deals with the study of groups, which are mathematical structures that consist of a set of elements and a binary operation that combines any two elements to form a third element. It is used to study symmetry and patterns in various fields such as physics, chemistry, and computer science.

What are the basic concepts of group theory?

The basic concepts of group theory include groups, subgroups, cosets, homomorphisms, and isomorphisms. Groups are sets of elements with a binary operation, subgroups are subsets of groups that also form groups, cosets are subsets of groups that are obtained by multiplying a subgroup by a fixed element, homomorphisms are functions that preserve the group structure, and isomorphisms are bijective homomorphisms.

Where can I apply group theory?

Group theory has applications in various fields such as physics, chemistry, computer science, and cryptography. In physics, it is used to study symmetries in physical systems and in particle physics. In chemistry, it is used to study molecular structures and chemical reactions. In computer science, it is used in the design and analysis of algorithms and data structures. In cryptography, it is used to design secure encryption algorithms.

What are some good resources for learning group theory?

There are many resources available for learning group theory, including textbooks, online courses, and video lectures. Some recommended textbooks include "Abstract Algebra" by Dummit and Foote, "A First Course in Abstract Algebra" by Fraleigh, and "Group Theory" by Rotman. Online courses and video lectures can be found on websites such as Coursera, Khan Academy, and YouTube.

What are some important theorems in group theory?

Some important theorems in group theory include Lagrange's theorem, which states that the order of a subgroup must divide the order of the group, the first and second isomorphism theorems, which relate the structure of a group to its subgroups and homomorphisms, and the Sylow theorems, which provide information about the number of subgroups of a given order in a finite group.
