Intro to differential forms

In summary: [itex]v'^Tv'=(Rv)^T(Rv)=v^TR^TRv=v^Tv[/itex], so in this way the concept of length is built into the definition of a vector in Euclidean 3-space. This is not true in general; in general, the transformation law for a vector is simply that it transforms as a tensor.
In summary, don't think of a vector as an arrow between two points. Think of it as a mathematical object that transforms under rotations and parity.
In summary, the conversation is about the prerequisites for understanding Stokes' theorem and how it relates to differential forms and Maxwell's equations. It also touches on the use of differential forms in Yang-Mills gauge theory. The prerequisites include knowledge of
  • #71
Originally posted by chroot
Hmm, but I thought the definition of a Riemannian manifold was that it was locally flat at every point? Don't all manifolds have this property of being locally flat?
all manifolds have the property of being locally euclidean. this is a topological property that has nothing to say about flatness.

I suppose being locally flat just means you can introduce a Euclidean coordinate system anywhere and neglect the curvature; it's still intrinsically present, you're just neglecting it.
the definition of a manifold makes no mention of curvature, no assumptions about curvature, nothing like that. the Riemannian manifold is just a differentiable manifold with a Riemannian metric on it.

the curvature is not constrained.

What sorts of experiments can he do? Besides drawing lines and measuring their lengths?
the curvature tensor tells you how a vector transforms when you carry it around a loop. so to measure curvature, he could draw 1-dimensional loops, parallel transport vectors around them, and see what happens. he would surely find that all vectors remain unchanged.
 
  • #72
Hmmm but how does he draw a closed 1D loop in his 1D space? That's just a line, eh? And it can't be closed. So I guess that's why a 1D space has no intrinsic curvature. It doesn't make sense with the definition of intrinsic curvature.

- Warren
 
  • #73
Originally posted by chroot
Hmmm but how does he draw a closed 1D loop in his 1D space? That's just a line, eh? And it can't be closed. So I guess that's why a 1D space has no intrinsic curvature. It doesn't make sense with the definition of intrinsic curvature.

- Warren
why can t the loop be closed? what about the curvature doesn t make sense?
 
  • #74
Originally posted by lethe
why can t the loop be closed? what about the curvature doesn t make sense?
If you're an ant living in a 1D space, how can you draw a loop in the first place? I mean, how can a loop even exist in 1D? Am I missing something?

- Warren
 
  • #75
Originally posted by chroot
If you're an ant living in a 1D space, how can you draw a loop in the first place? I mean, how can a loop even exist in 1D? Am I missing something?

- Warren

here is a loop on the real line:

[tex]
\gamma(t)=\begin{cases}2t& 0\leq t\leq 1/2\\ 2-2t & 1/2\leq t\leq 1\end{cases}
[/tex]
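as a quick sanity check, here is a tiny python sketch of that same loop; the sample points are arbitrary, it just confirms that the curve starts and ends at the same point:

[code]
# the loop gamma on the real line: out to 1 and back as t runs from 0 to 1
def gamma(t):
    return 2 * t if t <= 0.5 else 2 - 2 * t

# a closed curve: gamma(0) == gamma(1)
print(gamma(0.0), gamma(0.5), gamma(1.0))   # 0.0 1.0 0.0
[/code]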
 
  • #76
Okay, I guess I can't argue that it's a loop, even though it's sort of a "degenerate" one.

So if the concept of a loop is well-defined in 1D space, why then does the curvature tensor always vanish? Sorry, I believe it, I just don't grok it.

- Warren
 
  • #77
Originally posted by chroot
Okay, I guess I can't argue that it's a loop, even though it's sort of a "degenerate" one.

So if the concept of a loop is well-defined in 1D space, why then does the curvature tensor always vanish? Sorry, I believe it, I just don't grok it.

the tangent space is 1-dimensional. any vector has only one direction available, so it cannot rotate.
 
  • #78
Aha, that makes sense now. If our ant on the 1D circular manifold pushes a vector around, there's no way it can rotate (assuming that it can't snap around 180 degrees -- that sort of wacky behavior is impossible in a smooth manifold, I guess?). And since curvature is defined by the angle between a vector and its counterpart after being parallel transported in a small loop, the angle must be zero, so there must be no (intrinsic) curvature.

Okay, got it. Thanks.

- Warren
 
  • #79


Originally posted by lethe
some manifolds admit algebraic structures, and some don t.
...
you do not have to choose coordinates on your manifold to have algebra.
If it doesn't deviate terribly from the main discussion, could you just give the basic requirements of an algebra, specifically to distinguish them from the requirements of a vector space?




Originally posted by lethe
it was you who brought up the issue about the parabola and the line being the same, and so i only mentioned that to make that discussion a little clearer.
I'm just trying to distinguish between a parabola and a line in our context. I thought you were trying to make a point of it in the beginning.




Originally posted by lethe
the dual of the dual of a vector space is canonically isomorphic to the vector space.
That sure isn't obvious using the stack of pancakes notion. Is this a good example of why you don't like it?




Originally posted by lethe
the problem with choosing a basis is that there are many equally good bases to pick from, and there is no "best" basis, so sticking to only one is unnatural. but once i have made this unnatural choice, ...
I'm assuming this was talking about the basis of the vector space, as opposed to the dual space?
 
  • #80


Originally posted by turin
If it doesn't deviate terribly from the main discussion, could you just give the basic requirements of an algebra, specifically to distinguish them from the requirements of a vector space?
an algebra is a vector space that has a vector product.

in general, tangent spaces will just be vector spaces, not algebras, but if the manifold is also a group (a Lie group), then the tangent space at the identity will be an algebra (a Lie algebra). R3 with the vector cross product is an example of this.

just to be clear though: when i said algebra above, i didn t say an algebra. an algebra is a vector space with a vector product. algebra is a more general term, it just means anything having to do with addition and multiplication. so when i said above something about manifolds not having an algebraic structure, i just meant that there is no way to add or multiply, in a consistent way, points on, say, a sphere, whereas there are such notions for linear spaces (by definition).

I'm just trying to distinguish between a parabola and a line in our context. I thought you were trying to make a point of it in the beginning.
its been a while, but i think the reason i wrote that in the beginning was just to show why you need vectors to live in the tangent space, and they cannot, in general, live in the manifold. that was the only point.

i didn t want to imply that the parabola had intrinsic curvature or anything like that. i m beginning to regret even mentioning the word "curved" up there, since i had not defined it yet. it was just supposed to help your intuitive picture, when thinking of vectors and manifolds.


That sure isn't obvious using the stack of pancakes notion. Is this a good example of why you don't like it?
sure, i guess so.


I'm assuming this was talking about the basis of the vector space, as opposed to the dual space?
yeah, i guess that is what i was talking about, but it could work the other way as well.
 
Last edited:
  • #81
An algebra is not a vector space which has a vector product.

Exercise: define vector product properly. Vector product is usually the term reserved for the cross product in three dimensions. There are plenty of three dimensional algebras not isomorphic to R^3 with the vector product.

An algebra is a Ring which is also a vector space.
 
  • #82
Originally posted by matt grime
An algebra is not a vector space which has a vector product.

i suppose this is a semantic argument. i define a vector product to be a bilinear map from a vector space (times itself) into the vector space. under this definition, a vector space with a vector product is an algebra.
Exercise define vector product properly. Vector product is usually the term reserved for the cross product in three dimensions.
i think you should say vector cross product (or simply cross product), when you mean the cross product in R3.

the names of objects in mathematics ought to be descriptive enough to leave no ambiguity, this is my opinion, at least. under this philosophy, vector product, scalar product, cross product, inner product, outer product and dot product are all different, and there is no ambiguity in any of the terms.
there are plenty of three dimensional algebras not isomorphic to R^3 with the vector product.
i would say that R3 with the vector cross product is an example of an algebra. certainly there are other examples.



An algebra is a Ring which is also a vector space.
this definition also works. of course, the ring is not as familiar to people as the vector space, so i prefer my definition. but it is a matter of taste.
 
  • #83
How about, an algebra is a vector space which has (completely outside its VS structure) a distributive product. Very often physicists will define an algebra just by defining the product, since the underlying vector space is "obvious".

In the case of 3D vectors, of course, the cross product is really an outer product (Grassmann style) and its result is not exactly a true (polar) vector, but an axial vector that behaves differently under parity operations. This distinction had an important role in physicists' attempts to understand the weak force.

The true algebra that contains the 3D vectors is the quaternions.
 
  • #84
Originally posted by selfAdjoint
How about, an algebra is a vector space which has (completely outside its VS structure) a distributive product.
but, as i am sure you are aware, the dot product is distributive, but an inner product space is not an algebra. i think distributivity isn t worth mentioning. in my world, if its not distributive, it isn t a product. so the word "product", for me, contains the information "bilinear" and "distributive". what does need to be mentioned to distinguish it from other products is that it is vector valued

In the case of 3D vectors, of course, the cross product is really an outer product (Grassmann style)
i don t quite agree with this. there is a sense in which the cross product can be thought of as a Grassmann product (which i call a "wedge product", not an "outer product". for me, "outer product" is synonymous with "tensor product"), but the Grassmann algebra is certainly not isomorphic to R3 as an algebra. for example, in R3, you have (ixj)xi=j whereas in the Grassmann algebra, you have (ixj)xi=0. not isomorphic. if you toss the Hodge dual in there in the appropriate place, then you can make an isomorphism.

what R3 is isomorphic to is the Lie algebra [itex]\mathfrak{so}(3)[/itex]
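if you want to check those cross product claims concretely, here is a small numpy sketch (the extra test vectors are made up); it verifies (ixj)xi=j and the antisymmetry and Jacobi identity that make R3 with the cross product a Lie algebra:

[code]
import numpy as np

i, j, k = np.eye(3)                  # the standard basis of R^3

# (i x j) x i = k x i = j, as claimed above
print(np.cross(np.cross(i, j), i))   # [0. 1. 0.]

# antisymmetry and the Jacobi identity, the hallmarks of a Lie algebra
a, b, c = np.array([1., 2., 0.]), np.array([0., 1., 3.]), np.array([2., 0., 1.])
print(np.allclose(np.cross(a, b), -np.cross(b, a)))     # True
jacobi = (np.cross(a, np.cross(b, c))
          + np.cross(b, np.cross(c, a))
          + np.cross(c, np.cross(a, b)))
print(np.allclose(jacobi, 0))                           # True
[/code]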
 
Last edited:
  • #85
2-forms

RDT2 is around, let me see if i can't post some more on this thread. this is still taken from the thread at sciforums.

now... where were we. ah, yes. we had just finished building the 2-forms, and we re about ready to move to more general p-forms.

but before we leave the 2-forms, let s find a basis for them, and look at their coordinate representation. it should be obvious how to do that, right? we built our 2-form from two 1-forms, so we should be able to build a basis for our 2-forms from the basis for our 1-forms. let s recall what that was (6):

[tex]df=\partial_\nu f dx^\nu[/tex]

let s take two of those, and wedge them together:

[tex]df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu\wedge dx^\nu
[/tex]


here the advantages of using the einstein summation notation become more clear. when you are multiplying two (or more) long summations, carrying around a lot of extra sigmas can get quite unwieldy.

this isn t quite a proof that [itex]dx^\nu\wedge dx^\mu[/itex] is the basis, i.e. that any alternating second rank tensor can be written as such a sum, but it should be convincing at any rate.

now that we ve found a basis, let s count the dimension of this vector space, the space of all 2-forms. remember, the dimension of a vector space is just the number of elements in the basis. so how many independent [itex]dx^\nu\wedge dx^\mu[/itex] are there? well there are [itex]N[/itex] different [itex]dx^\nu[/itex] and [itex]N[/itex] different [itex]dx^\mu[/itex], so there should be [itex]N^2[/itex] ways to write the product, where [itex]N[/itex] is the dimension of the manifold, and the tangent vector space, and the cotangent space.

so the dimension of [itex]\bigwedge^2T^*M_p[/itex] is [itex]N^2[/itex]. right?

not so fast, hot shot! there may be [itex]N^2[/itex] ways to write that product, but they are not all linearly independent. remember the properties of 2-forms: [itex]dx^\nu\wedge dx^\mu = -dx^\mu\wedge dx^\nu[/itex]. so we don t want to count this guy twice. furthermore [itex]dx^\nu\wedge dx^\nu = 0[/itex]! so we definitely don t want to count those cases when [itex]\nu=\mu[/itex]. so when counting the basis elements we should only count those for which, say, [itex]\nu < \mu[/itex]

if you like combinatorics, you can work out the formula. i don t really, so i m just going to say the answer: there are [itex]N(N-1)/2[/itex] linearly independent basis 2-forms, so that is the dimension of [itex]\bigwedge^2T^*M_p[/itex]. that formula might look familiar to some of you, it is [itex]{}_NC_2[/itex] (N choose 2).
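here is a little python sketch of this counting (the gradient components are made up and N = 4 is picked arbitrarily): it assembles the antisymmetrized coefficients of [itex]df\wedge dg[/itex] and counts the independent basis 2-forms [itex]dx^\mu\wedge dx^\nu[/itex] with [itex]\mu<\nu[/itex]:

[code]
import numpy as np
from itertools import combinations

N = 4                                    # dimension of the manifold (arbitrary)
df = np.array([1., 2., 0., 3.])          # made-up components  d_mu f
dg = np.array([0., 1., 4., 1.])          # made-up components  d_nu g

# coefficient of dx^mu ^ dx^nu (mu < nu) in df ^ dg, after using antisymmetry
coeff = np.outer(df, dg) - np.outer(dg, df)

basis = list(combinations(range(N), 2))  # the independent dx^mu ^ dx^nu, mu < nu
print(len(basis), N * (N - 1) // 2)      # 6 6  -- the dimension of the 2-forms
for mu, nu in basis:
    print(f"dx^{mu} ^ dx^{nu}:", coeff[mu, nu])
[/code]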
 
Last edited:
  • #86


Originally posted by lethe
... we should be able to build a basis for our 2-forms from the basis for our 1-forms. let s recall what that was (6):

[tex]df=\partial_\nu f dx^\nu[/tex]
I still don't understand how this gives us a basis. Does this notation imply something more specific than generalized coordinates? This is just the total derivative of a multivariable function, right?




Originally posted by lethe
let s take two of those, and wedge them together:

[tex]df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu dx^\nu
[/tex]
Two of whats? they look like scalars to me since there is a contraction. Is there a previous post in which you explain how these would be vectors? I'm so confused. I read your post in which you introduced vectors, but they look like scalars to me.




Originally posted by lethe
this isn t quite a proof that [itex]dx^\nu\wedge dx^\mu[/itex] is the basis, i.e. that any alternating second rank tensor can be written as such a sum, ...
I don't see how the [itex]dx^\nu\wedge dx^\mu[/itex] shows up.
 
Last edited:
  • #87


Originally posted by turin
I still don't understand how this gives us a basis. Does this notation imply something more specific than generalized coordinates? This is just the total derivative of a multivariable function, right?
no, this notation implies nothing beyond the representation of a 1-form in terms of some general coordinates. your manifold has coordinates, which in turn yield a basis for the tangent space ([itex]\{\partial/\partial x^\mu\}[/itex]), which in turn induces a basis for the dual space ([itex]\{dx^\mu\}[/itex]). for this dual space, as for any vector space, expressing any vector in terms of the basis means finding a linear combination of the basis vectors that equals the vector in question. in this case, it is [itex]df=\partial_\mu f dx^\mu[/itex]. [itex]df[/itex] is the vector, [itex]\partial_\mu f[/itex] are the coefficients, and [itex]dx^\mu[/itex] are the basis vectors

this formula does look like the formula one learns in elementary calculus for the derivative of a function. there we have the chain rule [itex]df/dt=(\partial f/\partial x^\mu)(dx^\mu/dt)[/itex]. if we "multiply" both sides of this equation by [itex]dt[/itex], then we get the above formula. of course, this multiplication step is invalid, since in elementary calculus, we have no object called [itex]dt[/itex], and [itex]dx^\mu/dt[/itex] is not a fraction, but a single object. but the similarity is no coincidence. it looks this way because the exterior derivative really is a kind of derivative, and so has to include the chain rule of elementary calculus.
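to make that concrete, here is a small sympy sketch (the function f is made up) that just computes the components [itex]\partial_\mu f[/itex] of [itex]df[/itex] on a 3-dimensional chart:

[code]
import sympy as sp

x, y, z = sp.symbols('x y z')          # coordinates on some chart
f = x**2 * y + sp.sin(z)               # an arbitrary function on the manifold

# the components of the 1-form df in the basis {dx, dy, dz}
components = {str(c): sp.diff(f, c) for c in (x, y, z)}
print(components)    # {'x': 2*x*y, 'y': x**2, 'z': cos(z)}
[/code]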




Two of whats? they look like scalars to me since there is a contraction. Is there a previous post in which you explain how these would be vectors? I'm so confused. I read your post in which you introduced vectors, but they look like scalars to me.
this is an excellent question. you probably have learned the following rule of thumb: anything with no indices is a scalar, anything with one index is a vector, anything with more indices is a tensor.

this rule is nonsense. or at least, it is only true about the coordinate components of those objects, and not the objects themselves.

here is a better rule: any geometric object, which has a coordinate independent meaning, cannot have any indices (since indices indicate dependence on your choice of coordinates). scalars, vectors, tensors, 1-forms, and anything else worth talking about, are all geometric objects, with coordinate independent existences. therefore they cannot have indices.

in the expression [itex]\mathbf{v}=v^\mu\partial_\mu[/itex], [itex]v^\mu[/itex] is the coefficient of my vector in terms of some basis. this number depends on my choice of basis (i.e. my choice of coordinates) and any coordinate dependent object should carry an index. but it is not a vector, it is only a component of a vector. you probably learned that anything with a raised index is a (contravariant) vector. part of my goal with this stuff is to teach you why that picture is misleading (and why the word contravariant is a mistake in this context).

[itex]\partial_\mu[/itex] also carries an index. but this one is lowered. does that mean that it is a (covariant) vector? no it doesn t! it is still a tangent vector. the real vector is [itex]\mathbf{v}=v^\mu\mathbf{\partial_\mu}=v^1\mathbf{\partial_1}+v^2\mathbf{\partial_2}+...+v^n\mathbf{\partial_n}[/itex] which carries no indices! to sum up, the point is: anything with an index cannot have any intrinsic meaning, since it is a coordinate dependent object.

in some other post, i will explain why the words covariant and contravariant are exactly backwards. above, i used contravariant to mean "having a raised index, like the coordinates", and covariant to mean "having a lowered index, like the derivative", since this is how it is usually taught to physicists. i will never use those words in that sense again, and for me, the words are actually switched.

I don't see how the [itex]dx^\nu\wedge dx^\mu[/itex] shows up.
well, the [itex]dx^\nu\wedge dx^\mu[/itex] comes from [itex]df\wedge dg[/itex] by linearity on the basis vectors. i actually slipped up in one of the equations in my post, and forgot to include the [itex]\wedge[/itex]. maybe it is clearer now? the point is, [itex]df[/itex] and [itex]dg[/itex] both have coordinate representations in terms of our basis vectors [itex]dx^\mu[/itex], so i pull the coefficients of the two 1-forms in this wedge product out front (since the wedge product is linear), and am left with a sum over the wedge products of the basis vectors.

note that the coefficients of the 2-form have two indices ([itex]\partial_\mu f\partial_\nu g[/itex]), and the basis (co-)vectors also have two indices ([itex]dx^\mu\wedge dx^\nu[/itex]), but the expression for the entire 2-form has both indices contracted, and so carries no indices. but it is certainly not a scalar!
 
Last edited:
  • #88


I HATE ITEX!
I will type the corresponding html next to the itex.
OK, I will delete all itex crap and never use it again. It keeps changing on me.

Originally posted by lethe
... you probably have learned the following rule of thumb: anything with no indices is a scalar, anything with one index is a vector, anything with more indices is a tensor.

this rule is nonsense.
I have this appreciation. My major prof adamantly declares that a tensor is defined by its transformation properties. If this is true, then it is obvious that the rule of thumb you mention here is nonsense. If it is a bad way to think of it in terms of the transformation properties (I seem to vaguely remember you discouraging this way of thinking), then please remind me.

I wasn't thinking that the index free quality indicated scalar-ness. I was more concerned with the apparent contraction of two apparent vectors.
According to my major prof:
- the contraction of two rank 1 tensors (vectors) is a rank 0 tensor (scalar)
- &part;&mu; and dx&mu; are rank 1 tensors (at least in Minkowski space-time).
From this I infer:
- the object in question still looks like a scalar to me, unless I radically change my understanding of tensors.

What about the proper time interval: d&tau;2 = dx&mu;dx&mu;?

It seems like there are two inconsistent ways of looking at it:
- either this is a contraction and therefore a scalar
- or this is a 1-form with components dx&mu;.

Do the components of any 1-form form a vector basis, and do the components of any vector form a covector basis?




Originally posted by lethe
in the expression [itex]\mathbf{v}=v^\mu\partial_\mu[/itex], [itex]v^\mu[/itex] is the coefficient of my vector in terms of some basis. this number depends on my choice of basis (i.e. my choice of coordinates) and any coordinate dependent object should carry an index. but it is not a vector, it is only a component of a vector.
My major prof would say that it is OK to call v&mu; a vector because it implies all of the components (and I guess because it implies the basis?). What say you? I don't want to be picky, just trying to get a handle on the different notational formalisms.




Originally posted by lethe
i actually slipped up in one of the equations in my post, and forgot to include the [itex]\wedge[/itex]. maybe it is clearer now?
Ya. If you meant for that wedge to be in there, then I get it. Again, I'm not trying to be picky, but I have been given the impression lately that these kinds of notational issues are important.




Originally posted by lethe
... [itex]df[/itex] and [itex]dg[/itex] both have coordinate representations in terms of our basis vectors [itex]dx^\mu[/itex], ...
I thought the basis vectors were &part;&mu; and that the 1-forms were dx&mu;. Did you mean &part;&mu; here?
 
Last edited:
  • #89


Originally posted by turin
I HATE ITEX!
I will type the corresponding html next to the itex.
well, feel free to use html. i actually preferred the html, since it seems to fit more nicely with the text, however it doesn t display on some peoples browsers, and of course it can t do as much stuff as tex.

I have this appreciation. My major prof. adamantly declares that a tensor is defined by its transformation properties. If this is true, then it is obvious that the rule of thumb you mention here is nonsense. If it is a bad way to think of it in terms of the transformation properties (I seem to vaguely remember you discouraging this way of thinking), then please remind me.
physicists definition of a tensor:
an object with r raised and s lowered indices (and therefore a coordinate dependent object) that transforms in such and such a way when you change coordinates

mathematicians definition of a tensor:
a tensor product of r vectors and s covectors. the mathematicians definition of a vector and covector makes no reference to coordinates, and thus neither does the definition of a tensor. it is an exercise for the reader in most math books to check that when you look at the coordinate components of a tensor, they transform in the physicists way when you change your choice of coordinates.

you take your pick as to which definition. it is nice to understand both definitions, and then one doesn t have to adamantly adhere to one or the other. but certainly one can have a preference, mine is the mathematicians definition.

I wasn't thinking that the index free quality indicated scalar-ness. I was more concerned with the apparent contraction of two apparent vectors. According to my major prof:
- the contraction of two rank 1 tensors (vectors) is a rank 0 tensor (scalar)
you cannot contract two (1,0) rank tensors. you can only contract a (1,0) tensor with a (0,1) tensor. of course, since the metric (if you are doing Riemannian geometry, known to physicists as relativity) and the symplectic form (if you are doing symplectic geometry, known to physicists as classical mechanics) are both nondegenerate, you can always convert one of your (1,0) rank tensors into a (0,1) rank tensor and then contract them (known to physicists as raising and lowering indices). but in the absence of a metric or symplectic form, there is no canonical isomorphism between the vector space of tangent vectors ((1,0) rank tensors) and the vector space of covectors/dual vectors ((0,1) rank tensors), and therefore you cannot contract them.
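here is a minimal numpy sketch of that last point, in components (the metric is taken to be flat Minkowski just for familiarity, and the vectors are made up): to contract two vectors you first lower an index with the metric:

[code]
import numpy as np

eta = np.diag([-1., 1., 1., 1.])      # Minkowski metric components g_{mu nu}
v = np.array([1., 2., 0., 0.])        # components v^mu of a (1,0) tensor
w = np.array([3., 1., 1., 0.])        # components w^mu of another (1,0) tensor

w_lower = eta @ w                     # w_mu = g_{mu nu} w^nu : now a (0,1) tensor
print(v @ w_lower)                    # the contraction v^mu w_mu  -> -1.0
[/code]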

if you would like any of those terms explained further, please ask.

- [itex]\partial_\mu[/itex] (that is, &part;&mu;) and [itex]dx^\mu[/itex] (that is, dx&mu;) are tensors (at least in Minkowski space-time)
therefore:
yes indeed, those are both tensors (they are basis tensors, and therefore coordinate dependent). also, none of this is particular to Minkowski space.

- the object in question still looks like a scalar to me, unless I radically change my understanding of tensors.
yes indeed, the contraction of [itex]dx^\mu[/itex] and [itex]\mathbf{\partial_\mu}[/itex] is indeed a scalar. in fact, [itex]dx^\mu(\mathbf{\partial}_\nu)=\delta^\mu_\nu[/itex], so each basis 1-form applied to its own dual basis vector gives 1.

What about the proper time interval:

[itex]d\tau^2 = dx_\mu dx^\mu[/itex] (that is, d&tau;2 = dx&mu;dx&mu;)?

It seems like there are two inconsistent ways of looking at it:
- either this is a contraction and therefore a scalar
contraction makes a scalar if the contracted objects are a (1,0) tensor and a (0,1) tensor. that is not the case here, so that thing is not a scalar. the fact that it doesn t carry any indices indicates that it is a geometric object, independent of coordinates.

- or this is a 1-form with components [itex]dx_\mu[/itex] (that is, dx&mu;).
it is also not a 1-form. it is a tensor product of 2 1-forms. if it were also antisymmetric, i would call it a 2-form, but it is not antisymmetric, so i will call it a (0,2) rank tensor.

but you could have figured out that it was a (0,2) rank tensor just by looking at its coordinate components [itex]g_{\mu\nu}[/itex]. 2 lowered indices on the coordinate components = (0,2) rank tensor.

Do the components of any 1-form form a vector basis, and do the components of any vector form a covector basis?
components do not form a basis, since they are not vectors. the components of a 1-form happen to transform like the basis vectors of tangent space, and components of the tangent vector happen to transform like the basis vectors of the cotangent vector space, but this does not mean that the components are themselves vectors

in fact, this point of confusion is exactly the reason that i dislike the physicists definition of a tensor. you become confused about what is a vector, what is a covector, and what are just components.

i said above "happen to transform", but it is no coincidence. recall that any vector (for any vector space. i m thinking linear algebra here) can be written like this:

[tex]\mathbf{x}=x_1\mathbf{e}_1+x_2\mathbf{e}_2+...x_n\mathbf{e}_n[/tex]

here, [itex]\mathbf{x}[/itex] is a vector, which exists independently of any basis, but has different components in different bases. the components live in some field (sometimes these guys are called scalars in math class, but i won t use that word here. for physicists, scalar means something that is invariant under coordinate transformations). thus the components are not vectors. the basis vectors are vectors, but they also depend on your choice of basis (obviously).

if you make a change of basis, you can achieve this by multiplying the basis vectors by some matrix to get a new basis. then you multiply the components by the inverse of that matrix to get the components of the vector in the new basis. the vector itself picks up matrix times matrix^-1, and thus doesn t change. it is independent of your choice of basis. it is only the components that depend on your choice of basis, and they change in the opposite way that the basis vectors themselves change.

this is why the components of a tangent vector transform like the basis vectors of the cotangent space.
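here is a small numpy sketch of that last paragraph (the basis, the change-of-basis matrix, and the components are all made up): the basis vectors get multiplied by a matrix, the components get multiplied by its inverse, and the vector itself stays put:

[code]
import numpy as np

E = np.eye(3)                    # columns are the old basis vectors e_i
comp = np.array([2., -1., 3.])   # components of the vector in the old basis
A = np.array([[1., 1., 0.],
              [0., 2., 0.],
              [1., 0., 1.]])     # some invertible change-of-basis matrix

E_new = E @ A                        # new basis vectors: old ones times A
comp_new = np.linalg.inv(A) @ comp   # components transform with the inverse

# the vector itself is basis independent
print(np.allclose(E @ comp, E_new @ comp_new))   # True
[/code]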




My major prof. would say that it is OK to call [itex]v^\mu[/itex] (that is, v&mu;) a vector because it implies all of the components (and I guess because it implies the basis?). What say you? I don't want to be picky, just trying to get a handle on the different notational formalisms.
yeah, all physicists do this. it is fine to call [itex]v^\mu[/itex] a vector. in fact, i do it myself whenever i am doing physics. but just keep in the back of your head that [itex]v^\mu[/itex] are really the components of a vector; strictly speaking, they are not the vector itself. since, in physics, we only ever deal with components, we can let the components stand in for the vector in our minds. but be aware that doing so will lead to confusion when you try to do this in a math class. and when it comes time to start doing non-Abelian gauge theory, you will wish you were in the math camp, instead of the physics camp.




Ya. If you meant for that wedge to be in there, then I get it. Again, I'm not trying to be picky, but I have been given the impression lately that these kinds of notational issues are important.
indeed they are (in my opinion)


I thought the basis vectors were [itex]\partial_\mu[/itex] (that is, &part;&mu;) and that the 1-forms were [itex]dx^\mu[/itex] (that is, dx&mu;). Did you mean [itex]\partial_\mu[/itex] (that is, &part;&mu;) here?
no.

[itex]\mathbf{\partial}_\mu[/itex] are the basis vectors for the tangent space, and [itex]dx^\mu[/itex] are the basis vectors for the cotangent space. since [itex]df[/itex] and [itex]dg[/itex] are 1-forms (by definition, a 1-form is a member of the cotangent space), they can be written in terms of the basis of that space. of course, since the basis vectors of any vector space are themselves members of that vector space, [itex]dx^\mu[/itex] is itself a 1-form, it is a basis 1-form. but this 1-form depends on your coordinates. and likewise for [itex]\mathbf{\partial}_\mu[/itex]
 
Last edited:
  • #90


Holy crap!

Originally posted by lethe
... and the symplectic form (if you are doing symplectic geometry, known to physicists as classical mechanics) are both nondegenerate, you can always convert one of your (1,0) rank tensors into a (0,1) rank tensor and then contract them (known to physicists as raising and lowering indices).
Ya, uh, question. What is "symplectic form?"




Originally posted by lethe
... the contraction of [itex]dx^\mu[/itex] and [itex]\mathbf{\partial_\mu}[/itex] is indeed a scalar.
This seems to contradict the definition of &part;&mu;f dx&mu; as a vector. Is a vector the same thing as a scalar in math land?




Originally posted by lethe
contraction makes a scalar if the contracted objects are a (1,0) tensor and a (0,1) tensor. that is not the case here, so that thing is not a scalar. the fact that it doesn t carry any indices indicates that it is a geometric object, independent of coordinates.
OK, so a (1,0) tensor is not synonymous with a contravariant vector, nor is a (0,1) tensor synonymous with a covariant vector?

I have also been told rather emphatically that the metric is a scalar because it does not get transformed by a Lorentz transformation. Not true?




Originally posted by lethe
it is also not a 1-form. it is a tensor product of 2 1-forms. if it were also antisymmetric, i would call it a 2-form, but it is not antisymmetric, so i will call it a (0,2) rank tensor.
By writing one of the indices as a subscript, contraction with the metric tensor is already implied, and the dx&mu; with the lowered index is supposed to be the covariant form of dx&mu; with the raised index. Is this just a matter of confusing terminology? I think it may be deeper than terminology and notation, because I would have sworn yesterday that d&tau;2 was a scalar, and that g&mu;&nu; was a second rank tensor. You're starting to scare me.




Originally posted by lethe
but you could have figured out that it was a (0,2) rank tensor just by looking at its coordinate components [itex]g_{\mu\nu}[/itex]. 2 lowered indices on the coordinate components = (0,2) rank tensor.
I understand that dx&mu;dx&nu; is a second rank tensor (and so, I guess that means a (0,2) tensor?). But dx&beta;dx&beta;?




Originally posted by lethe
and when it comes time to start doing non-Abelian gauge theory, you will wish you were in the math camp, instead of the physics camp.
I wish I was in the math camp right now. I am starting to think that physics is teaching me bad habits.




Originally posted by lethe
since [itex]df[/itex] and [itex]dg[/itex] are 1-forms (by definition, a 1-form is a member of the cotangent space), they can be written in terms of the basis of that space.
How do you know that they are 1-forms and not vectors? What is wrong with saying that df is a vector, and in the &part;&mu;f basis, it has components dx&mu;?
 
Last edited:
  • #91


Originally posted by turin
Holy crap!

Ya, uh, question. What is "symplectic form?"
a symplectic form is a nondegenerate closed 2-form. in the context of classical mechanics, it is related to the Poisson bracket (which is, as you know, antisymmetric)
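to make that connection a little more concrete, here is a small sympy sketch (the hamiltonian is made up, and sign conventions vary): the canonical symplectic form [itex]dq\wedge dp[/itex] on a 2-dimensional phase space gives you this Poisson bracket:

[code]
import sympy as sp

q, p = sp.symbols('q p')

def poisson(f, g):
    # Poisson bracket {f, g} on a 2-dimensional phase space with coordinates (q, p);
    # this is the bracket the canonical symplectic form dq ^ dp induces
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

H = p**2 / 2 + q**2 / 2      # a made-up hamiltonian (harmonic oscillator)
print(poisson(q, p))         # 1:  the canonical bracket
print(poisson(q, H))         # p:  Hamilton's equation  dq/dt = {q, H}
print(poisson(p, H))         # -q: Hamilton's equation  dp/dt = {p, H}
[/code]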


This seems to contradict the definition of &part;&mu;f dx&mu; as a vector. Is a vector the same thing as a scalar in math land?
no, in math land, vectors and scalars are, of course, different.

i think where we are getting confused is this: [itex]dx^\mu[/itex] are the basis 1-forms. they are dual to the basis vectors [itex]\mathbf{\partial}_\mu[/itex] which means that you feed a basis vector to a basis dual vector and get a number [itex]dx^\mu(\mathbf{\partial}_\nu)=\delta^\mu_\nu[/itex]. make special note of the fact that i made those basis vectors bold. so here, i contract a dual vector with a vector and get a scalar.

now let's recall how i defined vectors in the first place: they are derivations on the algebra of functions. in other words, they are differential operators (this is why i use the symbol [itex]\mathbf{\partial}[/itex] for the basis vectors. it makes them look like differential operators).

so a bold [itex]\partial[/itex] represents a tangent (basis) vector.

the exterior derivative of a function was defined by [itex]df(\mathbf{v})=\mathbf{v}(f)[/itex], from which we derived that the components of [itex]df[/itex] are [itex]\partial_\mu f[/itex]. notice that i didn t put my [itex]\partial[/itex] in bold. because [itex]\partial[/itex] by itself is a tangent vector, it should be bold. but [itex]\partial_\mu f[/itex] is just a number, a component of a 1-form. a scalar (in the math sense of the word, not the physics sense; it is not invariant under coordinate transformations)

so when you contract [itex]\partial_\mu f[/itex] with [itex]dx^\mu[/itex], you are just taking a linear combination of 1-forms, and so you end up with a 1-form (not a scalar)

when you contract [itex]dx^\mu[/itex] with [itex]\mathbf{\partial}_\mu[/itex], you are letting your 1-form eat your vector, and since 1-forms (by definition) eat vectors and spit out scalars, you get a scalar (here these are scalars in the math sense and the physics sense)

note: i am not very strict or consistent about requiring my vectors to be bold, but i m hoping in this case, it will help.


OK, so a (1,0) tensor is not synonymous with a contravariant vector, nor is a (0,1) tensor synonymous with a covariant vector?
well, that depends. are you using the words contravariant and covariant according to the physics convention? if so, then a (1,0) is a contravariant tensor, and a (0,1) tensor is a covariant tensor, as you say.


I have also been told rather emphatically that the metric is a scalar because it does not get transformed by a Lorentz transformation. Not true?
this is simply not true. not even for a physicist. whoever told you that was just wrong.

according to the math convention, the metric is a coordinate independent object, and it does not transform under coordinate transformations. but in the math convention, this does not make something a scalar. so the statement is not correct.

in the physics convention, we say something is a scalar if it is invariant under coordinate transformations, and i just finished saying the metric is coordinate independent, right?

no, not right. in this physics convention, when we say metric, we don t mean the coordinate independent (0,2) tensor, we mean its coordinate components [itex]g_{\mu\nu}[/itex] and these certainly do transform under coordinate transformations.

so i can t think of any interpretation in which that statement makes any sense.



By writing one of the indices as a subscript, contraction with the metric tensor is already implied, and the dx&mu; with the lowered index is supposed to be the covariant form of dx&mu; with the raised index. Is this just a matter of confusing terminology?
in math land, there is no object [itex]dx_\mu[/itex]. that is physics shorthand for [itex]g_{\mu\nu}dx^\nu[/itex]. there is also no object [itex]dx^\mu dx^\nu[/itex], that is physics shorthand for [itex]dx^\mu\otimes dx^\nu[/itex]. putting the two statements together, i have [itex]g(\cdot,\cdot)=g_{\mu\nu}dx^\mu\otimes dx^\nu[/itex]. this is the metric, written in coordinate components, in terms of the [itex]dx^\mu\otimes dx^\nu[/itex] basis for the (0,2) tensors.


I think it may be deeper than terminology and notation, because I would have sworn yesterday that d&tau;2 was a scalar, and that g&mu;&nu; was a second rank tensor. You're starting to scare me.

but i suppose if you mean [itex]ds^2[/itex] to be the invariant distance between two very close points, then this is indeed a scalar. but here, in math land, [itex]dx^\mu[/itex] does not mean distance between neighboring points (such a concept has no meaning; there is always a finite separation, and to find the distance, you must integrate)
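just to illustrate the "you must integrate" remark, here is a small numpy sketch (the curve and the metric are made up, plain euclidean in the plane): the length of a curve comes from integrating [itex]\sqrt{g_{\mu\nu}\frac{dx^\mu}{dt}\frac{dx^\nu}{dt}}\,dt[/itex] along it:

[code]
import numpy as np

g = np.eye(2)                            # euclidean metric components in the plane
t = np.linspace(0.0, 2 * np.pi, 2001)
x = np.stack([np.cos(t), np.sin(t)])     # a made-up curve: the unit circle
dxdt = np.gradient(x, t, axis=1)         # tangent components dx^mu/dt along the curve

speed = np.sqrt(np.einsum('it,ij,jt->t', dxdt, g, dxdt))
length = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))   # trapezoid rule
print(length)                            # ~ 6.283..., i.e. 2*pi
[/code]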


I understand that dx&mu;dx&nu; is a second rank tensor (and so, I guess that means a (0,2) tensor?). But dx&beta;dx&beta;?
like i said above, that is just physics shorthand for something else.


I wish I was in the math camp right now. I am starting to think that physics is teaching me bad habits.




How do you know that they are 1-forms and not vectors? What is wrong with saying that df is a vector, and in the &part;&mu;f basis, it has components dx&mu;?

well... i guess you can say that if you want, but in this thread, we are going to use my definitions, not yours. but let me point out why i think your choice of notation sucks: the notation [itex]\partial_\mu f[/itex] suggests that this object depends on some function [itex]f[/itex]. but basis vectors should only depend on your choice of coordinates, not on some function. also, the components of the vector associated with [itex]f[/itex] should depend on [itex]f[/itex], and your notation doesn t show that.
 
Last edited:
  • #92


Originally posted by lethe
a symplectic form is a nondegenerate closed 2-form.
I guess I will need to know what degenerate means in this context. Does it mean that it gives a unique value for every unique pair of vectors that you feed it?




Originally posted by lethe
make special note of the fact that i made those basis vectors bold.

so a bold [itex]\partial[/itex] represents a tangent (basis) vector.

notice that i didn t put my [itex]\partial[/itex] in bold. because [itex]\partial[/itex] by itself is a tangent vector, it should be bold. but [itex]\partial_\mu f[/itex] is just a number, a component of a 1-form. a scalar (in the math sense of the word, not the physics sense; it is not invariant under coordinate transformations)

so when you contract [itex]\partial_\mu f[/itex] with [itex]dx^\mu[/itex], you just taking a linear combination of 1-forms, and so you end up with a 1-form (not a scalar)

when you contract [itex]dx^\mu[/itex] with [itex]\mathbf{\partial}_\mu[/itex], you are letting your 1-form eat your vector, and since 1-forms (by definition) eat vectors and spit out scalars (here these are scalars in the math sense and the physics sense)
I'm not quite following the distinction between bold and not-bold. Is the difference whether or not the partial derivative operates on a function?




Originally posted by lethe
... i use the symbol [itex]\mathbf{\partial}[/itex] for the basis vectors. it makes them look like differential operators.
I don't understand what you mean by this. If they are in fact differential operators, then of course you would use the differential operator symbol? Are they not really differential operators, but they just act a lot like them?




Originally posted by lethe
in this physics convention, when we say metric, we don t mean the coordinate independent (0,2) tensor, we mean its coordinate components [itex]g_{\mu\nu}[/itex] and these certainly do transform under coordinate transformations.
I thought that "metric" meant "ds2" in physics. At any rate, that's what I meant by it. So, is ds2 a (0,2) tensor? I think I'm starting to see how it could be. It returns a math_scalar value when you feed it two 1-forms, but if you don't feed it anything, then it doesn't even make sense.




Originally posted by lethe
according to the math convention, the metric is a coordinate independent object, and it does not transform under coordinate transformations. but in the math convention, this does not make something a scalar. so the statement is not correct.
Can you give the math definition of scalar?




Originally posted by lethe
in math land, there is no object [itex]dx_\mu[/itex]. that is physics shorthand for [itex]g_{\mu\nu}dx^\nu[/itex].
Is not the g&mu;&nu;dx&nu; a 1-form? Are you saying that g&mu;&nu;dx&nu; is also mathematically meaningless?




Originally posted by lethe
... in this thread, we are going to use my definitions, not yours.

but let me point out why i think your choice of notation sucks:
Hey, whoa. Sorry, there, partner. I didn't mean to offend you or stomp on your territory unwarranted. I just want to figure out how the notation and stuff that I have learned fits here, because some of the things that have been said seemed contradictory. I in no way intended to imply that you were contradicting yourself, and I apologize if I have invited such an interpretation.




Originally posted by lethe
the notation [itex]\partial_\mu f[/itex] suggests that this object depends on some function [itex]f[/itex]. but basis vectors should only depend on your choice of coordinates, not on some function.
I seem to remember the definition of coordinates as functions. Are they a different kind of function or what? By choosing coordinates, have you not chosen functions?




Originally posted by lethe
also, the components of the vector associated with [itex]f[/itex] should depend on [itex]f[/itex], and your notation doesn t show that.
What does it mean for a vector to be associated with f? Is this related to the selection of the dual space?
 
  • #93


Originally posted by turin
I guess I will need to know what degenerate means in this context. Does it mean that it gives a unique value for every unique pair of vectors that you feed it?
a bilinear form [itex]B(\cdot,\cdot)[/itex] is degenerate if there exists some vector [itex]\mathbf{v}\neq 0[/itex] such that [itex]B(\mathbf{v},\mathbf{w})=0[/itex] for all [itex]\mathbf{w}\in V[/itex]. if you think of the bilinear form as a matrix (if [itex]V[/itex] is finite dimensional, then you can always do this), then this is equivalent to saying that its determinant is zero.

the existence of a nondegenerate bilinear form on a vector space gives you an isomorphism between the vector space and its dual space. it is this isomorphism that allows you to pretend that vectors with raised and lowered indices are just different names for the same thing. in the absence of this bilinear form, you cannot do this. in Riemannian geometry, the bilinear form is the metric.
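in matrix language, here is a tiny numpy illustration (both matrices are made up): a degenerate bilinear form has zero determinant and kills some nonzero vector, while a nondegenerate one gives an invertible map between components of vectors and components of covectors:

[code]
import numpy as np

B_degenerate = np.array([[1., 1.],
                         [1., 1.]])           # determinant 0
B_nondegenerate = np.array([[0., 1.],
                            [1., 0.]])        # determinant -1

print(np.linalg.det(B_degenerate))            # 0.0
v = np.array([1., -1.])                       # a nonzero vector with B(v, .) = 0
print(B_degenerate @ v)                       # [0. 0.]

# a nondegenerate form identifies vectors with covectors: v -> B(v, .)
covector = B_nondegenerate @ v
print(np.linalg.solve(B_nondegenerate, covector))   # recovers v: [ 1. -1.]
[/code]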


I'm not quite following the distinction between bold and not-bold. Is the difference whether or not the partial derivative operates on a function?
the boldness is supposed to draw your attention away from the fact that these guys are differential operators, and remind you that they are tangent vectors. and tangent vectors can really be thought of as arrows. bold face is somehow supposed to remind you of that.

but they are still differential operators (by definition).


I don't understand what you mean by this. If they are in fact differential operators, then of course you would use the differential operator symbol? Are they not really differential operators, but they just act a lot like them?
tangent vectors are differential operators, and to remind you of this, i use the symbol [itex]\partial[/itex]

to recap: [itex]\mathbf{\partial}_\mu[/itex] is a vector. if you feed it to a 1-form, you get a scalar. if you feed a function to this vector, you get a scalar.

[itex]\partial_\mu f[/itex] is a component of a 1-form. if you contract it with basis 1-forms, you get a linear combination of those basis 1-forms, which is of course still a 1-form, not a scalar.


I thought that "metric" meant "ds2" in physics. At any rate, that's what I meant by it. So, is ds2 a (0,2) tensor? I think I'm starting to see how it could be. It returns a math_scalar value when you feed it two 1-forms, but if you don't feed it anything, then it doesn't even make sense.

feed it 2 tangent vectors, not 1-forms. things that eat vectors are type (0,s) tensors, things that eat 1-forms are type (r,0) tensors.


Can you give the math definition of scalar?
a vector space is defined to be the pair (F,V) where F is a field, and V is an abelian group, and which satisfies a few axioms (which you probably know)

in math, i call things that live in F scalars, and things that live in V vectors. since we are doing differential geometry, F will probably always be R, the real numbers. sometimes the complexes. those guys are math scalars.


Is not the g&mu;&nu;dx&nu; a 1-form? Are you saying that g&mu;&nu;dx&nu; is also mathematically meaningless
yes, it is a 1-form, and no it is not mathematically meaningless. recall where we said this:

Originally posted by turin

Originally posted by lethe

but you could have figured out that it was a (0,2) rank tensor just by looking at its coordinate components [itex]g_{\mu\nu}[/itex]. 2 lowered indices on the coordinate components = (0,2) rank tensor.
I understand that dx&mu;dx&nu; is a second rank tensor (and so, I guess that means a (0,2) tensor?). But dx&beta;dx&beta;?

now since you yourself have told me that [itex]dx_\mu[/itex] is a 1-form (which is a (0,1) rank tensor), and we know that [itex]dx^\mu[/itex] is also a 1-form, then it should be clear that [itex]dx_\mu\otimes dx^\mu[/itex] must be a (0,2) rank tensor. i only said that bit about it being mathematically meaningless because i thought you were getting hung up on the fact that one of those 1-forms had a lowered index. that is just physics shorthand for the components of a metric. nothing more. it doesn t change the fact that you are tensoring 2 (0,1) tensors, and therefore get a (0,2) tensor. just because you are contracting an index, it does not change the fact that there are 2 tensored 1-forms up there.




Hey, whoa. Sorry, there, partner. I didn't mean to offend you or stomp on your territory unwarranted. I just want to figure out how the notation and stuff that I have learned fits here, because some of the things that have been said seemed contradictory. I in no way intended to imply that you were contradicting yourself, and I apologize if I have invited such an interpretation.
hey man, no need to apologize. you didn t offend me. i just thought your suggestion for a change of notation was silly, and i was ribbing you for it.


I seem to remember the definition of coordinates as functions. Are they a different kind of function or what? By choosing coordinates, have you not chosen functions?
coordinates are indeed a different kind of function. coordinates are mappings from the manifold to Rn (for an n dimensional manifold), whereas functions that we are dealing with here are mappings from the manifold to R.

you can consider a single coordinate as a single valued function. then i guess you could call this a choice of a function. but [itex]f[/itex] is a different (arbitrary) function, and your coordinates shouldn t depend on it.

What does it mean for a vector to be associated with f? Is this related to the selection of the dual space?
well, in this context, it means that there is a unique differential form [itex]df[/itex] for each function [itex]f[/itex] on the manifold that satisfies the following equation:
[tex]
df(\mathbf{v})=\mathbf{v}(f)
[/tex]
 
Last edited:
  • #94
It's "soak" time for me. I think you've answered all of my questions. As always, thanks.

Please proceed with your original exposition.
 
  • #95
Excuse me for my rather bad English (I'm a Dutch speaker).

first:
In many replies people are making a distinction between the physicists' and the mathematicians' views about tensors (p-forms and vectors).
I think that both agree that tensors are coordinate independent objects (because that's the whole idea behind differential geometry).
Only when you have to do some real calculation do you have to choose a coordinate system, basis vectors, and p-forms. The tensor components you get then are of course coordinate dependent.

second:
It is a mistake to believe that the components of the metric are coordinate independent. It only seems that way because when we change coordinates we directly change the basis vectors too (when we use basis vectors in the direction of the coordinates).
 
  • #96
Originally posted by Peterdevis
Excuse me for my rather bad English (I'm a Dutch speaker).

first:
In many replies people are making a distinction between the physicists' and the mathematicians' views about tensors (p-forms and vectors).
I think that both agree that tensors are coordinate independent objects (because that's the whole idea behind differential geometry).
Only when you have to do some real calculation do you have to choose a coordinate system, basis vectors, and p-forms. The tensor components you get then are of course coordinate dependent.
there are two reasons why i outline the distinction between the maths convention and the physics convention.

1. while, as you say, both the mathematician and the physicist know that a tensor is a coordinate independent object, i think this fact is completely obscure for the physics student who sees only the coordinate transformations of the components which follow a prescribed but somewhat mysterious rule.

2. math and physics have the use of the words covariant and contravariant exactly switched, so there is a tangible difference that needs to be made clear.

second:
It is a mistake to believe that the components of the metric are coordinate independent. It only seems that way because when we change coordinates we directly change the basis vectors too (when we use basis vectors in the direction of the coordinates).
yeah, sure. i don t know who said the components of the metric are coordinate independent. they are not. the components of any tensor (except a (0,0) tensor, otherwise known as a scalar) are not coordinate independent. i agree with you, but i don t know why you say "it seems that way". in fact, it doesn t even seem like the components of the metric are invariant under coordinate transformations. the transformation looks like this:

[tex]g_{\mu\nu}'=\frac{\partial x^\rho}{\partial x'^\mu}\frac{\partial x^\sigma}{\partial x'^\nu}g_{\rho\sigma}[/tex]

so i don t know why anyone would think that the components are invariant, or even seem invariant.

although... now that i think of it, what did turin actually say before?

Originally posted by turin
I have also been told rather emphatically that the metric is a scalar because it does not get transformed by a Lorentz transformation. Not true?

you know, when i answered this before, i answered it talking about general coordinate transformations, for which there is no sense in which this statement can be made to even resemble the truth. For some reason, i didn t even notice that you were asking about Lorentz transformations, instead of general coordinate transformations. i guess it is because when i am in the differential geometry subforum, i am never thinking about flat manifolds.

when the manifold in question has a large number of symmetries, we may consider those coordinate transformations that respect those symmetries. in the case of your manifold being flat Minkowski space, the set of coordinate transformations that respect its symmetries are Lorentz transformations.

in other words, by construction, a Lorentz transformation leaves the metric invariant. even the components.
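here is a quick numpy check of that statement (the boost velocity is made up): for a Lorentz transformation [itex]\Lambda^T\eta\Lambda=\eta[/itex], so the components of the metric come out unchanged, whereas a generic linear change of coordinates does change them:

[code]
import numpy as np

eta = np.diag([-1., 1., 1., 1.])

# a boost along x with velocity 0.6c (gamma = 1.25): a Lorentz transformation
gamma, beta = 1.25, 0.6
L = np.eye(4)
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * beta

print(np.allclose(L.T @ eta @ L, eta))        # True: components unchanged

# a generic (non-Lorentz) change of coordinates does change them
M = np.diag([2., 1., 1., 1.])
print(np.allclose(M.T @ eta @ M, eta))        # False
[/code]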

i guess you would say that the components of the metric are scalars, but i think only a sick person would do this. i think it is much better to keep the metric in your head as a (0,2) tensor, whose components are therefore neither tensors nor scalars, but rather components of a tensor, and just view the Lorentz transformation as a specific coordinate transformation that leaves those components invariant. not by luck, but because this is what we want, this is how we define the Lorentz transformation.
 
Last edited:
  • #97
It is a mistake to believe that the components of the metric are coordinate independent. It only seems that way because when we change coordinates we directly change the basis vectors too (when we use basis vectors in the direction of the coordinates).

I made a mistake above: "independent" should have been "dependent". So the components of the metric (and of any tensor) can be seen as coordinate independent.

The whole transformation rule of the metric is:

[tex]g= g_{\mu^\prime\nu^\prime}\,(dx^{\mu^\prime}\otimes dx^{\nu^\prime}) =\frac{\partial x^\mu}{\partial x^{\mu^\prime}}\frac{\partial x^\nu}{\partial x^{\nu^\prime}}g_{\mu\nu}\left(\frac{\partial x^{\mu^\prime}}{\partial x^\mu}dx^{\mu}\otimes \frac{\partial x^{\nu^\prime}}{\partial x^\nu}dx^{\nu}\right)[/tex]
whew, it took me 20 minutes with latex

So the components are changing not because of the coordinate transformation itself, but because when we change coordinates we automatically change the basis tensors of the vector space. By changing the basis tensors, the components of a tensor must change.
But it is not necessary to change the basis tensors when you change coordinates!
 
  • #98
Exterior differentiation of a 1-form

lethe said:
...but before we leave the 2-forms, let s find a basis for them, and look at their coordinate representation. it should be obvious how to do that, right? we built our 2-form from two 1-forms, so we should be able to build a basis for our 2-forms from the basis for our 1-forms. let s recall what that was (6):

[tex]df=\partial_\nu f dx^\nu[/tex]

let s take two of those, and wedge them together:

[tex]df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu\wedge dx^\nu
[/tex]

lethe, are you going to continue with this thread later, i.e. more on exterior derivatives and Hodge star operators? Thanks to your thread, I am getting a lot of confusion cleared up, especially the difference between a vector, the vector components, and the basis vectors.

I'd like to ask some questions related to the above. Can we write [tex]d\omega^\mu=df\wedge dg[/tex]?

Let the {[tex]\omega^\mu[/tex]} be the basis of 1-forms dual to a basis {[tex]X_\mu[/tex]}.

Can we relate the structure coefficients [tex](\partial_\mu f)(\partial_\nu g)[/tex] in [tex]df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu\wedge dx^\nu
[/tex] with the structure coefficients of the expansion of the Lie bracket of 2 basis vectors? (am trying to work out an equation in the section dealing with exterior differentiation in Ryan and Shepley's Homogeneous Relativistic Cosmologies)
 
  • #99
maddy said:
lethe, are you going to continue with this thread later, i.e. more on exterior derivatives and Hodge star operators?
well, when i started this thread, i had planned on going all the way to the Yang-Mills lagrangian, which means i would include more on exterior derivatives, covariant derivatives, and exterior covariant derivatives, as well as the Hodge dual. and i had ideas on where i could go after that. higher form gauge fields, Palatini formalism of GR, Chern-Simons forms, i dunno, a lot of things, some of which i am still learning myself, but the best way to learn a subject is to try to teach it, right?

however, i haven't written a new addition to the main exposition of this thread in almost a year. i seem to have run out of steam. i need something to motivate me to write some more of these, but they are a lot of work.

i have gotten a lot of encouraging private messages about this thread, and i like to think that i have made a difficult subject accessible to people. but i have abandoned it halfway through.

i will make you a deal, maddy. throughout my exposition (this thread is long, it has almost 100 replies. but there are only about 10 or so posts, written by me only, that i consider to be the main kernel of the exposition. i wouldn't mind if every post other than those that make up my exposition were deleted), so throughout these 10 posts or so by me, there have been a couple of homework exercises. no one ever did my homework exercises. i think they should have been very easy for anyone following along. so my deal for you is: do my homework exercises, and, in exchange, i will write 5 more posts.

what do you think?

maddy said:
Thanks to your thread, I am getting a lot of confusion cleared up, especially the difference between a vector, its components, and the basis vectors.
I am glad you liked it! i think it is an exciting subject, I really liked learning it, and this thread is a way for me to communicate that excitement to others. i am glad someone is listening

maddy said:
I'd like to ask some questions related to the above. Can we write [tex]d\omega^\mu=df\wedge dg[/tex]?

hmm... well, you can write whatever you want, and if you define things appropriately, it may even make sense. i am not sure what [itex]\omega[/itex] is in your equation above, so i am having trouble making sense of it. usually [itex]\omega^\mu[/itex] is a function, in which case the left-hand side of your equation is a 1-form, and the right-hand side is a 2-form. this would make the equation meaningless. furthermore, it violates the einstein summation convention, where the index content of both sides of the equation should match. your left-hand side contains a raised [itex]\mu[/itex] index, whereas your right-hand side contains no index at all. in short, i would say this is an invalid equation.

maddy said:
Let the {[tex]\omega^\mu[/tex]} be the basis of 1-forms dual to a basis {[tex]X_\mu[/tex]}.
these symbols don't agree with the spirit of my thread here, in that you are using coefficients of vectors with respect to some basis as the basis itself. but, OK, i can live with that. it is the convention that most physicists follow.


maddy said:
Can we relate the structure coefficients [tex](\partial_\mu f)(\partial_\nu g)[/tex] in [tex]df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu\wedge dx^\nu
[/tex] with the structure coefficients of the expansion of the Lie bracket of 2 basis vectors? (am trying to work out an equation in the section dealing with exterior differentiation in Ryan and Shepley's Homogeneous Relativistic Cosmologies)
the equation above has pretty much nothing to do with the structure constants of my basis.

why don't you post the equation here that you are trying to make sense of (i don't have the book you refer to), and let's see if we can help you
 
  • #100
lethe said:
well, when i started this thread, i had planned on going all the way to the Yang-Mills lagrangian, which means i would include more on exterior derivatives, covariant derivatives, and exterior covariant derivatives, as well as the Hodge dual...

however, i haven't written a new addition to the main exposition of this thread in almost a year. i seem to have run out of steam. i need something to motivate me to write some more of these, but they are a lot of work...

i will make you a deal, maddy. throughout my exposition (this thread is long, it has almost 100 replies. but there are only about 10 or so posts, written by me only, that i consider to be the main kernel of the exposition...

The original thread on http://www.sciforums.com/showthread.php?t=20843&page=2&pp=20 is still fairly clean. Over there I started a new thread for comments/questions/side-issues about Lethe's notes, leaving the main thread for the notes themselves. Then I was off-line for some time and lost the thread, so to speak. But perhaps that's what's needed here.

RDT2.

Still around - and still trying to integrate this into mech eng teaching. So many notations, so little time!
 
  • #101
lethe said:
i will make you a deal, maddy...my deal for you is: do my homework exercises, and, in exchange, i will write 5 more posts.

what do you think?

It's a deal.

Homework no 1 is:-

lethe said:
i will ask for a volunteer to show that the set of linear functionals on a given vector space is itself a vector space. it s not to hard, just check the vector space axioms given above.

Let [itex]\sigma[/itex] and [itex]\omega[/itex] be members of a set of linear functionals, {[itex]p^\alpha[/itex]} be the dual basis in which [itex]\sigma=\sigma_\alpha p^\alpha[/itex], [itex]\nu[/itex] be a vector, {[itex]e_\alpha[/itex]} be its basis, and a and b be arbitrary members of R.

[itex](\sigma+\omega)\nu[/itex]
=[itex](\sigma+\omega)(\nu^\alpha e_\alpha)[/itex]
=[itex]\sigma(\nu^\alpha e_\alpha)+\omega(\nu^\alpha e_\alpha)[/itex]
=[itex]\sigma(\nu)+\omega(\nu)[/itex]
=[itex](\omega+\sigma)\nu[/itex]
So, the addition of linear functionals is commutative.

Because [itex]\sigma[/itex]=[itex]\sigma_\alpha p^\alpha[/itex],
[itex]-\sigma[/itex]=[itex]-(\sigma_\alpha p^\alpha)[/itex]
[itex]=(-\sigma_\alpha)p^\alpha[/itex]
So, given a linear functional [itex]\sigma[/itex], [itex]-\sigma[/itex] is also a linear functional.

[itex](a+b)\sigma(\nu)[/itex]=[itex]a\sigma(\nu)+b\sigma(\nu)[/itex]
So, scalar multiplication distributes over the addition of scalars.

[itex]a(\sigma+\omega)(\nu)[/itex]
=[itex]a(\sigma(\nu)+\omega(\nu))[/itex]
=[itex]a\sigma(\nu)+a\omega(\nu)[/itex]
So, scalar multiplication distributes over the addition of linear functionals.

[itex]a(b\sigma(\nu))[/itex]
=[itex]a(b\sigma(\nu^\alpha e_\alpha))[/itex]
=[itex]a(b \nu^\alpha \sigma_\alpha)[/itex]
=[itex](ab)\nu^\alpha \sigma_\alpha[/itex]
=[itex](ab)\sigma(\nu^\alpha e_\alpha)[/itex]
=[itex](ab)\sigma(\nu)[/itex]
So, scalar multiplication of the linear functionals is associative.

The set of linear functionals thus forms a vector space.
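
One step the check above leaves implicit (an addition of mine, not part of the original post): the pointwise sum and scalar multiple of linear functionals must themselves be linear, so that the set is closed under the operations. A quick verification:

[tex]
(a\sigma+b\omega)(u+cv)=a\sigma(u+cv)+b\omega(u+cv)=a\sigma(u)+ac\,\sigma(v)+b\omega(u)+bc\,\omega(v)=(a\sigma+b\omega)(u)+c\,(a\sigma+b\omega)(v),
[/tex]

so [itex]a\sigma+b\omega[/itex] is again a linear functional.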
 
  • #102
Homework no 2 is:-

lethe said:
OK, it should be easy to show that the set of tangent vectors, thusly defined, satisfy the axioms of the vector space. i will call this vector space TMp. that is, the tangent space to the manifold M at the point p is TMp. for an n dimensional manifold, the tangent space is always an n dimensional vector space.

Let f and g be functions, and [itex]e_\alpha[/itex]=[itex]\partial_\alpha[/itex].

As in homework no. 1,
[itex](\partial_\alpha+\partial_\mu)f[/itex]
=[itex](\partial_\alpha)f+(\partial_\mu)f[/itex]
=[itex](\partial_\mu+\partial_\alpha)f[/itex]
Addition of tangent vectors is commutative.

[itex](a+b)\partial_\alpha f[/itex]
=[itex]a\partial_\alpha f+b\partial_\alpha f[/itex]
Scalar multiplication distributes over the addition of scalars.

[itex]a(\partial_\alpha+\partial_\mu)f[/itex]
=[itex]a((\partial_\alpha f)+(\partial_\mu f))[/itex]
=[itex]a(\partial_\alpha f)+a(\partial_\mu f)[/itex]
Scalar multiplication distributes over the addition of tangent vectors.

[itex]a(b\partial_\alpha f)[/itex]
=[itex]ab(\partial_\alpha f)[/itex]
=[itex](ab)(\partial_\alpha f)[/itex]
Scalar multiplication of the tangent vectors is associative.

The set of tangent vectors thus forms a vector space.
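
Since tangent vectors were introduced as directional derivative operators, one more check is worth recording (an addition of mine, not part of the original post): a linear combination of such operators still obeys the Leibniz rule, so the set is closed under the vector-space operations. With [itex]v=a\partial_\alpha+b\partial_\mu[/itex]:

[tex]
v(fg)=a\,\partial_\alpha(fg)+b\,\partial_\mu(fg)=a\left(f\,\partial_\alpha g+g\,\partial_\alpha f\right)+b\left(f\,\partial_\mu g+g\,\partial_\mu f\right)=f\,v(g)+g\,v(f).
[/tex]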
 
  • #103
lethe said:
i am not sure what [itex]\omega[/itex] is in your equation above, so i am having trouble making sense of it. usually [itex]\omega^\mu[/itex] is a function, in which case the left-hand side of your equation is a 1-form, and the right-hand side is a 2-form. this would make the equation meaningless. furthermore, it violates the einstein summation convention, where the index content of both sides of the equation should match. your left-hand side contains a raised [itex]\mu[/itex] index, whereas your right-hand side contains no index at all. in short, i would say this is an invalid equation.

Oops, sorry, yes, I made a lethal careless mistake!
The equation I was referring to is
[itex]d\omega^\mu=D^\mu_{\alpha\beta} \omega^\alpha \wedge \omega^\beta[/itex].
(which at a glance, I saw the right-hand side wrongly as the wedge product of [itex]d\omega^\alpha[/itex] and [itex]d\omega^\beta[/itex])

OK, so the above equation shows that the exterior derivative operator turns the 1-form [itex]\omega^\mu[/itex] into a 2-form, namely [itex]D^\mu_{\alpha\beta}\,\omega^\alpha \wedge \omega^\beta[/itex].

lethe said:
these symbols don't agree with the spirit of my thread here, in that you are using coefficients of vectors with respect to some basis as the basis itself. but, OK, i can live with that. it is the convention that most physicists follow.

The {[itex]\omega^\mu[/itex]} are the basis 1-forms dual to the basis vectors {[itex]X_\mu[/itex]}.

lethe said:
the equation above has pretty much nothing to do with the structure constants of my basis.

why don't you post the equation here that you are trying to make sense of (i don't have the book you refer to), and let's see if we can help you

Here is the Lie bracket of the basis vectors:
[itex] \left[ X_\mu , X_\nu \right] = C^\lambda_{\mu\nu} X_\lambda[/itex]

I'm supposed to connect [itex]D^\mu_{\alpha\beta}[/itex] with the structure coefficients [itex]C^\mu_{\alpha\beta}[/itex].

I should be getting [itex]D^\mu_{\alpha\beta}=-\frac{1}{2}C^\mu_{\alpha\beta}[/itex],
and this will result in
[itex]d\omega^i=-\frac{1}{2}C^i_{st}\omega^s \wedge \omega^t[/itex].

Is there any concrete way that I can prove this?
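
A sketch of one standard route (an outline of mine, not a reply from the thread; it assumes the coordinate-free formula for the exterior derivative of a 1-form and the wedge convention [itex](\omega\wedge\eta)(X,Y)=\omega(X)\eta(Y)-\omega(Y)\eta(X)[/itex], so with other conventions the factor of 1/2 moves around):

[tex]
d\omega(X,Y)=X\,\omega(Y)-Y\,\omega(X)-\omega([X,Y]).
[/tex]

Apply this to [itex]\omega^\mu[/itex] with [itex]X=X_\alpha[/itex], [itex]Y=X_\beta[/itex]. Since [itex]\omega^\mu(X_\beta)=\delta^\mu_\beta[/itex] is constant, the first two terms vanish and

[tex]
d\omega^\mu(X_\alpha,X_\beta)=-\omega^\mu([X_\alpha,X_\beta])=-C^\lambda_{\alpha\beta}\,\omega^\mu(X_\lambda)=-C^\mu_{\alpha\beta}.
[/tex]

On the other hand, writing [itex]d\omega^\mu=D^\mu_{st}\,\omega^s\wedge\omega^t[/itex] with [itex]D^\mu_{st}[/itex] antisymmetric in s and t,

[tex]
d\omega^\mu(X_\alpha,X_\beta)=D^\mu_{st}\left(\delta^s_\alpha\delta^t_\beta-\delta^s_\beta\delta^t_\alpha\right)=2D^\mu_{\alpha\beta},
[/tex]

so [itex]D^\mu_{\alpha\beta}=-\tfrac{1}{2}C^\mu_{\alpha\beta}[/itex], i.e. [itex]d\omega^i=-\tfrac{1}{2}C^i_{st}\,\omega^s\wedge\omega^t[/itex].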
 
  • #104
Hi, it is my understanding that you can take the exterior derivative of a differential form, but my question is: can you take the exterior derivative of some other object (e.g., of a tensor that is not a differential form)?
 
  • #105
meteor said:
Hi, it is my understanding that you can take the exterior derivative of a differential form, but my question is: can you take the exterior derivative of some other object (e.g., of a tensor that is not a differential form)?

Mmm, as far as I have learnt, exterior calculus applies only to functions and forms, producing forms of higher rank. An arbitrary tensor is built from tensor products of forms and vectors, and for those we use other kinds of derivatives, i.e. Lie derivatives and covariant derivatives. (I stand to be corrected.)
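
As a small illustration of that last point (an addition of mine, not part of the original reply): a rank-2 tensor such as the metric has no exterior derivative, but it does have a Lie derivative along a vector field X, for example

[tex]
(\mathcal{L}_X g)_{\mu\nu}=X^\lambda\partial_\lambda g_{\mu\nu}+g_{\lambda\nu}\,\partial_\mu X^\lambda+g_{\mu\lambda}\,\partial_\nu X^\lambda ,
[/tex]

which is again a rank-2 tensor rather than a form of higher degree.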
 
