Expression for the line element ds^2

In summary: ##\delta s^2 = \delta x_\mu \delta x^\mu## is a very special case; the general point is that ##\delta s^2## is a quadratic form in ##\delta x##, with coefficients given by the metric tensor ##g_{\mu \nu}##. As for what ##dx^\mu## means: it is a very small difference between two locations, a small step in the ##\mu## direction taken at a point.
  • #1
TimeRip496
##ds^2=g_{\mu\nu}dx^\mu dx^\nu##
Why is the invariant ##ds^2## written as shown above?

Why can't it be something like
##ds^2=g_{\mu\mu}dx^\mu dx^\mu## or
##ds^2=dx^\mu dx^\mu##?

Isn't it based on Pythagoras' theorem? Why must it have one ##\mu## and one ##\nu## instead of just two ##\mu##'s or two ##\nu##'s? Forgive me for such a dumb question, as I have just started.
 
  • #2
I'm only a bit past this myself - so caveat emptor. One minor point - I understand that you usually use Roman subscripts to mean "summation of spatial coordinates only" and Greek to mean "summation over all coordinates". I've used Greek throughout, although I don't think it changes the argument.

The metric tensor provides a notion of length and angle to vectors in the tangent space of your manifold. It therefore encodes information about distances in the space and coordinate system you have chosen, and ##ds^2=g_{\mu\nu}dx^\mu dx^\nu## is the correct equation, summing over both indices independently. You can see it as a matrix equation, ##ds^2=\underline{dx}^T\mathbf{g}\underline{dx}## if you want, although the subscript notation is a bit more flexible, I think.

Let's look at the case of the simple Euclidean plane, using Cartesian coordinates. Then
[tex]\begin{eqnarray}
ds^2&=&g_{\mu\nu}dx^\mu dx^\nu\\
&=&g_{xx}dx^2+g_{xy}dxdy+g_{yx}dydx+g_{yy}dy^2
\end{eqnarray}[/tex]That looks a good bit more complex than the regular expression for Pythagoras' Theorem. However, in this simple case, ##g_{xx}=g_{yy}=1## and ##g_{xy}=g_{yx}=0##. You can see for yourself that it simplifies immediately to the more familiar form. You can also see that the metric tensor is just the identity matrix, and the matrix equation in my previous paragraph simplifies to the familiar form for the dot product.

In this simple case, it so happens that ##g_{\mu\nu}dx^\mu dx^\nu=g_{\mu\mu}dx^\mu dx^\mu=dx^\mu dx^\mu##, but only because the off-diagonal elements of the metric tensor are zero. Those equalities do not hold in general, though. So the reason it can't be the simpler forms you suggested in general is that they can only handle fairly simple cases. They could not be used to describe the manifolds one finds in General Relativity.
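If it helps, here is a quick numerical sketch of that last point in Python (the metric components and the displacement below are just made-up numbers for illustration, not any particular physical metric):
[code]
import numpy as np

# A made-up 2x2 metric with non-zero off-diagonal components (illustrative only).
g = np.array([[1.0, 0.3],
              [0.3, 2.0]])

# A small coordinate displacement dx^mu.
dx = np.array([0.01, 0.02])

# Full double sum: ds^2 = g_{mu nu} dx^mu dx^nu  (equivalently dx^T g dx).
ds2_full = sum(g[mu, nu] * dx[mu] * dx[nu]
               for mu in range(2) for nu in range(2))

# The two "simplified" forms from the original question:
ds2_diag_only = sum(g[mu, mu] * dx[mu] ** 2 for mu in range(2))  # g_{mu mu} dx^mu dx^mu
ds2_no_metric = sum(dx[mu] ** 2 for mu in range(2))              # dx^mu dx^mu

print(ds2_full, ds2_diag_only, ds2_no_metric)
# roughly 0.00102, 0.0009, 0.0005 -- the three agree only when g is the identity matrix.
[/code]
The same full sum can be written as ##np.einsum('a,ab,b', dx, g, dx)##, which is just the matrix form ##\underline{dx}^T\mathbf{g}\underline{dx}## mentioned above.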
 
  • #3
Because what you wrote makes no sense mathematically. It's like writing 2+/2. Not every sequence of mathematical symbols makes a meaningful expression.

If you just started, I think you should start a little further back.
 
  • #4
Ibix said:
The metric tensor provides a notion of length and angle to vectors in the tangent space of your manifold. It therefore encodes information about distances in the space and coordinate system you have chosen, and ##ds^2=g_{\mu\nu}dx^\mu dx^\nu## is the correct equation, summing over both indices independently. ... So the reason it can't be the simpler forms you suggested in general is that they can only handle fairly simple cases. They could not be used to describe the manifolds one finds in General Relativity.

Really, thanks a lot for your great answer. But I just want to clarify: for ##dx^\mu## and ##dx^\nu##, are they both the same vector, just with index labels that can differ from each other? I am a bit confused as to whether they are the same vector or not.
 
  • #5
##dx^\mu## is the ##\mu##-th component, in your chosen coordinate system, of the vector ##dx##. Many people will refer to it as a vector in itself, but they're kind of short-cutting because they are far more comfortable with the notation than I am. This index notation is extremely efficient, but can be a bit tricky to get your head around.

Edit: ...so to answer your question, I suppose that ##dx^\mu## and ##dx^\nu## are the same vector. Or at least, different components of the same vector.
 
  • #6
TimeRip496 said:
Really, thanks a lot for your great answer. But I just want to clarify: for ##dx^\mu## and ##dx^\nu##, are they both the same vector, just with index labels that can differ from each other? I am a bit confused as to whether they are the same vector or not.

It's confusing because different people use slightly different conventions. For some people, [itex]dx^\mu[/itex] has a technical meaning as a differential. What I think is simpler when just starting out is to think of the line element in terms of differences: write it this way:

[itex]\delta s^2 = g_{\mu \nu} \delta x^\mu \delta x^\nu[/itex]

This form is really only literally true in the limit as [itex]\delta x^\mu \rightarrow 0[/itex], but it's approximately true when [itex]\delta x^\mu[/itex] is small. So let's see what that particular form means.

Suppose you travel at constant velocity from point [itex]A[/itex] to point [itex]B[/itex]. (In relativity, the "points" are points in space and time, not just points in space. They are usually called "events", while "point" is used for just spatial locations.) Pick a coordinate system, say [itex](x,t)[/itex] (to simplify, let's only consider one spatial dimension). Then [itex]A[/itex] will have some coordinates: [itex](x_A, t_A)[/itex] and [itex]B[/itex] will have some coordinates: [itex](x_B, t_B)[/itex]. Then we compute:

[itex]\delta x = x_B - x_A[/itex]
[itex]\delta t = t_B - t_A[/itex]

The elapsed time [itex]\delta s[/itex] shown on your clock in traveling from [itex]A[/itex] to [itex]B[/itex] is given in Special Relativity by:

[itex]\delta s^2 = \delta t^2 - \frac{1}{c^2} \delta x^2[/itex]

We can rewrite this using indexed coordinates, by letting [itex]x^0 = t[/itex], [itex]x^1 = x[/itex]. Then this expression for [itex]\delta s[/itex] corresponds to a metric tensor [itex]g_{\mu \nu}[/itex] with the following components:

[itex]g_{00} = 1[/itex]
[itex]g_{11} = - \frac{1}{c^2}[/itex]
[itex]g_{10} = g_{01} = 0[/itex]

Then the expression for [itex]\delta s[/itex] can be written as:

[itex]\delta s^2 = g_{00} (\delta x^0)^2 + g_{01} \delta x^0 \delta x^1 + g_{10} \delta x^1 \delta x^0 + g_{11} (\delta x^1)^2[/itex]

which can be written more compactly as:

[itex]\delta s^2 = g_{\mu \nu} \delta x^\mu \delta x^\nu[/itex]

where we sum over all possible values for [itex]\mu[/itex] and [itex]\nu[/itex]. (In this case, that means 4 possibilities:
[itex]\mu=0, \nu=0[/itex],
[itex]\mu=0, \nu=1[/itex],
[itex]\mu=1, \nu=0[/itex],
[itex]\mu=1, \nu=1[/itex].)

Note that [itex]\delta x^0[/itex] and [itex]\delta x^1[/itex] don't necessarily have the same units. In this case, the former has units of seconds (time), while the latter has units of meters (distance). The units are made compatible by the metric tensor [itex]g_{\mu \nu}[/itex], so that [itex]\delta s^2[/itex] consistently has units of seconds squared with this choice; rescaling the metric by an overall factor of [itex]c^2[/itex] would give meters squared instead--either choice is fine.
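To see the double sum carried out mechanically, here is a small Python sketch of the computation above (the event coordinates are made up purely for illustration):
[code]
# Proper time between two events in 1+1 special relativity, using the metric
# components from the post above: g_00 = 1, g_11 = -1/c^2.
c = 3.0e8  # metres per second

# Made-up events A and B: coordinates (t in seconds, x in metres).
t_A, x_A = 0.0, 0.0
t_B, x_B = 2.0, 1.0e8

dx = [t_B - t_A, x_B - x_A]          # (delta x^0, delta x^1)
g = [[1.0, 0.0],
     [0.0, -1.0 / c ** 2]]           # g_{mu nu}

# delta s^2 = g_{mu nu} delta x^mu delta x^nu, summing over all four (mu, nu) pairs.
ds2 = sum(g[mu][nu] * dx[mu] * dx[nu] for mu in range(2) for nu in range(2))

print(ds2)  # = (2 s)^2 - (1e8 m / c)^2 = 4 - 1/9, about 3.889 seconds squared
[/code]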
 
  • #7
Ibix said:
One minor point - I understand that you usually use Roman subscripts to mean "summation of spatial coordinates only" and Greek to mean "summation over all coordinates". I've used Greek throughout, although I don't think it changes the argument.

There are two different conventions that people use. The one you're describing is the original one. Starting in the 70s, some people started using the Penrose abstract index notation, in which Latin letters are abstract indices and Greek letters are concrete indices. Abstract index notation is the best notation that's been invented for tensors, and the alternatives (including index-free notation) deserve to die out.
 
  • #8
bcrowell said:
There are two different conventions that people use. The one you're describing is the original one. Starting in the 70s, some people started using the Penrose abstract index notation, in which Latin letters and abstract indices and Greek letters are concrete indices. Abstract index notation is the best notation that's been invented for tensors, and the alternatives (including index-free notation) deserve to die out.

I'm trying to get to the point that I'm comfortable with abstract index notation, but I'm not quite there. To me, [itex]g_{\mu \nu}[/itex] looks like a component of the metric tensor, rather than the tensor itself. I understand the use of Latin versus Greek letters, but that convention makes me shudder--to put so much meaning into what alphabet is used. It reminds me of the bad old days of FORTRAN programming, where a variable starting with [itex]I[/itex] through [itex]N[/itex] indicated an integer value, and anything else meant a real value.

I do appreciate the usefulness of abstract notation: saying that [itex]R^\mu_{\nu \lambda \tau} V^\nu[/itex] means the tensor resulting from plugging the vector [itex]V^\nu[/itex] into the first vector slot of the tensor [itex]R^\mu_{\nu \lambda \tau}[/itex] is very clear. If you leave out the abstract indices, then it's ambiguous what it means to contract [itex]R[/itex] with [itex]V[/itex]. But it's so similar to the notation [itex]R^a_{bcd} V^a[/itex], which means something very different (though related).

It's wordier, but in my opinion, clearer, to describe a tensor by giving its components in an arbitrary (unspecified) basis:

[itex]V^\mu e_\mu[/itex]

Then the rules for covariant derivative (as opposed to partial derivatives) become simply the use of the product rule for derivatives.
 
  • #9
The thing about these summation conventions is that a summed index should always appear exactly twice, once on top and once at the bottom, and you should never reuse the same dummy index for a different sum in the same expression. That's the moral equivalent of programming two nested do-loops with the same index. Even if it works out by accident (not sure that's even possible), it can be much more clearly expressed another way.
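To make the do-loop analogy concrete, here is a tiny Python sketch (the metric and displacement numbers are arbitrary, chosen only so the difference is visible):
[code]
g = [[1.0, 0.5],
     [0.5, 2.0]]   # arbitrary metric components with off-diagonal terms
dx = [0.1, 0.2]    # arbitrary small displacement

# Correct: two nested loops with distinct dummy indices mu and nu,
# implementing ds^2 = g_{mu nu} dx^mu dx^nu.
ds2 = 0.0
for mu in range(2):
    for nu in range(2):
        ds2 += g[mu][nu] * dx[mu] * dx[nu]

# The analogue of writing g_{mu mu} dx^mu dx^mu: reusing the same dummy index
# collapses the double sum down to the diagonal terms only.
ds2_same_index = 0.0
for mu in range(2):
    ds2_same_index += g[mu][mu] * dx[mu] * dx[mu]

print(ds2, ds2_same_index)  # 0.11 versus 0.09 -- different quantities
[/code]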
 
  • #10
stevendaryl said:
It's wordier, but in my opinion, clearer, to describe a tensor by giving its components in an arbitrary (unspecified) basis:

[itex]V^\mu e_\mu[/itex]

Then the rules for covariant derivative (as opposed to partial derivatives) become simply the use of the product rule for derivatives.

I don't want to drag this thread too far off topic, but the way I see it is that abstract index notation is the best over-all choice because it has two features: (1) it's manifestly coordinate-independent, and (2) it's expressive and concise. The disadvantage of index-free notation ("mathematician notation") is that it lacks property #2. The disadvantage of traditional concrete-index notation is that it lacks property #1. The notation in your example lacks both properties, which is probably why we don't see it very often.
 
  • #11
bcrowell said:
I don't want to drag this thread too far off topic, but the way I see it is that abstract index notation is the best over-all choice because it has two features: (1) it's manifestly coordinate-independent, and (2) it's expressive and concise. The disadvantage of index-free notation ("mathematician notation") is that it lacks property #2. The disadvantage of traditional concrete-index notation is that it lacks property #1. The notation in your example lacks both properties, which is probably why we don't see it very often.

I don't see how it could be coordinate-dependent if you are using an expression that doesn't mention any specific coordinate system. It's exactly as coordinate-independent as the abstract notation.
 
  • #12
stevendaryl said:
I understand the use of Latin versus Greek letters, but that convention makes me shudder--to put so much meaning into what alphabet is used.

As I understand it (based on Wald, pg. 24), there isn't really such a huge significance in the use of Latin vs. Greek letters in practice. Basically, if you use Latin letters, you are implying that your tensor equation works in any basis, while if you use Greek letters you've written something that may require some special basis to be valid. So for instance, if a coordinate basis is required, one uses Greek letters.

So there are two categories of error. The first is that a statement may be interpreted as less general than it actually is if one uses Greek instead of Latin letters. The other is more serious: actual errors usually occur when one uses a statement that is only true in some specific basis as if it held in a general basis.

Is this the best way of doing things? I don't know but it's traditional.

It's wordier, but in my opinion, clearer, to describe a tensor by giving its components in an arbitrary (unspecified) basis:

[itex]V^\mu e_\mu[/itex]

Then the rules for covariant derivative (as opposed to partial derivatives) become simply the use of the product rule for derivatives.

This sort of notation is introduced in Wald before the discussion of abstract index notation, so I would agree that it's more basic. So I'd have to agree - the wordiness is a problem for routine use on complex problems though.
 
  • #13
pervect said:
As I understand it (based on Wald, pg. 24), there isn't really such a huge significance in the use of Latin vs. Greek letters in practice. Basically, if you use Latin letters, you are implying that your tensor equation works in any basis, while if you use Greek letters you've written something that may require some special basis to be valid. So for instance, if a coordinate basis is required, one uses Greek letters.

Wow. I had things exactly backwards. I thought that Greek was for abstract indices, that simply indicate what type of object (vector versus co-vector versus the various types of tensors) and didn't imply anything about any basis.
 
  • #14
Vanadium 50 said:
Because what you wrote makes no sense mathematically. It's like writing 2+/2. Not every sequence of mathematical symbols makes a meaningful expression.

If you just started, I think you should start a little further back.
2/2 is one. Pardon the ignorance, but I instruct my students to learn mathematics; the greatest abstraction ever discovered by the mind of humanity is mathematics! I didn't get that piece of the pie. I can paint or draw anything, but what is that when you can argue about the real meaning of one (philosophers excluded)?
 
  • #15
2/2 is one. Yes, and 2+/2 is meaningless.
 
  • #16
stevendaryl said:
I don't see how it could be coordinate-dependent if you are using an expression that doesn't mention any specific coordinate system. It's exactly as coordinate-independent as the abstract notation.

The only reason for the existence of abstract index notation, and its only advantage over concrete index notation, is that it's coordinate-independent, whereas concrete index notation is not. They're designed to look almost identical, but they have different semantics.

As a simple example, suppose I'm doing stuff with tensors on a manifold that is a circle. In abstract index notation, I can write down an inner product like [itex]v^av_a[/itex] that is a meaningful expression at any point I pick on the circle. This expression is not an implied sum, and the index a never takes on numerical values. I'm not going to be able to cover this whole manifold with a single coordinate chart, and therefore in concrete index notation, an expression like [itex]v^\mu v_\mu[/itex] can only be defined on part of the manifold. It represents a sum over numerical values of [itex]\mu[/itex], each of which refers to a particular coordinate, and those coordinates are only defined within that chart.

Another example of a semantic difference is that in concrete index notation, it's always a sloppy/casual abuse of notation to say something like "the vector [itex]v^\mu[/itex];" although people know what you mean, it doesn't really make sense, because [itex]v^\mu[/itex] would really be a component, not a whole vector. It's sort of like saying "the function [itex]x^2[/itex]" as shorthand for "the function such that for any input x, its output is [itex]x^2[/itex];" it doesn't really make sense, because x is an unbound variable, which could have a value but doesn't. But in abstract index notation it's perfectly rigorous to say "the vector [itex]v^a[/itex]," because a isn't an unbound variable that could have a numerical value, it's a label for a slot, and the label's only purpose is to correlate with other slots on other tensors.

There is also a certain discipline that you're accepting when you use abstract index notation. You're never going to say, "given the vector [itex]v^a[/itex], substitute a=2," or "take a spacelike value for a." Concrete index notation makes you accept certain discipline as well, such as not writing things like [itex]v^\mu v^\mu[/itex], as in the OP's examples. The discipline you impose on yourself when you use abstract index notation is a strict superset of the discipline you impose on yourself when you use concrete notation. (It has to be a superset, because when you're done with an abstract-index calculation, you often want to carry the result over into concrete notation in order to get a result in a specific coordinate system.)

You say, "I don't see how it could be coordinate-dependent if you are using an expression that doesn't mention any specific coordinate system." I would say that this is a necessary but not sufficient condition if you want to make sure your results are coordinate-independent. I've given a couple of examples above where it's not sufficient. (1) It does assume the existence of some unspecified coordinate system, and such a coordinate system can fail to exist globally, as in the example of the circle. (2) There are ways in which you could break coordinate-independence without specifying a coordinate system. An example is "take a spacelike value for a."
 
  • #17
bcrowell, can you give an example of the disadvantage of the mathematical notation, i.e. of it not being expressive and concise? The inner product ##v^av_a## would be ##g(v,v)##, which seems fine to me (in fact better, because the metric's involvement is not implicit).
 
  • #18
Okay, I was ready to argue the other side some more, but the point about the lack of a global coordinate system is a very good point. The thing about "assume a is a spacelike index" isn't as compelling--somebody might be tempted to do such a thing, but it's clearly an assumption in any case.

One thing that occurs to me is that I don't understand how the notation is supposed to work when you have tetrads. Then you have one abstract index to mean "which slot", one concrete index to mean "which component", and one tetrad index to mean "which vector of the tetrad". Or is it always assumed that the last two are the same--that the components are all relative to the tetrad? Or people just avoid talking about tetrads in abstract index notation?
 
  • #19
In a way you always use tetrads. When you use local coordinates to express tensor components, you are in fact using the tetrad of the coordinate vector fields and their duals.
 
  • #20
martinbn said:
bcrowell, can you give an example of the disadvantage of the mathematical notation, i.e. of it not being expressive and concise? The inner product ##v^av_a## would be ##g(v,v)##, which seems fine to me (in fact better, because the metric's involvement is not implicit).

For example, how would you write something like this in "mathematician notation?" [itex]\nabla_a Q^{ab}{}_{[cde]}{}^e R_b{}^c[/itex]
 
  • #21
bcrowell said:
For example, how would you write something like this in "mathematician notation?" [itex]\nabla_a Q^{ab}{}_{[cde]}{}^e R_b{}^c[/itex]

Exactly. There is a subtle distinction, which I admit to not fully appreciating at times, between contractions of tensors and summations over components. So if [itex]T[/itex] is a tensor taking a pair of vectors and returning a vector, and [itex]V[/itex] is a vector, then the operation of "insert [itex]V[/itex] into the first slot of [itex]T[/itex]" doesn't involve components, or bases at all. But it's a different operation from "insert [itex]V[/itex] into the second slot of [itex]T[/itex]". With abstract indices, you can represent these as:
[itex]T^a_{bc} V^b[/itex] and [itex]T^a_{bc} V^c[/itex], respectively. There is no implied sum (unlike the usual Einstein convention). It's more akin to: "plug the red wire into the green socket"; it's wiring. The fact that it looks the same (other than the alphabet used) as [itex]T^\mu_{\nu \lambda} V^\nu[/itex] and [itex]T^\mu_{\nu \lambda} V^\lambda[/itex], which does imply a sum, is not entirely coincidental, because any linear operation can be written as a sum of the corresponding operation on bases.
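Here is a small NumPy sketch of the "which slot" point (the tensor components below are arbitrary random numbers, not anything physical):
[code]
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3, 3))   # components of a tensor T^a_{bc} in some basis
V = rng.normal(size=3)           # components of a vector V

# "Insert V into the first lower slot" versus "insert V into the second lower slot":
first_slot = np.einsum('abc,b->ac', T, V)    # plays the role of T^a_{bc} V^b
second_slot = np.einsum('abc,c->ab', T, V)   # plays the role of T^a_{bc} V^c

# For a generic T (not symmetric in its two lower slots) these are different tensors.
print(np.allclose(first_slot, second_slot))  # False
[/code]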
 
  • #22
stevendaryl said:
One thing that occurs to me is that I don't understand how the notation is supposed to work when you have tetrads. Then you have one abstract index to mean "which slot", one concrete index to mean "which component", and one tetrad index to mean "which vector of the tetrad". Or is it always assumed that the last two are the same--that the components are all relative to the tetrad? Or people just avoid talking about tetrads in abstract index notation?

Depends on what you are trying to do with the tetrad. A set of tetrad basis vectors can be written as ##(e_\mu)^a##, for example. The ##\mu## index tells you which vector of the tetrad you are looking at, and the ##a## index tells you it's a vector.
 
  • #23
Ibix said:
In this simple case, it so happens that ##g_{\mu\nu}dx^\mu dx^\nu=g_{\mu\mu}dx^\mu dx^\mu=dx^\mu dx^\mu##, but only because the off-diagonal elements of the metric tensor are zero. Those equalities do not hold in general, though. ...
Hi again. Do you mind giving me an example? I have difficulty finding one online.
 
  • #24
TimeRip496 said:
Hi again. Do you mind giving me an example? I have difficulty finding one online.
An example of what? A metric with off-diagonal elements? The Kerr metric, for an uncharged rotating black hole, has them.
 
  • #25
Might I suggest looking at an easier example of an off-diagonal metric? The Kerr metric is quite horrendous.

For example, take the Minkowski metric in Cartesian coordinates ##(t,x,y,z)##: ##ds^2=-dt^2+dx^2+dy^2+dz^2##, and transform to light cone coordinates ##(u,v,y,z)## where ##u=t-x## and ##v=t+x##. Can you figure out the metric in this coordinate system? It will come out non-diagonal.
 
  • #27
pervect said:
The Painleve metric might be a good choice for a not-so-horrendous off-diagonal metric.
Matterwave said:
Might I suggest looking at an easier example of an off-diagonal metric? The Kerr metric is quite horrendous. For example, take the Minkowski metric in Cartesian coordinates ##(t,x,y,z)##: ##ds^2=-dt^2+dx^2+dy^2+dz^2## and transform to light cone coordinates ##(u,v,y,z)##...

Hmmm... In my quest for simple examples, I was going to suggest ##p=x-y, q=y## where ##x## and ##y## are Cartesian coordinates in the Euclidean plane. :)
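Worked out as a short SymPy sketch, as a quick check (the symbol names here are arbitrary, and the result is just the straightforward substitution):
[code]
import sympy as sp

p, q, dp, dq = sp.symbols('p q dp dq')

# The suggested coordinates: p = x - y, q = y, so x = p + q and y = q.
x = p + q
y = q

# Express dx and dy in terms of dp and dq (chain rule).
dx = sp.diff(x, p) * dp + sp.diff(x, q) * dq   # = dp + dq
dy = sp.diff(y, p) * dp + sp.diff(y, q) * dq   # = dq

ds2 = sp.expand(dx**2 + dy**2)
print(ds2)   # dp**2 + 2*dp*dq + 2*dq**2, i.e. g_pp = 1, g_pq = g_qp = 1, g_qq = 2
[/code]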
 

Related to Expression for the line element ds^2

1. What is the expression for the line element ds^2?

In Euclidean space with Cartesian coordinates, the line element is ds^2 = dx^2 + dy^2 + dz^2, where dx, dy, and dz are the infinitesimal increments in the x, y, and z directions, respectively. In a general coordinate system, or in a curved space, it takes the form ds^2 = g_{μν} dx^μ dx^ν, where g_{μν} is the metric tensor.

2. How is the line element ds^2 used in physics?

The line element ds^2 is used in physics as a measure of distance or displacement in a given space. It is also used in calculating the spacetime interval in special relativity.

3. What is the significance of the line element ds^2 in general relativity?

In general relativity, the line element ds^2 is used to define the metric tensor, which describes the curvature of spacetime. It is also used in the Einstein field equations to relate the distribution of matter and energy to the curvature of spacetime.

4. How is the line element ds^2 related to the Pythagorean theorem?

The line element ds^2 is closely related to the Pythagorean theorem, which states that in a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the other two sides. In a similar way, the line element ds^2 represents the squared distance between two infinitesimally separated points in space.

5. Can the line element ds^2 be used in non-Euclidean spaces?

Yes, the line element ds^2 can be used in non-Euclidean spaces, such as in general relativity where spacetime is curved. In these cases, the expression for ds^2 may be modified to take into account the curvature of the space.
