Metric Components: Inner Product of Vector U & V

In summary: there is no need for the components to carry equal indices; the repeated index in ##W_\beta V^\beta## is just the summation convention, so nothing forces the metric to be diagonal. More generally, the inner product between two vectors is a bilinear mapping from V×V to F, where V is a vector space and F is a field, which satisfies properties 1... 2... 3.
  • #1
kent davidge
The components of a one-form ##W## can be obtained from the metric as ##W_\beta = g_{\alpha\beta}U^\alpha##, where ##U^\alpha## are the components of a vector ##U##, and one can always contract ##W_\beta## with the components ##V^\beta## of another vector to get the inner product between the two vectors ##U## and ##V##.

My question is: since the inner product is written with the two components carrying the same index, as in ##W_\beta V^\beta##, does that mean that every component ##g_{\alpha\beta}## with ##\alpha \neq \beta## must be zero, i.e. that every metric is diagonal?

I suspect this is true at least for familiar coordinate systems, because I noticed that in both Cartesian and spherical coordinates every metric component with two different indices is zero.
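Written out explicitly in two dimensions (indices running over 1 and 2), the contraction in question is just the double sum

$$W_\beta V^\beta = g_{\alpha\beta} U^\alpha V^\beta = g_{11}U^1V^1 + g_{12}U^1V^2 + g_{21}U^2V^1 + g_{22}U^2V^2 .$$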
 
  • #2
Counter-example: the Kerr metric, describing spacetime around a rotating black hole, has a ##dtd\phi## term.
 
  • #3
Ibix said:
Counter-example: the Kerr metric, describing spacetime around a rotating black hole, has a ##dtd\phi## term.
What about the inner product in that case?
 
  • #4
What about it? It's ##g_{\mu\nu}U^\mu V^\nu## as with any other metric. It just means that there's some cross-talk between the ##t## and ##\phi## components that gets more significant as you get closer to the hole. So (using the ordering ##t,r,\theta,\phi##) (1,0,0,0) and (0,0,0,1) aren't orthogonal although (1,0,0,0) and (0,0,1,0) are.
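Here is a minimal numerical sketch of that statement, using a toy non-diagonal metric; the numbers are made up for illustration and are not actual Kerr components:

```python
import numpy as np

# Toy metric with a single off-diagonal t-phi term, component ordering (t, r, theta, phi).
# The values are illustrative only, not the Kerr metric.
g = np.array([
    [-0.8, 0.0, 0.0, 0.3],
    [ 0.0, 1.2, 0.0, 0.0],
    [ 0.0, 0.0, 2.0, 0.0],
    [ 0.3, 0.0, 0.0, 4.0],
])

def inner(u, v):
    """Inner product g_{mu nu} U^mu V^nu."""
    return u @ g @ v

e_t     = np.array([1.0, 0.0, 0.0, 0.0])
e_theta = np.array([0.0, 0.0, 1.0, 0.0])
e_phi   = np.array([0.0, 0.0, 0.0, 1.0])

print(inner(e_t, e_phi))    # 0.3 -> not orthogonal, because g_{t phi} != 0
print(inner(e_t, e_theta))  # 0.0 -> orthogonal, because g_{t theta} = 0
```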
 
  • #5
kent davidge said:
since the inner product is defined with the two components having the same index.


Where do you get that definition? There is no "two components having the same index" requirement in the definition of an inner product.
 
  • #6
Narasoma said:
Where do you get that definition? There is no "two components having the same index" requirement in the definition of an inner product.
From Nakahara's book.

"Take a vector V = Vµ∂/∂xµ and a one-form ω = ωµdxµ. The inner product , < , > : TpM* × Tp M → R is defined
by <ω, V> = ωµVν <dxµ, ∂/∂ xν> = ωµVνδνµ = ωµVµ.
"
 
  • #7
Narasoma said:
Where do you get that definition? There is no "two components having the same index" requirement in the definition of an inner product.

Try to find another book, please. My lecturer warned me that I should be careful when reading Nakahara's book, because it contains some misconceptions; apparently you've found one. "The inner product ( , ) between two vectors u and v is a bilinear mapping from V×V to F, where V is a vector space and F is a field, which satisfies properties 1... 2... 3..."
If you find a definition like that, you've found the proper definition of an inner product. It is not even necessary to introduce a metric in the first place, although we can construct a particular inner product using a metric.
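For reference, the elided properties are presumably the standard inner-product axioms, which for a real vector space ##V## read

$$\begin{aligned}
&\text{(1) Symmetry:} && \langle u, v\rangle = \langle v, u\rangle,\\
&\text{(2) Linearity in each argument:} && \langle au + bw, v\rangle = a\langle u, v\rangle + b\langle w, v\rangle,\\
&\text{(3) Positive-definiteness:} && \langle u, u\rangle \ge 0,\ \text{with equality only for } u = 0.
\end{aligned}$$

(For a spacetime metric the last condition is relaxed to non-degeneracy, since the signature is Lorentzian.)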
 
  • #8
@kent davidge - I'm not sure if this will help.

In coordinates where the metric is diagonal the one-form created by lowering an index on a vector will have each component depending only on the corresponding component of the vector - that is ##W_t\propto V^t## etc. In coordinates where the metric is not diagonal at least one of the components of ##W## will depend on more than one component of ##V## - for example ##W_t=g_{tt}V^t+g_{t\phi}V^\phi## in the Kerr metric, where no implicit summation over indices is intended. That's how the non-diagonal nature of the metric comes out in the ##W_\mu V^\mu## version of the inner product.
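A quick numerical check of that statement, using the thread's original ##U##, ##V##, ##W## notation and a toy non-diagonal metric (illustrative values only, not actual Kerr components):

```python
import numpy as np

# Toy non-diagonal metric, component ordering (t, r, theta, phi); values are made up.
g = np.array([
    [-0.8, 0.0, 0.0, 0.3],
    [ 0.0, 1.2, 0.0, 0.0],
    [ 0.0, 0.0, 2.0, 0.0],
    [ 0.3, 0.0, 0.0, 4.0],
])

U = np.array([1.0, 0.5, -2.0, 3.0])   # vector components U^mu
V = np.array([0.7, -1.0, 0.2, 1.5])   # vector components V^mu

# Lower an index: W_mu = g_{mu nu} U^nu.  Because g is not diagonal,
# W_t depends on both U^t and U^phi.
W = g @ U
print(W[0], g[0, 0] * U[0] + g[0, 3] * U[3])   # identical numbers

# The two forms of the inner product agree:
print(W @ V)        # W_mu V^mu
print(U @ g @ V)    # g_{mu nu} U^mu V^nu
```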
 
  • #9
Narasoma said:
Try to find another book, please. My lecturer warned me that I should be careful when reading Nakahara's book, because it contains some misconceptions; apparently you've found one. "The inner product ( , ) between two vectors u and v is a bilinear mapping from V×V to F, where V is a vector space and F is a field, which satisfies properties 1... 2... 3..."
If you find a definition like that, you've found the proper definition of an inner product. It is not even necessary to introduce a metric in the first place, although we can construct a particular inner product using a metric.
Oh ok. What books are you using in your course?

Ibix said:
@kent davidge - I'm not sure if this will help.

In coordinates where the metric is diagonal the one-form created by lowering an index on a vector will have each component depending only on the corresponding component of the vector - that is ##W_t\propto V^t## etc. In coordinates where the metric is not diagonal at least one of the components of ##W## will depend on more than one component of ##V## - for example ##W_t=g_{tt}V^t+g_{t\phi}V^\phi## in the Kerr metric, where no implicit summation over indices is intended. That's how the non-diagonal nature of the metric comes out in the ##W_\mu V^\mu## version of the inner product.
I realize now that there's no need for the indices to be equal. Thanks.
 
  • #10
kent davidge said:
Oh ok. What books are you using in your course?


The inner product definition is from my Functional Analysis class. Unfortunately, I forgot the author, but the title is Applied Functional Analysis (GTM 108). For simplicity, you can also find the definition on Wikipedia. The point is that the definition of an inner product doesn't need any summation over indices.
 
  • #11
Not every metric is diagonalizable in a coordinate basis. One can show (using rather fancy methods in some cases!) that:

In 2 dimensions, one can always find coordinates such that the metric takes the form

$$\begin{pmatrix}g_{11} & 0 \\ 0 & g_{22} \end{pmatrix}.$$
In 3 dimensions, one can always find coordinates such that the metric takes the form

$$\begin{pmatrix}g_{11} & 0 & 0 \\ 0 & g_{22} & 0 \\ 0 & 0 & g_{33} \end{pmatrix}.$$

However, in 4 dimensions, the best one can do is the following: One can always find coordinates in which the metric takes the form

$$\begin{pmatrix} g_{11} & g_{12} & 0 & 0 \\ g_{12} & g_{22} & 0 & 0 \\ 0 & 0 & g_{33} & g_{34} \\ 0 & 0 & g_{34} & g_{44} \end{pmatrix}.$$
In higher dimensions, I'm not sure what you get. But generally, in ##d## dimensions you have ##d(d+1)/2## algebraically independent components (since the metric is symmetric), and ##d## gauge symmetries (from coordinate transformations), so it is possible to reduce the metric to a form with only ##d(d-1)/2## non-zero entries. The trick is figuring out exactly which terms have to be non-zero.

Note also that this is not usually the most convenient form of the metric to use, and the coordinate transformations that get you there might be extremely complicated.
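As a sanity check on the counting only (it says nothing about which entries can be made to vanish), a small sketch:

```python
# d(d+1)/2 independent components of a symmetric metric in d dimensions,
# minus the d functions' worth of coordinate freedom, leaves d(d-1)/2.
for d in range(2, 7):
    total = d * (d + 1) // 2
    print(f"d={d}: components={total}, coordinate freedom={d}, left over={total - d}")
```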
 
  • #12
Ben Niehoff said:
Not every metric is diagonalizable in a coordinate basis. One can show (using rather fancy methods in some cases!) that:

In 2 dimensions, one can always find coordinates such that the metric takes the form

$$\begin{pmatrix}g_{11} & 0 \\ 0 & g_{22} \end{pmatrix}.$$
In 3 dimensions, one can always find coordinates such that the metric takes the form

$$\begin{pmatrix}g_{11} & 0 & 0 \\ 0 & g_{22} & 0 \\ 0 & 0 & g_{33} \end{pmatrix}.$$

However, in 4 dimensions, the best one can do is the following: One can always find coordinates in which the metric takes the form

$$\begin{pmatrix} g_{11} & g_{12} & 0 & 0 \\ g_{12} & g_{22} & 0 & 0 \\ 0 & 0 & g_{33} & g_{34} \\ 0 & 0 & g_{34} & g_{44} \end{pmatrix}.$$
In higher dimensions, I'm not sure what you get. But generally, in ##d## dimensions you have ##d(d+1)/2## algebraically independent components (since the metric is symmetric), and ##d## gauge symmetries (from coordinate transformations), so it is possible to reduce the metric to a form with only ##d(d-1)/2## non-zero entries. The trick is figuring out exactly which terms have to be non-zero.

Note also that this is not usually the most convenient form of the metric to use, and the coordinate transformations that get you there might be extremely complicated.
I see. Thanks.
Narasoma said:
The inner product definition is from my Functional Analysis class. Unfortunately, I forgot the author, but the title is Applied Functional Analysis (GTM 108). For simplicity, you can also find the definition on Wikipedia. The point is that the definition of an inner product doesn't need any summation over indices.

Thanks bro
 
  • #13
Ben Niehoff said:
Not every metric is diagonalizable in a coordinate basis. One can show (using rather fancy methods in some cases!) that:

In 2 dimensions, one can always find coordinates such that the metric takes the form

$$\begin{pmatrix}g_{11} & 0 \\ 0 & g_{22} \end{pmatrix}.$$
In 3 dimensions, one can always find coordinates such that the metric takes the form

$$\begin{pmatrix}g_{11} & 0 & 0 \\ 0 & g_{22} & 0 \\ 0 & 0 & g_{33} \end{pmatrix}.$$

However, in 4 dimensions, the best one can do is the following: One can always find coordinates in which the metric takes the form

$$\begin{pmatrix} g_{11} & g_{12} & 0 & 0 \\ g_{12} & g_{22} & 0 & 0 \\ 0 & 0 & g_{33} & g_{34} \\ 0 & 0 & g_{34} & g_{44} \end{pmatrix}.$$

That's very interesting. You can always diagonalize the metric at a single point, right? So what you're saying is that you can always find a coordinate system so that it is in that "almost-diagonal" form everywhere within a coordinate patch?
 
  • #14
stevendaryl said:
That's very interesting. You can always diagonalize the metric at a single point, right? So what you're saying is that you can always find a coordinate system so that it is in that "almost-diagonal" form everywhere within a coordinate patch?

You need to be clear about what you mean by "diagonalize". The metric is an invertible symmetric matrix, so it can always be diagonalized algebraically; in fact, it can be brought to a standard form where the diagonal entries are ##\pm 1##. But the resulting basis will not, in general, be a coordinate basis.

What I've discussed in my post above is a more restrictive question: can I diagonalize the metric by coordinate transformations, so that the resulting basis is still a coordinate basis? For dimensions greater than 3 this cannot be done, because even after using up the ##d## coordinate freedoms the metric has more independent components than it has diagonal entries. The surprising part is that in dimensions 3 or less, the answer is yes.

Then, a related question is, in dimensions 4 and higher: I know I cannot diagonalize the metric by coordinate transformations, but can I put it in a standard form? The answer is yes, although I don't know if the standard forms have been worked out in dimensions higher than 4.

All of this is true in an open neighborhood (i.e. coordinate patch), not just at a point.
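Here is a small numerical illustration of the purely algebraic (pointwise) statement, using an arbitrary symmetric matrix rather than any particular metric:

```python
import numpy as np

# "Metric" at a point: any invertible symmetric matrix will do for illustration.
g = np.array([
    [-2.0, 0.5, 0.0],
    [ 0.5, 3.0, 1.0],
    [ 0.0, 1.0, 1.5],
])

# Eigendecomposition g = Q diag(eigvals) Q^T with Q orthogonal, then rescale each
# eigenvector column so that the transformed matrix has only +/-1 on the diagonal.
eigvals, Q = np.linalg.eigh(g)
P = Q / np.sqrt(np.abs(eigvals))
print(np.round(P.T @ g @ P, 10))   # diagonal matrix of +1s and -1s
```

The new basis given by the columns of ##P## will in general not be a coordinate basis, which is exactly the distinction being drawn above.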
 
  • #15
Ben Niehoff said:
You need to be clear about what you mean by "diagonalize". The metric is an invertible symmetric matrix, so it can always be diagonalized algebraically; in fact, it can be brought to a standard form where the diagonal entries are ##\pm 1##. But the resulting basis will not, in general, be a coordinate basis.

What I've discussed in my post above is a more restrictive question: Can I diagonalize the metric by coordinate transformations?

I was talking about coordinate transformations, too. Can't you always find a coordinate system so that the metric in that coordinate basis is diagonal at a single chosen point?
 
  • #16
stevendaryl said:
I was talking about coordinate transformations, too. Can't you always find a coordinate system so that the metric in that coordinate basis is diagonal at a single chosen point?

Yes, of course. You can even make the diagonal entries all ##\pm 1##. These are the Riemann normal coordinates.

But I'm talking about things that hold in a finite patch.
 

Related to Metric Components: Inner Product of Vector U & V

1. What is the inner product of two vectors?

The inner product of two vectors is a mathematical operation that takes two vectors as inputs and produces a scalar value as output. It is also known as the dot product or scalar product.

2. How is the inner product calculated?

In Cartesian components, the inner product of two vectors u and v is calculated by taking the sum of the products of their corresponding components: u's first component times v's first component, plus u's second component times v's second component, and so on. More generally, when a metric is present it is ##g_{\mu\nu}u^\mu v^\nu##, as discussed in the thread above.
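A minimal sketch of that component-wise rule in Python (the vectors here are arbitrary examples):

```python
# Dot product as the sum of products of corresponding components.
u = [1.0, -2.0, 3.0]
v = [4.0,  0.5, -1.0]
dot = sum(ui * vi for ui, vi in zip(u, v))   # 1*4 + (-2)*0.5 + 3*(-1) = 0.0
print(dot)
```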

3. What is the significance of the inner product in vector algebra?

The inner product has several important applications in vector algebra. It can be used to find the angle between two vectors, determine whether two vectors are perpendicular, and project one vector onto another. It is also essential in defining the norm of a vector and the distance between vectors.

4. Can the inner product be negative?

Yes, the inner product can be negative. This happens when the angle between the two vectors is greater than 90 degrees, so that the cosine of the angle is negative. In that case the inner product equals the product of the magnitudes of the two vectors times that negative cosine.

5. How is the inner product related to the Euclidean distance between two vectors?

The squared Euclidean distance between two vectors u and v is the inner product of their difference with itself: ##\|u - v\|^2 = \langle u - v, u - v\rangle = \|u\|^2 - 2\langle u, v\rangle + \|v\|^2##. This follows from the definitions of the inner product and the Euclidean norm; when ##\langle u, v\rangle = 0## it reduces to the Pythagorean theorem.
