Question on generalized inner product in tensor analysis

In summary, the conversation discusses the metric tensor and its role in computing an inner product that is invariant under reparametrization. The bare formula for the inner product can look ambiguous, and using it naively on two vectors given in polar coordinates is not straightforward. The thread also discusses position vectors and how to interpret them on a manifold, with suggestions for how to approach the concept, and touches on pulling back an inner product in order to give a smooth manifold a Riemannian metric.
  • #1
mnb96
Hello,

some time ago I read that if we know the metric tensor [itex]g_{ij}[/itex] associated with a change of coordinates [itex]\phi[/itex], it is possible to calculate the (Euclidean?) inner product in a way that is invariant to the parametrization. Essentially the inner product was defined in terms of the metric tensor as: [tex]g_{ij}a^i b^j[/tex] using Einstein notation (see here).

I understand the Einstein summation convention, but this formula looks totally ambiguous to me. In fact, the article speaks about curvilinear coordinates, so the metric tensor [itex]g_{ij}[/itex] is inevitably position-dependent. When I see [itex]g_{ij}[/itex] I interpret it as [itex]g_{ij}(u_1,\ldots,u_n)=g_{ij}(\mathbf{u})[/itex]. The formula above does not say *what* coordinates we have to plug into [itex]\mathbf{u}[/itex].

I would really like to see how someone uses the formula above to calculate the inner product between two vectors in [itex]\mathbb{R}^2[/itex] expressed in polar coordinates: [itex](r_1,\theta_1)=(1,0)[/itex] and [itex](r_2, \theta_2)=(1,\frac{\pi}{2})[/itex]. The result should be 0.
 
  • #2
The metric tensor assigns an inner product ##g_p:T_pM\times T_pM\rightarrow \mathbb{R}## for each ##p\in M## and varies smoothly from point to point so in this sense yes it is position dependent. It is a coordinate independent object and you should think of it as such i.e. ##g_{p}(u,v), u,v\in T_pM## is not dependent on what chart you choose containing ##p##.

What you should realize is that when you write ##g_{ij}##, it is the coordinate representation of the metric tensor ##g## with respect to some smooth chart hence it is already determined what coordinates you are to be using for your computation. To be more precise, if ##p\in M## and ##(U,\varphi)## is a chart containing ##p## then the coordinate representation of ##g_{p}## is given by ##g_{ij} = g_p(\frac{\partial }{\partial x^{i}}|_p, \frac{\partial }{\partial x^{j}}|_p)##. If ##(V,\phi)## is another chart containing ##p## then we get another coordinate representation ##g'_{ij} = g_p(\frac{\partial }{\partial x'^{i}}|_p,\frac{\partial }{\partial x'^{j}}|_p)## and we know that under the transition map, these two representations are related by ##g'_{ij} = \frac{\partial x^{k}}{\partial x'^{i}}\frac{\partial x^{l}}{\partial x'^{j}}g_{kl}##.
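The transformation law above can be checked numerically. Here is a minimal sketch (Python/NumPy; the sample point and the hand-entered Jacobian are illustrative choices, not anything from the thread) that pushes the Cartesian metric through the Cartesian-to-polar Jacobian and recovers the familiar polar representation ##\mathrm{diag}(1, r^2)##:

```python
import numpy as np

# Check g'_ij = (dx^k/dx'^i)(dx^l/dx'^j) g_kl for Cartesian -> polar
# at one (arbitrarily chosen) point.
r, theta = 2.0, 0.7

# x = r cos(theta), y = r sin(theta);  J[k, i] = dx^k / dx'^i
J = np.array([[np.cos(theta), -r*np.sin(theta)],
              [np.sin(theta),  r*np.cos(theta)]])

g_cart = np.eye(2)            # g_kl: the Cartesian representation
g_polar = J.T @ g_cart @ J    # the transformation law above

# The familiar polar representation diag(1, r^2) comes out:
print(g_polar)
```

Note that the result depends on ##r##: the same tensor ##g## has a position-dependent coordinate representation, which is exactly the point of the paragraph above.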

As for the computation you were interested in, this is a mistake my special relativity professor loved to bring up. In polar coordinates, you must specify where the vectors are located, not just their components in the polar representation! You cannot use the metric tensor without specifying the positions of the vectors. I had originally written up the issue but there is a whole thread regarding this commonly made mistake: https://www.physicsforums.com/showthread.php?t=621707
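To make the mistake concrete, here is a minimal numerical sketch (Python/NumPy; the helper names are my own) using the example from post #1. Contracting the raw coordinate tuples with ##g_{ij}## at an arbitrarily chosen point gives a meaningless nonzero number, while treating the tuples as positions and comparing the corresponding vectors in one common frame gives the expected 0:

```python
import numpy as np

def polar_metric(r, theta):
    # Metric tensor of the Euclidean plane in polar coordinates,
    # evaluated at the point (r, theta): g_ij = diag(1, r^2).
    return np.diag([1.0, r**2])

def to_cartesian(r, theta):
    return np.array([r*np.cos(theta), r*np.sin(theta)])

# The two tuples from post #1.
a_polar = (1.0, 0.0)
b_polar = (1.0, np.pi/2)

# Naive attempt: treat the tuples as vector components and contract them
# with g_ij.  There is no single point at which to evaluate g (exactly
# the ambiguity raised in post #1); picking a's location gives 1, not 0:
g = polar_metric(*a_polar)
naive = np.array(a_polar) @ g @ np.array(b_polar)

# Correct: the tuples are positions, not tangent vectors.  Express the
# corresponding vectors in one common (Cartesian) frame and use the flat
# metric there:
correct = to_cartesian(*a_polar) @ to_cartesian(*b_polar)
print(naive, correct)   # 1.0 and (numerically) 0.0
```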
 
  • #3
Hi WannabeNewton!

Thanks, your explanation was very clear. Basically the metric defines (smoothly) a field of local inner products that are different at each location. Thus only vectors belonging to the same tangent space (i.e. originating from the same location) can be fed to the local inner product operator. This is clear.

I understand now what went wrong in my example, but there is still something that bothers me:

1) In the old thread you mentioned, in one of the last posts the author gordon831 seems to have addressed the problem, and claims to have obtained the correct result with an example analogous to mine.

It seems to me that gordon831 essentially took one of the two vectors, parallel-transported it to the location of the other vector, re-expressed the transported vector as a linear combination of the local frame of the other vector, and got the correct result.
Does this trick always (and only!) work when the manifold is flat?

2) These sorts of misunderstandings mostly come from the fact that, when working with ordinary vector spaces (and Euclidean vectors), students are always taught to visualize vectors as entities "originating" from 0 and pointing to a specific location. However, when teachers and authors of books start speaking about manifolds, they don't clearly say what happens to the old concept of a position vector. They seem to silently forget about it and start talking about vector fields! This is painfully confusing for a beginner.

Should I perhaps start performing the mental exercise of reinterpreting Euclidean space as a manifold whose tangent spaces are constant at each location and whose metric is the identity matrix? If the answer is yes, then how do you interpret the old position vectors? As vectors in this manifold placed at arbitrary locations, or strictly at the origin?
 
  • #4
All gordon did was express the vectors in terms of the frame at an arbitrary but common point for both vectors; it works because we are just converting between Cartesian and polar coordinates in flat space.
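As a sketch of that common-frame procedure (Python/NumPy; the helper names are my own), take both vectors in Cartesian components, expand each in the polar coordinate basis at a single common point, and contract with the metric there. In flat space the answer, 0, does not depend on which common point is chosen:

```python
import numpy as np

def polar_frame(r, theta):
    # Coordinate basis vectors d/dr and d/dtheta at (r, theta),
    # written out in Cartesian components, as columns of a matrix.
    e_r = np.array([np.cos(theta), np.sin(theta)])
    e_t = np.array([-r*np.sin(theta), r*np.cos(theta)])
    return np.column_stack([e_r, e_t])

def polar_metric(r, theta):
    return np.diag([1.0, r**2])

# Both vectors from the thread's example, in Cartesian components:
a = np.array([1.0, 0.0])   # the vector at (r, theta) = (1, 0)
b = np.array([0.0, 1.0])   # the vector at (r, theta) = (1, pi/2)

p = (1.0, 0.0)             # any common point works in flat space
E = polar_frame(*p)
a_comp = np.linalg.solve(E, a)   # components of a in the frame at p
b_comp = np.linalg.solve(E, b)   # components of b in the frame at p

result = a_comp @ polar_metric(*p) @ b_comp
print(result)   # 0, as expected
```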

As for your second question, I really don't know if that mental exercise will be helpful or not. My first exposure to thinking about these things came from learning general relativity so I can't say from experience if that mental exercise will actually be useful because I never tried it.
 
  • #5
Hi, mnb96:

Besides WBN's ideas, maybe you can do some "down-and-dirty" calculations in the case of an m-manifold M embedded in R^n, pulling back the standard inner product of R^m (seen/considered as a 2-tensor) by the individual chart maps. Maybe start with low-dimensional cases, like pulling back the inner product of R^1 to a curve (a single chart); don't take anything too crazy for a start. It may also help to see the argument for how any smooth manifold can be given a Riemannian metric by pulling back the inner product, since it uses a lot of the issues I think you're interested in.
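A hedged sketch of the low-dimensional pullback suggested here (Python/NumPy; the circle and the numerical derivative are my own illustrative choices): for an embedding f of a curve, the pulled-back metric is the 1×1 matrix f′(t)·f′(t), which for a circle of radius R is the constant R²:

```python
import numpy as np

# Pull back the Euclidean inner product of the ambient R^2 to a curve.
# For an embedding f: R^1 -> R^2, the induced (pulled-back) metric at
# parameter t is the 1x1 matrix g(t) = f'(t) . f'(t).
R = 2.0

def f(t):
    # A circle of radius R: a 1-manifold embedded in R^2.
    return np.array([R*np.cos(t), R*np.sin(t)])

def induced_metric(t, h=1e-6):
    # Central-difference derivative of the embedding, then the
    # ambient dot product of the tangent vector with itself.
    df = (f(t + h) - f(t - h)) / (2*h)
    return df @ df

# For the circle, the induced metric is the constant R^2 at every t:
print(induced_metric(0.3))
```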
 

Related to Question on generalized inner product in tensor analysis

1. What is a generalized inner product in tensor analysis?

A generalized inner product in tensor analysis is a mathematical operation that takes two tensors as inputs and produces a scalar value as output. It is a way of measuring the relationship between two tensors, similar to how a dot product measures the relationship between two vectors. However, a generalized inner product can also be applied to tensors of higher dimensions, making it a more versatile tool in tensor analysis.

2. How is a generalized inner product calculated?

A generalized inner product is calculated by first defining a metric tensor, a mathematical object that assigns a scalar value to each pair of basis vectors in a vector space. The metric tensor is then used to lower the indices of one tensor, after which the two tensors are contracted: corresponding components are multiplied and summed over all repeated indices. The result is a scalar value, the generalized inner product of the two tensors.
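As a sketch of this recipe (Python/NumPy; the metric and the component values are made-up illustrative numbers), the contraction g_ij a^i b^j can be written directly, or equivalently as lowering an index first and then summing:

```python
import numpy as np

# The contraction g_ij a^i b^j, written out with einsum.  The metric is
# the polar-coordinate metric diag(1, r^2) at one chosen point; any
# metric tensor works the same way.
r = 3.0
g = np.diag([1.0, r**2])
a = np.array([2.0, 1.0])   # components a^i in the chosen basis
b = np.array([1.0, 4.0])   # components b^j in the same basis

inner = np.einsum('ij,i,j->', g, a, b)

# Same thing via "lowering an index" first: a_j = g_ij a^i, then a_j b^j
a_low = np.einsum('ij,i->j', g, a)
print(inner, a_low @ b)
```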

3. What are some real-world applications of generalized inner products in tensor analysis?

Generalized inner products have many applications in fields such as physics, engineering, and computer science. They are used to calculate physical quantities such as work, energy, and momentum in mechanics and electromagnetism. In engineering, they are used to analyze stress and strain in materials. In computer science, they are used in data compression and pattern recognition algorithms.

4. How does a generalized inner product differ from a regular inner product?

While a regular inner product is defined for two vectors in a vector space, a generalized inner product can be applied to tensors of any rank. Additionally, a generalized inner product takes the metric tensor into account, which allows for a more flexible and versatile way of measuring the relationship between tensors. For two vectors the result is still a scalar; contracting the metric with only one index of a higher-rank tensor instead produces a tensor of lower rank.

5. Are there any limitations to using generalized inner products in tensor analysis?

One limitation of using generalized inner products is that the result may vary depending on the choice of metric tensor. Different metric tensors can lead to different inner products for the same pair of tensors. Additionally, generalized inner products can only be defined for tensors in a vector space that has a metric tensor defined. This means that not all tensors can be compared using a generalized inner product, limiting its applicability in certain cases.
