Basic Tensor Questions: Decomposition, Multi-Coordinate Systems

  • Thread starter clinden
  • #1
clinden
I have 2 basic questions:
1. Since a type (m,n) tensor can be created by component-by-component multiplication of m contravariant and n covariant vectors, does this mean an (m,n) tensor can always be decomposed into m contravariant and n covariant vectors? Uniquely?
2. Since a tensor in GR, or perhaps even more generally, is invariant under a change of coordinate system, can the vectors used to create it by component-by-component multiplication each be expressed in a different coordinate system? And, if so, would the coordinate transformation equations for the multi-coordinate-system tensor have the same form as the contravariant and covariant transformation equations for moving a tensor from one coordinate system to another?
 
  • #2
clinden said:
Since a type (m,n) tensor can be created by component-by-component multiplication of m contravariant and n covariant vectors, does this mean an (m,n) tensor can always be decomposed into m contravariant and n covariant vectors? Uniquely?

No, the converse is not true in general. What is true is that any tensor can be decomposed into a linear combination of such products.
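A minimal numerical sketch of this point (not from the thread, assuming numpy is available): a single tensor product ##u\otimes v## of two vectors always has matrix rank 1 when viewed as a matrix, so a rank-2 matrix like the identity cannot be a single product, yet it is a sum of two such products.

```python
import numpy as np

# The 2x2 identity, viewed as the components of a rank-2 tensor.
T = np.eye(2)

# A single outer product u ⊗ v always has matrix rank 1, so the
# identity (matrix rank 2) is not a single tensor product of vectors...
print(np.linalg.matrix_rank(T))  # 2

# ...but it IS a linear combination of such products: e1⊗e1 + e2⊗e2.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
reconstructed = np.outer(e1, e1) + np.outer(e2, e2)
print(np.allclose(T, reconstructed))  # True
```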
 
  • #3
clinden said:
2. Since a tensor in GR, or perhaps even more generally, is invariant under a change of coordinate system

Where did you get that from? A tensor changes when you change the coordinates... That's also why in GR, when you want to make an invariant object, you contract the indices somehow. The tensor equations, though, remain unchanged in form.
The only tensor that remains unchanged is the one that is equal to zero... in that case, no matter the change, the tensor will always look like 0 in every coordinate system.
 
  • #4
ChrisVer said:
Where did you get that from? A tensor changes when you change the coordinates... That's also why in GR, when you want to make an invariant object, you contract the indices somehow. The tensor equations, though, remain unchanged in form.
The only tensor that remains unchanged is the one that is equal to zero... in that case, no matter the change, the tensor will always look like 0 in every coordinate system.

I think the tensor components change; the tensor does not.
 
  • #5
ChrisVer said:
The only tensor that remains unchanged is the one that is equal to zero...

This is not true. A counter example is the Kronecker delta tensor (and its generalisations).
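A quick numerical check of this counterexample (my own sketch, not from the thread, assuming numpy): the Kronecker delta, viewed as a (1,1) tensor, transforms with one contravariant and one covariant index, i.e. as ##A\,\delta\,A^{-1}##, which leaves its components unchanged under any invertible basis change.

```python
import numpy as np

# Kronecker delta as the components of a (1,1) tensor.
delta = np.eye(3)

# A generic (hypothetical) invertible change-of-basis matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

# Mixed (1,1) transformation rule: components -> A @ delta @ A^{-1}.
transformed = A @ delta @ np.linalg.inv(A)
print(np.allclose(transformed, delta))  # True: components unchanged
```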
 
  • #6
cosmik debris said:
I think the tensor components change; the tensor does not.

How do you define the tensor, then? I mean, the fact that the components change means that the tensor also changes (not its rank etc.).

Orodruin said:
This is not true. A counter example is the Kronecker delta tensor (and its generalisations).

Yup, that slipped my mind...
 
  • #7
ChrisVer said:
How do you define the tensor, then? I mean, the fact that the components change means that the tensor also changes (not its rank etc.).

Well, a vector is a (rank-one contravariant) tensor, and it's the same vector even though its components change with your choice of coordinate system... That's how we can write ##\vec{F}=m\vec{a}## without committing to any particular coordinate system.

It's a tensor if the components transform according to the tensor transformation rule. Once you know that the object you're dealing with has that property, you can make various coordinate-independent statements about both the tensor and its components.
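A small sketch of this idea (mine, not from the thread, assuming numpy): under a rotation, a vector's components change, but coordinate-independent statements about it, such as its length, do not.

```python
import numpy as np

# Components of a vector in the original basis.
v = np.array([3.0, 4.0])

# A hypothetical rotation of the basis by angle theta.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Components in the rotated basis: they change...
v_new = R @ v
print(np.allclose(v, v_new))  # False

# ...but the vector's length, a coordinate-independent statement, does not.
print(np.isclose(np.linalg.norm(v), np.linalg.norm(v_new)))  # True
```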
 
  • #8
ChrisVer said:
How do you define the tensor, then?

To add to what Nugatory said: You start from a vector space and define the tensor product of vectors in that space. A general tensor is a linear combination of such products. There is no need to reference a particular coordinate system in the construction, and the transformation properties of the components drop out of the definition as a result of the transformation of the individual basis vectors in the original vector space.

I really do not like the general physicist way of introducing tensors as objects with certain transformation properties; it obscures what a tensor is and hides what is really going on.
 
  • #9
ChrisVer said:
I mean, the fact that the components change means that the tensor also changes (not its rank etc.).
Take a function ##T:V^*\times V\times V\to\mathbb R## for example. The component ##T(e^i,e_j,e_k)## changes when you change the ordered basis ##(e_1,\dots,e_n)##, but the tensor T doesn't.

When we make the substitutions ##e_i\to f_i## (for all i) in the string of text ##T(e^i,e_j,e_k)##, we get an entirely different string of text ##T(f^i,f_j,f_k)##, which (typically) represents a different number, because the ##f_i## represent different vectors than the ##e_i##. But those substitutions can't change a string of text like T or 3, that doesn't contain any symbols for the basis vectors. The symbol T will still represent the same function, just like the symbol 3 will continue to represent the integer successor of 2.

I guess that the reason why this can be confusing is that tensors are often defined using some specific ordered basis. For example, suppose that ##(e_1,\dots,e_n)## is an ordered basis and that ##\{T^i{}_{jk}|i,j,k\in\{1,\dots,n\}\}## is some set of numbers. We can now define a tensor ##T## by saying that ##T## is the unique multilinear map from ##V^*\times V\times V## into ##\mathbb R## such that ##T(e^i,e_j,e_k)=T^i{}_{jk}## for all i,j,k. Now the symbol ##T## represents a specific tensor, just like the symbol 3 represents a specific number, and is therefore unaffected by the substitution ##e_i\to f_i##.

The key thing to remember is that a change of ordered basis changes symbols, but not their meaning.
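To make this concrete numerically (my own sketch, not from the thread, assuming numpy), take a (0,2) tensor with components ##T_{ij}## and a hypothetical change of basis ##f_j = A^i{}_j e_i##. The component arrays change, but the number the tensor assigns to a fixed pair of vectors does not.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(2, 2))                    # components T_ij in the basis (e_1, e_2)
u, v = rng.normal(size=2), rng.normal(size=2)  # components of two fixed vectors

# Hypothetical change of basis f_j = A^i_j e_i.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

T_new = A.T @ T @ A          # covariant components change: T'_kl = A^i_k A^j_l T_ij
u_new = np.linalg.inv(A) @ u  # vector components transform contravariantly
v_new = np.linalg.inv(A) @ v

# The number T(u, v) is the same in both bases: the tensor itself didn't change.
print(np.isclose(u @ T @ v, u_new @ T_new @ v_new))  # True
```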
 
  • #10
Fredrik said:
I guess that the reason why this can be confusing is that tensors are often defined using some specific ordered basis.
I think the key point of confusion is the fact that tensors are presented in physics classes as objects which transform so-and-so under coordinate transformations, most often leading students to mix up tensors themselves with their components.
 
  • #11
Orodruin said:
I think the key point of confusion is the fact that tensors are presented in physics classes as objects which transform so-and-so under coordinate transformations, most often leading students to mix up tensors themselves with their components.
Yes, that's a much bigger problem. My comment was meant to be about why it might be confusing even to someone who is familiar with the good definition.
 

1. What is tensor decomposition?

Tensor decomposition is a mathematical process that breaks down a multidimensional tensor into smaller, simpler tensors. This allows for the analysis and manipulation of complex data in a more manageable way.

2. What are the types of tensor decomposition?

The two most common types are canonical polyadic (CP) decomposition, also known as CANDECOMP/PARAFAC, and Tucker decomposition. CP expresses a tensor as a sum of rank-one terms, while Tucker factors it into a smaller core tensor multiplied by a matrix along each mode.

3. How is tensor decomposition useful in data analysis?

Tensor decomposition can be used to extract important features and patterns from high-dimensional datasets. It can also help with data compression, noise reduction, and dimensionality reduction.
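As a hedged illustration of the compression claim (a toy sketch, not from the thread, assuming numpy): for an order-2 tensor, truncated SVD is the simplest decomposition into rank-one terms, and on near-low-rank data a single term already captures most of the signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# A noisy, nearly rank-1 data matrix (order-2 tensor) as a toy dataset.
a, b = rng.normal(size=50), rng.normal(size=40)
X = np.outer(a, b) + 0.01 * rng.normal(size=(50, 40))

# SVD decomposes X into a sum of rank-one (outer-product) terms.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the dominant term: a large compression of the data.
X1 = s[0] * np.outer(U[:, 0], Vt[0])

rel_err = np.linalg.norm(X - X1) / np.linalg.norm(X)
print(rel_err < 0.1)  # True: one rank-one term captures most of the data
```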

4. Can tensor decomposition be applied to multi-coordinate systems?

Yes, tensor decomposition can be applied to any multi-coordinate system, as long as the data can be represented as a multidimensional tensor. This includes Cartesian, cylindrical, and spherical coordinate systems.

5. Are there any limitations to tensor decomposition?

Tensor decomposition can be limited by the size and complexity of the dataset, as well as the type of tensor being analyzed. Additionally, the interpretation of the results can be challenging, especially for higher-dimensional tensors.
