Will Flannery
The two tensor definitions I'm (newly) familiar with, by transformation rules, and as a linear map from a tensor product space to the reals, don't tell me what a tensor does, and to the best of my knowledge they don't make it apparent. So I'm looking for an operational definition, and suggesting the following one.
Scalars are rank 0 tensors.
For tensors of higher rank we start with an n-dimensional vector space V with basis e1, e2, ... en and its dual covector space V* with basis e1*, e2*, ... en*. Vectors and covectors are rank 1 tensors.
A tensor of rank r > 1 is a means of defining a linear map from tensors of rank m to tensors of rank n. Typically m and n are less than r, but the definition doesn't require it.
A tensor of rank r > 1 is a linear combination of dyadic basis tensors of rank r, where a dyadic basis tensor has the form x1.x2.x3...xr and each xi is either a basis vector ej of V or a basis covector ej* of V*.
The product of two dyadic basis tensors x1.x2.x3...xr and y1.y2.y3...ys is computed by evaluating <xr,y1>; if that is not 0, evaluate <x(r-1),y2>, and so on, continuing until one of the two dyadic basis tensors (normally y1.y2...ys) is used up. If no 0 was produced, the product is whatever remains: either a 1 (if both are used up) or the leftover part, which is itself a dyadic basis tensor. Note that when evaluating <x(r-(k-1)),yk>, one of the pair must be a vector and the other a covector, else it's an error.
In other words, the x dyadic basis tensor eats the y dyadic basis tensor until either one of the nibbles is 0 and the result is 0, or one of them is used up and the result is 1, or the part of the x dyadic basis tensor that didn't get a bite (or of the y dyadic basis tensor that didn't get bitten).
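To make the rule above concrete, here is a minimal Python sketch of it. The encoding is my own assumption, not part of the definition: a dyadic basis tensor is a tuple of factors, a basis vector ej is written ('v', j), a basis covector ej* is written ('c', j), and the pairing <ei*, ej> = <ej, ei*> is 1 when i = j and 0 otherwise.

```python
def pair(x, y):
    """Pairing of one basis vector with one basis covector:
    <ei*, ej> = <ej, ei*> = 1 if i == j, else 0."""
    if {x[0], y[0]} != {'v', 'c'}:
        raise ValueError("pairing needs one vector and one covector")
    return 1 if x[1] == y[1] else 0

def dyad_product(x, y):
    """Contract the tail of x against the head of y, factor by factor.
    Returns 0 if any pairing is 0; otherwise the scalar 1 if nothing is
    left over, or the leftover tuple of factors (a dyadic basis tensor)."""
    k = min(len(x), len(y))
    for i in range(k):
        if pair(x[len(x) - 1 - i], y[i]) == 0:
            return 0
    rest = x[:len(x) - k] + y[k:]
    return 1 if rest == () else rest
```

For example, dyad_product((('v', 1), ('c', 2)), (('v', 2),)) leaves (('v', 1),), i.e. e1.e2* applied to e2 gives e1, while applying it to e3 gives 0.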
A tensor A maps a tensor B by applying each of the dyadic basis tensors in A to each of the dyadic basis tensors in B, multiplying their coefficients whenever the product is not 0, and summing the resulting dyadic basis tensors, with those coefficients, to get the result of A applied to B.
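The full map can be sketched the same way. Again the encoding is my own assumption: a tensor is a dict from dyadic basis tensors, written as tuples of ('v', j) / ('c', j) factors, to coefficients, with the empty tuple () standing for a scalar result.

```python
def apply_tensor(A, B):
    """A, B: dicts mapping a dyadic basis tensor (tuple of ('v'|'c', index)
    factors; () for a scalar) to its coefficient.  Returns A applied to B."""
    out = {}
    for xa, ca in A.items():
        for xb, cb in B.items():
            k = min(len(xa), len(xb))
            coeff = ca * cb
            # contract the tail of xa against the head of xb
            for i in range(k):
                x, y = xa[len(xa) - 1 - i], xb[i]
                if x[0] == y[0]:
                    raise ValueError("pairing needs one vector and one covector")
                if x[1] != y[1]:        # <ei*, ej> = 0 for i != j
                    coeff = 0
                    break
            if coeff == 0:
                continue
            rest = xa[:len(xa) - k] + xb[k:]   # () means the term is a scalar
            out[rest] = out.get(rest, 0) + coeff
    return out
```

With this encoding, a rank 2 tensor built from ei.ej* factors applied to a vector reproduces ordinary matrix-vector multiplication, and a covector applied to a vector produces a scalar, keyed by ().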
I think this is how tensors work, and I think this definition spells it out explicitly and makes it clear. But I'd like to have it verified, or corrected if necessary.