Tensor Algebras - Dummit and Foote, Section 11.5

  • #1
Math Amateur
I am reading Dummit and Foote: Abstract Algebra (Third Edition) ... and am focused on Section 11.5 Tensor Algebras. Symmetric and Exterior Algebras ...

In particular I am trying to understand Theorem 31 but at present I am very unsure about how to interpret the theorem and need some help in understanding the basic form of the elements involved and the mechanics of computations ... so would appreciate any help however simple ...

Theorem 31 and its proof read as follows:
[Attached image: D&F, Theorem 31, Section 11.5, page 442]

My (rather simple) questions are as follows:

Question 1

In the above text from D&F we read the following:" ... ... ##\mathcal{T} (M)## is an ##R##-Algebra containing ##M## with multiplication defined by the mapping:

##( m_1 \otimes \ ... \ \otimes m_i ) ( m'_1 \otimes \ ... \ \otimes m'_j ) = m_1 \otimes \ ... \ \otimes m_i \otimes m'_1 \otimes \ ... \ \otimes m'_j ##
... ... ... "... my questions are as follows:

What do the distributive laws look like in this case ... and would sums of elements be just formal sums ... or would we be able to add elements in the same sense as in the ring ##\mathbb{Z}## where the sum ##2+3## gives an entirely different element ##5## ... ?

Further, how do we know that with respect to multiplication ##\mathcal{T}^{i} (M) \ \mathcal{T}^{j} (M) \subseteq \mathcal{T}^{i+j} (M)## ... ... ?
Question 2

In the proof we read the following:

"The map

## \underbrace{ M \times M \times \ ... \ \times M }_{ i \ factors} \times \underbrace{ M \times M \times \ ... \ \times M }_{ j \ factors} \longrightarrow \mathcal{T}^{i+j} (M) ##

defined by

##(m_1, \ ... \ , m_i, m'_1, \ ... \ , m'_j) \mapsto m_1 \otimes \ ... \ ... \ \otimes m_i \otimes m'_1 \otimes \ ... \ ... \ \otimes m'_j ##

is ##R##-multilinear, so induces a bilinear map from ##\mathcal{T}^{i} (M) \times \mathcal{T}^{j} (M)## to ##\mathcal{T}^{i+j} (M)## ... ... "

My questions are:

... what does the multilinearity of the above map look like ... ?

and

... how do we demonstrate that the above map induces a bilinear map from ##\mathcal{T}^{i} (M) \times \mathcal{T}^{j} (M)## to ##\mathcal{T}^{i+j} (M)## ... ... ? How/why is this the case ... ?

Hope someone can help ...

Peter
 

  • #2
If ##R = \mathbb{R}## and ##M=\mathbb{R}^2##, then ##\mathcal{T}(M)## is as a vector space the same as ##\mathbb{R}\langle X,Y\rangle##, the set of polynomials over ##\mathbb{R}## in two noncommuting variables. What does Theorem 31 tell us in this case?
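One way to make this identification concrete is a small sketch (plain Python; the dict-of-words encoding and all names here are illustrative, not from the text): an element of ##\mathbb{R}\langle X,Y\rangle## is a formal sum, stored as a dict from words (tuples of variable names) to coefficients, and multiplication concatenates words.

```python
# Toy model of R<X,Y>, polynomials in two noncommuting variables:
# an element is a dict mapping a word (tuple of variable names) to
# its real coefficient.  This encoding is illustrative only.
def mul(p, q):
    """Multiply two formal sums: concatenate words and distribute."""
    out = {}
    for u, a in p.items():
        for v, b in q.items():
            out[u + v] = out.get(u + v, 0) + a * b
    return {w: c for w, c in out.items() if c != 0}

X = {('X',): 1.0}
Y = {('Y',): 1.0}

# The variables do not commute, mirroring x⊗y != y⊗x in T(R^2):
print(mul(X, Y))   # {('X', 'Y'): 1.0}
print(mul(Y, X))   # {('Y', 'X'): 1.0}
```

Because the words `('X','Y')` and `('Y','X')` are different keys, ##XY## and ##YX## are genuinely different elements, just as ##x \otimes y \neq y \otimes x## in ##\mathcal{T}^2(\mathbb{R}^2)##.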
 
  • #3
Hmm ... thanks micromass ... but I think that is a bit beyond me ...

Initial thoughts (really only at square zero ... but must show an effort ... :frown: ... )

For the tensor algebra ##\mathcal{T} ( \mathbb{R}^2 )## we have

##\mathcal{T}^0 ( \mathbb{R}^2 ) = R = \mathbb{R}##

##\mathcal{T}^1 ( \mathbb{R}^2 ) = \mathbb{R}^2 = \mathbb{R} \times \mathbb{R}##

##\mathcal{T}^2 ( \mathbb{R}^2 ) = \mathbb{R}^2 \times \mathbb{R}^2##

##\mathcal{T}^3 ( \mathbb{R}^2 ) = \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2##

...

...

##\mathcal{T} ( \mathbb{R}^2 ) = \bigoplus_{ k = 0}^{ \infty } \mathcal{T}^k ( \mathbb{R}^2 )##
Now, the polynomial ring ##\mathbb{R} [x,y]## in two variables ##x## and ##y## with real coefficients consists of all finite sums (sounds a bit like a direct sum!) of monomial terms of the form ##ax^iy^j## (of degree ##i+j## ) ...

For example ##p(x,y) = 2x^3 + xy - y^2##

and

##q(x,y) = -3xy + 2y^2 + x^2y^3##

are both elements of ##\mathbb{R} [x,y]##, of degrees ##3## and ##5## respectively ... (perhaps elements of degree ##k## belong to ##\mathcal{T}^k ( \mathbb{R}^2 )## ? but why and how?) ... and of course we can add and multiply ##p(x,y)## and ##q(x,y)## as well as multiply each by a scalar ...

But ... ? ... how do we fit this polynomial ring with the tensor algebra ##\mathcal{T} ( \mathbb{R}^2 ) = \bigoplus_{ k = 0}^{ \infty } \mathcal{T}^k ( \mathbb{R}^2 )## ... ?
Can you help further ...

Peter
[My apologies for not getting further than restating some of the definitions involved ... ... BUT ... the idea of a link between polynomial rings and tensor algebras such as you propose is most interesting ... ... ]
 
  • #4
Math Amateur said:
Question 1

In the above text from D&F we read the following:" ... ... ##\mathcal{T} (M)## is an ##R##-Algebra containing ##M## with multiplication defined by the mapping:

##( m_1 \otimes \ ... \ \otimes m_i ) ( m'_1 \otimes \ ... \ \otimes m'_j ) = m_1 \otimes \ ... \ \otimes m_i \otimes m'_1 \otimes \ ... \ \otimes m'_j ##
... ... ... "

... my questions are as follows:

What do the distributive laws look like in this case ...
The distributive laws are the same ones you are familiar with from arithmetic. Given ##T_1,T_2,T_3\in\mathcal T(M)##:
$$(1):\ \ (T_1+T_2)T_3=T_1T_3+T_2T_3$$
$$(2):\ \ T_3(T_1+T_2)=T_3T_1+T_3T_2$$
where juxtaposition of elements of the algebra indicates multiplication according to the algebra's definition of multiplication.
and would sums of elements be just formal sums ... ?
Yes, they are formal sums. In fact, most things are formal sums with tensors. From the Cartesian product of ##m## copies of ##M## we form a vector space of formal sums of its elements. We then take a quotient to make it a tensor space. We then form the direct sum space ##\mathcal T(M)## out of the tensor spaces ##\mathcal T^k(M)## of all orders; the elements of that direct sum are formal sums of elements of the component tensor spaces.

Further, how do we know that with respect to multiplication ##\mathcal{T}^{i} (M) \ \mathcal{T}^{j} (M) \subseteq \mathcal{T}^{i+j} (M)## ... ... ?
Let [itex]B[/itex] be a generating set for [itex]M[/itex] and let [itex]B^{[j]}\equiv\{\bigotimes_{k=1}^j b_k\ |\ \forall k:\ b_k\in B\}[/itex]. Then elements of [itex]\mathcal T^j[/itex] are of the form
$$\sum_{b\in B^{[ j ]}} a_bb$$
where all the [itex]a_b[/itex] are in [itex]R[/itex].

So elements of [itex]\mathcal T^i\mathcal T^j[/itex], which is [itex]\{T_1T_2\ |\ T_1\in\mathcal T^i\wedge T_2\in \mathcal T^j\}[/itex], are of the form
$$\Big(\sum_{b\in B^{[ i ]}} c_bb\Big)\Big(\sum_{b\in B^{[j]}} a_bb\Big)$$
where all the [itex]a_b[/itex] and [itex]c_b[/itex] are in [itex]R[/itex].

Apply the distributive law, simplify and you'll see that these elements are all of a form that is consistent with the general form for elements of [itex]\mathcal T^{i+j}[/itex].

You might find it a useful exercise to identify an element of [itex]\mathcal T^{i+j}[/itex] that is not in [itex]\mathcal T^i\mathcal T^j[/itex], thereby showing that the inclusion is not an equality.
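The degree bookkeeping in this argument can also be checked mechanically. A minimal sketch (plain Python; the dict-of-words encoding of formal sums is illustrative, not D&F's notation): homogeneous elements are sums of words of fixed length, and every word in the product of a degree-##i## element with a degree-##j## element has length ##i+j##.

```python
# Formal sums encoded as dicts from words (tuples of basis symbols)
# to coefficients in R; this encoding is illustrative only.
def mul(p, q):
    """Product: concatenate words (the tensor product) and distribute."""
    out = {}
    for u, a in p.items():
        for v, b in q.items():
            out[u + v] = out.get(u + v, 0) + a * b
    return out

p = {('x', 'y'): 2, ('y', 'y'): -1}   # homogeneous of degree i = 2
q = {('x',): 3, ('y',): 5}            # homogeneous of degree j = 1
r = mul(p, q)

# Distributing produces only words of length i + j = 3, so the
# product lands in T^{i+j}(M):
assert all(len(w) == 3 for w in r)
print(r[('x', 'y', 'x')])   # 6, i.e. 2 * 3
```

Note that `r` has four terms here, and a generic four-term element of degree 3 need not factor this way, which is the point of the exercise about the inclusion being strict.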

Math Amateur said:
We have for elements of the tensor algebra ##\mathcal{T} ( \mathbb{R}^2 )## we have

##\mathcal{T}^0 ( \mathbb{R}^2 ) = R = \mathbb{R}##

##\mathcal{T}^1 ( \mathbb{R}^2 ) = \mathbb{R}^2 = \mathbb{R} \times \mathbb{R}##

##\mathcal{T}^2 ( \mathbb{R}^2 ) = \mathbb{R}^2 \times \mathbb{R}^2##

##\mathcal{T}^3 ( \mathbb{R}^2 ) = \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2##
These are not correct. The first (##\mathcal T^0(\mathbb R^2)##) is correct but thereafter they are not. The items need to be tensor products, not Cartesian products. The Cartesian product does not have the operations of addition and scalar multiplication defined for it. If you replace the ##\times## signs by ##\otimes## they will be correct.
 
  • #5
Hi Andrew ... thanks for the help ...

Sorry about the careless typo with respect to the ##\otimes## in the definition of ##\mathcal{T}^i ( \mathbb{R}^2 )## ... ...

Just a basic clarification ... you write:

" The distributive laws are the same ones you are familiar with from arithmetic. Given ##T_1,T_2,T_3\in\mathcal T(M)##:
$$(1):\ \ (T_1+T_2)T_3=T_1T_3+T_2T_3$$
$$(2):\ \ T_3(T_1+T_2)=T_3T_1+T_3T_2$$
where juxtaposition of elements of the algebra indicates multiplication according to the algebra's definition of multiplication."

Yes, I understand that ... but what is the form of the ##T_i## ... ?

Indeed ... we have that ...

##\mathcal{T} (M) = R \oplus \mathcal{T}^1 (M) \oplus \mathcal{T}^2 (M) \oplus \mathcal{T}^3 (M) \oplus \ ... \ ... \ ...##

which seems to suggest that an element of ##\mathcal{T} (M)## is of the form

##(r, \ m_1, \ m_2 \otimes m_3, \ m_4 \otimes m_5 \otimes m_6, \ ... \ ... \ ... \ )##

where only a finite number of terms are different from zero (finite support) ... ...

BUT ...

... ... the definition of multiplication for ##\mathcal{T} (M)## seems to imply that elements of ##\mathcal{T} (M)## are of the form:

##m_1 \otimes m_2 \otimes \ ... \ ... \ \otimes m_i##

?

Can you clarify ...?

Peter
 
  • #6
The elements ##T_i## of ##\mathcal T(M)## are formal sums of elements of the component tensor spaces.
The para labelled (1) in the excerpt you quote in the OP defines multiplication between two elements of ##\mathcal T(M)##. However, there is identification going on again here. Each of the three elements in the formula is really (via an abuse of notation) a formal sum of elements of the component tensor spaces, in which all coefficients (elements of the ring) are zero except the one corresponding to the tensor space from which the element is drawn.

For instance when the authors write ##(m_1\otimes...\otimes m_i)##, they actually mean the formal sum for which all coefficients are zero except for the coefficient of ##(m_1\otimes...\otimes m_i)##.

Also, don't think of the formal sums like this:
Math Amateur said:
which seems to suggest that an element of ##\mathcal{T} (M)## is of the form

##(r, \ m_1, \ m_2 \otimes m_3, \ m_4 \otimes m_5 \otimes m_6, \ ... \ ... \ ... \ )##
That's like saying that decimal numbers are of the form
(1, 10, 100, ...)
and that doesn't distinguish between the decimal numbers 123 and 321.
The way of writing these numbers is (3,2,1,0,0,...) and (1,2,3,0,0,0...). Think of the number in each place as a coefficient.

A decimal number is a formal sum of the coefficients (which are from the ring ##\mathbb Z_{10}##) over the module with generators ##\{10^k\ :\ k\in\mathbb N\}##. The generators specify the 'place value'. [We actually then have to take a quotient, to get the 'carry the 1' thing happening. But we don't need to worry about that right now]

Similarly, a member of ##\mathcal T(M)## is a sequence of coefficients in ##R##, where the place of a coefficient in the sequence is its 'place value' that determines what it is notionally multiplied by.

Thinking of the ring elements in the sequences as coefficients may help you get an insight into what Micromass is referring to when he talks about thinking about these objects as spaces of polynomials.
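The place-value picture above can be sketched in code (plain Python; the word encoding and all names are illustrative): a member of ##\mathcal T(M)## is stored as a map from place values (words in basis symbols, with the empty word for the ##\mathcal T^0(M) = R## component) to coefficients, exactly as a decimal number is a map from powers of ten to digits.

```python
# A member of T(M) as coefficients indexed by 'place value':
# keys are words in basis symbols; the empty word () is the
# scalar place, T^0(M) = R.  Encoding is illustrative only.
elt = {(): 4, ('x',): 2, ('x', 'y'): -1}   # 4 + 2x - x⊗y

# Compare: 423 is the coefficient map {100: 4, 10: 2, 1: 3}.
number = {100: 4, 10: 2, 1: 3}
assert sum(place * digit for place, digit in number.items()) == 423

# Reading off homogeneous components by 'place':
scalar_part = elt[()]                                      # T^0 component
degree_2 = {w: c for w, c in elt.items() if len(w) == 2}   # T^2 component
print(scalar_part, degree_2)   # 4 {('x', 'y'): -1}
```

The dict keys play the role of ##1, 10, 100, \dots## in the decimal analogy: two different elements differ in their coefficients, not in their place values.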
 
  • #7
Hi Andrew ... I am reflecting on and trying to understand your statement;

" ... Similarly, a member of ##\mathcal T(M)## is a sequence of coefficients in ##R##, where the place of a coefficient in the sequence is its 'place value' that determines what it is notionally multiplied by. ... ... "So ... it seems you advise thinking of an element of ##\mathcal T(M)## as

##(r_0, r_1, r_2, r_3, \ ... \ ... \ ... , r_i \ ... \ ... \ ... \ ... )##

where ##r_i## is a coefficient of ##m_1 \otimes m_2 \otimes \ ... \ ... \ ... \ \otimes m_i \in \mathcal T^i (M)## ... ...

but this seems to assume that the ##m##'s in ##m_1 \otimes m_2 \otimes \ ... \ ... \ ... \ \otimes m_i## don't vary ...

Isn't ##r_i m_1 \otimes m_2 \otimes \ ... \ ... \ ... \ \otimes m_i## a different element from ##r_i m'_1 \otimes m'_2 \otimes \ ... \ ... \ ... \ \otimes m'_i##?

... and so how can we represent both elements by ##r_i## in the infinite sequence ##(r_0, r_1, r_2, r_3, \ ... \ ... \ ... , r_i \ ... \ ... \ ... \ ... )##?

Can you help?

Peter
 
  • #8
Math Amateur said:
So ... it seems you advise thinking of an element of ##\mathcal T(M)## as

##(r_0, r_1, r_2, r_3, \ ... \ ... \ ... , r_i \ ... \ ... \ ... \ ... )##

where ##r_i## is a coefficient of ##m_1 \otimes m_2 \otimes \ ... \ ... \ ... \ \otimes m_i \in \mathcal T^i (M)## ... ...
Actually, I think of the place values as not
$$m_1 \otimes m_2 \otimes \ ... \ ... \ ... \ \otimes m_i$$
but rather
$$b_1 \otimes b_2 \otimes \ ... \ ... \ ... \ \otimes b_i$$
where the ##b_k## are elements of a minimal generating set ##B## for ##M##. See my definitions in post 4. That removes the possibility of different choices of ##m_k##.

So the series of coefficients has one place for each product of ##B## vectors at each order. If ##|B|=n## then there are ##n^k## coefficients in the series for each ##k\in\mathbb N##.
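That count can be checked directly (a plain Python sketch; ##B## and the tuple encoding are illustrative): the place values of order ##k## are the words in ##B^{[k]}##, of which there are ##n^k##.

```python
from itertools import product

B = ('x', 'y')   # an illustrative generating set, |B| = n = 2

def words(B, k):
    """B^{[k]}: all products of k generators, as length-k words."""
    return list(product(B, repeat=k))

# n**k place values at each order k:
for k in range(4):
    assert len(words(B, k)) == len(B) ** k

print(words(B, 2))   # [('x', 'x'), ('x', 'y'), ('y', 'x'), ('y', 'y')]
```

For ##k = 0## this gives the single empty word, matching the one-dimensional scalar component ##\mathcal T^0(M) = R##.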

If ##B## is independent in ##M## then it's a basis for ##M## and then, under this approach, every element of ##\mathcal T(M)## has a unique representation in terms of items constructed from ##B##. If ##M## is not a vector space then it may not be possible to find a generating set ##B## that is also a basis, in which case we'll need to take quotients. That complicates things a little, so we won't worry about that unless you want to.

Note that if ##M## is not finitely-generated (has no finite generating set) then there will be infinitely many different products of basis vectors at each order, so there is no easy way to think of the coefficients as being in a series. Further, if ##M## is not countably generated, then there is no possible representation of the set of coefficients as a series (ie a bijection between ##\mathbb N## and the place values). But the notation in post 4 copes perfectly well with that, as it uses ##B^{[j]}## itself as an index set, rather than ##\mathbb N## or a subset thereof.

With that notation, the general element of ##\mathcal T(M)## can be written

$$\sum_{k=0}^r \sum_{b\in B^{[k]}} a_{kb}b$$

where the coefficients ##a_{kb}## are in ##R## and the items ##b## indicate the place value (##B^{[0]}## consists of the empty product, giving the scalar component in ##\mathcal T^0(M) = R##).

Under this approach, one gets a set-theoretically different algebra for each different choice of ##B##. One then proves that those algebras are all isomorphic and (once one is comfortable with the surrounding concepts) identifies them.

D&F's approach seems different, and does appear to suffer from the problem you identify. I think they would need to take a quotient somewhere to get around the problem you describe. Indeed their definition of multiplication may not necessarily be well-defined without taking a quotient as, if an element ##T_1## of ##\mathcal T^k(M)## can be written two different ways as a pure product (it may be provable that that's impossible, but no proof leaps instantly to mind) then for some ##j\in\mathbb N## there may be some pure product ##T_2\in\mathcal T^j(M)## such that ##T_1T_2## differs according to which representation of ##T_1## is used.

I suspect that proving that can't happen will boil down to essentially the same thing as proving, under my method, that the different algebras arising from different choices of basis are all isomorphic.
 

Related to Tensor Algebras - Dummit and Foote, Section 11.5

What is a tensor algebra?

A tensor algebra is a mathematical structure used to study tensors, which are mathematical objects that describe multilinear relationships between vectors and other tensors. Its underlying vector space is the direct sum of all tensor powers of a given vector space, with multiplication given by the tensor product.

What is the significance of tensor algebras?

Tensor algebras are used in various fields of mathematics and physics, including differential geometry, representation theory, and quantum mechanics. They provide a powerful tool for studying linear transformations and multilinear maps in a general setting.

How is a tensor algebra defined?

A tensor algebra is defined as the direct sum of all tensor powers of a given vector space, including the zeroth power (which is just the base vector space itself). It is denoted by T(V) or T*(V), where V is the base vector space.

What is the relation between tensor algebras and symmetric and exterior algebras?

The symmetric and exterior algebras are both quotients of the tensor algebra. The symmetric algebra is the quotient by the ideal generated by all elements of the form ##m \otimes m' - m' \otimes m##, while the exterior algebra is the quotient by the ideal generated by all elements of the form ##m \otimes m##.

What are some applications of tensor algebras?

Tensor algebras are used in many areas of mathematics and physics, including algebraic geometry, topology, and quantum field theory. They are also used in engineering applications, such as in the study of stress and strain tensors in mechanics.
