Tensor Product - Knapp, Chapter VI, Section 6

In summary: we quotient out (identify with 0) all the elements we need to in order to force the map $(e,f) \mapsto e\otimes f$ to be bilinear. This means that $e\otimes f$ is a coset, specifically the coset $(e,f) + V_0$. In simpler terms, we take a "whole lotta lines" (one for each ordered pair of elements from $E$ and $F$) and combine them, via a direct sum, into a large vector space. Knapp uses the notation $\Bbb K(e,f)$ to represent the line $\{k(e,f) : k \in \Bbb K\}$ belonging to the fixed pair $(e,f)$.
  • #1
Math Amateur
I am reading Anthony W. Knapp's book: Basic Algebra in order to understand tensor products ... ...

I need some help with an aspect of Theorem 6.10 in Section 6 of Chapter VI: Multilinear Algebra ...

The text of Theorem 6.10 reads as follows:

https://www.physicsforums.com/attachments/5391
https://www.physicsforums.com/attachments/5392

About midway in the above text, just at the start of "PROOF OF EXISTENCE", Knapp writes the following:

" ... ... Let \(\displaystyle V_1 = \bigoplus_{ (e,f) } \mathbb{K} (e, f)\), the direct sum being taken over all ordered pairs \(\displaystyle (e,f)\) with \(\displaystyle e \in E\) and \(\displaystyle f \in F\). ... ... "


I do not understand Knapp's notation for the direct sum ... what exactly does he mean by \(\displaystyle \bigoplus_{ (e,f) } \mathbb{K} (e, f)\) ... ... ? What does he mean by the \(\displaystyle \mathbb{K} (e, f)\) after the \(\displaystyle \bigoplus_{ (e,f) }\) sign ... ? If others also find his notation perplexing then maybe those readers who have a good understanding of tensor products can interpret what he means from the flow of the proof ...

Note that in his section on direct products Knapp uses standard notation, and there is nothing in his earlier sections that I know of that gives a clue to the notation I am querying here ... if any readers request me to provide some of Knapp's text on the definition of direct products I will provide it ...

Hope someone can help ...

Peter

*** NOTE ***

To give readers an idea of Knapp's approach and notation regarding tensor products I am providing Knapp's introduction to Chapter VI, Section 6: Tensor Product of Two Vector Spaces ... ... ... as follows ... ... ... :

https://www.physicsforums.com/attachments/5393
https://www.physicsforums.com/attachments/5394
 
  • #2
He means the same thing as Cooperstein and Winitzki: the free vector space over the field $\Bbb K$ generated by the SET $E \times F$.

As you may well appreciate, this vector space is HUGE: each pair $(e,f)$ is a basis element. As such, to get it to a "manageable size", we're going to take a QUOTIENT SPACE.
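
Just to quantify "HUGE" (a worked example of mine with a finite field, so that we can actually count; it is not taken from Knapp): take $\Bbb K = \Bbb F_2$ and $E = F = \Bbb F_2^2$. Then the set $E \times F$ has $4 \times 4 = 16$ elements, so the free vector space on it has dimension 16 and $2^{16} = 65536$ elements, whereas the tensor product $E \otimes F$ we will end up with has dimension $2 \cdot 2 = 4$, i.e. only $2^4 = 16$ elements.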

Specifically, we're going to quotient out (identify with 0) all the elements we need to in order to FORCE the map:

$(e,f) \mapsto e\otimes f$

to be bilinear.

In other words, $e\otimes f$ is a COSET, the coset $(e,f) + V_0$.

Since, by definition, we have (for example):

$(e_1 + e_2,f) - (e_1,f) - (e_2,f) \in V_0$,

we have $(e_1 + e_2,f) + V_0 = (e_1,f) + (e_2,f) + V_0$, that is:

$(e_1+e_2)\otimes f = e_1\otimes f + e_2\otimes f$.
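
For reference, here is the full generating set of $V_0$ in this standard construction (these are "all the elements we need" to force bilinearity): $V_0$ is spanned by all elements of the forms

$(e_1 + e_2, f) - (e_1, f) - (e_2, f)$

$(e, f_1 + f_2) - (e, f_1) - (e, f_2)$

$(ke, f) - k(e, f)$

$(e, kf) - k(e, f)$

for $e, e_1, e_2 \in E$, $f, f_1, f_2 \in F$, and $k \in \Bbb K$. The same one-line computation as above, applied to the third type, gives $(ke)\otimes f = k(e\otimes f)$, and similarly for the rest.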

In all fairness, Knapp is the "most correct": if you have infinitely many vector spaces, and you want to make a large vector space out of all of them, the correct thing to do is to use the DIRECT SUM, not the DIRECT PRODUCT. Cooperstein and Winitzki side-step this issue by considering only a finite number of spaces, and finite linear combinations. In these cases, the direct sum and the direct product are isomorphic.

Knapp writes $\Bbb K(e,f)$ because he is considering each "factor" as the "line" consisting of:

$\{k(e,f): k \in \Bbb K\}$, where $e$ and $f$ are FIXED elements of $E$ and $F$, respectively. (Note that $k(e,f)$ is a FORMAL scalar multiple of the basis element $(e,f)$; it is NOT the basis element $(ke,kf)$, which is a different basis element altogether. This is precisely why $V_1$ is so large.) We are thus taking the direct sum of a "whole lotta lines". If $\Bbb K = E = F = \Bbb R$, the free vector space over $\Bbb K$ generated by two copies of the real numbers has an UNCOUNTABLE basis (every point of the plane generates a line that lies in its "own dimension"). As I indicated before, this space is "so big" it's hard to actually imagine, whereas the tensor product $\Bbb R \otimes \Bbb R$ only has dimension 1 (and the tensor product $a\otimes b$ of real numbers $a,b$ is just ordinary multiplication).
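
To make this concrete (a worked illustration, not from Knapp's text): a typical element of $V_1$ is a finite formal sum

$k_1(e_1,f_1) + k_2(e_2,f_2) + \cdots + k_n(e_n,f_n)$

with $k_i \in \Bbb K$, $e_i \in E$, $f_i \in F$, and distinct pairs linearly independent by construction. In particular, $(e_1 + e_2, f)$ and $(e_1, f) + (e_2, f)$ are DIFFERENT elements of $V_1$; identifying them (and their scalar analogues) is exactly the job of the quotient by $V_0$.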
 
  • #3
Thank you Deveno ... That post was very clear and instructive ... EXTREMELY helpful ...

Your support has been critical to my achieving some basic understanding of the notion of tensor products ...

Still reflecting on all your posts on the topic ...

Thanks again,

Peter
 
  • #4

Hi Deveno,

Thanks again for the help ... just reflecting on your post ... and have a basic question ... ... You write:


" ... ... Specifically, we're going to quotient out (identify with 0) all the elements we need to to FORCE the map:

$(e,f) \mapsto e\otimes f$

to be bilinear. ... ... ... "


My question is as follows:

Why, exactly, do we wish the map $(e,f) \mapsto e\otimes f$ to be bilinear ... ... ?


(My apologies in advance if you have answered this question somewhere before ... ... )

Peter
 
  • #5
"Universal objects" (that is, objects which are defined by a universal mapping property), represent, in some sense, the "most general way to do something".

For example, a quotient object in a category that has homomorphisms is "the most general way to annihilate something".

Similarly, a free object is "the most general way to create more structure from less".

The guiding THEME in these kinds of constructions is that any SPECIFIC quality we wish to test for is related in some canonical way to the "universal object".

With tensor products, the quality we are seeking to "capture" is *multi-linearity*. The tensor product converts a multilinear map into a linear map (in the most general setting, an $R$-module homomorphism).

So, in a sense, the map $\otimes: V \times W \to V\otimes W$ is "the grandfather of all bilinear maps", just as the quotient group is the grandfather of all "subgroup contracting maps", and the free group is the "grandfather of all group presentations" (generators and relations).

For more than two factors, replace "bilinear" by "multilinear".
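
To state the "grandfather" property precisely (this is the universal mapping property being alluded to): for EVERY bilinear map $B: V \times W \to U$ there is a UNIQUE linear map $\tilde{B}: V \otimes W \to U$ such that

$B(v,w) = \tilde{B}(v \otimes w)$ for all $v \in V$, $w \in W$,

that is, $B = \tilde{B} \circ \otimes$. Every bilinear map thus factors through the single map $\otimes$.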

One of the defining features of linear algebra is that, given a vector space $V$, we can create, for any other vector space $W$, a third vector space:

$\mathcal{L}(V,W)$, linear maps from $V$ to $W$.

This defines a "super-mapping":

$W \mapsto \mathcal{L}(V,W)$

(note our variables here are ENTIRE VECTOR SPACES).

Let's call this "super-mapping" $F$.

Now suppose we have a linear transformation: $T: W \to U$.

We can, evidently, define a mapping $F(T)$:

$\mathcal{L}(V,W) \to \mathcal{L}(V,U)$, by

for each $A \in \mathcal{L}(V,W)$, $[F(T)](A) = TA$.
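
(It is a one-line check, by the way, that $F(T)$ is itself linear: $[F(T)](A + cB) = T(A + cB) = TA + c(TB) = [F(T)](A) + c[F(T)](B)$.)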

Now here is where it gets interesting:

If we have two linear transformations:

$T:W \to U$ and $S:U \to X$

then $F(ST) = F(S)F(T)$.
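
Indeed, this is just associativity of composition: for any $A \in \mathcal{L}(V,W)$,

$[F(ST)](A) = (ST)A = S(TA) = [F(S)]\left([F(T)](A)\right)$,

so the two sides agree on every $A$.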

So $F$ acts like a "super-homomorphism" on linear transformations between vector spaces.

It turns out that the tensor product is a sort of "inverse" (the correct term is adjoint) to this "super-homomorphism" $F$. And so, questions about vector spaces of linear transformations can be turned into questions about tensor products, and vice versa. It's sort of "similar" to how multiplication and factoring are "inverses" of each other; one leads us to make bigger and better things out of smaller things, and one lets us "break down" bigger things into more manageable "bites".
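
For the record (the standard statement of this adjunction, not spelled out in Knapp's excerpt above): there is a natural isomorphism

$\mathcal{L}(V \otimes W, U) \cong \mathcal{L}(V, \mathcal{L}(W,U))$

sending a linear map $g$ on $V \otimes W$ to the map $v \mapsto (w \mapsto g(v \otimes w))$. This is the linear-algebra version of "currying" a function of two variables.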

Now this is speaking rather fast-and-loose, but it's sort of the "why" of things.

****************

Short, but totally mystifying explanation: by definition, the tensor product is a multilinear map that satisfies its UMP.
 
  • #6
Thanks Deveno ... That was helpful ... and a bit challenging ...

Still reflecting on what you have said ...

Peter
 

Related to Tensor Product - Knapp, Chapter VI, Section 6

1. What is a tensor product?

A tensor product is a construction that combines two vector spaces $E$ and $F$ into a new vector space $E \otimes F$. Its defining feature is that bilinear maps on $E \times F$ correspond exactly to linear maps on $E \otimes F$, which is why it is used to represent relationships between several variables or quantities at once.

2. How is the tensor product defined in Knapp, Chapter VI, Section 6?

In Knapp, Chapter VI, Section 6, the tensor product $E \otimes_{\mathbb{K}} F$ is constructed as a quotient $V_1/V_0$: $V_1$ is the free vector space on the set $E \times F$, and $V_0$ is the subspace spanned by the relations that force the map $(e,f) \mapsto e \otimes f$ to be bilinear. The resulting vector space, together with this bilinear map, is called the tensor product of the two original vector spaces.

3. What are some applications of the tensor product in scientific research?

The tensor product is commonly used in physics, engineering, and computer science to model and solve complex systems. It is also used in linear algebra, quantum mechanics, and signal processing to represent relationships between multiple variables or quantities.

4. How is the tensor product related to the direct sum?

The tensor product and the direct sum both combine two vector spaces into a new one, but in different ways: the direct sum places the two spaces side by side, while the tensor product "multiplies" them (and is in fact built, as above, as a quotient of a direct sum of lines, one line for each pair of vectors). The dimension counts below make the difference concrete.
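
For finite-dimensional spaces:

$\dim(E \oplus F) = \dim E + \dim F$, while $\dim(E \otimes F) = \dim E \cdot \dim F$.

For example, $\mathbb{R}^2 \oplus \mathbb{R}^3$ has dimension $2 + 3 = 5$, while $\mathbb{R}^2 \otimes \mathbb{R}^3$ has dimension $2 \cdot 3 = 6$.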

5. Are there any limitations to using the tensor product?

The tensor product is a powerful mathematical tool, but some care is needed. It applies to vector spaces (and, more generally, to modules over a ring), and while it always exists and is unique up to canonical isomorphism, it does not preserve features of the original spaces such as dimension: dimensions multiply rather than add. Also, not every element of $E \otimes F$ is a simple tensor $e \otimes f$; a general element is a finite sum of simple tensors.
