Matrices and linear transformations.

In summary, this thread examines the proposition that all matrices define linear transformations. It is argued that every m x n matrix A over a field k determines a linear transformation T:k^n ---> k^m, and conversely that every linear transformation T:V ---> W, together with bases of V and W, can be associated with a matrix. It is also noted that matrices can be used in contexts independent of linear maps, but that this does not change the fact that every matrix gives a linear map and vice versa. The use of colors as labels in a matrix equation is also discussed, with the conclusion that while it may not have a physical interpretation, it is still a valid mathematical construction.
  • #1
Studiot
This thread is posted to examine the proposition that all matrices define linear transformations.

A Bahat : https://www.physicsforums.com/showthread.php?t=642161

Every m x n matrix A over a field k determines a linear transformation T:k^n--->k^m, namely left-multiplication by A. Conversely, if we are given a linear transformation T:V--->W and bases of V and W (i.e. isomorphisms V≈k^n and W≈k^m) there is some matrix associated with T in these bases.

Now, sometimes matrices are used in contexts independent of linear maps (I have in mind more analytic topics like stochastic matrices). But this doesn't change the fact that every matrix gives a linear map and every linear map gives a matrix once a basis is chosen.
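
As a concrete illustration of the quoted correspondence (not part of the original posts; the matrix, the map and all names below are arbitrary examples), here is a rough NumPy sketch of both directions:

[code]
import numpy as np

# Direction 1: an m x n matrix A over R acts as a linear map R^n -> R^m
# by left-multiplication.
A = np.array([[0., 1., 0.],
              [2., 0., 1.]])              # an arbitrary 2 x 3 example
T = lambda x: A @ x                       # "left-multiplication by A"

# Direction 2: given the map T and the standard bases, the matrix is recovered
# column by column from the images of the basis vectors.
A_recovered = np.column_stack([T(e) for e in np.eye(3)])

assert np.allclose(A, A_recovered)        # same matrix in these bases
[/code]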



But what of the following matrix equation?



[tex]\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} \text{blue} \\ \text{red} \\ \text{green} \end{bmatrix} = \text{red}[/tex]

The left-hand row matrix is not over a field, since its entries are restricted to the integers {0, 1}.

The right-hand column matrix is not a vector, since you cannot form a linear combination (αblue + βred + γgreen); such an expression makes no sense.

Yet the equation makes perfect sense if I perform the experiment of withdrawing a coloured ball from a bag of balls and wish to input the result into a computer for processing.
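
For what it is worth, that "experiment" can be programmed without appealing to any field or vector space at all: the 0/1 row is read as a selector and the column as a plain list of labels. A minimal sketch of that reading (the function name is mine):

[code]
def select(indicator_row, labels):
    # Treat the row entries as Boolean flags, not field elements:
    # no addition or scalar multiplication of labels is ever performed.
    picked = [label for flag, label in zip(indicator_row, labels) if flag]
    if len(picked) != 1:
        raise ValueError("exactly one entry of the indicator row may be 1/True")
    return picked[0]

print(select([0, 1, 0], ["blue", "red", "green"]))   # prints: red
[/code]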
 
  • #2
Is there a basis for your colors? Can you map these colors to vectors in R^n (preferably orthonormal ones)?

Saying the colors don't make sense is a bit odd, because they are just labels, like x, y, z and everything else. A label is a label, whereas something like <1,0,0,0> in R^n is more specific and more constrained.
 
  • #3
Thank you for the reply.

I can't see any relation to R^n.

If you can display one I would be interested.

I'm not even sure that my statement about {1,0} being integers is correct. They are really Boolean truth values.
 
  • #4
This is informal but you'll get the idea...

##
\begin{align*}
\begin{pmatrix}
0 & 1 & 0
\end{pmatrix}\left(k_1\begin{pmatrix}
a\\
b\\
c
\end{pmatrix}+k_2\begin{pmatrix}
e\\
f\\
g
\end{pmatrix} \right )&=\begin{pmatrix}
0 & 1 & 0
\end{pmatrix}\begin{pmatrix}
k_1a+k_2e\\
k_1b+k_2f\\
k_1c+k_2g
\end{pmatrix}
\\&= k_1b+k_2f\\
&=k_1\begin{pmatrix}
0 & 1 & 0
\end{pmatrix}\begin{pmatrix}
a\\
b\\
c
\end{pmatrix}
+k_2\begin{pmatrix}
0 & 1 & 0
\end{pmatrix}\begin{pmatrix}
e\\
f\\
g
\end{pmatrix}
\end{align*}##

Hence, it is a homomorphism and therefore a linear transformation.
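
The same check can also be run numerically; a quick sketch along the lines of the algebra above, with arbitrary random vectors and scalars:

[code]
import numpy as np

rng = np.random.default_rng(0)
row = np.array([0., 1., 0.])                    # the matrix (0 1 0)
T = lambda x: row @ x                           # left-multiplication by the row

u, v = rng.standard_normal(3), rng.standard_normal(3)
k1, k2 = rng.standard_normal(2)

# homomorphism property: T(k1*u + k2*v) = k1*T(u) + k2*T(v)
assert np.isclose(T(k1*u + k2*v), k1*T(u) + k2*T(v))
[/code]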
 
  • #5
If you have independent quantities, then these are just orthogonal basis vectors. As an example, consider the colors red, green and blue, and define <red,blue> = <red,green> = <blue,green> = 0.

Set red = <1,0,0>, blue = <0,1,0> and green = <0,0,1>, then apply linear algebra and vector algebra, and everything behaves mathematically as if they were really independent attributes.

If you want them dependent, then just curve the geometry and relate things together that are dependent: this is all curved geometry really is.

Curved geometry (i.e. not R^n) is just a way of describing spaces that have dependencies between the elements. It doesn't have to be some high-level concept like space-time: it can be any system where changing one thing causes something else to change in some way.

Normal linear algebra under the R^n geometry assumes that everything is independent, but differential geometry extends this to the cases where it isn't, and this is why you need differential geometry to study relativity.
 
  • #6
Hi Studiot,

Actually {0,1} is a field.
It's also denoted as ##F_2## or ##\mathbb Z/2 \mathbb Z##. See: http://en.wikipedia.org/wiki/Field_(mathematics)
In particular ##1+1=0## and ##1 \cdot 1=1##. Furthermore, a vector space is defined as the combination of a set V with a field F that together satisfy eight specific axioms.
See: http://en.wikipedia.org/wiki/Vector_space
In your case the vector space V is the set ##\{r \cdot \mathbf{red}+ g \cdot \mathbf{green}+ b \cdot \mathbf{blue}| r,g,b \in F_2\}## combined with the field F2, which defines the operations ##+## and ##\cdot##.

##1 \cdot \mathbf{red} + 1 \cdot \mathbf{green}## is one of the elements of V.
Mathematically it does not have to have a physical meaning, but of course it does.

Note that ##1 \cdot \mathbf{red} + 1 \cdot \mathbf{red} = 0 \cdot \mathbf{red}##, which I guess has a more problematic physical meaning. ;)
In particular it means that your matrix (0 1 0) is well defined and defines a linear transformation.
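
A small sketch of that construction, with each element of V stored as its three F2 coordinates in the order (blue, red, green) and all arithmetic done mod 2 (the encoding is mine, not from the post):

[code]
# Elements of V are coordinate triples over F2 = {0, 1}, in the order (blue, red, green);
# addition and scalar multiplication are performed mod 2.
def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def scale(k, v):                               # k is 0 or 1
    return tuple((k * a) % 2 for a in v)

blue, red, green = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# 1*red + 1*red = 0*red = the zero vector, as noted above.
assert add(red, red) == scale(0, red) == (0, 0, 0)

# The row (0 1 0) applied to the column (blue, red, green) returns (the coordinates of) red:
row, column = (0, 1, 0), (blue, red, green)
result = (0, 0, 0)
for k, vec in zip(row, column):
    result = add(result, scale(k, vec))        # 0*blue + 1*red + 0*green
assert result == red
[/code]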
 
  • #7
which means {red, green, blue} is a basis if mathematically (disregarding physical sense) we assume each color is independent
 
  • #8
Thank you all for your replies so far, this is proving a most enlightening discussion.

@Vaedoris

Your equation is not the same as mine. I require a black box that accepts the left-hand side as inputs and the right-hand side as the output. Specifically, the output has to be simply red. (No pun intended.)

@Chiro

Thank you for a most interesting viewpoint about curved geometry.

@ILS

Wikipedia also allows that the integers do not form a field, so perhaps I should not have restricted this to F2. Thank you for the insight. It is interesting that parts of a non-field can form a field. I will have to think about that.

'1red+1green is one of the elements of V.' is specifically excluded from the dataset and is not defined.

This is rather like continuing the curve of some variable plotted against temperature backwards below absolute zero. You can draw the curve, but such a zone is specifically excluded from the domain.
 
  • #9
Vaedoris said:
which means {red, green, blue} is a basis if mathematically (disregarding physical sense) we assume each color is independent

Agreed.

Studiot said:
Wikipedia also allows that the integers do not form a field

Which wikipedia article do you mean?

'1red+1green is one of the elements of V.' is specifically excluded from the dataset and is not defined.

Your input is ##\begin{pmatrix}blue \\ red \\ green\end{pmatrix}##.
I believe an alternative way to write this is 1blue + 1red + 1green.
This would be part of your input dataset?

Perhaps we need to make a distinction between your input dataset and your output dataset.
Your output dataset appears to be {0,red}.
This one indeed does not contain '1red+1green', but I think the input dataset does.

This is rather like continuing the curve of some variable plotted against temperature backwards below absolute zero. You can draw the curve, but such a zone is specifically excluded from the domain.

It's indeed usual to extend a definition beyond what is physically possible, then do calculations, and then restrict it again to what is physically possible.
 
  • #10
Which wikipedia article do you mean?


...The lack of multiplicative inverses, which is equivalent to the fact that Z is not closed under division, means that Z is not a field....

This is as I was taught, and there are plenty of further references.
 
  • #11
Yes, the integers do not form a field, but the integers modulo a prime number do form a field.

Furthermore, a linear transformation is only defined in the context of a vector space.
And a vector space requires a field, which is needed for the scalar multiplication.
 
  • #12
a linear transformation is only defined in the context of a vector space.
And a vector space requires a field, which is needed for the scalar multiplication.

Which has been my contention! (That some matrices are not linear transformations).

I can define my equation to avoid vector spaces; the fact that my set coincides with part of some vector space is irrelevant.
 
  • #13
Studiot said:
Which has been my contention! (That some matrices are not linear transformations).

I can define my equation to avoid vector spaces; the fact that my set coincides with part of some vector space is irrelevant.

Ah, but the proposition only claims that a matrix determines a linear transformation on a vector space, and even more specifically a vector space of the form k^n.

It does not say anything about what you get when you apply a matrix to something that is not a vector space.
If you apply a matrix to something that is not a vector space, then indeed that is not a linear transformation.
 
  • #14
a linear transformation is only defined in the context of a vector space.
And a vector space requires a field, which is needed for the scalar multiplication.

Maybe I was wrong, but I took the original comment (which was in another thread) to mean that a matrix can only be a linear transformation and nothing else.

To try to be fair and unbiased I did reproduce part of it in my initial post here.

It is like saying that the letter a is only the side of a triangle, because it can be applied as such, regardless of the fact that it can be a coefficient or a variable in an algebraic expression or many other things.

Thanks for the discussion.
 
  • #15
Yes, you can talk about a matrix being defined as a box that contains blanks that you write stuff in. But so what? It seems that as soon as you define any sort of matrix multiplication, and if it is well defined (if it isn't well defined you haven't done anything really) then you must impose enough structure to get a vector space or at least a module but I am not sure about that.
 
  • #16
then you must impose enough structure to get a vector space or at least a module but I am not sure about that.

Please explain why you say that.
 
  • #17
Studiot said:
It is like saying that the letter a is only the side of a triangle, because it can be applied as such, regardless of the fact that it can be a coefficient or a variable in an algebraic expression or many other things.

I think you are confusing "concepts" with "notation". You can use notation that looks like a rectangular array or table of "stuff" in useful ways that have nothing to do with vector spaces or linear transformations. It might even make sense to apply some of the rules of matrix algebra to them (but not necessarily all the rules).

In your OP it wasn't clear (to me at least) what your notation of color names meant. You seem to be using it in at least two different ways. First, taking linear combinations (αblue+βred+γgreen), which makes perfect sense if it represents, for example, the intensity of the components of a color display system. Then you switched to the "names" of three colored balls, where linear combinations don't make much sense, except for integer coefficients, and even then the concept involved looks more like "set theory" than "matrix algebra" to me. If you want to use the notation [ i j k ] to represent a set with i red balls etc., that's fine, but just writing the symbols "[ i j k ]" doesn't make a matrix (in the mathematical sense) appear from nowhere.
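
The "i red balls" reading does support integer combinations; a rough sketch of that viewpoint (the counts and the colour ordering are my own invention, not from the post):

[code]
import numpy as np

# A bag of balls as a vector of integer counts, in the order (blue, red, green).
# Counts live in Z, so this is a Z-module rather than a vector space over a field.
bag_a = np.array([2, 3, 0])          # 2 blue, 3 red
bag_b = np.array([1, 0, 4])          # 1 blue, 4 green

combined = bag_a + 2 * bag_b         # integer combinations of bags still make sense

red_count = np.array([0, 1, 0]) @ combined    # the same row now *counts* red balls
assert red_count == 3
[/code]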
 
  • #18
Studiot said:
Please explain why you say that.

Because to define what matrix multiplication means, you have to know what it means to multiply something. And if you want everything to be well defined and work properly, then you have to impose some structure. You can't just say 2blue+3red unless you define what this means. Otherwise, you're just writing stuff down (and as I said before, this is ok, but it's just kind of pointless.)
 
  • #19
AlephZero, I really don't follow your line of reasoning.
You appear to be agreeing with me and disagreeing with me at the same time.

:confused:

The thread is about a discussion of the definition of a matrix.

My maths dictionary and I hold that a matrix is

'A rectangular array of elements, usually themselves members of a field...' (my bold)

It was proposed that a matrix always is (represents) a linear transformation.

I hold that the cases where the elements are not members of a field may well not be (represent) linear transformations.

It is true that two matrices picked at random may not be able to participate in all the rules of matrix algebra.
So what?
Some pairs whose members are all real numbers may still be non-conformable.

But thank you for your thoughts.
 
  • #20
But Robert, what isn't well defined or working properly?

The example I gave works perfectly, and my rules specifically excluded (2blue+3red) etc.
 
  • #21
Studiot said:
But Robert, what isn't well defined or working properly?

The example I gave works perfectly, and my rules specifically excluded (2blue+3red) etc.

Well, tell me what blue + red means. If it means nothing, then you haven't defined anything, which falls into the first case of what I said.
 
  • #22
If I have a monocoloured ball, how many colours can it have?

You need to work within the rules set, not add ones of your own.

By the way have you heard of Macaulay brackets?
 
  • #23
Studiot said:
If I have a monocoloured ball, how many colours can it have?

You need to work within the rules set, not add ones of your own.

By the way have you heard of Macaulay brackets?

You seem to be missing the point.

You have two options:

1) Write a bunch of stuff in boxes and call them matrices. But then, guess what, you haven't actually done anything - you've just written a list in a box.

2) Actually define what it means to multiply two matrices and the elements in them. So, if, in your first problem, the matrix on the left was [1,1,1], what would this mean? You are trying to have it both ways: you want to define matrix multiplication AND you want to say that the result of your matrix multiplication doesn't make sense.

And when did I not work within any rules? What are you talking about?
 
  • #24
You seem to be missing the point.

No you are the one here missing the point (several of them actually).

Discussion is a two-way process; I have answered your questions: you have ignored mine.

As to matrix multiplication or element addition, where does any definition of a matrix require these properties to exist for matrices?

Where have I said that numeric matrix multiplication or numeric element addition are available operations?

Since you need the rules laid out, try these.

1) The elements of the left matrix may be 1 or 0 (or T or F, if you prefer to make it plain that numeric arithmetic is unavailable). This is called the incidence matrix.
2) The element 1 may appear only once in each row; the other elements in that row are therefore 0.
3) The right-hand matrix contains the dataset.
4) There is a binary operation between an incidence matrix and a dataset matrix which works as follows: the rows-into-columns rule associates each element e_i of the incidence matrix with a unique element e_d of the dataset matrix, using the combination rule ° that

If e_i is 0 or F, <e_i ° e_d> is discarded.
If e_i is 1 or T, e_d is entered as the result.

I have borrowed the pointy brackets from Macaulay; they are called Macaulay brackets which work similarly albeit their discard criteria are slightly different.

You ignored my polite question about these brackets, do you fully understand them?

It is impossible to follow the rules above and reach the following situation.

So, if, in your first problem, the matrix on the left was [1,1,1], what would this mean?
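
Rules 1) to 4) above can be coded up directly; a rough Python sketch (the function and variable names are mine, not from the thread), which also shows that a row such as [1,1,1] is simply rejected under rule 2 rather than producing an undefined result:

[code]
def apply_incidence(incidence_matrix, dataset_column):
    # Rules 1) and 2): every entry is 0/1 (or F/T) and each row holds exactly one 1/T.
    # Rule 4): pairing a 0/F entry with a dataset element discards it; the single
    # 1/T entry returns its paired dataset element unchanged.
    results = []
    for row in incidence_matrix:
        if sum(1 for flag in row if flag) != 1:
            raise ValueError("rule 2: each row must contain exactly one 1/T entry")
        for flag, element in zip(row, dataset_column):
            if flag:
                results.append(element)
                break
    return results

dataset = ["blue", "red", "green"]
print(apply_incidence([[0, 1, 0]], dataset))             # ['red']
print(apply_incidence([[0, 0, 1], [1, 0, 0]], dataset))  # ['green', 'blue']
# apply_incidence([[1, 1, 1]], dataset) would raise an error: that situation cannot arise.
[/code]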
 
  • #25
I don't understand the purpose of this thread very well. It is obvious that not all matrices define linear transformations; otherwise, what about all the matrices that define non-linear transformations, like affine and projective matrix transformations, to name the two most frequently used? Or, say, a complex invertible 2x2 matrix.
 
  • #26
I don't see the point of this thread. On one hand, matrices with coefficients in a field correspond exactly to linear transformations.
However, we can also apply matrices to other contexts where they do not correspond to linear transformations. The post in the OP is an example of this. This seems pretty obvious, so I don't really know why this thread has gone on for two pages already.
 
  • #27
TrickyDicky said:
I don't understand the purpose of this thread very well. It is obvious that not all matrices define linear transformations; otherwise, what about all the matrices that define non-linear transformations, like affine and projective matrix transformations, to name the two most frequently used? Or, say, a complex invertible 2x2 matrix.

micromass said:
I don't see the point of this thread. On one hand, matrices with coefficients in a field correspond exactly to linear transformations.
However, we can also apply matrices to other contexts where they do not correspond to linear transformations. The post in the OP is an example of this. This seems pretty obvious, so I don't really know why this thread has gone on for two pages already.

micro, you either have me in your ignore list or just wanted to stress the point. Either way it feels great to agree with you.
 
  • #28
I don't see the point of this thread.

Act 1 scene 1 line 1

This thread is posted to examine the proposition that all matrices define linear transformations.

This is very upsetting.

I carefully separated a side discussion to avoid diverting another thread where several posters were trying to help someone.
I also provided the promoter of that idea the opportunity to put his point of view, although he has not done so.

I have also learned some things from other posters in this thread and acknowledged the same.

If you don't want me to help other members, or learn from answers to questions just say so and I will stop.
 
  • #29
Studiot said:
This is very upsetting.

Don't be upset, your point is right, it is just that it seems an obvious point.
But I agree that one always gets to learn something new by debating things, even those apparently obvious.
 
  • #30
Studiot said:
No you are the one here missing the point (several of them actually).

Discussion is a two-way process; I have answered your questions: you have ignored mine.

As to matrix multiplication or element addition, where does any definition of a matrix require these properties to exist for matrices?
It doesn't, and I never said that it did. All I said is that IF you are going to define them, you WILL have to impose enough structure to make either a vector space OR a module *I think*.

Where have I said that numeric matrix multiplication or numeric element addition are available operations?
You didn't, and I never said you did; I never said anything about numbers.


1) The elements of the left matrix may be 1 or 0 (or T or F, if you prefer to make it plain that numeric arithmetic is unavailable). This is called the incidence matrix.
2) The element 1 may appear only once in each row; the other elements in that row are therefore 0.
3) The right-hand matrix contains the dataset.
Fine
4) There is a binary operation between an incidence matrix and a dataset matrix which works as follows: the rows-into-columns rule associates each element e_i of the incidence matrix with a unique element e_d of the dataset matrix, using the combination rule ° that
If e_i is 0 or F, <e_i ° e_d> is discarded.
If e_i is 1 or T, e_d is entered as the result.
OK, that's fine. But my initial point was that you have actually defined something here. I'm not so sure that what you have defined has actually imposed enough structure to make a vector space (i.e., as I said earlier, I might be wrong). If you want to do much more, I think you will have to start making some more definitions. If, on the other hand, this is just some notation that compactly expresses a choice that is being made, then I agree, this really isn't a vector space, but I never claimed it would be.

I have borrowed the pointy brackets from Macaulay; they are called Macaulay brackets which work similarly albeit their discard criteria are slightly different.

You ignored my polite question about these brackets, do you fully understand them?

It is impossible to follow the rules above and reach the following situation.
What following situation?

And no, past the meager wikipedia article, I don't know much about Macaulay brackets. But, again, I never ever claimed that ALL matrices have to represent a linear transformation. For example, when I program, I sometimes use a matrix to represent data, but I never really do any matrix operations. Perhaps I should have been clearer when I was talking about defining operations, but I meant that if you are going to define what the normal matrix operations are for a given matrix (i.e. matrix multiplication, multiplication by a scalar, etc.), then you have to impose so much structure that (again, I think) you get a vector space or at least a module. BUT, you haven't made these definitions, which means what I just said (and what I have been saying) doesn't apply to your situation. To be more precise: I agree with you, if what you have said is the only definition you have made.
 
  • #31
Studiot said:
[tex]\begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\,\begin{bmatrix} \textbf{blue} \\ \textbf{red} \\ \textbf{green} \end{bmatrix} = \text{red}[/tex]

I'm sorry, what is this supposed to mean?! Specifically, how do you define:

[tex]
x \times \left( \mathrm{blue}/\mathrm{red}/\mathrm{green} \right) = ?
[/tex]
where [itex]x \in \mathbb{R}[/itex].
 
  • #32
Dickfore said:
I'm sorry, what is this supposed to mean?! Specifically, how do you define:

[tex]
x \times \left( \mathrm{blue}/\mathrm{red}/\mathrm{green} \right) = ?
[/tex]
where [itex]x \in \mathbb{R}[/itex].

Well, x can only be 0 or 1. So he defines [itex]1\cdot \mathrm{color}=\mathrm{color}[/itex] and [itex]0\cdot \mathrm{color}=0[/itex]. And further, he sets [itex]0+\mathrm{color}=\mathrm{color}[/itex].

This is a completely consistent algebraic structure, but it of course has nothing to do with usual linear algebra.
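
A short sketch of that structure, exactly as described above, with 0 acting as the neutral element and colour + colour deliberately left undefined (the encoding and names are mine):

[code]
ZERO = 0                                 # plays the role of the 0 described above

def scal(x, colour):                     # x is 0 or 1: 1*colour = colour, 0*colour = 0
    return colour if x == 1 else ZERO

def add(a, b):                           # 0 + colour = colour; other sums never arise here
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    raise ValueError("colour + colour is left undefined")

row, column = (0, 1, 0), ("blue", "red", "green")
total = ZERO
for x, colour in zip(row, column):
    total = add(total, scal(x, colour))  # the usual row-times-column recipe
print(total)                             # prints: red
[/code]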
 
  • #33
micromass said:
Well, x can only be 0 or 1. So he defines [itex]1\cdot \mathrm{color}=\mathrm{color}[/itex] and [itex]0\cdot \mathrm{color}=0[/itex].
I guess you/he are/is trying to define multiplication of a vector (in the Hilbert space [itex]\left\lbrace \mathrm{red}, \mathrm{green}, \mathrm{blue} \right\rbrace[/itex]) by a scalar. However, the set [itex]\left\lbrace 0, 1 \right\rbrace[/itex] does not form a field, since 1 does not have an additive inverse in this set. You need to extend it with at least -1. Then, we need to define [itex] -1 \times \mathrm{red}/\mathrm{green}/\mathrm{blue}[/itex].

micromass said:
And further, he sets [itex]0+\mathrm{color}=\mathrm{color}[/itex].
I'm afraid I don't understand this part! You are adding a scalar with a vector! This is not defined in a vector space.

You need to define:
[tex]
\begin{array}{lcccl}
\mathrm{red} & + &\mathrm{red} & = & ? \\

\mathrm{red} & + &\mathrm{green} & = & ? \\

\mathrm{red} & + &\mathrm{blue} & = & ? \\

\ldots
\end{array}
[/tex]

micromass said:
This is a completely consistent algebraic structure, but it of course has nothing to do with usual linear algebra.
So, it has been shown that this algebraic structure is not a linear space. Therefore, it does not support linear algebra.
 
  • #34
Dickfore said:
I guess you/he are/is trying to define multiplication of a vector (in the Hilbert space [itex]\left\lbrace \mathrm{red}, \mathrm{green}, \mathrm{blue} \right\rbrace[/itex]) by a scalar.

What makes you think that [itex]\{\mathrm{red},\mathrm{green},\mathrm{blue}\}[/itex] is a Hilbert space :confused:

However, the set [itex]\left\lbrace 0, 1 \right\rbrace[/itex] does not form a field, since 1 does not have an additive inverse in this set. You need to extend it with at least -1. Then, we need to define [itex] -1 \times \mathrm{red}/\mathrm{green}/\mathrm{blue}[/itex].

Well, we could set 1=-1 and obtain the field [itex]\mathbb{Z}_2[/itex]. But that is not the point.

I'm afraid I don't understand this part! You are adding a scalar with a vector! This is not defined in a vector space.

The point is that Studiot wants to do something completely different from vector spaces. There are some relations with vector spaces as matrices multiply the same way, but the rest is completely different.

You need to define:
[tex]
\begin{array}{lcccl}
\mathrm{red} & + &\mathrm{red} & = & ? \\

\mathrm{red} & + &\mathrm{green} & = & ? \\

\mathrm{red} & + &\mathrm{blue} & = & ? \\

\ldots
\end{array}
[/tex]

He does not need to define this, as those additions will never show up in practice. Again, nobody claims that [itex]\{\mathrm{red},\mathrm{green},\mathrm{blue}\}[/itex] forms a vector space.
 
  • #35
micromass said:
What makes you think that [itex]\{\mathrm{red},\mathrm{green},\mathrm{blue}\}[/itex] is a Hilbert space :confused:
Then what kind of structure do they form? I am certainly not aware of such a structure.

micromass said:
Well, we could set 1=-1 and obtain the field [itex]\mathbb{Z}_2[/itex]. But that is not the point.
Ok, that makes sense.


micromass said:
The point is that Studiot wants to do something completely different from vector spaces. There are some relations with vector spaces as matrices multiply the same way, but the rest is completely different.

What does he want to do, exactly?!
 
