
[SOLVED] Basis of Matrix Units

Sudharaka

Well-known member
MHB Math Helper
Feb 5, 2012
1,621
Hi everyone, :)

We are given the following question.

Find the matrix of the linear transformation \(f:M_{2}(\Re )\rightarrow M_{2}(\Re )\) given by \(f(X)=X\begin{pmatrix}a&b\\c&d\\ \end{pmatrix}\), with respect to the basis of matrix units \(E_{11},\,E_{12},\,E_{21},\,E_{22}\).
I don't expect a full answer to this question, but I don't have any clue as to what a matrix unit is. Do any of you know what a matrix unit is?
 

Petrus

Well-known member
Feb 21, 2013
739
This is what they mean: \(E_{ij}\) is the \(2\times 2\) matrix with a \(1\) in row \(i\), column \(j\), and zeros everywhere else. Notice that \(11\) means row 1, column 1.
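As a quick sanity check (a small Python sketch, not from the original post; the helper name is mine), the four matrix units can be built directly from that definition:

```python
def matrix_unit(i, j, n=2):
    """Return the n x n matrix unit E_ij (1-based indices) as nested lists:
    a 1 in row i, column j, and 0 everywhere else."""
    return [[1 if (r == i - 1 and c == j - 1) else 0 for c in range(n)]
            for r in range(n)]

E11 = matrix_unit(1, 1)  # [[1, 0], [0, 0]]
E12 = matrix_unit(1, 2)  # [[0, 1], [0, 0]]
E21 = matrix_unit(2, 1)  # [[0, 0], [1, 0]]
E22 = matrix_unit(2, 2)  # [[0, 0], [0, 1]]
```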



Regards,
\(\displaystyle |\pi\rangle\)
 

Sudharaka

Well-known member
MHB Math Helper
Feb 5, 2012
1,621
Hi Petrus, :)

Great, thanks very much. My guess was correct, then. But can you please let me know where you found this information?
 

Sudharaka

Well-known member
MHB Math Helper
Feb 5, 2012
1,621
Okay, I found the verification I needed from >>Planetmath<<. Thanks again Petrus. :)
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
Those matrices are the standard basis for $M_2(\Bbb R)$ given by the inverse of the vectorization function: $M_2(\Bbb R) \to \Bbb R^4$, which sends:

$E_{11} \to e_1$
$E_{12} \to e_2$
$E_{21} \to e_3$
$E_{22} \to e_4$.
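In code, that vectorization is just row-major flattening ("reading the entries like a book"); a minimal Python sketch, with names of my own choosing:

```python
def vec(M):
    """Row-major vectorization M_2(R) -> R^4: flatten a 2x2 matrix
    (nested lists) into a length-4 list."""
    return [entry for row in M for entry in row]

# The matrix units E_ij: a 1 in row i, column j (1-based), 0 elsewhere.
E = {(i, j): [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(2)]
              for r in range(2)]
     for i in (1, 2) for j in (1, 2)}

# vec sends E_11 -> e_1, E_12 -> e_2, E_21 -> e_3, E_22 -> e_4.
assert vec(E[(1, 1)]) == [1, 0, 0, 0]
assert vec(E[(1, 2)]) == [0, 1, 0, 0]
assert vec(E[(2, 1)]) == [0, 0, 1, 0]
assert vec(E[(2, 2)]) == [0, 0, 0, 1]
```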

The reason this works so well, is that the diagonal matrices of the form:

$aI = \begin{bmatrix}a&0\\0&a \end{bmatrix},\ a \in \Bbb R$

form a field isomorphic to the real numbers, and these matrices commute with all 2x2 matrices, so we have an extension ring with a field in its center, which naturally forms an algebra (a ring that is also a vector space).

(Probably more than you wanted to know. :P)
 

Sudharaka

Well-known member
MHB Math Helper
Feb 5, 2012
1,621
Hi Deveno, :)

Thanks very much for the insight. I haven't studied things such as extension rings much, and my knowledge of these things is pretty basic. Let me ask some questions for clarification.

So the given set of diagonal matrices, \(\{aI: \mbox{I is the identity matrix},\,a\in\Re\}\), together with the set of all 2x2 matrices (as the scalar field), is the algebra, isn't it?
 

Sudharaka

Well-known member
MHB Math Helper
Feb 5, 2012
1,621
Hi again, :)

And here's my way of solving my original question. Let me know if I am wrong. We plug in the basis vectors to the linear transformation equation to get,

\[f(E_{11})=\begin{pmatrix}a&b\\0&0\end{pmatrix}\]

\[f(E_{12})=\begin{pmatrix}c&d\\0&0\end{pmatrix}\]

\[f(E_{21})=\begin{pmatrix}0&0\\a&b\end{pmatrix}\]

\[f(E_{22})=\begin{pmatrix}0&0\\c&d\end{pmatrix}\]

Then the transformation matrix \(A\) with respect to the basis matrices \(\{E_{11},\, E_{12},\, E_{21},\, E_{22}\}\) will be,

\[A=\begin{pmatrix} a&b&c&d&0&0&0&0\\0&0&0&0&a&b&c&d \end{pmatrix}\]
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
Naively, most people think of vectors as "$n$-tuples" of something.

But really, a vector space is composed of 2 things:

1) stuff we can add together (these are the vectors),

2) stuff we can use to "stretch/shrink" the vectors (these are the scalars).

Formally, a vector space $V$ over a field $F$ is an abelian group:

$(V,+)$ together with an operation:

$\cdot :F \times V \to V$ with:

1) $\alpha \cdot (u + v) = \alpha \cdot u + \alpha \cdot v, \forall \alpha \in F, u,v \in V$

(the dot is usually omitted to avoid confusion with the "dot product", I just want to call attention to the fact that there is an operation here).

2) $(\alpha + \beta)\cdot u = \alpha\cdot u + \beta \cdot u, \forall \alpha,\beta \in F, u \in V$

These two conditions tell us the "scalar product" is compatible with the vector addition and the field addition.

3) $\alpha \cdot (\beta \cdot u) = (\alpha\beta)\cdot u, \forall \alpha,\beta \in F, u \in V$

4) $1_F \cdot u = u, \forall u \in V$

These two conditions tell us that the scalar product is a kind of multiplication compatible with the field multiplication.

Now in a (square) matrix ring with entries in a field, the scalar multiples of the identity matrix act just like the underlying field. (A ring, by the way, is pretty much like a field but without division...often (but not always) because it has "more zero-like things"....with matrices these "zero-like things" are called SINGULAR matrices.)

In such a matrix ring, we can "keep the scalar multiplication entirely in the ring" by DEFINING the scalar multiplication to be:

$\alpha M = (\alpha I)(M)$

On the LHS, we have a "vector-looking" scalar product, on the RHS, we have a product of 2 matrices (in the ring).
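That identity is easy to check numerically; a minimal sketch in plain Python (2x2 matrices as nested lists, all names mine), for one sample matrix and scalar:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def scale(alpha, M):
    """Entrywise scalar product alpha * M."""
    return [[alpha * x for x in row] for row in M]

I = [[1, 0], [0, 1]]
M = [[2, 3], [5, 7]]
alpha = 4

# alpha*M == (alpha I)(M): the scalar action stays entirely inside the ring.
assert scale(alpha, M) == matmul(scale(alpha, I), M)
```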

There is nothing special about matrix rings in this regard...for example, we have an algebra of polynomials as well:

1) we can add polynomials
2) we can multiply polynomials by a number (field element) <--a scalar multiplication
3) we can multiply polynomials together (the "ring multiplication")

In THIS algebra, the constant polynomials play the role of the embedded field in the center (unlike matrix multiplication, this multiplication is commutative, which makes polynomials "nicer" to work with than matrices).

So, to recap:

IN an algebra (which is what the set of all 2x2 real matrices is), we have a number of different things going on:

1) We can add, subtract, multiply and (for non-zero elements) divide field elements (the matrix entries)
2) We can add (or subtract) matrices together
3) We can multiply two matrices together
4) We can use a field element to "scale" the matrix

and all these different things work together harmoniously, to create a very satisfying structure that lets us use tools of abstract algebra, arithmetic or geometry as ways of gaining insight.

In your last sentence you seem to have it backwards: the scalar multiples of the identity act as the field, and it is the 2x2 matrices that act as the vectors. This lets us "throw away" our notion of some kind of "hybrid multiplication" (mixing scalars and vectors), and just keep the single matrix multiplication as the one we use. This streamlines having to keep track of "what came from where".
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
I don't usually double-post, but I want to correct some misunderstandings you have.

First, let's look at what the dimension of $M_2(\Bbb R)$ is: since we have a basis with 4 elements, it must have dimension 4. This means that the matrix $A$ should be a 4x4 matrix.

Secondly, let's look explicitly at what $f(E_{11})$ is:

$f(E_{11}) = \begin{bmatrix}1&0\\0&0 \end{bmatrix} \begin{bmatrix}a&b\\c&d \end{bmatrix} = \begin{bmatrix}a&b\\0&0 \end{bmatrix} = aE_{11} + bE_{12}$

So the first column of $A$ should be:

$\begin{bmatrix}a\\b\\0\\0 \end{bmatrix}$

Can you continue?

NOTE: the ORDER of the basis you choose will affect the columns you get. My column is based on the ordered basis: $\{E_{11},E_{12},E_{21},E_{22}\}$ (reading the entries like a book).
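The first-column computation above can be replayed in a short Python sketch (plain lists; the sample values 2, 3, 5, 7 standing in for the symbols \(a, b, c, d\) are my own choice, and the helper names are mine):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def vec(M):
    """Coordinates of a 2x2 matrix w.r.t. the ordered basis
    {E11, E12, E21, E22}, i.e. row-major flattening."""
    return [entry for row in M for entry in row]

a, b, c, d = 2, 3, 5, 7          # sample values for the symbolic entries
M = [[a, b], [c, d]]
E11 = [[1, 0], [0, 0]]

# First column of A = coordinates of f(E11) = E11 * M = a*E11 + b*E12.
first_col = vec(matmul(E11, M))
assert first_col == [a, b, 0, 0]
```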
 

Sudharaka

Well-known member
MHB Math Helper
Feb 5, 2012
1,621
Thanks very much. This explains everything and clarifies all my doubts. :D

Yes, I understand now. The other columns of \(A\) can be obtained similarly, by multiplying the remaining basis matrices by $\begin{pmatrix}a&b\\c&d \end{pmatrix}$ and reading off the coordinates.
 

Petrus

Well-known member
Feb 21, 2013
739
May I ask what \(\displaystyle A^{-1}\) means in words? I mean the inverse of that transformation matrix: what transformation will that be? Will it be the transformation in the reverse direction?

Regards,
\(\displaystyle |\pi\rangle\)