Can Matrix Dimensions Vary Within the Same Vector Space Transformation?

In summary, the thread discusses finite-dimensional inner product spaces, in particular the space of n x n real matrices, for which the n² matrices with a single nonzero entry form a basis. It also covers how linear operators on such a space are represented by matrices, and resolves an apparent mismatch between the dimensions of the matrices involved.
  • #1
AKG
If I have a finite dimensional inner product space [itex]V = M_{n \times n}(\mathbb{R})[/itex], then one basis of V is the set of n² (n x n)-matrices, [itex]\beta = \{E_1, \dots , E_{n^2}\}[/itex] where [itex]E_i[/itex] has a 1 in the [itex]i^{th}[/itex] position, and zeroes elsewhere (and by [itex]i^{th}[/itex] position, I mean that the first position is the top-left, the second position is just to the right of the top-left, and the last position is the bottom-right). Since these matrices are linearly independent and span V, they certainly form a basis, and since there are n² of them, dim(V) = n². Therefore, if I have some linear operator T on V, then [itex]A = [T]_{\beta}[/itex] is an (n² x n²)-matrix, right? However, if v is some element of V, then T(v) = Av, but Av is not even possible, since it involves multiplying two square matrices of different dimension. Now, if I had made a mistake earlier, then maybe A is supposed to be an (n x n)-matrix. But that doesn't seem right.

My textbook proves:

If V is an N-dimensional vector space with an ordered basis [itex]\beta[/itex], then [itex][I_V]_{\beta} = I_N[/itex], where [itex]I_V[/itex] is the identity operator on V. Now, in our case, N = n², but if I was wrong before, and in the previous example, A should have been an (n x n)-matrix, then the equality above essentially states that an (n x n)-matrix is equal to an (n² x n²)-matrix. Where have I (or my book) made a mistake?
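The setup is easy to check numerically. Here is a minimal numpy sketch (Python and the row-major ordering of the basis are assumptions for illustration; the thread itself contains no code) that builds [itex]\beta[/itex] for n = 2 and confirms dim(V) = n²:

[code]
import numpy as np

n = 2
N = n * n  # dim(V) = n^2

# Standard basis: E_i has a 1 in the i-th position (row-major) and zeroes elsewhere.
basis = [np.zeros((n, n)) for _ in range(N)]
for i, E in enumerate(basis):
    E[i // n, i % n] = 1.0

# Coordinate vector [X]_beta: row-major flattening matches the basis ordering above.
def coords(X):
    return X.reshape(-1)

# Stacking the coordinate vectors of E_1, ..., E_{n^2} column-wise gives I_{n^2},
# so the E_i are linearly independent and span V: dim(V) = n^2.
C = np.column_stack([coords(E) for E in basis])
assert np.allclose(C, np.eye(N))
[/code]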
 
  • #2
>> However, if v is some element of V, then T(v) = Av, but Av is not even possible, since
>> it involves multiplying two square matrices of different dimension.

I'm not completely sure I got your question, but maybe I'm guessing right about where your problem lies:

Assume a linear operator O on R^n. It can be written in the form O(a) = Ma for any vector a, where M is an n x n matrix. Certainly M and a do not have the same "dimension" (more precisely, they are tensors of different rank), yet you probably wouldn't feel that this fails, because you know how to interpret the equation.
In tensor notation, using the components of the vector a, the above is written like this:
[tex] T(a)^\mu = \sum_{\nu=1}^n M^\mu{}_{\nu} \, a^\nu = b^\mu [/tex]
The last "=" was put in to show that the result b is again a vector in R^n.


Rewriting your "T(v) = Av" in tensor terms, it would be:
[tex] T(v)^{\alpha \beta} = \sum_{\mu=1}^n \sum_{\nu=1}^n A^{\alpha \beta}{}_{\mu \nu} \, v^{\mu \nu} = b^{\alpha \beta} [/tex]
So the equation is well defined, and the result is again an element of V.
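In numpy terms (an illustrative sketch; the random tensor A4 is just a placeholder, not something from the thread), this double contraction can be written with einsum, and flattening the index pairs (α, β) and (μ, ν) recovers the (n² x n²)-matrix picture:

[code]
import numpy as np

n = 2
A4 = np.random.randn(n, n, n, n)  # placeholder 4-index tensor A^{alpha beta}_{mu nu}
v = np.random.randn(n, n)         # an element of V

# b^{alpha beta} = sum over mu, nu of A^{alpha beta}_{mu nu} v^{mu nu}
b = np.einsum('abmn,mn->ab', A4, v)
print(b.shape)  # (2, 2): the result is again an element of V

# The same contraction as an (n^2 x n^2)-matrix acting on the flattened v:
A2 = A4.reshape(n * n, n * n)
assert np.allclose(b.reshape(-1), A2 @ v.reshape(-1))
[/code]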

Sidenotes:
- Av = A*v is not defined on its own; you have to say what it is supposed to mean. Tensor notation like the above does exactly that.
- I didn't understand what your textbook says, because I know neither the notation nor what [itex]I_N[/itex] is. So it's quite possible that I completely missed your question.
 
  • #3


In this case, the mistake lies in reading "T(v) = Av" literally, with the matrix v itself on the right-hand side. The matrix [itex]A = [T]_{\beta}[/itex] does not multiply v; it multiplies the coordinate vector [itex][v]_{\beta} \in \mathbb{R}^{n^2}[/itex] of v relative to [itex]\beta[/itex]. The correct statement is [itex][T(v)]_{\beta} = A[v]_{\beta}[/itex]: both [itex][v]_{\beta}[/itex] and [itex][T(v)]_{\beta}[/itex] are column vectors with n² entries, so multiplying by the (n² x n²)-matrix A is perfectly well defined. A is indeed an (n² x n²)-matrix, as stated in the original post.

To understand this better, consider [itex]V = M_{2 \times 2}(\mathbb{R})[/itex]. An element v of V is a 2 x 2 matrix, but its coordinate vector [itex][v]_{\beta}[/itex] relative to [itex]\beta = \{E_1, E_2, E_3, E_4\}[/itex] is a column vector in [itex]\mathbb{R}^4[/itex], namely the list of the four entries of v. This coordinate map is an isomorphism between V and [itex]\mathbb{R}^4[/itex], so a linear operator T on V is represented by a 4 x 4 matrix acting on coordinate vectors, even though the elements of V are themselves 2 x 2 matrices. The size of the representing matrix is set by dim(V), not by the shape of the elements of V, as the sketch below illustrates.
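As a concrete sketch (the transpose operator is a hypothetical example chosen for illustration), here is the 4 x 4 matrix representing T(X) = X^T on [itex]M_{2 \times 2}(\mathbb{R})[/itex] relative to [itex]\beta[/itex]:

[code]
import numpy as np

n = 2
# Standard basis E_1, ..., E_4 of M_2x2(R), ordered row-major.
basis = [np.zeros((n, n)) for _ in range(n * n)]
for i, E in enumerate(basis):
    E[i // n, i % n] = 1.0

def coords(X):
    return X.reshape(-1)  # [X]_beta

# Column j of [T]_beta is the coordinate vector of T(E_j); here T(X) = X^T.
A = np.column_stack([coords(E.T) for E in basis])
print(A)
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]  -> a 4 x 4 matrix, even though elements of V are 2 x 2
[/code]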

In the case of a linear operator T on [itex]V = M_{n \times n}(\mathbb{R})[/itex], the matrix representation [itex]A = [T]_{\beta}[/itex] is an (n² x n²)-matrix because [itex]\beta[/itex] consists of n² basis matrices and dim(V) = n²; the defining relation is [itex][T(v)]_{\beta} = A[v]_{\beta}[/itex]. The equality [itex][I_V]_{\beta} = I_N[/itex] with N = n² is consistent with this: both sides are (n² x n²)-matrices.
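A short numpy check of both claims, with left multiplication T(X) = MX as a hypothetical example operator (M and the ordering convention are assumptions for illustration):

[code]
import numpy as np

n = 2
N = n * n
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])  # arbitrary choice for the example

def T(X):
    return M @ X  # a linear operator on V: left multiplication by M

basis = [np.zeros((n, n)) for _ in range(N)]
for i, E in enumerate(basis):
    E[i // n, i % n] = 1.0

def coords(X):
    return X.reshape(-1)  # [X]_beta, row-major

# A = [T]_beta: column j is [T(E_j)]_beta, so A is an (n^2 x n^2)-matrix.
A = np.column_stack([coords(T(E)) for E in basis])

v = np.random.randn(n, n)
assert np.allclose(coords(T(v)), A @ coords(v))  # [T(v)]_beta = A [v]_beta

# For T = I_V the same construction yields the identity: [I_V]_beta = I_{n^2}.
I_mat = np.column_stack([coords(E) for E in basis])
assert np.allclose(I_mat, np.eye(N))
[/code]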

In conclusion, there is no mistake in the original post or in the textbook. The confusion comes from conflating an element v of V with its coordinate vector [itex][v]_{\beta}[/itex]; once the two are kept distinct, all the matrix dimensions agree.
 

Related to Can Matrix Dimensions Vary Within the Same Vector Space Transformation?

1. What is a vector space of matrices?

A vector space of matrices is a set of matrices that is closed under matrix addition and scalar multiplication and satisfies the vector space axioms. The space of all m x n real matrices is the standard example. Such spaces are used to study linear transformations between vector spaces.

2. How do you determine if a set of matrices is a vector space?

To determine if a set of matrices is a vector space, check that it satisfies the vector space axioms: closure under addition and scalar multiplication, existence of an additive identity (the zero matrix) and additive inverses, associativity and commutativity of addition, and the distributive and scalar-identity laws.

3. What is the dimension of a vector space of matrices?

The dimension of a vector space of matrices is the number of matrices in a basis for that space; for example, the space of m x n real matrices has dimension mn, which is why the space of n x n matrices discussed above has dimension n². Equivalently, it is the minimum number of matrices needed to span the space.

4. Can a matrix be in more than one vector space?

Yes. A matrix can be an element of many vector spaces at once. For example, a symmetric n x n matrix belongs to the full space of n x n matrices and also to its subspace of symmetric matrices.

5. How is a vector space of matrices used in scientific research?

A vector space of matrices is used in a variety of scientific fields, such as physics, engineering, and computer science. It is particularly useful in linear algebra and the study of linear transformations, which have applications in modeling and analyzing systems, data analysis, and solving differential equations.
