Homework: Matrix is invertible when rows are linearly independent

In summary, the original poster is struggling with a homework question: proving that a square matrix is invertible when its columns are linearly independent. The problem comes with a hint that a linear transformation is a bijection exactly when it maps a basis of the domain onto a basis of the codomain. The poster tries several approaches without reaching the end, posts the attempts, and asks for pointers rather than a full solution.
  • #1
skrat
Hi there,

I have a homework where I have to do this:
Prove that a square matrix is invertible if its columns are linearly independent.
There is also a hint: you may use the following statement: a linear transformation L: U->V is a bijection when, for a basis N of the vector space U, its image L(N) is a basis for the vector space V. This holds for any basis of U.

I tried several ways but I just can't get to the end. I don't want anybody here to write out the whole proof for me - I will hate you for that. However, I would be really happy if somebody could put me on the right path, show me the way and give me clues obvious enough to make it to the end.
If necessary I can write down how I tried to solve it, but I seriously doubt it would be any help to you!

Thanks in advance!

EDIT: OK, I changed my mind. I would be happy to know where I went wrong. My procedure:
Let [itex]A:U\rightarrow V[/itex] be a linear transformation and
[itex]A=\begin{bmatrix}
a_{11} & a_{12}^{} &\cdots &a_{1n} \\
a_{21}& a_{22} &\cdots &a_{2n} \\
\vdots && &\vdots\\
a_{n1}&a_{n2} & \cdots &a_{nn}
\end{bmatrix}[/itex]

if [itex]N=\begin{Bmatrix}
u_{1} & \cdots & u_{n}
\end{Bmatrix}[/itex] is a basis for the vector space U and
[itex]M=\begin{Bmatrix}
A(u_{1}) & \cdots & A(u_{n})
\end{Bmatrix}=\begin{Bmatrix}
v_{1} & \cdots & v_{n}
\end{Bmatrix}[/itex] is a basis for the vector space V.

Then I decided to take one vector x from the space U:
[itex]x=\alpha _{1}u_{1}+\alpha _{2}u_{2}+\cdots +\alpha _{n}u_{n}[/itex]
What linear transformation A does with x is:
[itex]Ax=\begin{bmatrix}
a_{11} & a_{12}^{} &\cdots &a_{1n} \\
a_{21}& a_{22} &\cdots &a_{2n} \\
\vdots && &\vdots\\
a_{n1}&a_{n2} & \cdots &a_{nn}
\end{bmatrix}\cdot \begin{bmatrix}
x_{1} \\
x_{2}\\
\vdots \\
x_{n}
\end{bmatrix}=\begin{bmatrix}
b_{1} \\
b_{2}\\
\vdots \\
b_{n}
\end{bmatrix}[/itex]
Here I am not sure what exactly [itex]x_{1}... x_{n}[/itex] are (yes, components, but a combination of what?), and the same goes for the b's.

Hmmm, is this even close to the right way?
 
  • #2
First, treat the square matrix as an endomorphism, i.e. a map from a space to itself, so use the same basis for the input and the output.

Second, any basis will do, so use the standard basis.

Third, try a 2x2 or 3x3 case before attempting the proof for arbitrary size.

The key is understanding what each column means in this context.
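As a concrete instance of that last hint (a small illustration added here, not part of the original reply), take a 2x2 matrix and the standard basis [itex]e_{1}=(1,0)^{T}[/itex], [itex]e_{2}=(0,1)^{T}[/itex]:
[itex]Ae_{1}=\begin{bmatrix}
a_{11} & a_{12}\\
a_{21} & a_{22}
\end{bmatrix}\begin{bmatrix}
1\\
0
\end{bmatrix}=\begin{bmatrix}
a_{11}\\
a_{21}
\end{bmatrix},\qquad Ae_{2}=\begin{bmatrix}
a_{11} & a_{12}\\
a_{21} & a_{22}
\end{bmatrix}\begin{bmatrix}
0\\
1
\end{bmatrix}=\begin{bmatrix}
a_{12}\\
a_{22}
\end{bmatrix}[/itex]
Multiplying by the j-th standard basis vector picks out the j-th column; the thread works this out for the 3x3 case in post #7.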
 
  • #3
You can also use the fact that adding a multiple of one row to another row preserves the value of the determinant (a row exchange only changes its sign). If the rows are linearly dependent, then adding a suitable multiple of one row to some other row will produce a zero row, giving you a matrix with determinant zero. You can prove this using elementary matrices (which generate all invertible matrices).

Put differently, Gaussian elimination by row replacement preserves the value of the determinant, which you can read off at the end of the elimination. If you start with linearly independent rows, your final matrix will have a non-zero determinant, etc.
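For concreteness, here is a small worked example of the dependent-row case (the specific numbers are just an illustration); the second row is twice the first:
[itex]\begin{vmatrix}
1 & 2 & 3\\
2 & 4 & 6\\
0 & 1 & 1
\end{vmatrix}\overset{R_{2}\rightarrow R_{2}-2R_{1}}{=}\begin{vmatrix}
1 & 2 & 3\\
0 & 0 & 0\\
0 & 1 & 1
\end{vmatrix}=0[/itex]
The row-replacement step does not change the determinant, and a zero row forces the determinant to be zero, so the matrix cannot be invertible.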
 
  • #4
skrat said:
[...] Hmmm, is this even close to the right way?

Another way, using Rank-Nullity: your original space has dimension n -- you have a basis with n elements. By the hint, the images {A(u_i); i = 1, 2, ..., n} of the basis vectors form a basis for your target space, so the target space is n-dimensional and the rank of A is n. By Rank-Nullity the nullity is then zero. The map is injective (trivial kernel) and surjective (rank n in an n-dimensional space), so it is invertible.
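In symbols, the Rank-Nullity step reads (a compact restatement of the argument above, nothing new):
[itex]\dim U=\operatorname{rank}(A)+\dim\ker(A)\;\Rightarrow\; n=n+\dim\ker(A)\;\Rightarrow\;\ker(A)=\{0\}[/itex]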
 
  • #5
Bacle2 said:
You can also use the fact that the operations of row exchange and adding a multiple of one row to another row
preserve the value of the determinant. If rows are LDep , then a multiple of one row +some other row will generate a 0 row, giving you a matrix with determinant zero. You can prove this by using elementary matrices ( which generate all matrices).
We haven't covered determinants yet, but thanks for the idea!


Ok, so [itex]A: U\rightarrow U[/itex] and [itex]A=\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21}& a_{22} & a_{23}\\
a_{31}& a_{32} & a_{33}
\end{bmatrix}[/itex]. Basis [itex]N=\begin{Bmatrix}
u_{1},u_{2},u_{3}
\end{Bmatrix}=\begin{Bmatrix}
(1,0,0),(0,1,0),(0,0,1)
\end{Bmatrix}[/itex].
Let's take one vector x from U: [itex]x=\alpha _{1}(1,0,0)+\alpha _{2}(0,1,0)+\alpha _{3}(0,0,1)=\begin{bmatrix}
\alpha _{1}\\
\alpha _{2}\\
\alpha _{3}
\end{bmatrix}[/itex]
Again, what A does with x is:
[itex]Ax=\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21}& a_{22} & a_{23}\\
a_{31}& a_{32} & a_{33}
\end{bmatrix}\cdot \begin{bmatrix}
\alpha _{1}\\
\alpha _{2}\\
\alpha _{3}
\end{bmatrix}=\begin{bmatrix}
a_{11}\alpha _{1}+a_{12}\alpha _{2}+a_{13}\alpha _{3}\\
a_{21}\alpha _{1}+a_{22}\alpha _{2}+a_{23}\alpha _{3}\\
a_{31}\alpha _{1}+a_{32}\alpha _{2}+a_{33}\alpha _{3}
\end{bmatrix}=\begin{bmatrix}
\beta _{1}\\
\beta _{2}\\
\beta _{3}
\end{bmatrix}=y[/itex]

so
[itex]y= (a_{11}\alpha _{1}+a_{12}\alpha _{2}+a_{13}\alpha _{3})u_{1}+(
a_{21}\alpha _{1}+a_{22}\alpha _{2}+a_{23}\alpha _{3})u_{2}+(
a_{31}\alpha _{1}+a_{32}\alpha _{2}+a_{33}\alpha _{3})u_{3}[/itex][itex]
y=\beta_{1}u_{1}+\beta_{2}u_{2}+\beta_{3}u_{3}[/itex]

If I am on the right track, I should now be able to figure out what each column means in this context, as jambaugh said. Yet I don't see it.
 
  • #6
skrat said:
...
Lets take one vector x from U.[...]
Go less general, take one specific vector, say the first basis element. What is it mapped to?
 
  • #7
You mean:
[itex]Au_{1}=\begin{bmatrix}
a_{11} &a_{12} &a_{13} \\
a_{21} &a_{22} &a_{23} \\
a_{31} &a_{32} &a_{33}
\end{bmatrix}\cdot \begin{bmatrix}
1\\
0\\
0
\end{bmatrix}=\begin{bmatrix}
a_{11} \\
a_{21} \\
a_{31}
\end{bmatrix}[/itex]
which is the first column of the matrix A.
 
  • #8
skrat said:
[...] which is the first column of the matrix A.
Right, so the columns are the images of the basis vectors under the (left) mapping. If you look at the right action, the rows are the images of the standard dual basis (for row vectors) under the dual (right) mapping (row x matrix = row). These facts are the starting point for the other techniques suggested; this is the prime fact you should understand.
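For example, in the 3x3 notation used above, acting on the right with the first standard row vector picks out the first row (a check added for illustration):
[itex]\begin{bmatrix}
1 & 0 & 0
\end{bmatrix}\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{bmatrix}=\begin{bmatrix}
a_{11} & a_{12} & a_{13}
\end{bmatrix}[/itex]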

What is a basis? A minimal spanning set, and as such its elements should be...?
 
  • #9
jambaugh said:
What is a basis? A minimal spanning set, and as such its elements should be...?

linearly independent?

I'm not sure I understand you completely, but I came up with this idea; I hope it is not too bad.

So:
[itex]Au_{1}=\begin{bmatrix}
a_{11} &a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{bmatrix}\cdot \begin{bmatrix}
1\\
0\\
0
\end{bmatrix}=\begin{bmatrix}
a_{11}\\
a_{21}\\
a_{31}
\end{bmatrix}[/itex] in other words: [itex]Au_{1}=v_{1}[/itex] if [itex]A: U\rightarrow V[/itex]. [itex]v_{1}[/itex] is then a basis vector for the vector space V, and since the [itex]v_{i}[/itex] form a basis they are linearly independent. So the vectors [itex]Au_{1}=v_{1}[/itex], [itex]Au_{2}=v_{2}[/itex] and [itex]Au_{3}=v_{3}[/itex] are linearly independent, and [itex]v_{1}=\begin{bmatrix}
a_{11}\\
a_{21}\\
a_{31}
\end{bmatrix}[/itex], [itex]v_{2}=\begin{bmatrix}
a_{12}\\
a_{22}\\
a_{32}
\end{bmatrix}[/itex] and [itex]v_{3}=\begin{bmatrix}
a_{13}\\
a_{23}\\
a_{33}
\end{bmatrix}[/itex]. The basis vectors for V are the columns of the matrix A, and they are linearly independent.

Is it OK if I now define another transformation, say [itex]B: V\rightarrow U[/itex]? Again, each basis vector of the space V should be mapped to a basis vector of the space U. But we already know that the basis vectors for U are [itex]u_{1}[/itex], [itex]u_{2}[/itex] and [itex]u_{3}[/itex], and that the basis vectors for V are [itex]Au_{1}[/itex], [itex]Au_{2}[/itex] and [itex]Au_{3}[/itex].

So this has to be true: [itex]B(Au_{1})=u_{1}[/itex] for all the basis vectors!
[itex]B(Au_{i})=u_{i}[/itex]
[itex]BAu_{i}=Iu_{i}[/itex] where [itex]I[/itex] is the identity.
or in other words: [itex]BA=I[/itex] so [itex]B=A^{-1}[/itex]

and finally A is invertible if the columns are linearly independent. The columns have to be linearly independent, because if they weren't, then they could not form a basis for the vector space V.
 
  • #10
skrat said:
linearly independent?

I'm not sure if I understand you completely but I came up with this idea, I hope it is not too bad..
[...]

You got the essential point I was making. For the specific proof, I would use the fact that any vector can be expanded in the "new" basis; that tells you which vector it is mapped from via A, by using the same coefficients with the "old" basis (A maps the "old" u's to the "new" v's). You thereby construct the inverse map. As proofs go, directly constructing the asserted inverse mapping is the most unequivocal form.

I also think this way of proving the theorem gives you better intuition about what the matrix is: in essence, it is a list of the images of the current basis under the corresponding linear transformation. It also keeps apparent the basis dependence of, and the distinction between, the matrix representation of a linear operator and the operator itself. That helps set the stage for change of basis, diagonalization, and so on.
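Written out for the 3x3 case in the thread's notation [itex]v_{i}=Au_{i}[/itex] (a sketch of the construction just described, not a new ingredient): any [itex]y\in V[/itex] expands in the "new" basis, and the inverse is defined by reusing the same coefficients on the "old" basis,
[itex]y=\beta _{1}v_{1}+\beta _{2}v_{2}+\beta _{3}v_{3}\quad\Rightarrow\quad By:=\beta _{1}u_{1}+\beta _{2}u_{2}+\beta _{3}u_{3}[/itex]
By linearity, [itex]A(By)=\beta _{1}Au_{1}+\beta _{2}Au_{2}+\beta _{3}Au_{3}=y[/itex] and likewise [itex]B(Ax)=x[/itex], so [itex]B=A^{-1}[/itex].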
 
  • #11
Thank you very much jambaugh!
 

Related to Homework: Matrix is invertible when rows are linearly independent

1. Can you explain what it means for a matrix to be invertible?

A matrix is invertible when it has an inverse matrix, meaning a matrix such that multiplying the two together gives the identity matrix. The effect of the matrix can then be undone or reversed, similar to how multiplication can be undone with division.

2. What is the significance of having linearly independent rows in a matrix?

Linearly independent rows in a matrix means that none of the rows can be written as a linear combination of the other rows. This is important because it guarantees that the matrix will have an inverse, making it possible to solve equations involving that matrix.

3. How can I determine if the rows of a matrix are linearly independent?

One way to determine whether the rows of a matrix are linearly independent is to row reduce the matrix and check whether any row becomes entirely zero; if none does, the rows are linearly independent. Another way is to calculate the determinant of the matrix: if it is non-zero, then the rows are linearly independent.
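A minimal numerical sketch of both checks, assuming NumPy is available; the matrices below are just example values:

[code]
import numpy as np

# Example with dependent rows: the third row is the sum of the first two.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 3.0, 3.0]])
print(np.linalg.det(A))          # ~0, so the rows are linearly dependent
print(np.linalg.matrix_rank(A))  # 2 < 3: row reduction would produce a zero row

# Example with independent rows: upper triangular with non-zero diagonal.
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
print(np.linalg.det(B))          # 1.0, non-zero, so the rows are independent
print(np.linalg.inv(B) @ B)      # the identity matrix: B is invertible
[/code]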

4. Can a matrix be invertible if its rows are not linearly independent?

No, a square matrix cannot be invertible if its rows are not linearly independent. A dependent row means that some non-zero combination of the rows is zero (equivalently, the determinant is zero), so the matrix collapses part of the space and its action cannot be undone or reversed.

5. What is the practical application of knowing that a matrix is invertible when its rows are linearly independent?

Knowing that a matrix is invertible when its rows are linearly independent is important in solving systems of linear equations. It allows us to easily find the inverse of the matrix and solve for the variables in the system. This is useful in various fields such as engineering, physics, and economics.
