Row space, Column space, Null space & Left null space

  • #1
DumpmeAdrenaline
\begin{pmatrix} 2 & 4 & 6 \\ 3 & 5 & 8 \\ 1 & 2 & 3 \end{pmatrix}

Using the row operations R2 <-- R2 - 3R1 and R3 <-- R3 - R1, we find the row echelon form of the matrix.
\begin{pmatrix} 1 & 2 & 3 \\ 0 & -1 & -1 \\ 0 & 0 & 0 \end{pmatrix}
Based on the definition of row space in the book I am studying from, the row space is a subspace that comprises an infinite collection of linearly independent rows of X.

To check if the row vectors [1,2,3] and [0,-1,-1] are linearly independent, we write

$$ \beta_{1} [1,2,3]+\beta_{2} [0,-1,-1]=[\beta_{1}, 2\beta_{1}-\beta_{2}, 3\beta_{1}-\beta_{2}]=[0,0,0] $$

where β1 and β2 are scalars that belong to the field of real numbers.
If we consider the above, the only scalars (solution) that yield the zero row vector are β1=β2=0. Therefore, the row vectors are independent. How do we determine whether the independent row vectors span the subspace and form a basis for it? Is the subspace considered here the infinite collection of 1×3 row vectors? If they are basis vectors for the subspace, does this imply that if we add a new row vector to the given matrix we can write it in terms of the identified LI vectors, or do we have to go through a new LU decomposition?
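A minimal numerical sketch of this independence check, assuming NumPy (the rank of a matrix equals its number of linearly independent rows):

[CODE=python]
import numpy as np

# The two nonzero rows of the echelon form.
rows = np.array([[1, 2, 3],
                 [0, -1, -1]])

# The rank equals the number of rows exactly when the rows are independent.
print(np.linalg.matrix_rank(rows))  # 2 -> the rows are linearly independent
[/CODE]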
 
  • #2
I'm not entirely clear what you are asking. But here are a few (non-mathematician's) comments...

DumpmeAdrenaline said:
\begin{pmatrix} 2 & 4 & 6 \\ 3 & 5 & 8 \\ 1 & 2 & 3 \end{pmatrix}
To get a matrix to display using LaTeX (which is what is used on this site) don't use ICODE. Use double hash-tags as delimiters to produce this:

##\begin{pmatrix}
2 & 4 & 6 \\
3 & 5 & 8 \\
1 & 2 & 3
\end{pmatrix}##

If you want the matrix to be enclosed in square brackets, use 'bmatrix' rather than 'pmatrix' to get this:

##\begin{bmatrix}
2 & 4 & 6 \\
3 & 5 & 8 \\
1 & 2 & 3
\end{bmatrix}##

LaTeX guide is here: https://www.physicsforums.com/help/latexhelp/

Use the preview icon (top right of editing window) to check formatting is correct before posting.

DumpmeAdrenaline said:
Using the row operations R2 <-- R2 - 3R1 and R3 <-- R3 - R1, we find the row echelon form of the matrix.
It's more usual to use right-pointing arrows. Also you missed out the first operation: ##\frac {R_1}2 \rightarrow {R_1}##.

DumpmeAdrenaline said:
\begin{pmatrix} 1 & 2 & 3 \\ 0 & -1 & -1 \\ 0 & 0 & 0 \end{pmatrix}
Row echelon form requires that (for rows which do not contain all zeroes) the first non-zero entry is 1 (not -1). So (using ##-R_2 \rightarrow R_2##) the row echelon form is:

##\begin{bmatrix}
1 & 2 & 3 \\
0 & 1 & 1 \\
0 & 0 & 0
\end{bmatrix}##
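A small sketch of this reduction, assuming SymPy; note that rref() carries on to the reduced row echelon form, which has the same row space:

[CODE=python]
from sympy import Matrix

X = Matrix([[2, 4, 6],
            [3, 5, 8],
            [1, 2, 3]])

# rref() returns the reduced row echelon form and the pivot column indices.
R, pivots = X.rref()
print(R)       # Matrix([[1, 0, 1], [0, 1, 1], [0, 0, 0]])
print(pivots)  # (0, 1) -- pivots in the first and second columns
[/CODE]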

DumpmeAdrenaline said:
Based on the definition of row space in the book I am studying from, the row space is a subspace that comprises an infinite collection of linearly independent rows of X.
What you mean is '...the infinite set of linear combinations of the linearly independent rows'.

DumpmeAdrenaline said:
To check if the row vectors [1,2,3] and [0,-1,-1] are linearly independent we write

$$ \beta_{1} [1,2,3]+\beta_{2} [0,-1,-1]=[\beta_{1}, 2\beta_{1}-\beta_{2}, 3\beta_{1}-\beta_{2}]=[0,0,0] $$

where β1 and β2 are scalars that belong to the field of real numbers.
If we consider the above, the only scalars (solution) that yield the zero row vector are β1=β2=0.
The 2 non-zero rows can immediately be seen to be linearly independent. One is not a scalar multiple of the other.

DumpmeAdrenaline said:
Therefore, the row vectors are independent. How do we determine whether the independent row vectors span the subspace and form a basis for it? Is the subspace considered here the infinite collection of 1×3 row vectors?
It helps to think geometrically. In 3D space, [1 2 3] and [0 1 1] are 2 vectors pointing in different directions; they lie in some (2D) plane. Linear combinations of these 2 vectors can produce any vector in this plane. So this 2D plane is a subspace of 3D space; the 2 vectors span this 2D subspace and hence are basis vectors.
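As a concrete illustration of the plane picture (a sketch, assuming NumPy): the cross product of the two rows gives the normal of that 2D plane, and up to sign it is the null space direction found later in this thread.

[CODE=python]
import numpy as np

# Normal of the plane spanned by the two independent rows.
n = np.cross([1, 2, 3], [0, 1, 1])
print(n)  # [-1 -1  1] -- a multiple of the null space direction [1, 1, -1]
[/CODE]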

DumpmeAdrenaline said:
If they are basis vectors for the subspace, does this imply that if we add a new row vector to the given matrix we can write it in terms of the identified LI vectors, or do we have to go through a new LU decomposition?
The 2 vectors are basis vectors (because they span the subspace, as noted above). Note that these 2 basis vectors are not orthogonal and not normalised.

It is not clear what you mean by 'add a new row vector to the given matrix'. If you replace the [0 0 0] row by a row which is linearly independent of the other 2 rows, this new row vector does not lie in the 2D subspace discussed above. In this case all 3 rows can be considered as a set of basis vectors which will span the whole 3D space.
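A sketch of how one might test whether a new row lies in that 2D subspace, assuming NumPy; the helper in_row_space is a name made up for illustration:

[CODE=python]
import numpy as np

basis = np.array([[1, 2, 3],
                  [0, 1, 1]])

def in_row_space(v, basis):
    # v is in the span of the basis rows iff appending it leaves the rank unchanged.
    return np.linalg.matrix_rank(np.vstack([basis, v])) == np.linalg.matrix_rank(basis)

print(in_row_space([2, 5, 7], basis))  # True:  [2,5,7] = 2*[1,2,3] + [0,1,1]
print(in_row_space([1, 0, 0], basis))  # False: the rank jumps to 3
[/CODE]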
 
  • #3
From what I understand, the row operations (interchanging rows, scaling, and replacing one row by the sum of itself and a multiple of another row) ensure that the number of independent rows equals the number of independent columns. However, my book states that the independent columns do not form a basis for the column space. From the example, I understand that the independent columns yield two vectors lying in the x-y plane (z=0) and they span all vectors in that plane. But they do not form a basis for the column space, because the column vectors in X are all out of the plane of the independent column vectors. Is it because the column rank is less than the number of column vectors? Is there a way to prove why the book says so in the most general terms?
## \begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix} ## and ## \begin{pmatrix}
2 \\
1 \\
0
\end{pmatrix} ## are the independent columns, and the subspace generated by these column vectors is ## \begin{pmatrix}
\beta_{1}+2 \beta_{2} \\
\beta_{2} \\
0
\end{pmatrix} ## where β1 and β2 belong to the set of real numbers
 
  • #4
I'm no expert so what follows is based on my (amateur) understanding. But I'm sure someone will correct me if needed...

DumpmeAdrenaline said:
From what I understand, the row operations (interchanging rows, scaling, and replacing one row by the sum of itself and a multiple of another row) ensure that the number of independent rows equals the number of independent columns.
Ok. It might be better to say: the column rank and the row rank of a matrix are always equal and the allowed row operations don't change this.

DumpmeAdrenaline said:
However, my book states that the independent columns do not form a basis for the column space.
I don't understand that. Check it carefully. I believe that a maximal set of linearly independent columns is (by definition) a basis for the column space.

In the Post #1 matrix, for example, any 2 of the columns are linearly independent (and the 3rd is then a linear combination of them). Any 2 columns can be used as a basis of the column space.

DumpmeAdrenaline said:
From the example, I understand that the independent columns yield two vectors lying in the x-y plane (z=0) and they span all vectors in that plane. But they do not form a basis for the column space, because the column vectors in X are all out of the plane of the independent column vectors.
I don't think that's right.

For the Post #1 matrix, the first 2 columns are linearly independent (the 3rd column is the sum of these). So a perfectly good basis for the column space is the pair of vectors [2 3 1]##^T## and [4 5 2]##^T##.

This column vector space has dimension 2 and can be visualised as a tilted plane in 3D. It is a different space to the matrix's row space (though both spaces are necessarily 2D). They can be visualised as two different subspaces (2 different planes) in 3D space (the 'ambient' space) if this helps.
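A quick numerical illustration of this, assuming NumPy: the third column of the original matrix is the sum of the first two, so the column space is a 2D plane, and it is not the z = 0 plane spanned by the echelon form's columns.

[CODE=python]
import numpy as np

X = np.array([[2, 4, 6],
              [3, 5, 8],
              [1, 2, 3]])

# Column 3 is the sum of columns 1 and 2, so the column space is 2D.
print(np.allclose(X[:, 0] + X[:, 1], X[:, 2]))  # True
print(np.linalg.matrix_rank(X))                 # 2

# The echelon form's columns all have third entry 0; the original columns do not,
# so the two column spaces are different 2D planes in 3D.
print(X[2, :])  # [1 2 3] -- all nonzero, so the columns leave the z = 0 plane
[/CODE]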

DumpmeAdrenaline said:
Is it because the column rank is less than the number of column vectors? Is there a way to prove why the book says so in the most general terms?
## \begin{pmatrix}
1 \\
0 \\
0
\end{pmatrix} ## and ## \begin{pmatrix}
2 \\
1 \\
0
\end{pmatrix} ## are the independent columns, and the subspace generated by these column vectors
This is confused. You can't use columns of the row echelon form as basis vectors for the column space of the original matrix. That's wrong.

What you can do (though it's not necessary here) is to note that the pivots in the row echelon form are in columns 1 and 2. So columns 1 and 2 of the original matrix are basis vectors for the original matrix's column space.
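A sketch of this pivot-column recipe, assuming SymPy (whose rref() reports 0-based pivot indices):

[CODE=python]
from sympy import Matrix

X = Matrix([[2, 4, 6],
            [3, 5, 8],
            [1, 2, 3]])

_, pivots = X.rref()
print(pivots)  # (0, 1): 0-based indices, i.e. pivots in columns 1 and 2

# Take those columns of the ORIGINAL matrix as the column space basis.
basis = [X.col(j) for j in pivots]
print(basis)   # [Matrix([[2], [3], [1]]), Matrix([[4], [5], [2]])]
[/CODE]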

DumpmeAdrenaline said:
is ## \begin{pmatrix}
\beta_{1}+2 \beta_{2} \\
\beta_{2} \\
0
\end{pmatrix} ## where β1 and β2 belong to the set of real numbers
I'm not sure what you are saying, but you don't need sums such as ##\beta_1 + 2\beta_2##. If ##\vec {b_1}## and ##\vec {b_2}## are basis vectors for a 2D space, then the space is spanned by ##a_1\vec {b_1} + a_2\vec {b_2}## where ##a_1## and ##a_2## are arbitrary (scalar) values. ##a_1## and ##a_2## are completely independent.
 
  • #5
What Steve is telling you is that row operations do not change the row space, but they do change the column space. So a basis for the row space of the echelon form will also be a basis for the row space of the original matrix, but this is not true for the column spaces. Row operations do, however, preserve the dimension of the column space, so the column rank is the same for the echelon form and the original matrix.

What is also true is that the two matrices have the same null space, since the null space is determined by the row space (it is the orthogonal complement of the row space). But an element of the null space is a vector of coefficients that gives a linear relation among the columns. So if certain columns of the echelon form are dependent, say the first and third, by means of a certain relation, that same relation will give a dependency among the first and third columns of the original matrix.

Consequently, a certain subset of the columns of the echelon form will be independent if and only if the same subset of columns of the original matrix is independent, and hence a basis for the column space of the echelon form will be in the same positions as a basis for the column space of the original matrix. Here, since the first two columns of the echelon form are a basis for its column space, the original first two columns are likewise a basis for the original column space.
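Both facts, same row space but different column space, can be checked numerically. A sketch assuming NumPy; the helper same_span is a name made up for illustration:

[CODE=python]
import numpy as np

X = np.array([[2, 4, 6],
              [3, 5, 8],
              [1, 2, 3]])
R = np.array([[1, 2, 3],
              [0, 1, 1],
              [0, 0, 0]])

def same_span(A, B):
    # Two sets of rows span the same space iff stacking them doesn't raise the rank.
    rank = np.linalg.matrix_rank
    return rank(np.vstack([A, B])) == rank(A) == rank(B)

print(same_span(X, R))      # True:  row operations preserve the row space
print(same_span(X.T, R.T))  # False: the column spaces differ
[/CODE]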
 
  • #6
Do we say that vectors in the null space are orthogonal to vectors in the row space because the dot product of each row vector of matrix X with the column vector ##\beta## is 0? $$ X\beta=0 $$
$$ x+2y+3z=0, \qquad y+z=0 $$
How do we interpret the pair of linear equations? Is it correct to say that each equation represents a plane with the same normal vector, and therefore that the pair of row vectors lie in the same plane, because a normal vector cannot be orthogonal to two different planes?
If the three row vectors are linearly independent, they cannot all lie in the same plane. Therefore, if the row vectors originate from a common point, different pairs of the three vectors form different planes, and these planes intersect at the common point of origin, the zero vector.

If we take the transpose of the column vector ##\beta## and multiply the resulting row vector by the given matrix (either in its original form or in reduced echelon form), we will be finding the vectors that are normal to the column subspace spanned by the linearly independent columns.

$$ \beta ^T X=0 $$
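A sketch of both null spaces, assuming SciPy (scipy.linalg.null_space returns an orthonormal basis):

[CODE=python]
import numpy as np
from scipy.linalg import null_space

X = np.array([[2, 4, 6],
              [3, 5, 8],
              [1, 2, 3]], dtype=float)

N = null_space(X)    # basis of {b : Xb = 0}, orthogonal to every row
L = null_space(X.T)  # basis of {b : b^T X = 0}, the left null space
print(N.ravel())     # proportional (up to sign) to [1, 1, -1]
print(L.ravel())     # proportional (up to sign) to [1, 0, -2]
[/CODE]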
mathwonk said:
But an element of the null space is a vector of coefficients that gives a linear relation among the columns.
Can you please clarify this point?
 
  • #7
" Do we say that vectors in the null space are orthogonal to vectors in the row space because the dot product of row vectors of matrix X with the column vector is 0. "
yes.

"How do we interpret the pair of linear equations? Is it correct to say that each equation represents a plane with the same normal vector "

No. The first equation is for a plane with normal vector (1,2,3), and the second equation is for a plane with normal vector (0,1,1).

"Can you please clarify this point?"
When you multiply an ##m \times n## matrix M by a column vector A of length n, the answer is a linear combination of the length-m column vectors of the matrix; the coefficients of this linear combination are the n entries of A.

This is absolutely basic information about matrix multiplication. You should play a little with matrices until you grasp it.
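A minimal sketch of that column-combination view, assuming NumPy and using the thread's matrix:

[CODE=python]
import numpy as np

M = np.array([[2, 4, 6],
              [3, 5, 8],
              [1, 2, 3]])
a = np.array([1, 1, -1])

# M @ a is a[0]*col1 + a[1]*col2 + a[2]*col3.
combo = a[0] * M[:, 0] + a[1] * M[:, 1] + a[2] * M[:, 2]
print(M @ a)   # [0 0 0]
print(combo)   # [0 0 0] -- the same linear combination of the columns
[/CODE]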
 
  • #8
mathwonk said:
No. The first equation is for a plane with normal vector (1,2,3), and the second equation is for a plane with normal vector (0,1,1).
We have two equations of planes that contain the origin, and we are trying to find the points that lie on both planes. Since the planes are not parallel, the points of intersection will lie on a line. This line will be orthogonal to both of the normal (row) vectors.

$$ y+z=0 \rightarrow z=-y $$
$$ x+2y+3z=0 \rightarrow x+2y-3y=0 \rightarrow x=y $$

The null space is the set of column vectors ## \begin{pmatrix}
y \\
y \\
-y
\end{pmatrix} ## where y is any real number

mathwonk said:
"Can you please clarify this point?"
When you multiply an ##m \times n## matrix M by a column vector A of length n, the answer is a linear combination of the length-m column vectors of the matrix; the coefficients of this linear combination are the n entries of A.
I understand that there are different ways to think about a matrix-vector product. You are saying that if we interpret matrix M as a set of rows, then when we write MA=0 we are solving for the entries of A such that A is orthogonal to each of the independent row vectors. If we interpret M as a set of column vectors, we are finding the scalars whose linear combination of the column vectors (independent and dependent) yields the zero vector.
## \begin{pmatrix} 2 & 4 & 6 \\ 3 & 5 & 8 \\ 1 & 2 & 3 \end{pmatrix} \begin{pmatrix} y \\ y \\ -y \end{pmatrix} = y\begin{pmatrix} 2 \\ 3 \\ 1 \end{pmatrix} + y\begin{pmatrix} 4 \\ 5 \\ 2 \end{pmatrix} - y\begin{pmatrix} 6 \\ 8 \\ 3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} ##
 
  • #9
Nice. That's it.
 

What is the row space of a matrix?

The row space of a matrix is the subspace spanned by the rows of the matrix. It represents all possible linear combinations of the rows of the matrix.

What is the column space of a matrix?

The column space of a matrix is the subspace spanned by the columns of the matrix. It represents all possible linear combinations of the columns of the matrix.

What is the null space of a matrix?

The null space of a matrix is the set of all vectors that, when multiplied by the matrix, result in the zero vector. In other words, it is the set of all solutions to the homogeneous equation Ax=0.

What is the left null space of a matrix?

The left null space of a matrix is the set of all vectors that, when multiplied by the transpose of the matrix, result in the zero vector (##A^Ty=0##). Equivalently, viewing ##y^T## as a row vector, it is the set of all solutions to the equation ##y^TA=0##.

How are the row space, column space, null space, and left null space related?

For an ##m \times n## matrix, the row space and null space are orthogonal complements of each other in ##\mathbb{R}^n##, while the column space and left null space are orthogonal complements in ##\mathbb{R}^m##. The row space and column space always have the same dimension (the rank). Together, these four subspaces provide a comprehensive understanding of the properties and behaviour of the matrix in a linear algebra context.
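A closing sketch tying the four subspaces' dimensions to the rank, assuming NumPy and SciPy and using the matrix from this thread:

[CODE=python]
import numpy as np
from scipy.linalg import null_space

A = np.array([[2, 4, 6],
              [3, 5, 8],
              [1, 2, 3]], dtype=float)
m, n = A.shape
r = np.linalg.matrix_rank(A)

# Rank-nullity: the dimensions of all four subspaces follow from the rank.
print(r)                         # 2 = dim(row space) = dim(column space)
print(null_space(A).shape[1])    # 1 = n - r = dim(null space)
print(null_space(A.T).shape[1])  # 1 = m - r = dim(left null space)
[/CODE]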
