How Can You Complete a Set of Vectors to Form a Basis in R^5?

  • Thread starter ELB27
In summary: assuming the three given vectors are linearly independent, the two completing vectors can be chosen from the standard basis of ##\mathbb{R}^5##.
  • #1
ELB27

Homework Statement


Consider in the space ##\mathbb{R}^5## vectors ##\vec{v}_1 = (2,1, 1, 5, 3)^T## , ##\vec{v}_2 = (3, 2, 0, 0, 0)^T## , ##\vec{v}_3 = (1, 1, 50, 921, 0)^T##.
a) Prove that these vectors are linearly independent.
b) Complete this system of vectors to a basis.
If you do part b) first you can do everything without any computation.

Homework Equations

The Attempt at a Solution


If I were to do a) first, I would put the 3 vectors in a matrix, get it to echelon form by row reduction and note that there is a pivot in every column. Even better - I could do the row reduction with two additional arbitrary vectors and choose their components such that the final echelon form has a pivot in every row and column. However, this method is cumbersome and requires tedious calculations. The question clearly suggests I do b) first to avoid all calculations (that's probably the reason for the hint and the ugly numbers in ##\vec{v}_3##).
However, I do not see a way to choose two more vectors not belonging to span(v1,v2,v3) to complete to a basis without guessing or using the tedious row reduction suggested earlier (I could do it, but I prefer to find a more elegant approach).
Any suggestions on the best method to solve this one?

Any suggestions or comments will be greatly appreciated!
 
  • #2
There isn't much to it. Row reducing the vectors to prove they are linearly independent will show that ##x_4## and ##x_5## are free.

Writing out the solution set will show it is spanned by two linearly independent vectors, and those two vectors complete the original three to a basis.

I think it would be difficult to see this basis directly, unless you assume the conclusion of a) is true at the outset of the problem. Then you would know what form the two completing vectors must take.
 
  • #3
Zondrina said:
There isn't much to it. Row reducing the vectors to prove they are linearly independent will show that ##x_4## and ##x_5## are free.

Writing out the solution set will show it is spanned by two linearly independent vectors, and those two vectors complete the original three to a basis.

I think it would be difficult to see this basis directly, unless you assume the conclusion of a) is true at the outset of the problem. Then you would know what form the two completing vectors must take.
Alright then, I guess I will have to do some dirty work :p.
Thank you for the reply!
 
  • #4
ELB27 said:
Alright then, I guess I will have to do some dirty work :p.
Thank you for the reply!

I want to clarify what I said earlier, I feel as if I was a little ambiguous.

If you know the vectors are linearly independent, then you know what the final form of the matrix will look like when you reduce ##A \vec x = \vec 0## before you even reduce it.

If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.

Since the vectors are linearly independent, only the trivial solution exists for the independent variables, i.e. you can comfortably place ##0##'s in many of the vector entries of the solution set without much thought.

All that would be left to do is to place a ##1## in the index of each free variable for their respective vector in the solution set.

The span of these vectors will form the basis without the need to row reduce.
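As a numerical illustration of this solution-set approach (my own sketch with NumPy, not part of the formal argument): compute a basis of ##\{\vec x : A\vec x = \vec 0\}## via the SVD and check that it completes the three rows to a basis of ##\mathbb{R}^5##.

```python
import numpy as np

# Rows of A are the three given vectors.
A = np.array([
    [2, 1, 1, 5, 3],
    [3, 2, 0, 0, 0],
    [1, 1, 50, 921, 0],
], dtype=float)

# Null space via SVD: the rows of Vt beyond rank(A) span {x : A x = 0}.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                  # 2 x 5, since rank(A) = 3

# The three rows plus the two null-space vectors give 5 independent vectors,
# because the null space is the orthogonal complement of the row space.
B = np.vstack([A, null_basis])
print(rank, np.linalg.matrix_rank(B))   # 3 5
```

This works because adjoining a basis of the orthogonal complement of the row space always yields a full set of independent vectors.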
 
  • #5
Zondrina said:
I want to clarify what I said earlier, I feel as if I was a little ambiguous.

If you know the vectors are linearly independent, then you know what the final form of the matrix will look like when you reduce ##A \vec x = \vec 0## before you even reduce it.

If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.

Since the vectors are linearly independent, only the trivial solution exists for the independent variables, i.e. you can comfortably place ##0##'s in many of the vector entries of the solution set without much thought.

All that would be left to do is to place a ##1## in the index of each free variable for their respective vector in the solution set.

The span of these vectors will form the basis without the need to row reduce.
I think I get it now. Basically, assuming linear independence of the first 3 vectors, the two remaining vectors must be from the standard basis in ##\mathbb{R}^5## and all that's left is to find which ones?
 
  • #6
Zondrina said:
If you have ##3## vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables because there will be ##2## full rows of ##0's## when the matrix is reduced.
You will have at least 2 free variables, since the three vectors might be linearly dependent (coplanar or even collinear). In the previous paragraph you made the assumption that the three vectors were linearly independent, in which case the sentence above is correct, but I wasn't sure if that assumption still held in the next paragraph. For clarity, you might have written, "If you have ##3## linearly independent vectors in ##\mathbb{R}^5##, you know immediately there will be ##2## free variables..."
 
  • #7
ELB27 said:
If I were to do a) first, I would put the 3 vectors in a matrix, get it to echelon form by row reduction and note that there is a pivot in every column.
A better approach would be to start with the definition of linear independence and think more generally about how to solve the system of equations rather than resorting to using matrices. To show linear independence, you want to solve
$$c_1\begin{pmatrix} 2\\1\\1\\5\\3\end{pmatrix} + c_2 \begin{pmatrix} 3\\2\\0\\0\\0 \end{pmatrix} + c_3 \begin{pmatrix} 1\\1\\50\\921\\0\end{pmatrix} = 0.$$ You should be able to see by inspection that ##c_1=0##. And it's pretty easy to show ##c_2 = c_3 = 0## follows with virtually no calculating.
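Spelled out, the component equations being used here are
$$\begin{aligned} \text{(5th component)}\quad & 3c_1 = 0 && \Rightarrow\ c_1 = 0,\\ \text{(4th component)}\quad & 5c_1 + 921c_3 = 0 && \Rightarrow\ c_3 = 0,\\ \text{(2nd component)}\quad & c_1 + 2c_2 + c_3 = 0 && \Rightarrow\ c_2 = 0. \end{aligned}$$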
 
  • #8
vela said:
A better approach would be to start with the definition of linear independence and think more generally about how to solve the system of equations rather than resorting to using matrices. To show linear independence, you want to solve
$$c_1\begin{pmatrix} 2\\1\\1\\5\\3\end{pmatrix} + c_2 \begin{pmatrix} 3\\2\\0\\0\\0 \end{pmatrix} + c_3 \begin{pmatrix} 1\\1\\50\\921\\0\end{pmatrix} = 0.$$ You should be able to see by inspection that ##c_1=0##. And it's pretty easy to show ##c_2 = c_3 = 0## follows with virtually no calculating.
Ah, that's the simplicity I was looking for! (Is 'by inspection' a valid formal argument?) ##c_1=0## because of the ##3## at the bottom of ##\vec{v}_1##, ##c_3=0## because of the ##921##, and ##c_2=0## because it's the last one left, right? As for a possible completion to a basis, by inspection I think that these two will work: ##\vec{v}_4 = (0,0,1,0,0)^T ; \vec{v}_5 = (0,1,0,0,0)^T## or ##\vec{v}_5 = (1,0,0,0,0)^T##.
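A quick numerical sanity check of this answer (my own, with NumPy; the inspection argument still carries the formal proof): five vectors in ##\mathbb{R}^5## form a basis exactly when the matrix with those vectors as rows has full rank.

```python
import numpy as np

# The three given vectors plus the two proposed standard basis vectors.
M = np.array([
    [2, 1, 1, 5, 3],      # v1
    [3, 2, 0, 0, 0],      # v2
    [1, 1, 50, 921, 0],   # v3
    [0, 0, 1, 0, 0],      # v4 = e3
    [0, 1, 0, 0, 0],      # v5 = e2
], dtype=float)

# Full rank (5) means the rows are linearly independent, hence a basis.
print(np.linalg.matrix_rank(M))   # 5
```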
 

Related to How Can You Complete a Set of Vectors to Form a Basis in R^5?

What does it mean to "complete to a basis"?

"Completing to a basis" refers to the process of adding additional vectors to a set of vectors to create a basis for a vector space. A basis is a set of linearly independent vectors that span the entire vector space. Completing to a basis ensures that the set of vectors can represent any vector in the vector space.

Why is it important to complete to a basis?

Completing to a basis is important because it allows us to fully describe and understand the vector space. A basis provides a unique representation for each vector in the space, and completing to a basis ensures that we have enough vectors to represent any vector in the space.

What is the difference between a spanning set and a basis?

A spanning set is a set of vectors that can represent all vectors in a vector space, but it may contain redundant or linearly dependent vectors. A basis, on the other hand, is a minimal spanning set that is also linearly independent. This means that a basis contains the fewest possible vectors needed to represent all vectors in the space without redundancy.

How do you complete a set of vectors to a basis?

To complete a set of vectors to a basis, first use row reduction to discard any linearly dependent vectors, leaving a linearly independent set. Then add vectors that are not in the span of this set until it spans the entire space. Suitable additional vectors can be found by appending standard basis vectors one at a time and keeping those that preserve linear independence, or by computing a basis of the null space of the matrix whose rows are the given vectors.
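One concrete procedure, appending standard basis vectors one at a time and keeping those that increase the rank, can be sketched as follows (an illustrative NumPy implementation; the function name is my own):

```python
import numpy as np

def complete_to_basis(vectors, dim):
    """Extend a linearly independent list of vectors to a basis of R^dim by
    appending the standard basis vectors that increase the rank."""
    basis = [np.asarray(v, dtype=float) for v in vectors]
    for i in range(dim):
        if len(basis) == dim:
            break
        e = np.zeros(dim)
        e[i] = 1.0
        # Keep e_i only if it is independent of the vectors collected so far.
        if np.linalg.matrix_rank(np.vstack(basis + [e])) == len(basis) + 1:
            basis.append(e)
    return basis

vs = [(2, 1, 1, 5, 3), (3, 2, 0, 0, 0), (1, 1, 50, 921, 0)]
basis = complete_to_basis(vs, 5)
print(len(basis), np.linalg.matrix_rank(np.vstack(basis)))   # 5 5
```

This always terminates with a full basis: since the standard basis spans the whole space, at each stage where the current span is proper, some remaining standard basis vector must lie outside it.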

Can a set of vectors always be completed to a basis?

No, not always. In a finite-dimensional vector space, a set of vectors can be completed to a basis exactly when it is linearly independent: any linearly independent set can be extended to a basis, while a linearly dependent set cannot, because every subset of a basis is itself linearly independent. A dependent set must first be reduced to an independent subset. (In the infinite-dimensional case, the axiom of choice guarantees that every vector space has a basis, and the same extension property holds.)
