How do I determine whether a set of polynomials forms a basis?

In summary: a basis for [itex]P_{2}[/itex] must consist of exactly three linearly independent polynomials that span the space. In (1) the two polynomials are linearly independent, but the equation c[itex]_{1}[/itex]p[itex]_{1}[/itex](t) + c[itex]_{2}[/itex]p[itex]_{2}[/itex](t) = f(t) has a solution only for those f whose coefficients satisfy -3b[itex]_{1}[/itex] + (22/5)b[itex]_{2}[/itex] + b[itex]_{3}[/itex] = 0, so they do not span [itex]P_{2}[/itex] and do not form a basis. In (2) the system has a unique solution for every right-hand side, so those three polynomials do form a basis. The distinguishing feature is whether row reduction of the augmented matrix can produce a row that is all zeros on the left with a non-zero entry on the right: if it can, some polynomials are not in the span.
  • #1
NewtonianAlch

Homework Statement



Are the following statements true or false? Explain your answers carefully, giving all necessary working.

(1) p[itex]_{1}[/itex](t) = 3 + t[itex]^{2}[/itex] and p[itex]_{2}[/itex](t) = -1 + 5t + 7t[itex]^{2}[/itex] form a basis for [itex]P_{2}[/itex]

(2) p[itex]_{1}[/itex](t) = 1 + 2t + t[itex]^{2}[/itex], p[itex]_{2}[/itex](t) = -1 + t[itex]^{2}[/itex] and p[itex]_{3}[/itex](t) = 7 + 5t - 6t[itex]^{2}[/itex] form a basis for [itex]P_{2}[/itex]



The Attempt at a Solution



So I rendered both (1) and (2) into matrices and did row reduction on them:

[Attached image: row reduction of the matrices for (1) and (2)]

I believe (1) does not form a basis for [itex]P_{2}[/itex] because there is no solution even though the vectors are linearly independent, whereas (2) does have a solution and the vectors are linearly independent, so it should form a basis.

Thoughts: To form a basis in [itex]P_{2}[/itex], wouldn't you need at least 3 vectors always? In my book it states that to form a basis the vectors need to be linearly independent (which is established) and also must be a spanning set. What exactly does this mean?

The book also works through an example with 3 vectors, just like (2); here's what the end result of their row reduction looked like:

[Attached image: the book's row-reduced augmented matrix for its three-vector example]

Their comment was:

The row-echelon matrix had a non-leading right-hand column and hence the equation Ax=b has a solution. Therefore span(S) = R[itex]^{3}[/itex].
Moreover, the left side of the row-echelon matrix has no non-leading columns, so the only solution for a zero right-hand side is x[itex]_{1}[/itex] = x[itex]_{2}[/itex] = x[itex]_{3}[/itex] = 0. This shows that S is a linearly independent set. We have now proved S is a linearly independent spanning set for R[itex]^{3}[/itex] and is therefore a basis for R[itex]^{3}[/itex].

So does this mean if b3 - 2b2 + b1 = 0, then it wouldn't be a spanning set and hence not a basis? Why is this?
 
  • #2
NewtonianAlch said:

So I rendered both (1) and (2) into matrices and did row reduction on them:

I believe (1) does not form a basis for [itex]P_{2}[/itex] because there is no solution even though the vectors are linearly independent, whereas (2) does have a solution and the vectors are linearly independent, so it should form a basis.
To be a bit more precise, (1) has a solution only when
[tex]-3b_1 + \frac{22}{5}b_2 + b_3 = 0[/tex] so it's not that there's never a solution, but that there isn't always a solution.
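To see where that condition comes from, row reduce the augmented matrix for (1) (it is set up explicitly in post #6 below): subtract 3 times the first row from the third row, then add [itex]\frac{22}{5}[/itex] times the second row to it:
[tex]\left(\begin{array}{cc|c} 1 & 7 & b_1 \\ 0 & 5 & b_2 \\ 3 & -1 & b_3 \end{array}\right) \rightarrow \left(\begin{array}{cc|c} 1 & 7 & b_1 \\ 0 & 5 & b_2 \\ 0 & 0 & -3b_1 + \frac{22}{5}b_2 + b_3 \end{array}\right)[/tex]
The bottom row reads [itex]0 = -3b_1 + \frac{22}{5}b_2 + b_3[/itex], so the system is consistent only for right-hand sides that happen to satisfy that equation.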
Thoughts: To form a basis in [itex]P_{2}[/itex] wouldn't you need at least 3 vectors always?
Yes, that's right. P2 is a three-dimensional vector space, so any basis for it will have exactly 3 vectors.
In my book it states that to form a basis the vectors need to be linearly independent (which is established) and also must be a spanning set, what does this exactly mean?
When you say a set of vectors {v1, v2, …, vn} spans a space V, that means if you take any element b in V, you can find some linear combination of v1, v2, …, and vn that's equal to b. In other words, you can find a solution to the equation
[tex]c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{b}[/tex]
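As a concrete example (using the standard monomials, which are not part of this problem): the set {1, t, t[itex]^{2}[/itex]} spans [itex]P_{2}[/itex], because for any target [itex]f(t) = b_1t^2 + b_2t + b_3[/itex] the combination can be read straight off the coefficients,
[tex]f(t) = b_3\cdot 1 + b_2\cdot t + b_1\cdot t^2[/tex]
The question in (1) is whether the same can always be done using only p1(t) and p2(t).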
In (1), what you found was that you could find solutions only some of the time, not all of the time. Consequently, even though p1(t) and p2(t) are independent, they do not span P2 and therefore do not form a basis for P2.
So does this mean if b3 - 2b2 + b1 = 0, then it wouldn't be a spanning set and hence not a basis? Why is this?
No, that's not what they mean. In the book's example the right-hand column of the row-echelon matrix is non-leading, so Ax = b is consistent for every b. A failure to span would show up as a row that is all zeros on the left with a non-zero entry on the right, and that doesn't happen there.
 
  • #3
So if, for example, [tex]-3b_1 + \frac{22}{5}b_2 + b_3 = 3[/tex], would that mean it would be a spanning set? I'm not sure I understand the distinction between the equation being equal to zero as opposed to non-zero.

You said:
In (1), what you found was that you could find solutions only some of the time, not all of the time.
How is this so? What is the major distinguishing feature in the equation that's going to tell me whether it's some of the time or all of the time?

When you say a set of vectors {v1, v2, …, vn} spans a space V, that means if you take any element b in V, you can find some linear combination of v1, v2, …, and vn that's equal to b. In other words, you can find a solution to the equation
[tex]c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = \vec{b}[/tex]

So suppose b had components (x, x, x), just as an example. I'd get a system of equations to solve for c[itex]_{1}[/itex], c[itex]_{2}[/itex], c[itex]_{3}[/itex]. Wouldn't you always be able to find solutions for the c's? Hence any vector b would always be in the span?

Thank you for the very detailed response by the way, it's helped in understanding this.
 
  • #4
What's the reasoning behind forming the matrix and then row-reducing it? Can you explain that?
 
  • #5
vela said:
What's the reasoning behind forming the matrix and then row-reducing it? Can you explain that?

So that we can get the values of b[itex]_{1..x}[/itex]? Effectively the scalar multipliers for the vectors in a set to determine if a given vector is in a spanning set?

So if we got b[itex]_{1}[/itex], b[itex]_{2}[/itex], b[itex]_{3}[/itex] all equal to 3.

Then 3v[itex]_{1}[/itex] + 3v[itex]_{2}[/itex] + 3v[itex]_{3}[/itex] = b ?
 
  • #6
Not exactly. In (1), you're trying to find c1 and c2 such that [tex]c_1 p_1(t) + c_2 p_2(t) = f(t)[/tex] where [itex]f(t) = b_1t^2 + b_2t + b_3[/itex] is an element of P2. This is the whole point of setting up the matrix and reducing it.

Now if you plug everything in, you get
[tex]c_1 (t^2+3) + c_2(7t^2+5t-1) = b_1t^2+b_2t+b_3[/tex] or
[tex](c_1 + 7c_2)t^2 + (5c_2)t + (3c_1 - c_2) = b_1t^2+b_2t+b_3[/tex] Matching coefficients on the two sides of the equation, you get
[tex]\begin{aligned}
c_1 + 7c_2 &= b_1 \\
5c_2 &= b_2 \\
3c_1-c_2 &= b_3
\end{aligned}[/tex]
To solve this system of equations, you set up the augmented matrix
[tex]\left(\begin{array}{cc|c} 1 & 7 & b_1 \\ 0 & 5 & b_2 \\ 3 & -1 & b_3 \end{array}\right)[/tex]
This is the matrix you formed. Solving this system of equations is equivalent to solving the original equation [itex]c_1 p_1(t) + c_2 p_2(t) = f(t)[/itex]. And remember you're solving for c1 and c2.


So now what you're doing is using what you learned before about solving systems of equations to see if you can always find a solution, or if there are no solutions, or if there are infinitely many solutions.

  1. If you have a basis, you should find you get a unique solution for any possible values of the b's.
  2. If you find there is no solution for some values of the b's, that means some vectors cannot be expressed as a linear combination of the given vectors. In other words, the vectors don't span the space. This is what you found for problem (1).
  3. If you find you can get an infinite number of solutions, that means the vectors are linearly dependent.
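If it helps to see these three cases concretely, here is a quick numerical sketch (it uses numpy's rank test, which is just one convenient way to do the check; it is not part of the original working). Each column holds a polynomial's (t[itex]^{2}[/itex], t, constant) coefficients, matching the coefficient side of the matrix above:

[code]
import numpy as np

# (1): p1 = 3 + t^2,  p2 = -1 + 5t + 7t^2
A1 = np.array([[1.0, 7.0],
               [0.0, 5.0],
               [3.0, -1.0]])

# (2): p1 = 1 + 2t + t^2,  p2 = -1 + t^2,  p3 = 7 + 5t - 6t^2
A2 = np.array([[1.0,  1.0, -6.0],
               [2.0,  0.0,  5.0],
               [1.0, -1.0,  7.0]])

for name, A in (("(1)", A1), ("(2)", A2)):
    rank = np.linalg.matrix_rank(A)
    independent = rank == A.shape[1]  # pivot in every column: no free variables
    spans = rank == 3                 # pivot in every row: all of the 3-dimensional P2 is reached
    print(name, "independent:", independent, "spans P2:", spans,
          "basis:", independent and spans)
[/code]

Running this reports that (1) is independent but does not span P2 (so it is not a basis), while (2) is independent and spans P2, hence a basis, matching the conclusions above.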
 
  • #7
Ah, I'm starting to see the picture now. I guess I never really thought about where b[itex]_{1}[/itex], b[itex]_{2}[/itex], and b[itex]_{3}[/itex] came about, but now I do. Thanks so much for your help, this was very interesting.
 

Related to How do I determine whether a set of polynomials forms a basis?

1. How many polynomials are needed to form a basis?

The number of polynomials in a basis equals the dimension of the vector space. For example, if the vector space is three-dimensional, as [itex]P_{2}[/itex] is, then a basis consists of exactly three linearly independent polynomials.

2. How do I check if the polynomials are linearly independent?

To check whether the polynomials are linearly independent, write their coefficient vectors as the columns of a matrix. If the matrix is square, a non-zero determinant shows the polynomials are linearly independent. More generally, row reduce the matrix: the polynomials are linearly independent exactly when there is a pivot (leading entry) in every column, so that the only solution of the corresponding homogeneous system is the zero vector.
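As a small illustration of the determinant method, here is a sketch assuming numpy and the three polynomials from problem (2) in the thread above; the columns hold each polynomial's (t[itex]^{2}[/itex], t, constant) coefficients:

[code]
import numpy as np

# Columns: p1 = 1 + 2t + t^2,  p2 = -1 + t^2,  p3 = 7 + 5t - 6t^2
A = np.array([[1.0,  1.0, -6.0],
              [2.0,  0.0,  5.0],
              [1.0, -1.0,  7.0]])

print(np.linalg.det(A))  # approximately 8, non-zero, so the polynomials are linearly independent
[/code]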

3. Can I use any set of polynomials to form a basis?

No, the set of polynomials must meet two criteria to form a basis. Firstly, they must be linearly independent. Secondly, they must span the entire vector space. If these criteria are met, then the set of polynomials can form a basis.

4. How do I write a polynomial as a linear combination of the basis polynomials?

To write a polynomial f as a linear combination c[itex]_{1}[/itex]p[itex]_{1}[/itex] + ... + c[itex]_{n}[/itex]p[itex]_{n}[/itex] of the basis polynomials, match the coefficients of each power of t. This gives a linear system for c[itex]_{1}[/itex], ..., c[itex]_{n}[/itex] whose coefficient matrix has the basis polynomials' coefficient vectors as its columns; solving it, for example by row reduction, gives the weights, and they are unique because the basis polynomials are linearly independent.
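For instance, here is a minimal sketch (assuming the basis from problem (2) in the thread above and an arbitrarily chosen target polynomial f(t) = t[itex]^{2}[/itex] + t + 1; numpy is just one way to solve the resulting system):

[code]
import numpy as np

# Basis from problem (2); column j holds the (t^2, t, constant)
# coefficients of the j-th basis polynomial.
A = np.array([[1.0,  1.0, -6.0],
              [2.0,  0.0,  5.0],
              [1.0, -1.0,  7.0]])

# Target polynomial f(t) = t^2 + t + 1, in the same coefficient order.
b = np.array([1.0, 1.0, 1.0])

c = np.linalg.solve(A, b)  # weights with c[0]*p1 + c[1]*p2 + c[2]*p3 = f
print(c)                   # the coordinates of f in this basis
print(A @ c)               # reproduces b, confirming the combination
[/code]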

5. Can a set of polynomials form more than one basis for a vector space?

A given set of polynomials either is a basis or it is not, but a vector space has many different bases. For example, [itex]P_{2}[/itex] is spanned both by the standard monomials {1, t, t[itex]^{2}[/itex]} and by the three polynomials in problem (2) above; any three linearly independent polynomials in [itex]P_{2}[/itex] form a basis for it.
