Deducing Basis of Set T from Coordinates in Matrix A with Respect to Basis S

In summary: I just want to make sure that the 0 here is a constant (the constant function) and that the functions equal to it are the zero functions, right? Yes, that is correct. The 0 in this context is the zero function, the function that outputs the number 0 regardless of its input, and the functions equal to it are exactly those that output 0 for every input.
  • #1
Kraz
Hello,

I am just doing my homework and I believe that there is a fault in the problem set.

Consider the set of functions defined by
V = {f : R → R such that f(x) = a + bx for some a, b ∈ R}

It is given that V is a vector space under the standard operations of pointwise
addition and scalar multiplication; that is, under the operations

(f + g)(x) := f(x) + g(x)
(λf)(x) := λf(x).

Consider the set S = {f1, f2} ⊆ V consisting of the vectors
f1 : R → R defined by f1(x) = 1
and
f2 : R → R defined by f2(x) = x.

(a) Show that the set S is a basis of V ; that is, show that S is a linearly independent
set which spans V . Also state the dimension of V .
(b) State the coordinates (f1)S and (f2)S of the vectors f1 and f2 with respect to
the basis S, what is wrong is that if a set of linear functions is independat that would mean that Af1+Bf2=0 only has teh trivial solution A=B=0 as a solution but this is obviously not right as f1(x)=1 and f2(x)=x so A+Bx=0, has other solutions to it than the trivial, thus its not indpendant and S not a basis of V.
 
  • #2
Kraz said:
A + Bx = 0 has solutions other than the trivial one

Give an example of a non-trivial solution.

You'd have to find an A and B such that g(x) = A + Bx is the constant function g(x) = 0 because the constant function g(x) = 0 is the zero vector in this space.
 
  • #3
Stephen Tashi said:
Give an example of a non-trivial solution.

You'd have to find an A and B such that g(x) = A + Bx is the constant function g(x) = 0 because the constant function g(x) = 0 is the zero vector in this space.

What do you mean? I am not a native English speaker, so I don't fully understand. But since x is an element of R, it can take any such value, and so can the scalars. Since g(x) = 0, A + Bx = g(x) holds, for example, when A = 2, B = 2, and x = -1 ... and many more.
 
  • #4
Kraz said:
What do you mean? I am not a native English speaker, so I don't fully understand. But since x is an element of R, it can take any such value, and so can the scalars. Since g(x) = 0, A + Bx = g(x) holds, for example, when A = 2, B = 2, and x = -1 ... and many more.

Suppose A = 2, B = 2. The function h(x) = 2 + 2x is not the function g(x) = 0. It is true that h(-1) = 0, but h(x) is not zero for all values of x. The function h(x) is not the zero function.
 
  • #5
Stephen Tashi said:
Suppose A = 2, B = 2. The function h(x) = 2 + 2x is not the function g(x) = 0. It is true that h(-1) = 0, but h(x) is not zero for all values of x. The function h(x) is not the zero function.

Yes, but if I put in a different x, there will always be different values of A and B for which that equation equals 0.

Could you maybe elaborate on your logic? Thanks.

Do you mean that, in order for the set to not be independent, I would need to find a function h(x) with specific, unchangeable nonzero values of A and B such that it is 0 for all x?
 
  • #6
Kraz said:
Do you mean that, in order for the set to not be independent, I would need to find a function h(x) with specific, unchangeable nonzero values of A and B such that it is 0 for all x?

Yes.
 
  • #7
Stephen Tashi said:
Yes.

Thanks, but let's say we have a set of vectors (v1, v2, v3, ..., vn).

Do you agree that if this set is dependent, so that Av1 + Bv2 + Cv3 + ... = 0, then A could take different values that lead to a solution, and obviously each time A changes the other scalars change too?
 
  • #8
Kraz said:
Thanks, but let's say we have a set of vectors (v1, v2, v3, ..., vn).

Do you agree that if this set is dependent, so that Av1 + Bv2 + Cv3 + ... = 0, then A could take different values that lead to a solution, and obviously each time A changes the other scalars change too?

Yes. I agree.

For example, (kA)v1 + (kB)v2 + (kC)v3 + ... = 0 for any scalar k.

To use an example from this problem, let
p(x) = x + 1
q(x) = 2x + 2
r(x) = 3x + 3

The set of vectors {p(x), q(x), r(x)} is not linearly independent.

A(p(x)) + B(q(x)) + C(r(x)) = 0 (for all x) has solutions such as A = 5, B = -1, C = -1 and A = -2, B = 1, C = 0.
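
If anyone wants to double-check examples like these mechanically, here is a minimal sketch using sympy (the names p, q, r just mirror the functions above; this is only an illustration, not part of the original problem set):

```python
# Symbolic check that {x+1, 2x+2, 3x+3} is dependent while {1, x} is independent.
# Assumes sympy is installed; p, q, r mirror the example above.
from sympy import symbols, solve, expand

x, A, B, C = symbols('x A B C')

p = x + 1
q = 2*x + 2
r = 3*x + 3

# A*p + B*q + C*r must be the zero FUNCTION, so every coefficient of x must vanish.
combo = expand(A*p + B*q + C*r)
print(solve([combo.coeff(x, 0), combo.coeff(x, 1)], [A, B, C], dict=True))
# -> [{A: -2*B - 3*C}]: B and C are free, so nontrivial solutions exist
#    and the set {p, q, r} is linearly dependent.

# The same check for f1(x) = 1 and f2(x) = x:
combo2 = expand(A*1 + B*x)
print(solve([combo2.coeff(x, 0), combo2.coeff(x, 1)], [A, B], dict=True))
# -> [{A: 0, B: 0}]: only the trivial solution, so {f1, f2} is independent.
```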
 
  • #9
Kraz said:
Af1 + Bf2 = 0 would have only the trivial solution A = B = 0. But that is obviously not right, since f1(x) = 1 and f2(x) = x, so A + Bx = 0 has solutions other than the trivial one. Thus the set is not independent and S is not a basis of V.
To say that ##Af_1+Bf_2=0## is to say that ##(Af_1+Bf_2)(x)=0## for all real numbers ##x##. (Note that the 0 in the first equation is a function, and the 0 in the second equation is a number). So if ##Af_1+Bf_2=0##, then for all real numbers ##x##, we have
$$0=(Af_1+Bf_2)(x)=Af_1(x)+Bf_2(x)=A+Bx.$$ Do you understand why this implies that ##A=B=0##? If you don't, then you should think about it until you do. (Hint: Think about what "for all" means).
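
For reference, the step the hint points at amounts to evaluating this identity at two particular inputs:
$$x=0:\quad 0=A+B\cdot 0=A,\qquad\text{and then}\quad x=1:\quad 0=A+B\cdot 1=B,$$
so ##A=B=0## is the only possibility.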
 
  • #10
Fredrik said:
To say that ##Af_1+Bf_2=0## is to say that ##(Af_1+Bf_2)(x)=0## for all real numbers ##x##. (Note that the 0 in the first equation is a function, and the 0 in the second equation is a number). So if ##Af_1+Bf_2=0##, then for all real numbers ##x##, we have
$$0=(Af_1+Bf_2)(x)=Af_1(x)+Bf_2(x)=A+Bx.$$ Do you understand why this implies that ##A=B=0##? If you don't, then you should think about it until you do. (Hint: Think about what "for all" means).
Yes, I understood. I did not realize that it must hold for all x, which was a dumb mistake, since I should have seen it by looking at the equation ##(Af_1+Bf_2)(x)=0##.

Is it then correct to say that the coordinates ##(f_1)_S## with respect to the basis S form the column matrix ##\begin{pmatrix}1-Bx\\ B\end{pmatrix}##, and that the coordinates ##(f_2)_S## with respect to the basis S form the column matrix ##\begin{pmatrix}0\\ 1\end{pmatrix}##?
 
  • #11
The matrix of components of a vector ##v## with respect to the ordered basis ##(f_1,f_2)## is the matrix ##\begin{pmatrix}a\\ b\end{pmatrix}## such that ##v=af_1+bf_2##. When ##v=f_1##, this is ##\begin{pmatrix}1\\ 0\end{pmatrix}##, not ##\begin{pmatrix}1-Bx\\ B\end{pmatrix}##. (What are ##B## and ##x## here?)
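
As a concrete illustration of this definition, here is a small sympy sketch that finds the coordinate column by matching coefficients (the sample vector ##v(x)=4+3x## is made up for the example; it is not from the thread):

```python
# Compute the coordinate column of v with respect to the ordered basis (f1, f2),
# where f1(x) = 1 and f2(x) = x. Assumes sympy is installed.
from sympy import symbols, solve, expand, Matrix

x, a, b = symbols('x a b')

f1 = 1         # the constant function 1
f2 = x         # the identity function
v  = 4 + 3*x   # a sample element of V (made up for this illustration)

# v = a*f1 + b*f2 must hold for every x, so match the coefficients of 1 and x.
diff = expand(v - (a*f1 + b*f2))
sol = solve([diff.coeff(x, 0), diff.coeff(x, 1)], [a, b])

print(Matrix([[sol[a]], [sol[b]]]))  # Matrix([[4], [3]])
# With v = f1 the same computation gives the column (1, 0), as stated above.
```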
 
  • #12
Fredrik said:
The matrix of components of a vector ##v## with respect to the ordered basis ##(f_1,f_2)## is the matrix ##\begin{pmatrix}a\\ b\end{pmatrix}## such that ##v=af_1+bf_2##. When ##v=f_1##, this is ##\begin{pmatrix}1\\ 0\end{pmatrix}##, not ##\begin{pmatrix}1-Bx\\ B\end{pmatrix}##. (What is ##B## and ##x## here?)

Well, because f1(x) = 1 and f2(x) = x, we have v = af1 + bf2 where v = f1, so 1 = a + bx. So for every x and every value of A, B the coordinates are (a = 1 - bx, b = b).
 
  • #13
Kraz said:
f1(x) = 1, f2(x) = x,
OK, this is true for all x, because ##f_1## is the constant function that takes everything to 1 (the output is always 1), and ##f_2## is the identity function (the output is always equal to the input).

Kraz said:
so v = af1 + bf2 where v = f1, so 1 = a + bx
You seem to be saying that if ##f_1=af_1+bf_2##, then ##1=a+bx##. But why would it be, and what is this ##x## supposed to be?

Kraz said:
so for every x and every value of A, B the coordinates are (a = 1 - bx, b = b)
I don't follow the argument at all now. What are A and B, and how is there an x involved in the coordinates?
 
  • #14
Fredrik said:
OK, this is true for all x, because ##f_1## is the constant function that takes everything to 1 (the output is always 1), and ##f_2## is the identity function (the output is always equal to the input). You seem to be saying that if ##f_1=af_1+bf_2##, then ##1=a+bx##. But why would it be, and what is this ##x## supposed to be? I don't follow the argument at all now. What are A and B, and how is there an x involved in the coordinates?
I figured it out.

I have one last question, if I may ask:

Let's say you have two other vectors forming a set T. These two vectors, g1 and g2, can be expressed in terms of the vectors of the basis S. By considering the 2 × 2 matrix A = ((g1)S (g2)S), whose columns are the coordinates of g1 and g2 with respect to the basis S, how can you deduce that the set T is also a basis of V?
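
To sketch the idea being asked about: since dim V = 2, a set T = {g1, g2} of two vectors is a basis exactly when the coordinate matrix A is invertible, that is, when det(A) ≠ 0. Below is a minimal sympy sketch using made-up example vectors g1(x) = 1 + 2x and g2(x) = 3 + 5x; these are illustrations only, not from the problem set.

```python
# Check whether T = {g1, g2} is a basis of V by testing det(A) != 0,
# where the columns of A are the coordinates of g1, g2 in the basis S = (f1, f2).
# Assumes sympy; g1 and g2 are made-up examples.
from sympy import symbols, expand, Matrix

x = symbols('x')
g1 = 1 + 2*x
g2 = 3 + 5*x

def coords_in_S(g):
    """Coordinates of g with respect to S: (constant term, coefficient of x)."""
    g = expand(g)
    return [g.coeff(x, 0), g.coeff(x, 1)]

# Columns of A are the coordinate columns (g1)_S and (g2)_S.
A = Matrix([coords_in_S(g1), coords_in_S(g2)]).T
print(A)        # Matrix([[1, 3], [2, 5]])
print(A.det())  # -1, which is nonzero, so T is linearly independent,
                # and since dim V = 2, T is a basis of V.
```

Conversely, if det(A) were 0, the columns of A would be dependent, so g1 and g2 would be dependent and T could not be a basis.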

 

Related to Deducing Basis of Set T from Coordinates in Matrix A with Respect to Basis S

What is a basis of a set of functions?

A basis of a set of functions is a set of linearly independent functions that can be used to represent any function in the space through a linear combination.

Why is a basis important in mathematics?

A basis is important because it allows us to represent more complex functions in terms of simpler functions. It also helps us to understand the structure and properties of a set of functions.

How do you determine if a set of functions is a basis?

A set of functions is a basis if they are linearly independent and span the entire space. This means that no function in the set can be written as a linear combination of the others, and that every function in the space can be represented by a unique combination of the basis functions.

Can a set of functions have more than one basis?

Yes, a set of functions can have multiple bases. For example, the set of all polynomials with degree less than or equal to n has many different bases, such as the standard monomial basis or the Bernstein basis.
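
For example, in the degree-one space ##V## considered in this thread, the Bernstein basis is ##\{1-x,\ x\}##: every ##f(x)=a+bx## can be written as
$$a+bx = a(1-x) + (a+b)x,$$
so the set spans ##V##, and it is linearly independent for the same reason that ##\{1,\ x\}## is.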

How are bases of a set of functions useful in practical applications?

Bases of a set of functions are useful in practical applications because they allow us to simplify complex problems into simpler, more manageable ones. They also help us to approximate functions and solve differential equations.
