Linear Independence and Sets of Functions

In summary, the conversation discusses linear independence and how it applies to a set of functions. The main idea is that a set is linearly independent if a linear combination of its elements can equal zero only when all the coefficients are zero. The conversation then works through details and examples, and ultimately explains how to show that a certain transformation is linear by checking its defining properties pointwise on the set.
  • #1
TranscendArcu

Homework Statement



[Attached image: 2012_02_06_5_29_39_PM.png (problem statement)]


The Attempt at a Solution


I don't think I'm really understanding this problem. Let me tell you what I know: A set is linearly independent if [itex]a_1 A_1 +...+a_n A_n = \vec0[/itex] for [itex]a_1,...,a_n \in R[/itex] forces [itex]a_1 = ...=a_n = 0[/itex]. If [itex]f,g,h[/itex] take any of the [itex]x_i \in S[/itex], then one of the [itex]f,g,h[/itex] will map that element of S to zero, and no set containing [itex]\vec0[/itex] can be linearly independent.

Somehow, however, I feel like I'm supposed to be arriving at the opposite conclusion.
 
  • #2
Okay, so these will be dependent if it is possible to have Af(x)+ Bg(x)+ Ch(x)= 0 for all x with at least one of A, B, C not equal to 0. Using the information given, [itex]Af(x_1)+ Bg(x_1)+ Ch(x_1)= A(0)+ B(1)+ C(1)= B+ C= 0.[/itex] Use the information given about [itex]x_2[/itex] and [itex]x_3[/itex] to get two more equations and solve for A, B, C.

It is certainly true that no set containing the 0 vector is independent but that is irrelevant to this problem since none of f, g, or h is the 0 vector.
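Since the only information about f, g, and h is their values at the three sample points, the whole question reduces to a 3×3 homogeneous system. A quick numerical sketch (the point values are the ones implied by the equations in this thread):

```python
import numpy as np

# Columns are f, g, h; rows are the sample points x1, x2, x3.
# The values are those implied by the equations in the thread.
M = np.array([
    [0, 1, 1],   # f(x1), g(x1), h(x1)
    [1, 0, 1],   # f(x2), g(x2), h(x2)
    [1, 1, 0],   # f(x3), g(x3), h(x3)
])

# Af + Bg + Ch = 0 at every sample point means M @ [A, B, C] = 0.
# A nonzero determinant (equivalently, full rank) forces A = B = C = 0.
print(np.linalg.det(M))          # 2.0, so only the trivial solution
print(np.linalg.matrix_rank(M))  # 3
```

A full-rank value matrix already settles independence on any set containing these three points.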
 
  • #3
So, [itex]\vec0 = Af(x_2) + Bg(x_2) + Ch(x_2) = A +C[/itex] and [itex]\vec0 = Af(x_3)+ Bg(x_3) + Ch(x_3) = A+ B[/itex]. So we have equations [itex]A+B = B+C =A+C = 0[/itex], which necessitates that [itex]A=B=C=0[/itex]. Thus, the only way to get the zero vector from a linear combination of f,g,h is with all-zero coefficients, meaning that the set is linearly independent.
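The three equations can also be handed to a symbolic solver as a sanity check (a sketch using SymPy; the equations are the ones derived above):

```python
from sympy import symbols, solve

A, B, C = symbols('A B C')

# The equations obtained by evaluating Af + Bg + Ch at x1, x2, x3.
sol = solve([B + C, A + C, A + B], [A, B, C], dict=True)
print(sol)  # [{A: 0, B: 0, C: 0}] -- only the trivial combination
```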
 
  • #4
You may want to formalize things some more if you feel confused, by formally making the space of (Real, I assume)-valued functions, into a vector space, choosing a basis and then testing.
 
  • #5
Okay. I've made my discussion more "formalized" in my work. Thank you. Can I also get some help with this problem:

[Attached image: Screen_shot_2012_02_06_at_7_14_08_PM.png (problem statement)]


Here's my work so far:

I know that a linear transformation must have the property: if T takes elements of V to elements of W, then T(A) + T(B) = T(A+B) for all A,B in V. Thus, I said, let c,f be elements of S. I need (I'll use "ψ" as my special character indicating the transformation in the problem),

ψ(T)(c) + ψ(T)(f) = T(χc) + T(χf) = T(χc + χf) = ψ(T)(c+f), so it passes addition. (Is that right? I'm not sure.) I also need rT(A) = T(rA) for all r in R. I write,

ψ(T)r(c) = T(r(χc)) = rT(χc) = r ψ(T)(c).

I'm not really sure that I'm doing these correctly. It would be most helpful if you could tell me what in particular is wrong with the approaches I attempt. That way, I think I will know both why the right way is right, and why the wrong way is wrong. Thanks!
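One way to make the objects in this problem concrete is to build them for a small finite S (a sketch; the set S, the sample functional T, and its weights are illustrative choices, not part of the problem statement):

```python
S = ['x1', 'x2', 'x3']

def chi(s):
    """Characteristic function of s: 1 at s, 0 elsewhere, as a dict in Fun(S)."""
    return {t: (1 if t == s else 0) for t in S}

def T(u):
    """A sample linear functional on Fun(S): a fixed weighted sum of values."""
    weights = {'x1': 2, 'x2': -1, 'x3': 5}
    return sum(weights[t] * u[t] for t in S)

def psi(T):
    """psi(T) is the function in Fun(S) sending s to T(chi_s)."""
    return {s: T(chi(s)) for s in S}

print(psi(T))  # {'x1': 2, 'x2': -1, 'x3': 5}: psi(T) recovers T's weights
```

Note that psi(T) reads off exactly the weights defining T, which is a hint at why the map is a bijection.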
 
  • #6
Just to make sure: is L(Fun(S),R) the vector space of linear maps from Fun(S) into R?

If so, that may help with showing linearity.

And if you can show that your map is a linear bijection between vector spaces of the same dimension, that shows you have an isomorphism.
 
  • #7
Actually, ψ is described as being linear on L(Fun(S),ℝ), not on S.

But you cannot declare it to be linear; linearity follows from the properties of L(Fun(S),ℝ), as well as from the definition of ψ.
 
  • #8
this problem is more complicated than it looks. let me help you "un-wrap" it:

φ:L(Fun(S),R)→Fun(S)

so the input for φ is an element of L(Fun(S),R), such as T:

φ:T→φ(T).

now φ(T) is in Fun(S), which means it takes elements of S to elements of R:

φ(T):s→φ(T)(s) = T(χs),

where χs is the characteristic function of s

(this would make more sense to me if it were χ{s}, as i am used to seeing characteristic functions defined on sets).

to show φ is linear, you need to show that if T and T' are 2 elements of L(Fun(S),R):

φ(T+T') = φ(T) + φ(T')
φ(cT) = cφ(T).

these are equalities of functions, so to prove they are equal functions, you need to show that for every s in S:

φ(T+T')(s) = φ(T)(s) + φ(T')(s)
φ(cT)(s) = cφ(T)(s).

to evaluate the functions given above, you determine if:

(T+T')(χs)

and T(χs) + T'(χs) are equal,

and similarly for the second equation.
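The pointwise check described above can be sketched numerically (the set S and the functionals T and T' below are made-up finite examples, not data from the problem):

```python
S = ['a', 'b', 'c']

def chi(s):
    """Characteristic function of s on the finite set S."""
    return {t: (1 if t == s else 0) for t in S}

# Two sample linear functionals on Fun(S), given as weighted sums of values.
def T(u):  return 3 * u['a'] - u['b']
def Tp(u): return u['b'] + 4 * u['c']

def phi(F):
    """phi(F)(s) = F(chi_s), an element of Fun(S)."""
    return {s: F(chi(s)) for s in S}

# Compare phi(T + T') with phi(T) + phi(T') at every s in S:
lhs = phi(lambda u: T(u) + Tp(u))
rhs = {s: phi(T)[s] + phi(Tp)[s] for s in S}
print(lhs == rhs)  # True: phi(T + T') = phi(T) + phi(T') pointwise
```

The scalar-multiplication equation phi(cT)(s) = c·phi(T)(s) checks out the same way.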
 

Related to Linear Independence and Sets of Functions

1. What is linear independence?

Linear independence is a property of a set of vectors or functions in a vector space. It means that no element of the set can be written as a linear combination of the others; equivalently, the only linear combination of the elements that equals zero is the one in which every coefficient is zero.

2. How do you determine if a set of functions is linearly independent?

To determine whether a set of functions is linearly independent, one common tool is the Wronskian determinant: if the Wronskian is nonzero at even a single point, the functions are linearly independent (the converse fails in general, since the Wronskian can vanish identically even for independent functions). Another method is to evaluate the functions at sample points, set up a system of equations for the coefficients, and solve; if the only solution is the all-zero one, the set of functions is linearly independent.

3. Why is linear independence important in mathematics?

Linear independence is important because it allows us to solve systems of equations and determine unique solutions. It also allows us to study vector spaces and understand the properties of functions and vectors within those spaces.

4. Can a set of linearly dependent functions still be useful?

Yes, a set of linearly dependent functions can still be useful. A dependent set can still span a space; it is just not minimal. For example, the functions sin²x, cos²x, and the constant function 1 are linearly dependent (since sin²x + cos²x = 1), yet all three are essential in many areas of mathematics and physics.

5. How is linear independence related to linear transformations?

Linear independence is closely related to linear transformations: a set of linearly independent functions can serve as a basis for a vector space, and a linear transformation is completely determined by its values on a basis. Writing any vector as a linear combination of basis elements then lets you compute its image under the transformation.
