Vector Spaces, Subspaces, Bases etc

  • #1
rad0786

Hello. I was doing some homework questions out of the textbook and I came across a question which is difficult to understand. Could somebody please help me out with it?

-- if U and W are subspaces of V, define their intersection U ∩ W as follows:
U ∩ W = {v / v is in both U and W}

a) show that U ∩ W is a subspace contained in U and W

b) If B and D are bases of U and W, and if U ∩ W = {0}, show that
B U D = {v / v is in B or D} is linearly independent.
--

I was able to do part a)! That wasn't so tricky. You just show that closure under addition and scalar multiplication holds (and show that the vectors each belong to U and W, etc.).
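For completeness, part a) can be sketched as follows (my wording, using the standard subspace test):

```latex
% Part a): U \cap W is a subspace of V contained in both U and W.
\begin{align*}
&0 \in U \text{ and } 0 \in W \implies 0 \in U \cap W,\\
&u, v \in U \cap W \implies u + v \in U \text{ and } u + v \in W
  \implies u + v \in U \cap W,\\
&u \in U \cap W,\ c \text{ a scalar} \implies cu \in U \text{ and } cu \in W
  \implies cu \in U \cap W.
\end{align*}
% Containment is immediate: every v \in U \cap W lies in U and in W.
```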

So i understand part a), but part b) is where I am lost :confused:

To begin, I am not sure if "B is a basis of U" and "D is a basis of W"... or is "B and D" a basis of U and "B and D" is a basis of W? I think its the first one.

Next... U ∩ W = {0} means that the zero vector lies in U and lies in W.
Furthermore... B U D has a vector that lies in only B or in only D, and not B ∩ D.

so now is where I don't know how to show that it is linearly independent.

All I know so far is that U ∩ W = {0} has a basis of 1 and that is all I have to work with :( Can somebody please help me further?

Thanks
 
  • #2
"B U D has a vector that lies in only B or in only D, and not B ∩ D." What are you basing this statement on?

What does it mean for a vector space (B U D) to be linearly independent? (Indep. of what?)
 
  • #3
well... B U D = {v / v is in B or D} which means that v can only be in B or in D and not in both, B ∩ D.

For a set of vectors (B U D) to be linearly independent, the coefficients of its linear combination have to equal zero.

The problem is, I don't know how to even get the linear combination of B U D.

I know that the basis of B ∩ D is just { (1) } since U ∩ W = {0}; however, B U D I just cannot see...
 
  • #4
EnumaElish said:
"B U D has a vector that lies in only B or in only D, and not B ∩ D." What are you basing this statement on?
What does it mean for a vector space (B U D) to be linearly independent? (Indep. of what?)

B and D are not vector spaces! They are sets of vectors. "Linearly independent" means the same thing it always does for sets of vectors in a vector space.

rad0786 said:
The problem is, I don't know how to even get the linear combination of B U D.
The same way you always do! Let the vectors in B be
{b1, b2, ..., bn} and the vectors in D be {d1, d2, ..., dm}

Then a linear combination is something of the form:
[tex]\alpha_1b_1+ \alpha_2b_2+ ...+ \alpha_nb_n+ \beta_1d_1+ ...+\beta_md_m[/tex].

Suppose such a thing were equal to 0. If all of the [tex]\alpha[/tex]s were equal to 0, the [tex]\beta[/tex]s must be also, because D is linearly independent, and vice versa. So the only question is whether it is possible for some of the [tex]\alpha[/tex]s and [tex]\beta[/tex]s to be nonzero and to cancel. Suppose that were true. Move all of the terms involving [tex]\beta[/tex]s (i.e. multiplying vectors in D) over to the right side of the equation, leaving only terms involving [tex]\alpha[/tex]s (i.e. multiplying vectors in B) on the left. What can you say about the vector on the left side of the equation? What about the vector on the right?
 
  • #5
HallsofIvy. Thank you for that!

I wrote out what you were saying, and moved all the a's and b's to the left side and right side respectively. I just don't see how they would cancel.

However, I thought of something slightly different...

Let the vectors in B be {b1, b2, ..., bn} and the vectors in D be {d1, d2, ..., dm}

then: s1b1 + ... + snbn + t1d1 + ... + tndn = 0 (s and t are scalars)

Now because they are a basis, then it has to n linearly independent vectors...
and so s = t = 0

Does that work?
 
  • #6
B U D = {v / v is in B or D} which means that v can only be in B or in D and not in both, B ∩ D.
Wrong. Ordinarily, "or" is inclusive.
 
  • #7
EnumaElish said:
Wrong. Ordinarily, "or" is inclusive.
Yes, but in this problem, we were already told that U ∩ W = {0}. Of course, the 0 vector cannot be in either basis so no vector is in both B and D.
 
  • #8
Okay, I had missed that.
 
  • #9
rad0786 said:
HallsofIvy. Thank you for that!

I wrote out what you were saying, and moved all the a's and b's to the left side and right side respectively. I just don't see how they would cancel.

However, I thought of something slightly different...

Let the vectors in B be {b1, b2, ..., bn} and the vectors in D be {d1, d2, ..., dm}

then: s1b1 + ... + snbn + t1d1 + ... + tndn = 0 (s and t are scalars)

Now because they are a basis, then it has to n linearly independent vectors...
and so s = t = 0

Does that work?

I'm sorry, what has "n linearly independent vectors"? Certainly B does, only because we were told that B is a basis and defined B to have n vectors in it!
Did you notice that I said, and you did too, "Let the vectors in B be {b1, b2, ..., bn} and the vectors in D be {d1, d2, ..., dm}", but then you have
s1b1 + ... + snbn + t1d1 + ... + tndn = 0
where you seem to be assuming that m= n.
Oh, and you can't say "s and t are scalars" - you don't have any "s and t"; you have s1, s2, ..., sn and t1, t2, ..., tm: two sets of scalars.

My point was that if you write s1b1 + ... + snbn + t1d1 + ... + tmdm = 0
(I have changed that last "n" to "m".)
as s1b1 + ... + snbn = -(t1d1 + ... + tmdm), on the left side you have a linear combination of vectors in B, so it must be a vector in U, and on the right you have a linear combination of vectors in D, so it must be a vector in W. But they are equal, and the only vector in both U and W is 0 (that's an important part of the hypotheses you didn't use!), so each side must be equal to 0. NOW use the fact that each of B and D is linearly independent to show that all of the scalars must be 0.
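Assembling the steps just described into one derivation (my summary, same notation):

```latex
\begin{align*}
&\text{Suppose } s_1 b_1 + \cdots + s_n b_n + t_1 d_1 + \cdots + t_m d_m = 0.\\
&\text{Then } \underbrace{s_1 b_1 + \cdots + s_n b_n}_{\in\, U}
   = \underbrace{-(t_1 d_1 + \cdots + t_m d_m)}_{\in\, W},\\
&\text{so both sides lie in } U \cap W = \{0\}:\quad
   s_1 b_1 + \cdots + s_n b_n = 0, \quad t_1 d_1 + \cdots + t_m d_m = 0.\\
&\text{Independence of } B \text{ and } D \text{ then gives }
   s_1 = \cdots = s_n = 0,\ t_1 = \cdots = t_m = 0.
\end{align*}
```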
 
  • #10
Hey I am so sorry... I made a typo... yes I did mean that m=n.

I was trying to arrive at this conclusion:

Let U = a1u1 + a2u2 + ... + anun which is in B
Let V = a1v1 + a2v2 + ...+ anvn which is in D
Vectors in both B and D = 0

a1u1 + a2u2 + ... + anun + a1v1 + a2v2 + ...+ anvn = 0
a1u1 + a1v1 + a2u2 + a2v2...+ anun + anvn = 0
a1(u1 + v1) + a2(u2 + v2) + ... + an(un + vn) = 0

and then show that a1 = a2 =... an = 0?

Yes I am assuming that B and D have n vectors... but now that I think of it... I can't make that assumption, since we don't know the number of vectors in B and D

So speaking of what you had... since B is a basis with n vectors and D is a basis with m vectors... then the n vectors are a spanning set and linearly independent in B and the m vectors are a spanning set and linearly independent in D.

I'm starting to hate math :( ...
 
  • #11
Okay... I have a VERY related question that has to do with this stuff...

--If U and W are subspaces of V and dimU =2, show that
either U⊆W or dim(U ∩ W) ≤ 1

This is how i went about it.

Let dimU = m and dimW = k
Then any basis of U = {u1, u2} is a set of 2 independent vectors! Thus, m = 2
Also, any basis of W ={w1 ... wk} and is a set of independent vectors of k.
Also, the dimV ≥ 2

But... U ∩ W is all the vectors that are in U and all the vectors that are in W. so the dim(U ∩ W) HAS to be ≤ 2, unless dimW =1, then dim(U ∩ W) HAS to be ≤ 1. If this were the case, then dimU > dimW and it's possible for W⊆U

But... if U⊆W, then dimW ≥ dimU ... which just contradicts what I said above?

What am i doing wrong? why are the two contradicting?
 
  • #12
rad0786 said:
Hey I am so sorry... I made a typo... yes I did mean that m=n.
This is getting weird! You don't seem to be understanding my comments at all. All you know about B and D is that they are sets of independent vectors in U and W respectively and that U and W have only the 0 vector in common. You have absolutely no reason to believe that m = n. That was my point.

I was trying to arrive at this conclusion:

Let U = a1u1 + a2u2 + ... + anun which is in B
Let V = a1v1 + a2v2 + ...+ anvn which is in D
Vectors in both B and D = 0
This makes no sense at all. U is not a vector, it is a vector space! Did you mean, say, u= a1u1 + a2u2 + ... + anun where u is a vector in U? But even then "which is in B" is wrong. The individual vectors u1, u2,... etc. are in B. A linear combination of them is in U but not necessarily in B.
Same comments apply to the next line.

"Vectors in B and D= 0". Actually, you know that 0 is not in B or D since any set of vectors containing the 0 vector can't be independent!

a1u1 + a2u2 + ... + anun + a1v1 + a2v2 + ...+ anvn = 0
a1u1 + a1v1 + a2u2 + a2v2...+ anun + anvn = 0
a1(u1 + v1) + a2(u2 + v2) + ... + an(un + vn) = 0

and then show that a1 = a2 =... an = 0?

Yes I am assuming that B and D have n vectors... but now that I think of it... I can't make that assumption, since we don't know the number of vectors in B and D
Now, you seem to be assuming that the coefficients of the u's and the v's are the same- there is no reason to assume that.

So speaking of what you had... since B is a basis with n vectors and D is a basis with m vectors... then the n vectors are a spanning set and linearly independent in B and the m vectors are a spanning set and linearly independent in D.
No, nothing is said about "spanning". The only information that you are given is that B and D are linearly independent sets of vectors in U and W respectively and that the only vector in both U and W is the 0 vector.

I'm starting to hate math :( ...
I don't think it is the math that is the problem! I think you need to read more carefully! Go back and read: (1) the definition of "independent" in your book, (2) the precise statement of the problem, (3) my responses to your questions. I've pretty much given you the complete solution in my 2nd response.
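As a quick numerical sanity check of the argument (the vectors below are my own illustrative choice, not from the thread): in R^4, take B and D to be bases of two subspaces that meet only in 0, and verify that the matrix whose columns are the vectors of B ∪ D has full rank, i.e. the union is linearly independent.

```python
import numpy as np

# U = span(B) with B = {e1, e2}; W = span(D) with D = {e3, e1 + e4}.
# Any vector of W has the form (b, 0, a, b), any vector of U has the
# form (x, y, 0, 0), so U ∩ W = {0} and the theorem applies.
B = [np.array([1., 0., 0., 0.]), np.array([0., 1., 0., 0.])]
D = [np.array([0., 0., 1., 0.]), np.array([1., 0., 0., 1.])]

M = np.column_stack(B + D)        # columns are the vectors of B ∪ D
rank = np.linalg.matrix_rank(M)
print(rank)                       # 4 -> the four vectors are independent
```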
 
  • #13
rad0786 said:
But... U ∩ W is all the vectors that are in U and all the vectors that are in W.
No, it's all the vectors in U that are also in W. You've described the union, not the intersection.

so the dim(U ∩ W) HAS to be ≤ 2, unless dimW =1, then dim(U ∩ W) HAS to be ≤ 1.
Both clauses are true (though unproved), but the qualifier "unless" is incorrect: x ≤ 1 does not violate x ≤ 2.
 
  • #14
--If U and W are subspaces of V and dimU =2, show that
either U⊆W or dim(U ∩ W) ≤ 1
If dim(U ∩ W) = 2 then, since U ∩ W is a subspace of the 2-dimensional space U, U ∩ W must be U!
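Written out, the dimension argument is short (my phrasing):

```latex
% U \cap W is a subspace of U, so \dim(U \cap W) \le \dim U = 2.
% Case 1: \dim(U \cap W) \le 1 -- nothing to prove.
% Case 2: \dim(U \cap W) = 2. A 2-dimensional subspace of the
%         2-dimensional space U must be all of U, so
\dim(U \cap W) = 2 \;\implies\; U \cap W = U \;\implies\; U \subseteq W.
```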
 

