Greger
If A and B are normed vector spaces over ℝ or ℂ, show that a sequence (a_n, b_n) in A×B converges to (a, b) in A×B only if a_n converges to a in A and b_n converges to b in B as n tends to infinity.
To me this statement sounds pretty intuitive but I have been having trouble actually proving it properly.
My first attempt was to assume that a_n converges to a in A and b_n converges to b in B; from there it was fairly easy to see that (a_n, b_n) converges to (a, b). Showing that the converse is true, however, seems to be a bit trickier.
To me it seems like if you have a sequence (a_n, b_n), where a_n is in A and b_n is in B, then it is 'obvious' that it converges to (a, b) only if the individual sequences converge in their respective spaces. After all, if a sequence didn't converge in its own space, how could the pair converge in the Cartesian product?
Does anyone have any ideas on finishing off this proof? (assuming that I started with the correct idea)
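For what it's worth, here is a sketch of the direction I'm stuck on, assuming A×B carries the max norm (the other standard product norms are equivalent, so the choice shouldn't matter):

```latex
Suppose $(a_n, b_n) \to (a, b)$ in $A \times B$ equipped with the norm
$\|(x, y)\| = \max\bigl(\|x\|_A, \|y\|_B\bigr)$. Then for every
$\varepsilon > 0$ there is an $N$ such that for all $n \ge N$,
\[
  \max\bigl(\|a_n - a\|_A,\; \|b_n - b\|_B\bigr)
  = \|(a_n, b_n) - (a, b)\| < \varepsilon .
\]
Since $\|a_n - a\|_A \le \max\bigl(\|a_n - a\|_A, \|b_n - b\|_B\bigr)
< \varepsilon$, and likewise for $\|b_n - b\|_B$, the same $N$ witnesses
$a_n \to a$ in $A$ and $b_n \to b$ in $B$.
\]
```

Is this the right way to finish it, or does the argument depend on which product norm is used?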