Vector Spaces - The Exchange Theorem - Cooperstein Theorem 1.16

In summary, a finite set of vectors $\{v_1, \dots, v_k\}$ is linearly independent if and only if the only scalars satisfying $c_1v_1 + \cdots + c_kv_k = 0_V$ are $c_1 = \cdots = c_k = 0$; equivalently, no choice of scalars that are not all zero produces the zero vector as a linear combination.
  • #1
Math Amateur
I am reading Bruce Cooperstein's book: Advanced Linear Algebra ... ...

I am focused on Section 1.6 Bases and Finite-Dimensional Vector Spaces ...

I need help with the proof of Theorem 1.16 ...

Theorem 1.16 and its proof reads as follows:
[Attachment 5134: statement and proof of Theorem 1.16]
Question 1


In the second paragraph of the above proof we read the following:

"... ... Since \(\displaystyle y_1 \neq 0\) it follows ... ... Can someone please explain exactly why \(\displaystyle y_1 \neq 0\)?Question 2

In the second paragraph of the above proof we read the following:

"... ... By reordering the vectors of \(\displaystyle X\), we can assume that \(\displaystyle x_n\) is a linear combination of \(\displaystyle Z_1 = ( y_1, x_1, \ ... \ ... \ , x_{n-1}\). ... ... "

Can someone explain (in some detail) exactly why \(\displaystyle x_n\) is a linear combination of \(\displaystyle Z_1 = ( y_1, x_1, \dots , x_{n-1}) \)?
Hope someone can help ... ...

Help will be much appreciated ... ...

Peter
NOTE: The text from Cooperstein displayed above mentions Theorem 1.14 ... so I am providing the statement of Theorem 1.14 as follows:

[Attachment 5135: statement of Theorem 1.14]
 
  • #2
Hi Peter,

Peter said:
Can someone please explain exactly why \(\displaystyle y_1 \neq 0\)?

In the proof, we are assuming that $Y$ is linearly independent, so it can't contain the zero vector.

Peter said:
Can someone explain (in some detail) exactly why \(\displaystyle x_n\) is a linear combination of \(\displaystyle Z_1 = ( y_1, x_1, \dots , x_{n-1})\)?

Once you have noted that $y_{1}\in \text{Span}(X)$, you have that $y_{1}=\displaystyle\sum_{i=1}^{n}c_{i}x_{i}$ for some $c_{i}\in K$ (where $K$ is the field underlying your vector space $V$).
Now, since $y_{1}\neq 0$, there is at least one nonzero coefficient in the above sum, say $c_{j}$.
Can you then see how $x_{j}$ is a linear combination of $y_{1}$ and the $x_{i}$ with $i\neq j$?
If you can do that, the rest is just reordering the indices.
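
To make this concrete, here is a minimal sketch in Python with sympy (the particular vectors and coefficients are my own illustrative choices, not from Cooperstein): once a coefficient $c_j$ in the expansion of $y_1$ is nonzero, we can solve for $x_j$ in terms of $y_1$ and the remaining $x_i$.

```python
from sympy import Matrix

# Hypothetical spanning vectors x1, x2, x3 for Q^3 (illustrative only)
x1 = Matrix([1, 0, 0])
x2 = Matrix([1, 1, 0])
x3 = Matrix([1, 1, 1])

# y1 lies in Span(X): y1 = c1*x1 + c2*x2 + c3*x3
c1, c2, c3 = 2, 3, -1          # c2 != 0, so we can solve for x2
y1 = c1*x1 + c2*x2 + c3*x3

# Since c2 != 0: x2 = (y1 - c1*x1 - c3*x3) / c2,
# i.e. x2 is a linear combination of (y1, x1, x3)
x2_recovered = (y1 - c1*x1 - c3*x3) / c2
assert x2_recovered == x2
print(x2_recovered.T)          # Matrix([[1, 1, 0]])
```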
 
  • #3
Peter said:
Can someone please explain exactly why \(\displaystyle y_1 \neq 0\)?

By assumption you start with a linearly independent set, therefore no vector in it can be zero. (Or show that if you have a list of vectors and the zero vector is among them, then that list is automatically linearly dependent.)
 
  • #4
It is crucial, when talking about bases, to thoroughly wrap one's mind around the concept of linear independence, and linear dependence.

A finite set $S = \{v_1,v_2,\dots,v_k\} \subset V$ of vectors in a vector space $V$ over a field $F$ is said to be linearly dependent if there exist scalars $c_1,c_2,\dots,c_k \in F$ not all zero such that:

$c_1v_1 + c_2v_2 + \cdots + c_kv_k = 0_V$

In particular, if $c_j \neq 0_F$, then:

$v_j = -\dfrac{c_1}{c_j}v_1 - \cdots -\dfrac{c_{j-1}}{c_j}v_{j-1} - \dfrac{c_{j+1}}{c_j}v_{j+1}-\cdots-\dfrac{c_k}{c_j}v_k$

that is, $v_j \in \text{span}(S - \{v_j\})$, so we have some "redundancy" in $S$ as a spanning set of $\text{span}(S)$.

As an easy consequence, we can prove, straight from the definition:

for any finite set $S \subset V$ such that $0_V \in S$, $S$ is a linearly dependent set.

Proof: re-label the set $S$ like so: $\{v_1,v_2,\dots,v_{k-1},0_V\}$. Choose $\{c_1,c_2,\dots,c_{k-1},c_k\}$ in the following way:

$c_1 = c_2 = \cdots = c_{k-1} = 0_F$, and $c_k = 1_F$.

Since $F$ is a field, $0_F \neq 1_F$, and our set of $c$'s is not all zeros (it has one non-zero element).

Then $c_1v_1 + \cdots + c_{k-1}v_{k-1} + c_k 0_V = 0_Fv_1 + \cdots + 0_Fv_{k-1} + 1_F0_V$

$= 0_V +\cdots + 0_V + 1_F0_V = 1_F0_V = 0_V$.

Linear independence is the negation of linear dependence. Since linear dependence asserts the existence of scalars, not all zero, producing a zero linear combination, linear independence asserts that no such scalars exist: the only linear combination equal to $0_V$ is the one with every coefficient zero. We can restate this as:

A finite set $S = \{v_1,\dots,v_k\} \subset V$ is linearly independent if:

$c_1v_1 + \cdots + c_kv_k = 0_V \implies c_1 = \cdots = c_k = 0_F$.

If a (finite) set $S$ is linearly independent, *all* the elements of $S$ are necessary to generate $\text{span}(S)$. For if $v_j \in S$ could be realized as a linear combination of the other elements of $S$, say:

$v_j = d_1v_1 + \cdots + d_{j-1}v_{j-1} + d_{j+1}v_{j+1} + \cdots + d_kv_k$

then we would have:

$d_1v_1 + \cdots + d_{j-1}v_{j-1} - 1_Fv_j + d_{j+1}v_{j+1} + \cdots + d_kv_k = 0_V$, and the set:

$\{d_1,\dots,d_{j-1},-1_F,d_{j+1},\dots,d_k\}$ does not consist entirely of zero elements of $F$ ($-1_F$ cannot be zero, do you see why?), contradicting the linear independence of $S$.

These ideas can be extended to infinite sets, as well, if we only allow *finite* linear combinations. But there are certain subtleties involved which are better left alone, for now (for finite-dimensional vector spaces, you will not need to examine infinite generating sets).
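
As a computational companion to the above (a sketch in Python with sympy; the test vectors are my own examples, not from the text): linear independence of a finite list can be checked by forming the matrix whose columns are the vectors. The list is independent exactly when the rank equals the number of vectors, and any list containing $0_V$ fails the test.

```python
from sympy import Matrix

def is_independent(vectors):
    """A finite list is linearly independent iff the only solution of
    c1*v1 + ... + ck*vk = 0 is c1 = ... = ck = 0, i.e. the matrix
    with the vectors as columns has rank equal to the number of vectors."""
    M = Matrix.hstack(*vectors)
    return M.rank() == len(vectors)

v1 = Matrix([1, 0, 0])
v2 = Matrix([0, 1, 0])
zero = Matrix([0, 0, 0])

print(is_independent([v1, v2]))           # True
print(is_independent([v1, v2, zero]))     # False: a list containing 0_V is dependent
print(is_independent([v1, v2, v1 + v2]))  # False: v1 + v2 is redundant
```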
 
  • #5
ThePerfectHacker said:
By assumption you start with a linearly independent set, therefore no vector in it can be zero. (Or show that if you have a list of vectors and the zero vector is among them, then that list is automatically linearly dependent.)
Thanks Fallen Angel and ThePerfectHacker ... appreciate the help ...

Peter

- - - Updated - - -

Deveno said:
It is crucial, when talking about bases, to thoroughly wrap one's mind around the concept of linear independence, and linear dependence. ...
Thank you, Deveno, for a particularly helpful post ... really appreciate your help ...

Peter
 
  • #6

There is another point that I am not quite clear on regarding Cooperstein's proof ... so for convenience of MHB readers I will provide the text of the Exchange Theorem and its proof again ...

https://www.physicsforums.com/attachments/5138

In the above text, we read the following:

"... ... Since we are assuming that \(\displaystyle x_n \in \text{Span} (Z_1)\), it follows that \(\displaystyle \text{Span} (Z_1) = \text{Span} (X) = V\) ... ... Can someone please explain exactly why \(\displaystyle x_n \in \text{Span} (Z_1)\) implies that \(\displaystyle \text{Span} (Z_1) = \text{Span} (X) = V\)?

Help will be appreciated ... ...

Peter
 
  • #7
Peter said:
Can someone please explain exactly why \(\displaystyle x_n \in \text{Span} (Z_1)\) implies that \(\displaystyle \text{Span} (Z_1) = \text{Span} (X) = V\)?

If $\{y_1,x_1,\dots,x_n\}$ is linearly dependent, there are scalars $c_j$, $j = 1,2,\dots,n,n+1$, NOT ALL ZERO, such that:

$c_1x_1 + c_2x_2 + \cdots + c_nx_n + c_{n+1}y_1 = 0$

Now, if $c_j = 0$ for all $j = 1,2,\dots,n$ (only the first $n$), we are left with:

$c_{n+1}y_1 = 0$, and since our $c_j$ were assumed not ALL zero, we would have $c_{n+1} \neq 0$, forcing $y_1 = 0$.

But $0$ cannot be part of any linearly independent set (see my previous post), so this possibility violates our assumption that $Y = \{y_1,\dots,y_{n+1}\}$ is linearly independent.

So for $Y$ to be linearly independent, we have to have one of the "earlier" $c_j$ be non-zero. It doesn't really matter which one; for ease in writing the subscripts, we assume without loss of generality it is $c_n$ (if it isn't actually $c_n$, we can just re-arrange the $x_i$).

But then this gives:

$c_nx_n = -c_1x_1 - \cdots - c_{n-1}x_{n-1} - c_{n+1}y_1$

and since $c_n \neq 0$, we can divide both sides by it, to explicitly get $x_n$ as a linear combination of $\{y_1,x_1,\dots,x_{n-1}\}$.

Thus any element of $\text{span}(X)$, that is:

$a_1x_1 + \cdots + a_nx_n$ is equal to:

$a_1x_1 + \cdots + a_{n-1}x_{n-1} + a_n\left( -\dfrac{c_1}{c_n}x_1 - \cdots - \dfrac{c_{n-1}}{c_n}x_{n-1} -\dfrac{ c_{n+1}}{c_n}y_1\right)$

$= \left(a_1 - \dfrac{a_nc_1}{c_n}\right)x_1 + \cdots + \left(a_{n-1} - \dfrac{a_nc_{n-1}}{c_n}\right)x_{n-1} - \dfrac{a_nc_{n+1}}{c_n}y_1$,

which is an element of $\text{span}(Z_1)$, so $\text{span}(X) \subseteq \text{span}(Z_1)$.

The other inclusion is obvious, since $\text{span}(X) = V$, and any linear combination of elements of $V$ is contained in $V$, by the closure axioms of a vector space.
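
For readers who like to experiment, here is a small sketch (Python with sympy; the spanning vectors and coefficients are illustrative choices of mine) that carries out the exchange step and confirms $\text{span}(Z_1) = \text{span}(X)$ by comparing ranks.

```python
from sympy import Matrix

# Illustrative spanning list X = (x1, x2, x3) for V = Q^3 (my own example)
x1 = Matrix([1, 0, 0])
x2 = Matrix([0, 1, 0])
x3 = Matrix([0, 0, 1])
X = [x1, x2, x3]

# y1 in Span(X) with a nonzero coefficient on x3, so x3 can be exchanged out
y1 = 2*x1 - x2 + 5*x3

# Z1 = (y1, x1, x2): y1 swapped in for the vector with the nonzero coefficient
Z1 = [y1, x1, x2]

def same_span(A, B):
    """Two lists span the same subspace iff rank(A) = rank(B) = rank(A | B)."""
    MA, MB = Matrix.hstack(*A), Matrix.hstack(*B)
    r = Matrix.hstack(MA, MB).rank()
    return MA.rank() == r and MB.rank() == r

print(same_span(X, Z1))  # True: Span(Z1) = Span(X) = V
```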
 
  • #8
Deveno said:
If $\{y_1,x_1,\dots,x_n\}$ is linearly dependent, there are scalars $c_j$, $j = 1,2,\dots,n,n+1$, NOT ALL ZERO, such that ... ...
Thanks for the help Deveno ... much appreciated ...

... working through the detail of your post now ...

Thanks again,

Peter
 

Related to Vector Spaces - The Exchange Theorem - Cooperstein Theorem 1.16

1. What is a vector space?

A vector space is a mathematical structure consisting of a set of objects called vectors, which can be added together and multiplied by scalars (elements of a field). These operations must satisfy certain axioms, such as closure under addition and scalar multiplication, and the existence of a zero vector and additive inverses.

2. What is the Exchange Theorem in vector spaces?

The Exchange Theorem, also known as the Steinitz Exchange Lemma, is a fundamental result in linear algebra. In one common form it states that if a vector space is spanned by $n$ vectors, then every linearly independent set in that space contains at most $n$ vectors, and vectors from the independent set can be "exchanged" into the spanning set one at a time without losing the spanning property. An immediate consequence is that any two bases of a finite-dimensional vector space have the same number of elements, so the dimension of a vector space is well-defined and independent of the choice of basis.
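
To see the consequence in action, here is a minimal sketch in Python with sympy (the vectors are arbitrary examples of mine): since $\mathbb{Q}^2$ is spanned by two vectors, any three vectors in it must be linearly dependent.

```python
from sympy import Matrix

# Q^2 is spanned by 2 vectors, so any 3 vectors in it must be dependent
vectors = [Matrix([1, 2]), Matrix([3, -1]), Matrix([0, 4])]  # arbitrary examples
M = Matrix.hstack(*vectors)
print(M.rank() < len(vectors))  # True: at most 2 of them are independent
```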

3. What is the Cooperstein Theorem 1.16?

Theorem 1.16 in Cooperstein's Advanced Linear Algebra is his statement of the Exchange Theorem: if a sequence $X = (x_1, \dots, x_n)$ spans a vector space $V$ and $Y$ is a linearly independent sequence in $V$, then $Y$ contains at most $n$ vectors. Consequently, the dimension of a finite-dimensional vector space can be characterized as the maximum number of linearly independent vectors in the space, which is also the minimum number of vectors needed to span it.

4. How can the Exchange Theorem and Cooperstein Theorem 1.16 be applied?

These theorems are essential tools in linear algebra and can be used to prove many results, such as the existence of a basis for any finite-dimensional vector space and the uniqueness of the dimension of a vector space. They are also used in many applications, such as in optimization problems and in the study of systems of linear equations.

5. Are there any other important theorems related to vector spaces?

Yes, there are many other important theorems in linear algebra related to vector spaces, such as the Rank-Nullity Theorem, which relates the dimensions of the kernel (null space) and image (range) of a linear transformation, and the basis theorems for finite-dimensional spaces, which state that every finite-dimensional vector space has a basis and that any two bases have the same number of elements.
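
As an illustration of the Rank-Nullity Theorem mentioned above, here is a minimal sketch in Python with sympy (the matrix is an arbitrary example of my own):

```python
from sympy import Matrix

# A linear map Q^4 -> Q^3 given by an arbitrary example matrix
A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])   # third row = first row + second row

rank = A.rank()               # dimension of the image (column space)
nullity = len(A.nullspace())  # dimension of the kernel (null space)

# Rank-Nullity: rank + nullity = dimension of the domain (number of columns)
assert rank + nullity == A.cols
print(rank, nullity)          # 2 2
```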
