Linear dependency

Yankel

Active member
Jan 27, 2012
398
Let a, b, c be 3 vectors in R^3.

Let A be a 3x3 matrix with a, b, c being its columns. It is known that there exists x such that:

[tex]A^{17}\cdot \begin{pmatrix} 1\\ 2\\ x \end{pmatrix}= \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}[/tex]

Which statement is the correct one:

1) a,b and c are linearly independent
2) a,b and c are linearly dependent
3) transpose((1,2,x)) is a linear combination of a, b, c
4) the system:
[tex]A\cdot \begin{pmatrix} 1\\ 2\\ x \end{pmatrix}[/tex]
has a non trivial solution

The correct answer is (2), but I don't understand why it is correct...

thanks.
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
if a,b,c are linearly independent, then rank(A) = 3.

this means, in particular (by the rank-nullity theorem), that:

3 = dim(ker(A)) + rank(A) = dim(ker(A)) + 3, so:

dim(ker(A)) = 0, that is, the null space of A is {(0,0,0)}.

but if A^{17}(x,y,z) = (0,0,0), then:

A^{16}(x,y,z) = (0,0,0) (since A(A^{16}(x,y,z)) = A^{17}(x,y,z) = (0,0,0) and the null space of A is trivial), and thus:

A^{15}(x,y,z) = A^{14}(x,y,z) = ... = A(x,y,z) = (0,0,0),

so that (x,y,z) = (0,0,0).

since (1,2,x) ≠ (0,0,0) (no matter what we choose for x),

the columns of A cannot be linearly independent. this means (1) is not true.
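
here is a quick numerical illustration of that argument, not part of the proof itself; the matrix below is just an arbitrary example with independent columns, and numpy is assumed to be available:

[code]
import numpy as np

# assumption: any 3x3 matrix with linearly independent columns would do here
A = np.array([[2., 1., 0.],
              [0., 1., 3.],
              [1., 0., 1.]])

print(np.linalg.matrix_rank(A))     # 3: the columns are independent, ker(A) = {0}

A17 = np.linalg.matrix_power(A, 17)
print(np.linalg.matrix_rank(A17))   # still 3, so ker(A^17) = {0} as well

# hence A^17 v = (0,0,0) forces v = (0,0,0), and (1, 2, x) can never be that,
# no matter what x is -- which is exactly the contradiction used above
[/code]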

let's look at (3). suppose that:

$A = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1 \end{bmatrix}$

then for x = 0, we have:

A(1,2,0) = (0,0,0), so certainly A^{17}(1,2,0) = (0,0,0), but (1,2,0) is not in im(A), which is exactly the column space of A, so it is not a linear combination of a, b, and c.
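
a quick check of this example with numpy (just a restatement of the computation above, with x = 0):

[code]
import numpy as np

A = np.array([[0., 0., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])
v = np.array([1., 2., 0.])     # (1, 2, x) with x = 0

print(A @ v)                   # [0. 0. 0.], so A^17 v = (0,0,0) too

# is v in the column space of A?  appending v as an extra column raises the rank,
# so v is NOT a linear combination of the columns of A
print(np.linalg.matrix_rank(A))                        # 1
print(np.linalg.matrix_rank(np.column_stack([A, v])))  # 2
[/code]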

note that it IS possible to have SOME A such that:

A^{17}(1,2,x) = (0,0,0), with (1,2,x) in the column space of A. let:

$A = \begin{bmatrix}-2&1&0\\-4&2&0\\-2x&x&0 \end{bmatrix}$

clearly A(0,1,0) = (1,2,x) so that (1,2,x) = b (and is thus a linear combination of a,b, and c). but an easy calculation shows that:

A^2 = 0, for any choice of x, so that A^{17}(x,y,z) = A^{15}(A^2(x,y,z)) = A^{15}(0,0,0) = (0,0,0).

so (3) isn't ALWAYS true, but it MIGHT be true.
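
and the same kind of check for the second example; x = 5 is an arbitrary sample value, assuming numpy:

[code]
import numpy as np

x = 5.
A = np.array([[-2.,   1., 0.],
              [-4.,   2., 0.],
              [-2*x,  x,  0.]])

print(A @ np.array([0., 1., 0.]))   # [1. 2. 5.] = (1, 2, x): the second column of A
print(A @ A)                        # the zero matrix, so A^2 = 0

# since A^2 = 0, A^17 = A^15 A^2 = 0, so A^17 (1, 2, x) = (0, 0, 0) --
# and this time (1, 2, x) IS in the column space of A
[/code]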

you have some typo in (4), as you haven't defined a system of equations (no "equals" sign), so until you rectify this, i cannot give a proper argument. however, the argument for (1) shows that indeed, {a,b,c} cannot be linearly independent, so must be linearly dependent.
 

Yankel

Active member
Jan 27, 2012
398
thanks for your help

I never studied transformations (yet), so I am struggling with im() and ker()...

I do understand why the columns of A^17 are dependent; the only part I'm missing is why, if the columns of A^17 are dependent, the columns of A are also dependent...

I need an explanation that doesn't use linear transformations knowledge...thanks !!
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
fix a basis for R^n, and another one for R^m. then there is a unique matrix relative to those bases for any linear transformation T, and every such matrix corresponds to some linear transformation T.

loosely, matrices and linear transformations are "the same things", they're just "written" differently.

you have probably studied null spaces and column spaces belonging to a matrix $A$. these ARE the direct analogues (for a linear transformation $v \to Av$) of ker(T) and im(T) for a general linear transformation T. there's nothing mysterious about this:

kernels are what maps to 0.
images are the entirety of what gets mapped TO.

kernels (or null spaces) measure "how much shrinkage we get". images measure "how big what's left is". there's a natural trade-off here: bigger image means smaller kernel, and smaller image means bigger kernel. the way we keep score is called "dimension".

linear independence is related to kernels
spanning is related to images

what this means for matrices is:

a matrix is 1-1 if the nullspace is {0}, which means ALL its columns are linearly independent. for square matrices, this means the matrix is invertible.

a matrix is onto if it has as many independent columns as the dimension of its co-domain (target space). in particular if it is an mxn matrix with m < n, the columns will be linearly dependent.
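
to make those two points concrete, here is a small numpy illustration (the matrices are just examples made up for this post):

[code]
import numpy as np

# square matrix with rank 3: null space is {0}, all columns independent, A invertible
A = np.array([[1., 2., 0.],
              [0., 1., 0.],
              [0., 0., 3.]])
print(np.linalg.matrix_rank(A))   # 3

# a 2x4 matrix (m = 2 < n = 4): at most 2 of the 4 columns can be independent,
# so the columns are necessarily linearly dependent
B = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 3.]])
print(np.linalg.matrix_rank(B))   # 2 < 4
[/code]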

linear transformations (think: matrices with a "fancy name". this is not quite accurate, but close enough for visualization) map one vector space to another. they preserve "vector-space-ness": that is, they preserve sums:

T(u+v) = T(u) + T(v)

and scalar multiples:

T(cv) = c(T(v)).

since they are functions, they can't "enlarge" a vector space:

dim(T(V)) ≤ dim(V)

but "good ones" preserve dimension:

dim(T(V)) = dim(V) <--- these are the invertible ones.
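
a matrix map v -> Av obeys exactly those two rules; a quick check with made-up numbers (np.allclose just guards against floating-point round-off):

[code]
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])          # a map from R^2 to R^3
u = np.array([1., -2.])
v = np.array([0.5, 3.])
c = 7.0

print(np.allclose(A @ (u + v), A @ u + A @ v))  # True: T(u+v) = T(u) + T(v)
print(np.allclose(A @ (c * v), c * (A @ v)))    # True: T(cv) = c T(v)

# the image of R^2 under this map lives inside R^3, but its dimension is at most 2:
print(np.linalg.matrix_rank(A))                 # 2 = dim(T(V)) <= dim(V) = 2
[/code]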

********

for any linear transformation T:V-->W, the set ker(T) = {u in V: T(u) = 0} is a SUBSPACE of V. this boils down to the following facts:

1) if u is in ker(T) and v is in ker(T), then:

T(u+v) = T(u) + T(v) (since T is linear)
= 0 + 0 = 0 (since T(u) = 0, and T(v) = 0).

2) if u is in ker(T), so is cu:

T(cu) = c(T(u)) = c(0) = 0

3) 0 is always in ker(T):

T(0) = T(0+0) = T(0) + T(0)
0 = T(0) (subtracting T(0) from both sides).

if T:V-->W is a linear transformation, then the set:

im(T) = T(V) = {w in W: w = T(v) for some v in V} is a subspace of W.

1) suppose w,x are in im(T).

then w = T(u), x = T(v) for some u,v in V.

thus w+x = T(u) + T(v) = T(u+v), so w+x is in im(T).

2) if w is in im(T), so is cw:

since w = T(u), cw = c(T(u)) = T(cu), so cw is in im(T).

3) 0 is in im(T):

0 = T(0), and 0 is always in V, for any vector space.
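
both subspace arguments can be watched numerically; the sketch below assumes scipy is available for a null space basis, and the matrix is just an arbitrary rank-2 example:

[code]
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])      # rank 2, so ker(A) is 1-dimensional

N = null_space(A)                 # columns form a basis of ker(A)
u = N[:, 0]
# sums and scalar multiples of kernel vectors stay in the kernel:
print(np.allclose(A @ (u + 2*u), 0))             # True

# the image is closed under addition in the same way: Ax + Ay = A(x + y)
x, y = np.array([1., 0., 2.]), np.array([0., 3., 1.])
print(np.allclose(A @ x + A @ y, A @ (x + y)))   # True
[/code]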

*********
now, basis vectors are useful: they let us use COORDINATES (numbers) to represent vectors. but bases aren't UNIQUE: we can have several different coordinate systems on the same space. so it's better not to get "too attached" to any particular basis. dimension is one of those things that stays the same no matter which basis we use, so theorems that say something about dimension are more POWERFUL than theorems which rely on numerical calculation.