# Difference Between Lines and Vectors (Linear Combination Problem)

#### esis

##### New member
Hello Everyone!

I am working my way through Gilbert Strang's book on linear algebra. I came across a set of solutions here for the problems whose answers are not listed in the book. However, as I was working on the following question:

I became confused by the solution provided in the link above.

I solved the problem by setting c = d, which results in a line that passes right through the origin (0,0) and straight through the first and third quadrants (i.e. (-)(-) and (+)(+)).

However, in the list of solution above, they state that:

"All vectors cv + dw are on the line passing through (0,0)...".

I am confused by this since, for example, the vector (1/3)v + w, or any combination of c and d where c does not equal d, would not lie on this line. Indeed, I can imagine an unlimited number of lines that pass through the origin (0,0), but which are not the line where c = d.

I suppose I am misunderstanding the difference between lines and vectors somehow?

Thank you!

#### Evgeny.Makarov

##### Well-known member
MHB Math Scholar
I solved the problem by setting c = d
I am not sure what $d$ is. It does not appear in your post up to this point or in the solution to problem 17 in the link.

which results in a line that passes right through the origin (0,0) and straight through the first and third quadrants (i.e. (-)(-) and (+)(+)).
This is not a very good description of a line. There is an infinite number of lines passing through (0, 0) and the first and third quadrants. The description in the solution is better: "the line passing through (0, 0) and $u=(1/2)v+(1/2)w$". Though it relies on the identification of points and vectors (by having vectors start at the origin), which has to be posited in advance. After all, vectors and points are not exactly the same thing. I would say, "the line that passes through (0, 0) and (1, 1)".

However, in the list of solution above, they state that:

"All vectors cv + dw are on the line passing through (0,0)...".
I don't see this phrase in the solution to problem 17. The solution mentions $cv + cw$ where both coefficients are the same.

#### Deveno

##### Well-known member
MHB Math Scholar
Note that by the rules of vector spaces, we have:

$cu + cw = c(u + w)$.

Ask yourself, what does the set:

$\{c(u+w): c \in \Bbb R\}$ look like?

(Two "obvious" values for $c$ to consider first are $c = 0$ and $c = 1$. What about $c = -1$? What relationship do (for a particular real number $c$) $c(u+w)$ and $(-c)(u+w)$ have, geometrically?)
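A small numeric sketch of the hint above, using sample vectors $u = (1, 2)$ and $w = (4, 2)$ (borrowed from later in this thread; the book's vectors may differ):

```python
# Sketch of the set {c(u+w) : c in R} for sample vectors u and w.
u = (1.0, 2.0)
w = (4.0, 2.0)
s = (u[0] + w[0], u[1] + w[1])  # u + w = (5.0, 4.0)

def scale(c, v):
    """Scalar multiple c*v of a 2-D vector."""
    return (c * v[0], c * v[1])

# The two "obvious" values, plus c = -1:
points = [scale(c, s) for c in (0, 1, -1)]
# c = 0 gives the origin, c = 1 gives u + w itself,
# c = -1 gives the equally long point in the opposite direction.
print(points)  # [(0.0, 0.0), (5.0, 4.0), (-5.0, -4.0)]
```

All of these points lie on the single line through the origin in the direction of $u + w$.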

#### esis

##### New member
Aha ok! I apologize, I'll try to be more clear. I'm a bit new to writing about math...

By c and d, I meant scalars of vectors v and w:

I didn't realize that the full explanation of the line in the solution set was:

"All vectors cv + dw are on the line passing through (0,0) and u = 1/2v + 1/2w..." (i)

And that "All vectors cv + dw" meant all linear combinations of v and w. Now that I realize this, it makes much more sense. I looked at vectors v and w and thought, "but hey, they don't lie on the line!".

Later, as I was working through questions 20-22 (same concepts, except in R3), I realized that the scalars only determine the magnitude of the resulting cv + dw linear combination vector.

My mistake was in thinking that unequal values for c and d would change the direction of the line cv + dw. I think I might have confused the concept of coefficients in a line, which change the direction of lines...

e.g. y = -1x is flatter than y = -3x.

... with the effect that scalars have on the magnitude of vectors.

But thank you for your reply, it helped me understand the difference between scalars and coefficients! (assuming my understanding is now correct)

EDIT: Thanks Deveno! (I was working on my post while you posted) Your hint shows the problem way better than my post, I'll try working through your examples!

#### esis

##### New member
Oh and Deveno. I tried your example and got the following results for u = (1, 2) and w = (4, 2):

So am I correct in realizing that the line when c is positive is valid only for c >= 0? And vice versa for negative values of c? Are they each other's... inverse?

#### Deveno

##### Well-known member
MHB Math Scholar
Your drawing is not quite correct, but yes: $c(u+w)$ and $(-c)(u+w)$ are of equal length, and in opposite directions. If our vector $u+w$ was the resultant of two forces, then $c(u+w)$ and $(-c)(u+w)$ would be "equally balanced forces" resulting in equilibrium, that is, no change at all (0 net force applied).
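The "equally balanced forces" picture can be checked numerically; the resultant $(5, 4)$ below is a hypothetical sample, not from the book:

```python
# Sketch: c(u+w) and (-c)(u+w) sum to the zero vector, i.e. equilibrium.
f = (5.0, 4.0)   # the resultant u + w, say
c = 2.0
pos = (c * f[0], c * f[1])        # c(u+w)
neg = (-c * f[0], -c * f[1])      # (-c)(u+w)
net = (pos[0] + neg[0], pos[1] + neg[1])
print(net)  # (0.0, 0.0): zero net force
```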

To identify (geometric) vectors with "points in space", we have to put all their "tails" at the origin. In other words, we identify the vector PQ, which "starts at P" and "goes to Q", where

$P = (x,y)$
$Q = (x',y')$,

with the point $(x'-x,y'-y)$ (sometimes written $Q - P$).
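A quick sketch of this identification, with coordinates chosen purely for illustration:

```python
# Identify the geometric vector PQ with the point Q - P.
P = (2.0, 3.0)
Q = (5.0, 7.0)
PQ = (Q[0] - P[0], Q[1] - P[1])  # "tail at P, head at Q", tail moved to the origin
print(PQ)  # (3.0, 4.0)
```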

This underscores a dualism which pervades linear algebra: on the one hand, vector spaces just "are", and vectors don't care which coordinate system you use to describe them. On the other, once you've chosen a coordinate system, everything you'd like to know about vectors can be described in terms of that coordinate system. This lets us use ARITHMETIC to do GEOMETRY.

Put another way, a plane is an object, whole and indivisible. IF we distinguish some point, and call it the origin, the various ways we have of locating a point's position relative to that origin essentially constitute a "coordinate system". It turns out that any such coordinate system has to provide TWO independent pieces of information encoded by scalars (numbers).

For example: polar coordinates first specify which line through the origin we are on (this is theta, the angle of inclination), and then $r$ tells us how far along that line we are. Rectangular coordinates tell us "how far over (left/right)" and "how far up/down".
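Both systems encode the same point with two scalars; a round trip between them as a sketch (the point $(3, 4)$ is just an example):

```python
import math

x, y = 3.0, 4.0                      # rectangular: over and up
r = math.hypot(x, y)                 # polar: how far along the line (r = 5.0)
theta = math.atan2(y, x)             # polar: which line through the origin

# Converting back recovers the rectangular pair (up to rounding).
x2, y2 = r * math.cos(theta), r * math.sin(theta)
print(round(x2, 9), round(y2, 9))  # 3.0 4.0
```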

It turns out that the rectangular point of view is more profitable in generalizing to more than 2 dimensions, because of the symmetric roles each dimension plays (we can, for example, "switch the roles" of $x$ and $y$, and still get a plane which behaves the same way).

Note that for the linear combination $cu + dw$ to lie on the line:

$L = \{(t,t): t \in \Bbb R\}$

(this is also the line $y = x$) we must have $c = d$.
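As a sketch of this condition, take the standard basis vectors for $u$ and $w$ (an assumption for illustration; the book's vectors may differ), so that $cu + dw = (c, d)$:

```python
# c*(1,0) + d*(0,1) = (c, d) lies on L = {(t, t)} exactly when c = d.
def combo(c, d):
    return (c * 1.0 + d * 0.0, c * 0.0 + d * 1.0)

def on_line(p):
    return p[0] == p[1]   # membership in L, i.e. the line y = x

print(on_line(combo(2.0, 2.0)))  # True:  c = d
print(on_line(combo(1/3, 1.0)))  # False: c != d
```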

I would shy away from terms like "positive" and "negative" when talking about lines: lines don't have "directionality", only vectors do. The fact that in the picture you originally posted $u,w$ lie in the first quadrant is somewhat misleading, we might have chosen $u,w$ to be ANY two points, and the "quadrant" classification is not as useful as you might think.

#### esis

##### New member
Fascinating! This really opens up a new way of thinking for me.

With my background in Sociology, I am usually more inclined to think of theoretical concepts in relation to concrete empirical observations, in a continuous hermeneutic process of theorizing and observation. I think that's why social scientists love statistical methods, with their hypothesis tests and focus on the variation found in empirical observation. Keep. Your. Feet. On. The. Ground!

By my understanding of your description of the problem, we are instead dealing with pure ideas, which do not need to be rooted in empirical measurements (although I'm sure they could be! I bet I'll learn about that soon...). In other words, I don't need to think in terms of (-)(-) and (+)(+) quadrants.

I love that! I think one of the major issues for social science is that in our preoccupation with empirical observation and data, we've lost sight of the creative aspect of just thinking about abstract ideas, rooted in logic, to see where they take us. Not that empirical observation isn't a vital component of producing truth, but I think the path forward isn't necessarily to build on canonical ideas which, to be honest, social scientists really don't have the methods or data to adequately prove at the moment.

Anyway, excuse my free associations. The point is that this got me thinking about vectors as associated with planes, lines, and other shapes rather than with the empirically rooted coordinate systems I'm used to using in statistics, and it made me aware of new assumptions to make while thinking about vectors. Very exciting! Thanks!

#### Deveno

##### Well-known member
MHB Math Scholar
Yes, there is and always will be a certain dynamic tension between "internal" structure dictated by logic, and "external" structure determined from empirical observation.

One thing should be clear: our models will carry with them the limits of our own imaginations, while the "outside" world will continue what it's doing, whether we understand it or not.

Assumption of linearity is, in fact, a HUGE assumption, and one that doesn't always work very well. However, for a "stable" system, it is often profitable to assume it is "locally linear", and this tends to work out very well.

Vectors have two salient features:

1. We can add them together
2. We can stretch or shrink them by a "scaling factor"

We expect these features to obey certain "expected" rules (such as $u+v = v+u$) so that "addition" and "scaling" conform to what we "intuitively" expect. The "vector space axioms" are explicit codifications of such expectations.
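A numeric spot-check (not a proof) of two such expected rules, with sample vectors chosen for illustration:

```python
u = (1.0, 2.0)
v = (4.0, 2.0)

def add(a, b):
    """Componentwise vector addition."""
    return (a[0] + b[0], a[1] + b[1])

def scale(c, a):
    """Stretch/shrink a vector by the scaling factor c."""
    return (c * a[0], c * a[1])

assert add(u, v) == add(v, u)                                      # u + v = v + u
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # c(u+v) = cu + cv
print("both checks pass")
```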

If empirical observed quantities appear to have these features, an attempt at "linearization" is warranted, if for nothing else, to see what empirical results the internal logic may lead us to, which we can then test for agreement.

I think your insight that geometry might be gainfully employed to give "shape" to raw statistical data is a healthy one. When collecting social science statistics, such as population, fertility, territorial growth, etc., it seems to me VERY valuable to know which of these statistics are linearly independent of the others. In linear algebra, such a process is called "a decomposition", and it allows us to study a space in terms of its subspaces. Smaller chunks of information are almost always easier to digest.