Proving dependent columns when the rows are dependent

In summary, the conversation discusses a problem that asks: given that ##(a,b)## is a multiple of ##(c,d)## with ##abcd \neq 0##, show that ##(a,c)## is a multiple of ##(b,d)##. The original attempt makes two assumptions, but the second assumption is exactly the fact to be proved. Instead, only the first assumption, which is given in the problem, should be used: from it, the first column of the matrix ##\begin{pmatrix} a & b\\ c & d\end{pmatrix}## can be shown to be a constant multiple of the second column, which establishes the claim. The replies also stress the value of looking for counterexamples and of tracking which quantities are nonzero, so as to avoid unjustified assumptions.
  • #1
kostoglotov
I feel like I almost understand the solution I've come up with, but a step in the logic is missing. I'll post the question and my solution in LaTeX form.

A paraphrase of the text question in LaTeX is below. The question can be seen in its entirety via this imgur link: http://i.imgur.com/41fvDRN.jpg

[tex]
\text{If } \begin{pmatrix}
a\\
b
\end{pmatrix} \text{ is a multiple of } \begin{pmatrix}
c\\
d
\end{pmatrix} \text{ with } abcd \neq 0, \text{ show that } \begin{pmatrix}
a\\
c
\end{pmatrix} \text{ is a multiple of } \begin{pmatrix}
b\\
d
\end{pmatrix}
[/tex]
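
To fix ideas, here is one concrete instance of the statement (hypothetical numbers, just for illustration):
$$\begin{pmatrix} a\\ b \end{pmatrix} = \begin{pmatrix} 2\\ 4 \end{pmatrix} = 2\begin{pmatrix} 1\\ 2 \end{pmatrix} = 2\begin{pmatrix} c\\ d \end{pmatrix}, \qquad \begin{pmatrix} a\\ c \end{pmatrix} = \begin{pmatrix} 2\\ 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 4\\ 2 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} b\\ d \end{pmatrix}$$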

My solution so far

[tex]
\text{Assume } \lambda \begin{pmatrix}
c\\
d
\end{pmatrix} = \begin{pmatrix}
a\\
b
\end{pmatrix} \rightarrow \begin{matrix}
a = \lambda c\\
b = \lambda d
\end{matrix}
[/tex]

[tex]
\text{Now assume } \gamma \begin{pmatrix}
b\\
d
\end{pmatrix} = \begin{pmatrix}
a\\
c
\end{pmatrix} \rightarrow \begin{matrix}
a = \gamma b\\
c = \gamma d
\end{matrix}
[/tex]

So I'm making two assumptions

Let's take the assumptions and put them into a system of four equations

[tex]1: \ a = \lambda c \ \ \ \ 2: \ b = \lambda d \\ 3: \ a = \gamma b \ \ \ \ 4: \ c = \gamma d[/tex]

Now, if we sub 3 into 1 to get A, sub 2 into A to get B, and then sub 4 into B to get C:

[tex]C \rightarrow \lambda \gamma d = \lambda \gamma d[/tex]

Similarly, if we sub 2 into 3 to get A, 1 into A to get B, and 4 into B to get C:

[tex]C \rightarrow \lambda \gamma d = \lambda \gamma d[/tex]

I want to stop trying all the possible ways to get C now, because I want to look for a generalized way to show that they will all end up at the same point.

But more than this: what is the step of logic that connects the final equation C to proving the first two assumptions? I feel like this should prove the assumptions, but I don't know exactly how, or exactly how to express it.

Thanks :)
 
  • #2
Let [itex]\gamma = \frac{c}{d}[/itex] (well-defined, since ##d \neq 0##).
Then [itex]\frac{a}{b} = \frac{\lambda c}{\lambda d} = \frac{c}{d}[/itex].
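
Spelling out how this finishes the proof (a short completion of the argument above, using the fact that ##b## and ##d## are nonzero because ##abcd \neq 0##):
$$\frac{a}{b} = \frac{c}{d} = \gamma \quad\Longrightarrow\quad a = \gamma b \ \text{ and } \ c = \gamma d \quad\Longrightarrow\quad \begin{pmatrix} a\\ c \end{pmatrix} = \gamma \begin{pmatrix} b\\ d \end{pmatrix}$$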
 
  • #3
kostoglotov said:
So I'm making two assumptions
You have to prove those assumptions.
And it is hard to prove things that are wrong...
Actually, there are just two special cases where this assumption holds.

If you think you need "assumptions", look for counterexamples first. They are easy to find here, and they save a lot of work.
 
  • #4
kostoglotov said:
My solution so far

[tex]
\text{Assume } \lambda \begin{pmatrix}
c\\
d
\end{pmatrix} = \begin{pmatrix}
a\\
b
\end{pmatrix} \rightarrow \begin{matrix}
a = \lambda c\\
b = \lambda d
\end{matrix}
[/tex]

[tex]
\text{Now assume } \gamma \begin{pmatrix}
b\\
d
\end{pmatrix} = \begin{pmatrix}
a\\
c
\end{pmatrix} \rightarrow \begin{matrix}
a = \gamma b\\
c = \gamma d
\end{matrix}
[/tex]
With this second assumption, you are assuming the fact that you are supposed to prove! That won't get you anywhere. Instead, work with the first assumption, which is given: ##(a,b)## is a multiple of ##(c,d)##. So that means there is some constant ##\lambda## such that ##a = \lambda c## and ##b = \lambda d##.

By the way, note that the condition ##abcd \neq 0## means that all four of ##a,b,c,d## are nonzero, and therefore ##\lambda## is also nonzero.

Now we can rewrite the matrix as
$$\begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix} =
\begin{pmatrix}
\lambda c & \lambda d \\
c & d \\
\end{pmatrix}$$
From this, you can easily see that the first column is a constant multiple of the second column. (What is the constant?)
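
For completeness, here is the constant read off explicitly (this just fills in the step left as a question above; ##c/d## is defined and nonzero because ##abcd \neq 0##):
$$\begin{pmatrix} a\\ c \end{pmatrix} = \begin{pmatrix} \lambda c\\ c \end{pmatrix} = \frac{c}{d}\begin{pmatrix} \lambda d\\ d \end{pmatrix} = \frac{c}{d}\begin{pmatrix} b\\ d \end{pmatrix}$$
This agrees with the value ##\gamma = \frac{c}{d}## given in post #2.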
 
  • #5
jbunniii said:
With this second assumption, you are assuming the fact that you are supposed to prove! That won't get you anywhere. Instead, work with the first assumption, which is given: ##(a,b)## is a multiple of ##(c,d)##.

Nice one! Very clear explanation, thanks :)
 

Related to Proving dependent columns when the rows are dependent

What does it mean for columns to be dependent when the rows are dependent?

Columns are (linearly) dependent when at least one column is a linear combination of the others; for two columns, this means one is a scalar multiple of the other. The situation in this thread is the ##2 \times 2## case: because row rank equals column rank, a ##2 \times 2## matrix with dependent rows necessarily has dependent columns as well.

How can you prove that columns are dependent when the rows are dependent?

You can use a method such as Gaussian elimination (to compute the rank) or, for a square matrix, the determinant: the determinant is zero exactly when the rows are dependent, and also exactly when the columns are dependent, since row rank equals column rank.
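
As a sketch of the determinant criterion for the ##2 \times 2## case from this thread, where the rows satisfy ##(a,b) = \lambda(c,d)##:
$$\det\begin{pmatrix} a & b\\ c & d \end{pmatrix} = ad - bc = (\lambda c)d - (\lambda d)c = 0$$
A zero determinant means the matrix has rank at most one, so the columns are dependent as well.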

What are some real-world examples of dependent columns when the rows are dependent?

One example is a sales report in which one column holds units sold and another holds total revenue: if every unit sells at the same price, the revenue column is an exact constant multiple of the units column. Another is a survey in which one question's column is computed directly from another's (for example, a raw score and the same score rescaled), making those columns dependent.

What is the significance of proving dependent columns when the rows are dependent?

Detecting dependent columns matters in mathematics, statistics, and data analysis: dependence means the matrix has less than full rank (so a square matrix is not invertible), and in data terms the corresponding variables carry redundant information. Recognizing this helps when solving linear systems, fitting models, and drawing conclusions from data.

Can dependent columns exist when the rows are independent?

Yes, but only when the matrix has more columns than rows. Since row rank equals column rank, a square matrix with independent rows must also have independent columns. In a wide matrix, however, the rows can be independent while the columns are dependent, because there are more columns than the rank can support.
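
A minimal example (a sketch, not taken from the thread):
$$\begin{pmatrix} 3 & 6 \end{pmatrix}$$
This ##1 \times 2## matrix has a single nonzero row, so its rows are independent, while its second column is twice its first, so its columns are dependent.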
