Solutions to Ax=b lie in row space?

In summary: a particular solution x_0 obtained by elimination need not lie in the row space of A; in general it is the sum of a row-space component and a null-space component, and only the minimum-norm solution lies entirely in the row space.
  • #1
engineer_ja
Hi,

My textbooks say that when a solution, x, is found to Ax=b, it is made up of a particular solution, x_0, such that A*x_0 = b, combined with solutions from the null space, n_i, such that A*n_i = 0.

However, when playing about with this I seem to have come across a problem.

For the system:

| 1  3  3  2 | |u|   |1|
| 2  6  9  5 | |v| = |5|
|-1 -3  3  0 | |w|   |5|
              |y|

I get an LU factorisation of A to be:

L =             U =
| 1  0  0 |     | 1  3  3  2 |
| 2  1  0 |     | 0  0  3  1 |
|-1  2  1 |     | 0  0  0  0 |

Solving this for x gives x_0 as:

|-2 |
| 0 |
| 1 |
| 0 |

and n_1 and n_2 as:

|-3 | | -1 |
| 1 | | 0 |
| 0 | |-1/3|
| 0 | | 1 |
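
(These check out numerically; for example, in MATLAB:)

A  = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
b  = [1; 5; 5];
x0 = [-2; 0; 1; 0];
n1 = [-3; 1; 0; 0];
n2 = [-1; 0; -1/3; 1];
A*x0 - b       % zero vector: x0 is a particular solution
[A*n1, A*n2]   % both columns zero: n1 and n2 lie in the null space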

Using the rows of U as a basis for the row space of A, the particular solution x_0 cannot be formed from that basis, so it does not lie in the row space of A as I thought it should.

Have I done something wrong or is my understanding incorrect (is the "row space component of x" my books talk about not the same as x_0...)?

Many thanks in advance ;)
 
  • #2

Your vector:

|-2 |
| 0 |
| 1 |
| 0 |

is not just composed of a row space part. It is the sum of a row space part:

[ -.211009174311 ]
[ -.633027522933 ]
[ .963302752291 ]
[ .110091743118 ]

and a null space part:

[ -1.78899082569 ]
[ .633027522933 ]
[ .036697247709 ]
[ -.110091743118 ]
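
One way to reproduce this split numerically is to project your vector onto the null space. Here is a minimal MATLAB sketch (my own check, using the fact that for a real matrix the row space and null space are orthogonal complements; null returns an orthonormal null-space basis):

A  = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
x0 = [-2; 0; 1; 0];       % the particular solution from elimination
N  = null(A);             % orthonormal basis for the null space of A
x_null = N * (N' * x0);   % orthogonal projection of x0 onto the null space
x_row  = x0 - x_null;     % the remainder lies in the row space

x_row and x_null match the decimals above.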
 
  • #3
Hi,

Thanks very much, I thought it was something like that.
So am I correct in saying that the row space part lies only in the row space, and that the null space part of x_0 is just a linear combination of the null space basis vectors, one that gives zero (i.e. is redundant) when multiplied by A but produces "nice" numbers in our x_0?

Also, when solving, why do we get the "nice" numbers and not the row space part only?

How do I find these two separate parts? Do I take a dot product with something...or solve more simultaneous equations...

Thanks for your help!
The books don't seem too clear on this matter!
 
  • #4
I'm going to lunch right now and don't have time for much of a post, but I'll post more when I get back.

In the meantime, tell me: what software are you using to calculate your LU decomposition?

I'm an engineer myself (EE) and I've been using linear algebra more in recent years. I'm curious; your handle suggests you may be an engineer? Is that the case? Why the interest in linear algebra?
 
  • #5
Hi, thanks!

I'm using MATLAB (but did the above example by hand, without partial pivoting), and yes, I'm an engineering student (general, currently).
I'm studying linear algebra as part of a maths course, applicable to the general solving of simultaneous equations, computer programming, etc. The main application so far is presented as fluids.
What is the electrical/electronic application of linear algebra?
 
  • #6
engineer_ja said:
So am I correct in saying that the row space part lies only in the row space, and that the null space part of x_0 is just a linear combination of the null space basis vectors, one that gives zero (i.e. is redundant) when multiplied by A but produces "nice" numbers in our x_0?

Yes, that's pretty much the case.

Solving a matrix equation Ax=b is, in effect, asking: what linear combination of the columns of A gives b? One can solve the problem from that point of view with a solver, as found in many modern mathematical software packages. For example, here is the solution given by Mathematica. We're asking what values of a, b, c and d give a linear combination of the columns of A equal to b:

[Mathematica output: the solution, with a and b chosen freely and c and d determined by them]


The solution is saying that we may pick a and b arbitrarily and then c and d will be determined as shown in the solution. Notice that we may pick integers a and b such that c and d will also be integers. There are your "nice" solutions. And, of course, we can see why there are an infinite number of solutions.
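
One way to generate that infinite family numerically is sketched below in MATLAB (pinv gives one particular solution, null a null-space basis; the two entries of c play the role of the free choices a and b):

A  = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
b  = [1; 5; 5];
xp = pinv(A) * b;   % one particular solution (the minimum-norm one)
N  = null(A);       % basis for the null space
c  = randn(2, 1);   % any choice of the two free parameters
x  = xp + N * c;    % another solution
norm(A*x - b)       % ~1e-15: still solves the system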

engineer_ja said:
Also, when solving, why do we get the "nice" numbers and not the row space part only?

Elimination methods, such as Gaussian elimination or the Gauss-Jordan method, don't naturally give a solution that excludes the null space component. The solution looks nice because you started with integer coefficients and did rational arithmetic to obtain it.
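
You can see this concretely by row-reducing the augmented matrix, e.g. in MATLAB:

A = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
b = [1; 5; 5];
R = rref([A, b])
% up to floating-point display,
% R = [ 1  3  0  1    -2 ]
%     [ 0  0  1  1/3   1 ]
%     [ 0  0  0  0     0 ]
% setting the free variables v = y = 0 reads off x0 = [-2; 0; 1; 0],
% the "nice" particular solution, with no regard for the row space.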

engineer_ja said:
How do I find these two separate parts? Do I take a dot product with something...or solve more simultaneous equations...

Answering this question will, I suspect, take us beyond the scope of the course you're taking.

My favorite way to do this is to use the singular value decomposition, which you will have to find out about elsewhere.

Your A matrix is not square; you have fewer equations than variables, so right away there is the possibility of an infinite number of solutions. Yours is a 3x4 A matrix, so the rank of the matrix (you may have to look up the word "rank") can't be greater than 3, but in fact the rank is only 2. This means that only 2 vectors are needed to span the row space and 2 more will span the null space.

Here is the singular value decomposition obtained with the help of Mathematica. The two (orthogonal) column vectors in green span the row space (providing a row space basis), and the two (orthogonal) red column vectors span the null space (providing a null space basis). You can see the result of the premultiplication by your A matrix; two non-zero columns (from the row space basis) and two zero columns (from the null space basis).

Given these basis vectors, you can decompose any solution into its component parts.

[Mathematica output: the SVD of A; the two green columns of V span the row space, the two red columns span the null space, and premultiplying by A gives two non-zero and two zero columns]
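
In MATLAB the same decomposition might be sketched like this (svd and rank assumed):

A  = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
x0 = [-2; 0; 1; 0];
[Q, S, V] = svd(A);     % A = Q*S*V'
r = rank(A);            % here r = 2
Vrow  = V(:, 1:r);      % first r columns of V span the row space
Vnull = V(:, r+1:end);  % remaining columns span the null space
x_row  = Vrow  * (Vrow'  * x0);   % row-space component of x0
x_null = Vnull * (Vnull' * x0);   % null-space component of x0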


As I said, this is probably beyond the scope of your coursework, but if it piques your interest, good for you!

engineer_ja said:
I'm using MATLAB (but did the above example by hand, without partial pivoting)

Well, that answers a question I had. As far as I know, the LU decomposition is only applicable to square matrices. See if you get an error when you ask MATLAB to perform the LU decomposition on your A matrix. I use Mathematica, and I get an error if I try to apply the LU decomposition to your A matrix.

engineer_ja said:
What is the electrical/electronic application of linear algebra?

In EE linear algebra is used for network analysis. See this thread:

https://www.physicsforums.com/showthread.php?t=674884

for an example.
 

  • #7
Excellent, Thank you very much!

MATLAB does perform LU on this matrix, but it uses partial pivoting to give PA = LU.
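
For anyone trying this, the call is:

A = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
[L, U, P] = lu(A)   % P*A = L*U; P records the row exchanges,
                    % so L and U can differ from a by-hand factorisation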

I just checked, and singular value decomposition is covered at the end of this term's course. ;)

Thanks once again for the clear detailed answers.
 

Related to Solutions to Ax=b lie in row space?

What is the definition of "Solutions to Ax=b lie in row space"?

"Solutions to Ax=b lie in row space" refers to the concept in linear algebra where the solution to a system of linear equations can be expressed as a linear combination of the rows of the coefficient matrix A.

How do you determine if a solution to Ax=b lies in row space?

A vector x lies in the row space of A if and only if it is orthogonal to every vector in the null space of A, since for a real matrix the row space and null space are orthogonal complements. Equivalently, x lies in the row space exactly when x = A^T y for some vector y.
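
Given a candidate solution x, a quick numerical membership test (a MATLAB sketch, using the matrix from this thread) is:

A = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
x = [-2; 0; 1; 0];       % candidate solution to test
N = null(A);             % orthonormal null-space basis
norm(N' * x) < 1e-10     % true iff x has no null-space component
                         % (false here: this x0 is not in the row space)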

Why is it important for solutions to Ax=b to lie in row space?

The row-space solution matters because it is the unique solution of minimum norm, the one returned by the pseudoinverse as x = A^+ b. Separating a solution into its row-space and null-space parts also clarifies what the equations actually determine and what is left free.

Can a system of linear equations have multiple solutions in row space?

No. A system can have infinitely many solutions when there are free variables, but at most one of them lies in the row space: the difference of two row-space solutions would lie in both the row space and the null space, and those subspaces intersect only in the zero vector.

Are there any cases where a solution to Ax=b does not lie in row space?

Yes. As the thread above illustrates, a particular solution produced by elimination generally contains a null-space component, so it does not lie in the row space. Only the minimum-norm solution lies entirely in the row space of A.
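
That unique row-space solution is the minimum-norm one, obtainable via the pseudoinverse (a MATLAB sketch):

A = [1 3 3 2; 2 6 9 5; -1 -3 3 0];
b = [1; 5; 5];
x_r = pinv(A) * b;      % the one solution that lies in the row space
norm(null(A)' * x_r)    % ~0: no null-space component
norm(A*x_r - b)         % ~0: it still solves the system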
