# Linear system as a coset

#### mathmari

##### Well-known member
MHB Site Helper
Hi! I am stuck on an exercise.
I am asked to describe the set of solutions of a linear system as a coset of an appropriate subspace.
Could you explain to me what I have to do?

#### caffeinemachine

##### Well-known member
MHB Math Scholar
> Hi! I am stuck on an exercise.
> I am asked to describe the set of solutions of a linear system as a coset of an appropriate subspace.
> Could you explain to me what I have to do?
You must already know that any system of linear equations can be expressed in the matrix form $Ax=B$, where $A$ is an $m\times n$ matrix, $B$ is an $m\times 1$ column vector, and $x$ is the unknown $n\times 1$ column vector. We need to find all such $x$ which satisfy $Ax=B$. Assume that the entries of $A$ and $B$ come from a field $F$ (most likely $\mathbb R$ or $\mathbb C$).
Let $x_0$ be a solution to $Ax=B$. Let $S$ be the set of all the vectors $y$ which satisfy $Ay=0$, that is, $S=\{y:Ay=0\}$. Show that $S$ is a subspace of the vector space $F^n$. Note that $x_0+S$ is a coset of $S$ all of whose elements are solutions of $Ax=B$. Can there be any other solutions?
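These two facts can be checked numerically. Here is a minimal sketch (not part of the original thread; the $2\times 3$ system is made up) verifying that adding any null-space vector to a particular solution yields another solution:

```python
import numpy as np

# A made-up consistent 2x3 system Ax = b over the reals.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 3.0])

# One particular solution x0 (found by least squares; exact here
# because the system is consistent).
x0, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x0, b)

# A vector z in S = {y : Ay = 0}: for this A, z = (1, -1, 1) works.
z = np.array([1.0, -1.0, 1.0])
assert np.allclose(A @ z, 0)

# Every element of the coset x0 + S is again a solution:
for t in (0.0, 1.0, -2.5):
    assert np.allclose(A @ (x0 + t * z), b)
print("every tested element of x0 + S solves Ax = b")
```

The loop is exactly the computation $A(x_0 + tz) = Ax_0 + tAz = b + 0 = b$ carried out in floating point.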

#### mathmari

##### Well-known member
MHB Site Helper
So, knowing a solution $$x_{0}$$ of the system, I have to say that the set of solutions, as a coset of a subspace, is $$x_{0}+S$$?

#### caffeinemachine

##### Well-known member
MHB Math Scholar
> So, knowing a solution $$x_{0}$$ of the system, I have to say that the set of solutions, as a coset of a subspace, is $$x_{0}+S$$?
Note that $x_0+S=\{x_0+z:z\in S\}$. Yes, if $x_0$ is a solution then $x_0+S$ is the set of all the solutions. Can you prove this?

#### mathmari

##### Well-known member
MHB Site Helper
We know that $$Ax_{0}=B$$ and $$Az=0$$ for $$z \in S$$.
We want to prove that $$x_{0}+S$$ is the set of all the solutions, so
$$A(x_{0}+z)=Ax_{0}+Az=B+0=B$$.
Is that right??

#### caffeinemachine

##### Well-known member
MHB Math Scholar
> We know that $$Ax_{0}=B$$ and $$Az=0$$ for $$z \in S$$.
> We want to prove that $$x_{0}+S$$ is the set of all the solutions, so
> $$A(x_{0}+z)=Ax_{0}+Az=B+0=B$$.
> Is that right??
Not entirely.

What you have proved is that each member of $x_0+S$ is a solution of $Ax=B$. But you have not shown that every solution of $Ax=B$ lies in $x_0+S$. Try it.

#### mathmari

##### Well-known member
MHB Site Helper
I'm stuck right now...I don't know how to prove this... Could you give me a hint??

#### mathmari

##### Well-known member
MHB Site Helper
Is it maybe like that:
Let $$y_{0}$$ be another solution of the system, $$Ay_{0}=B$$. Then $$A(y_{0}-x_{0})=0$$, so $$z=y_{0}-x_{0}$$ is a solution of $$Ax=0$$, that is, $$z \in S$$. So $$y_{0}=x_{0}+z$$. That means that every solution of $$Ax=B$$ lies in $$x_{0}+S$$?

#### caffeinemachine

##### Well-known member
MHB Math Scholar
> Is it maybe like that:
> Let $$y_{0}$$ be another solution of the system, $$Ay_{0}=B$$. Then $$A(y_{0}-x_{0})=0$$, so $$z=y_{0}-x_{0}$$ is a solution of $$Ax=0$$, that is, $$z \in S$$. So $$y_{0}=x_{0}+z$$. That means that every solution of $$Ax=B$$ lies in $$x_{0}+S$$?
Perfect! See why I wasn't giving away the solution?

#### mathmari

##### Well-known member
MHB Site Helper
Great!!! Thank you very much!!!

#### mathmari

##### Well-known member
MHB Site Helper
To clarify something:
When I have to describe the set of the solutions as a coset of the solutions, is it $$x_{0}+S$$???

#### caffeinemachine

##### Well-known member
MHB Math Scholar
> To clarify something:
> When I have to describe the set of the solutions as a coset of the solutions, is it $$x_{0}+S$$???
Umm.. I don't know what you mean by that. The set of all the solutions as a coset of a subspace is $x_0+S$ (symbols have meanings borrowed from previous posts.) 'Coset of solutions' is not making sense to me.

#### mathmari

##### Well-known member
MHB Site Helper
> Umm.. I don't know what you mean by that. The set of all the solutions as a coset of a subspace is $x_0+S$ (symbols have meanings borrowed from previous posts.) 'Coset of solutions' is not making sense to me.

The set of all the solutions is described as a coset of the subspace S???

#### caffeinemachine

##### Well-known member
MHB Math Scholar
> The set of all the solutions is described as a coset of the subspace S???
Yes. $x_0+S$ is a coset of $S$.

- - - Updated - - -

> The set of all the solutions is described as a coset of the subspace S???
Replace 'as' with 'is'. Was a typo. Sorry.

#### mathmari

##### Well-known member
MHB Site Helper
Ok! Thank you!!!

#### Deveno

##### Well-known member
MHB Math Scholar
There is something very deep going on, here.

We are used to thinking of a system of linear equations such as:

$$\displaystyle Ax = b$$

as "something we solve for $$\displaystyle x$$" given the matrix $$\displaystyle A$$ of coefficients, and the constant vector $$\displaystyle b$$.

In this vein, what you have just shown is sometimes expressed as:

"general solution = particular solution + homogeneous solution".

Here, the solution set of the homogeneous system $$\displaystyle Ax = 0$$ is the space $$\displaystyle S$$, and $$\displaystyle x_0$$ is some "particular" vector for which $$\displaystyle Ax_0 = b$$.

But we can look at this another way: given m equations in n unknowns, we can think of the associated matrix of coefficients $$\displaystyle A$$ as something that takes an n-vector as input, and gives an m-vector as output. In other words, a linear transformation (since matrices are linear transformations...in some sense the linear transformations).

The set of n-vectors $$\displaystyle x$$ that $$\displaystyle A$$ "kills" (maps to the 0 m-vector), the space $$\displaystyle S$$, is the null space or kernel of $$\displaystyle A$$.

If the system $$\displaystyle Ax = b$$ HAS a solution, this means that $$\displaystyle b$$ lies in the image (or range) of $$\displaystyle A$$. In fact, we can say something stronger:

There is a 1-1 correspondence between the elements $$\displaystyle b$$ in the image of $$\displaystyle A$$ and the distinct cosets $$\displaystyle x_0 + S$$. We can use the vector space structure of the image to induce a vector space structure on the cosets. Counting dimensions, this correspondence gives the Rank-Nullity theorem:

For a linear transformation $$\displaystyle A$$:

$$\displaystyle \text{dim}(\text{dom}(A)) = \text{dim}(\text{im}(A)) + \text{dim}(\text{ker}(A))$$

In other words, the range of $$\displaystyle A$$ is isomorphic (as a vector space) to the quotient space $$\displaystyle \text{dom}(A)/\text{ker}(A)$$.

If we call the domain of $$\displaystyle A,\ V$$, and the kernel of $$\displaystyle A,\ S$$, we can express this more succinctly as:

$$\displaystyle A(V) \cong V/S$$.

This says that the space of possible right-hand sides $$\displaystyle b$$ for which $$\displaystyle Ax = b$$ is solvable acts very much like $$\displaystyle V$$ (our space of n-vectors), except with its dimension reduced by $$\displaystyle \text{dim}(S)$$ (since $$\displaystyle A$$ kills every n-vector in $$\displaystyle S$$).
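The dimension count can be verified numerically. A quick sketch (not part of the original post; the rank-deficient matrix is made up):

```python
import numpy as np

# Check rank-nullity: dim(dom A) = dim(im A) + dim(ker A),
# i.e. n = rank + nullity.
A = np.array([[2.0, 3.0, 1.0],
              [4.0, 6.0, 2.0],   # row 2 = 2 * row 1, so A is rank-deficient
              [1.0, 0.0, 1.0]])
n = A.shape[1]                    # dimension of the domain
rank = np.linalg.matrix_rank(A)   # dimension of the image

# Nullity via SVD: singular values that are numerically zero
# correspond to null-space directions.
s = np.linalg.svd(A, compute_uv=False)
nullity = int(np.sum(s < 1e-10))

assert rank + nullity == n
print(rank, nullity, n)
```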

It turns out that the set of cosets $$\displaystyle V/S$$ of the form $$\displaystyle x + S$$, can be made into a vector space in a pretty "obvious" way:

$$\displaystyle (x_1 + S) + (x_2 + S) = (x_1 + x_2) + S$$
$$\displaystyle a(x_1 + S) = ax_1 + S$$ (for a scalar $$\displaystyle a$$, and vectors $$\displaystyle x_1,x_2 \in V$$).

And this space is "smaller" than what we started with, so can be easier to work with.
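One way to make the coset operations concrete (an illustration, not from the post, assuming a one-dimensional $$S$$ in $$\Bbb R^2$$) is to represent each coset by a canonical representative and check that the operations above are well defined:

```python
import numpy as np

# Represent each coset x + S by the component of x orthogonal to S.
# Here S = span{(1,1)} in R^2 (an assumed example subspace).
s = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vector spanning S

def coset_rep(x):
    """Canonical representative of the coset x + S."""
    return x - (x @ s) * s

x1 = np.array([3.0, 1.0])
x2 = np.array([0.0, 2.0])

# (x1 + S) + (x2 + S) = (x1 + x2) + S: both sides give the same coset.
lhs = coset_rep(coset_rep(x1) + coset_rep(x2))
rhs = coset_rep(x1 + x2)
assert np.allclose(lhs, rhs)

# Shifting x1 by an element of S does not change its coset:
assert np.allclose(coset_rep(x1 + 5.0 * s), coset_rep(x1))
print("coset arithmetic is well defined on representatives")
```

The two assertions are exactly the point of well-definedness: the sum of two cosets does not depend on which representatives you pick.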

In more concrete terms, when one solves a system of linear equations by row-reduction (to find the rank, or the dimension of the range, of the system), the dimension "left over" (the number of "free variables", or parameters) is precisely the size of a basis of the null space of the system (the null space being the solution set of the associated homogeneous system).
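This pivot/free-variable count can be seen directly with a small row-reduction (a sketch using sympy on a made-up $$2\times 4$$ matrix, not from the post):

```python
from sympy import Matrix

# Row-reduce an example 2x4 coefficient matrix.
A = Matrix([[1, 2, 0, 3],
            [0, 0, 1, 4]])
rref, pivots = A.rref()

rank = len(pivots)            # dimension of the range (pivot columns)
free_vars = A.cols - rank     # parameters in the general solution
null_basis = A.nullspace()    # one basis vector per free variable

assert free_vars == len(null_basis)
print(f"rank = {rank}, free variables = {free_vars}")
```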

A baby example:

Suppose we have the equation:

$$\displaystyle 2x + 3y = 4$$.

The rank of this system is clearly 1. Since our domain is the Euclidean plane, our null space (of the matrix:

$$\displaystyle A = \begin{bmatrix}2&3 \end{bmatrix}$$)

is the subspace of the Euclidean plane:

$$\displaystyle L = \{(x,y) \in \Bbb R^2: 2x+3y = 0\}$$

perhaps more clearly recognizable as the line through the origin:

$$\displaystyle y = -\frac{2}{3}x$$

Thus the solution set of our system is the line in $$\displaystyle \Bbb R^2$$ parallel to $$\displaystyle L$$ passing through the point (2,0), that is, the line:

$$\displaystyle y = -\frac{2}{3}x + \frac{4}{3}$$.

As noted above, there is a 1-1 correspondence between the lines parallel to $$\displaystyle L$$ in the plane and the real numbers: we can just send each coset (parallel line) to twice its x-intercept:

the line $$\displaystyle (x_0,0) + L$$ is the solution set to:

$$\displaystyle 2x + 3y = 2x_0$$, or in perhaps more familiar form, the solution space to:

$$\displaystyle 2x + 3y = b$$ is:

$$\displaystyle \{(x,y) \in \Bbb R^2: (x,y) = \left(\frac{b-3t}{2},t \right) = \left(\frac{b}{2},0 \right) + t\left(-\frac{3}{2},1 \right), t \in \Bbb R\} = \left(\frac{b}{2},0\right) + L$$

(as one can see here, the vector (-3/2,1) forms a basis for the null space $$\displaystyle L$$).
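The baby example is easy to verify numerically (a sketch, not part of the original post):

```python
import numpy as np

# The baby example: A = [2 3], b = 4.
A = np.array([[2.0, 3.0]])
b = 4.0

# Particular solution (2, 0) and null-space basis vector (-3/2, 1).
x0 = np.array([2.0, 0.0])
v = np.array([-1.5, 1.0])
assert np.isclose(A @ v, 0)          # v spans L = ker(A)

# Every point x0 + t*v satisfies 2x + 3y = 4, i.e. lies on the
# line y = -(2/3)x + 4/3:
for t in (-1.0, 0.0, 2.0):
    x, y = x0 + t * v
    assert np.isclose(2 * x + 3 * y, b)
    assert np.isclose(y, -2.0 / 3.0 * x + 4.0 / 3.0)
print("the coset (2,0) + L is exactly the solution line")
```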

"Chop the plane into parallel lines, and what you get 'acts like a line' (you can use a line crossing all the parallel lines to determine WHICH parallel line you're at)".

#### mathmari

##### Well-known member
MHB Site Helper
Nice!! Thank you very much!!!