# Algebra presentations

#### Fermat

##### Active member
Explain how $\Bbb C$ is generated as an $\Bbb R$-algebra subject to the relation $i^2=-1$. I've looked at notes on this and really can't get my head round it.

#### Deveno

##### Well-known member
MHB Math Scholar
Re: algebra presentations

Imagine that we have a set with a single element, which in order to enforce the analogy with the complex numbers, we will call $\{i\}$.

We want to create an $\Bbb R$-algebra from this set in the most general way possible.

The first step is to create some form of "multiplication" based on this set; this is going to give us a free monoid. So what we are going to do is just use the most basic multiplication we can come up with: concatenation.

So: $i \ast i = ii$.

To save on space, we will use the abbreviation(s):

$i\ast i = ii = i^2$
$(i \ast i)\ast i = i\ast(i\ast i) = iii = i^3$, etc.

Which are just "words in the single letter $i$". Following convention, we create an identity $e$ by using the "empty word": (a blank space). Let's call this structure $M$ (just to give it a name).

The next step is to somehow turn this very simple structure into a vector space over the field $\Bbb R$. We do this by taking all finite formal $\Bbb R$-linear combinations of elements of $M$.

So a typical element looks like:

$a_0e + a_1i + a_2i^2 + \cdots + a_ni^n$ (some of the $a_i$'s might be 0).

We create the vector addition by adding the coefficients of two "vectors" together:

So, if $m \geq n$:

$(a_0e + a_1i + a_2i^2 + \cdots + a_ni^n) + (b_0e + b_1i + b_2i^2 + \cdots + b_mi^m)$

$= (a_0+b_0)e + (a_1+b_1)i + (a_2 + b_2)i^2 + \cdots + (a_n + b_n)i^n + b_{n+1}i^{n+1} + \cdots + b_mi^m$

For example:

$(2e + 4i + i^3) + (1e + i^2 + i^3 + 2i^4) = 3e + 4i + i^2 + 2i^3 + 2i^4$.

Now the element $e$ is really just "the empty word", so there's no real reason to keep writing it; we will just drop it and write a typical element as:

$a_0 + a_1i + a_2i^2 \cdots + a_ni^n$.

This shouldn't cause any confusion as in the multiplication in $M$, $e$ "doesn't do anything" (it's a multiplicative identity).

For a scalar multiplication, we will just use:

$c(a_0 + a_1i + \cdots + a_ni^n) = ca_0 + (ca_1)i + (ca_2)i^2 + \cdots + (ca_n)i^n$.
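To make this concrete, here is a small Python sketch (purely illustrative, not part of the original discussion): the list `[a_0, a_1, ..., a_n]` stands for the formal sum $a_0e + a_1i + \cdots + a_ni^n$, and the two vector-space operations act on the coefficients.

```python
def add(u, v):
    """Add two formal sums coefficientwise, padding the shorter with zeros."""
    n = max(len(u), len(v))
    u = u + [0] * (n - len(u))
    v = v + [0] * (n - len(v))
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    """Scalar multiplication: multiply every coefficient by c."""
    return [c * a for a in u]

# (2e + 4i + i^3) + (e + i^2 + i^3 + 2i^4) = 3e + 4i + i^2 + 2i^3 + 2i^4
print(add([2, 4, 0, 1], [1, 0, 1, 1, 2]))  # → [3, 4, 1, 2, 2]
print(scale(3, [1, 0, 2]))                 # → [3, 0, 6]
```

This reproduces the worked addition example above.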

It is routine to verify this indeed forms a real vector space (of infinite dimension), with basis:

$\{i^k: k \in \Bbb N\}$ (using the convention $i^0 = e$).

All we lack to make this an $\Bbb R$-algebra is a ring multiplication. We also do this "in the most general way possible" by just multiplying every term in each formal sum together, and adding them up (collecting like terms, if any exist).

So:

$(a_0 + a_1i + a_2i^2 + \cdots + a_ni^n)(b_0 + b_1i + b_2i^2 + \cdots + b_mi^m)$

$= a_0b_0 + (a_0b_1 + a_1b_0)i + (a_0b_2 + a_1b_1 + a_2b_0)i^2 + \cdots + a_nb_mi^{n+m}$

$\displaystyle = \sum_{k = 0}^{m+n}\left(\sum_{p+q = k} a_pb_q\right)i^k$

For example:

$(2 + 4i + i^3)(1 + i^2 + i^3 + 2i^4)$

$= (2)(1) + [(2)(0) + (1)(4)]i + [(2)(1) + (4)(0) + (0)(1)]i^2$
$+ [(2)(1) + (4)(1) + (0)(0) + (1)(1)]i^3 + [(2)(2) + (4)(1) + (0)(1) + (1)(0) + (0)(1)]i^4$
$+ [(2)(0) + (4)(2) + (0)(1) + (1)(1) + (0)(0) + (0)(1)]i^5$
$+ [(2)(0) + (4)(0) + (0)(2) + (1)(1) + (0)(1) + (0)(0) + (0)(1)]i^6 + [(1)(2)]i^7$

$= 2 + 4i + 2i^2 + 7i^3 + 8i^4 + 9i^5 + i^6 + 2i^7$

Equivalently (and perhaps easier to follow):

$(2 + 4i + i^3)(1 + i^2 + i^3 + 2i^4)$

$= 2 + 2i^2 + 2i^3 + 4i^4 + 4i + 4i^3 + 4i^4 + 8i^5 + i^3 + i^5 + i^6 + 2i^7$

$= 2 + 4i + 2i^2 + 7i^3 + 8i^4 + 9i^5 + i^6 + 2i^7$

(we get the same result either way).
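As a Python sketch (again representing $a_0 + a_1i + \cdots + a_ni^n$ by its coefficient list, an illustration rather than anything from the original post), this multiplication is just the convolution of the two coefficient lists:

```python
def mul(u, v):
    """Multiply formal sums: the coefficient of i^k is the sum of
    a_p * b_q over all p + q = k (a convolution)."""
    out = [0] * (len(u) + len(v) - 1)
    for p, a in enumerate(u):
        for q, b in enumerate(v):
            out[p + q] += a * b
    return out

# (2 + 4i + i^3)(1 + i^2 + i^3 + 2i^4)
print(mul([2, 4, 0, 1], [1, 0, 1, 1, 2]))  # → [2, 4, 2, 7, 8, 9, 1, 2]
```

The output matches the hand computation: $2 + 4i + 2i^2 + 7i^3 + 8i^4 + 9i^5 + i^6 + 2i^7$.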

As is not surprising, this is really just "polynomials in $i$". It is routine to verify this gives us an $\Bbb R$-algebra (verify that the algebra axioms are satisfied). By identifying $\Bbb R$ with the elements $a_0 = a_0e$ we can consider $\Bbb R$ as a subring AND subspace of this algebra, and thus a subalgebra. This algebra is (unlike many algebras) commutative (mostly due to the fact that $i$ commutes with itself), so it is trivial that $\Bbb R$ lies in its center.

Now what might we MEAN when we say: "subject to the relation $i^2 = -1$"?

Well, loosely speaking, it means we are going to set a number of elements of this algebra equal to 0. Formally, what we are going to do is this:

We say that:

$u = a_0 + a_1i + \cdots + a_ni^n$ is equivalent to

$v = b_0 + b_1i + \cdots + b_mi^m$ IF:

there exists some $w = c_0 + c_1i + \cdots + c_ri^r$

such that: $u - v = w(1 + i^2)$.

This defines an equivalence relation on our $\Bbb R$-algebra, and moreover, it turns out that:

$[u] + [v] = [u+v]$
$[u][v] = [uv]$
$c[u] = [cu]$

To see what this actually does, let's look at an example.

Suppose $u = 1 + 2i + 3i^2 + 4i^3$.

We can write this as:

$u = 1 - 2i + 4i + 3i^2 + 4i^3 = 1 - 2i + 3i^2 + (4i)(1 + i^2)$

$= 3 - 2 - 2i + 3i^2 + (4i)(1 + i^2) = -2 - 2i + 3(1 + i^2) + (4i)(1 + i^2)$

$= -2 - 2i + (3 + 4i)(1 + i^2)$.

So if $v = -2 - 2i$, we have:

$u - v = (3 + 4i)(1 + i^2)$, and by taking $w = 3 + 4i$, we see that:

$[u] = [v]$.

We can carry this out for ANY element $u$, reducing it to a UNIQUE equivalent one of the form:

$a_0 + a_1i$.

(The way we determine this is just by using "polynomial long division").

This means that instead of considering "polynomials in $i$", we only need to consider equivalence classes of the form $[a_0 + a_1i]$.

We are especially interested in $[i]$ (the case where $a_0 = 0, a_1 = 1$). It is natural to ask: what is $[i]^2 = [i^2]$?

Well: $i^2 = 0 + i^2 = -1 + 1 + i^2$, so we can take $w = 1$, showing that:

$[i^2] = [-1]$.

It should be clear that:

$[a_0 + a_1i] + [b_0 + b_1i] = [(a_0+b_0) + (a_1+b_1)i]$

that is, the addition in our set of equivalence classes acts pretty much like the addition in our original algebra, except we only have two coefficients to keep track of now. What is more exciting is what happens to multiplication:

$[a_0 + a_1i][b_0 + b_1i] = [a_0b_0 + (a_0b_1 + a_1b_0)i + (a_1b_1)i^2]$

$= [a_0b_0 + (a_0b_1 + a_1b_0)i] + [a_1b_1i^2]$

$= [a_0b_0 + (a_0b_1 + a_1b_0)i] + [a_1b_1][i^2]$

$= [a_0b_0 + (a_0b_1 + a_1b_0)i] + [a_1b_1][-1]$

$= [a_0b_0] + [(a_0b_1 + a_1b_0)i] + [-a_1b_1]$

$= [a_0b_0 - a_1b_1] + [(a_0b_1 + a_1b_0)i]$

$= [(a_0b_0 - a_1b_1) + (a_0b_1 + a_1b_0)i]$

which is exactly what we get when we multiply the complex numbers:

$(a_0 + a_1i)(b_0 + b_1i)$

(I admit this is quite a bit to take in, and I have glossed over some important steps, so feel free to ask questions).

#### topsquark

##### Well-known member
MHB Math Helper
Re: algebra presentations

-Dan

#### Fermat

##### Active member
Re: algebra presentations

Thanks for taking such a lot of time explaining this.

#### Fermat

##### Active member
Re: algebra presentations

Can you explain how this means that $\Bbb R[X]/(X^2+1)$ is isomorphic to $\Bbb C$?

#### Deveno

##### Well-known member
MHB Math Scholar
Re: algebra presentations

Well, loosely speaking, it means we are going to take every single polynomial that has $X^2 + 1$ as a factor, and set it to 0.

Before we go there, however, let's look at a less complicated example.

Algebraically, the integer 0 has 2 unique properties other integers do not have:

$0 + 0 = 0$
$k \ast 0 = 0$

Now...suppose we wanted to find some subset $J$ of the integers that also had these properties:

$J + J = J$
$\Bbb ZJ = J$.

OK, well, first of all, how do we even "add" sets?!? So we have to give some MEANING to $J + J$.

Here is what we will do:

$J + J = \{k + m: k,m \in J\}$

That seems straight-forward enough, yes? So when we say $J + J = J$, we mean two things:

1) $J$ is closed under addition,
2) every element in $J$ is the sum of two other elements in $J$.

Similarly, by $\Bbb ZJ$ we mean the set $\{ab: a\in \Bbb Z, b\in J\}$.

One possibility that immediately springs to mind is the set of all multiples of $k$, for any given integer $k$. Clearly it satisfies (1), since:

$km + kn = k(m+n)$

and since $0 = k0$, we can write $km = km + k0$, both of which are multiples of $k$.

Clearly this set "absorbs" the integers much like 0 does; if we multiply any multiple of $k$ by any integer, we still wind up with a multiple of $k$:

$n(km) = k(nm)$

so this set does the second thing we want, as well. So such a set could possibly be a "zero-like object" in some "integer-like" system.

What would treating every multiple of $k$ as "some kind of 0-thing" do to our integer system? Well, counting would go like this:

$0,1,2,\dots,k-1,k$ (back to 0)
$k+1$ (same as 1), $k+2$ (same as 2), $\dots$, $2k-1$ (same as $k-1$), $2k$ (back to 0)
$2k+1$ (same as 1), $2k+2$ (same as 2), etc.

In other words, the "line" of integers has turned into a "cycle" which repeats every $k$ "clicks".

Formally, what we do is this:

Suppose $R$ is a commutative ring, with a subset $J$ such that:

1) $(J,+)$ is an additive subgroup of $(R,+)$ (such a subgroup satisfies $J + J = J$, by the closure property of group operations)
2) If $a \in R$ and $x \in J$, then $ax \in J$ (this means that $RJ = J$, the "absorbing property").

We call such a set $J$ an IDEAL of $R$ (it's a strange name, but let's just go with it).

Here is where things get interesting: we can define an equivalence relation on $R$ by:

$a \sim b \iff a - b \in J$.

Let's verify this is indeed an equivalence relation.

Is $a \sim a$? Well, $a - a = 0$, and since $J$ is an additive subgroup, it surely contains the additive identity, so yes, $\sim$ is reflexive.

Suppose $a \sim b$. Is it always true that $b \sim a$? $a \sim b$ means that $a - b \in J$. Since $J$ is an additive group, this means $-(a - b) = b - a \in J$ as well, so yes!, this relation is symmetric, too.

Finally, suppose $a \sim b$ and $b \sim c$. This means $a - b \in J$, and $b - c \in J$. Since $J$ is closed under addition, we have:

$(a - b) + (b - c) = a - c \in J$

so $\sim$ is transitive, as well. So we really do have a bona-fide equivalence relation.

That said, what does the equivalence class $[a]$ of a given $a \in R$ look like? Well, suppose $x \in J$ is any element of $J$. Then $(a + x) - a = x \in J$, so $a \sim a+x$. So clearly we have:

$a + J = \{a + x: x \in J\} \subseteq [a]$.

On the other hand, suppose $b$ is any element of $[a]$. Since $b = a + (b - a)$, and $b \sim a$, we have $b - a \in J$, so $b \in a + J$. This means that the equivalence class of $a$ IS the set $a + J$.

Now, one question is...can we turn these equivalence classes into a ring themselves? To do that, we need to come up with some way of adding them, and multiplying them, and then we need to check that the ring axioms hold. Well, it may or may not seem obvious, but one way is to define:

$[a] + [b] = [a + b]$, or equivalently, $(a + J) + (b + J) = (a+b) + J$
$[a][b] = [ab]$, or: $(a + J)(b + J) = (ab) + J$.

There is one small problem, however: $[a]$ may have a LOT of elements in it, and so we could have $[a] = [a']$ (and likewise for $[b]$).

So if we're going to use $a$ and $b$ to define the sum and product of the equivalence classes, we ought to make sure that if:

$[a] = [a'],\quad [b] = [b']$

that $[a+b] = [a'+b']$ and $[ab] = [a'b']$, or our definition isn't going to be CONSISTENT.

Now $[a] = [a']$ means that $a \sim a'$, which is to say $a - a' \in J$. Similarly $[b] = [b']$ means that $b - b' \in J$.

So $a + b - (a' + b') = (a - a') + (b - b') \in J$, since the sum of two elements of $J$ is always again in $J$ (see how that $J + J = J$ rule is working for us, now?). This means that $[a + b] = [a' + b']$ whenever we have $[a] = [a'], [b] = [b']$, so our sum doesn't depend on the particular $a,b$ we use, just the equivalence class. That's reassuring.

Now suppose $a \sim a', b \sim b'$. Then:

$ab - a'b' = ab - ab' + ab' - a'b' = a(b - b') + (a - a')b'$

Now $b - b' \in J$ (since $b \sim b'$), and since anything multiplied by something in $J$ is again in $J$, we have:

$a(b - b') \in J$.

Similarly, $(a - a')b' = b'(a - a') \in J$, since $a - a' \in J$.

So $ab - a'b'$ is the sum of two elements of $J$, so must ALSO be in $J$. This means we can trust that our product of equivalence classes also makes sense, no matter which "representatives" we use to calculate it.

Now...this is all a bit abstract, so let's go back to our integer example:

$J = k\Bbb Z = \{kn: n \in \Bbb Z\}$.

We see that our equivalence classes are:

$k\Bbb Z = \{0,k,-k,2k,-2k,3k,-3k,\dots\}$
$1 + k\Bbb Z = \{1,k+1,-k+1,2k+1,-2k+1,3k+1,-3k+1,\dots\}$
......
$k-1 + k\Bbb Z = \{k-1,2k-1,-1,3k-1,-k-1,4k-1,-2k-1,\dots\}$

and that our equivalence class $[m]$ is just the integer $m$ modulo $k$.
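A quick numerical sanity check in Python (the choice $k = 5$ and the sample integers are arbitrary, picked just for this sketch) that sums and products of cosets depend only on the class $[m]$, i.e. on $m$ modulo $k$:

```python
k = 5
a, a2 = 7, 12    # 7 ~ 12: their difference -5 lies in J = kZ
b, b2 = 9, -1    # 9 ~ -1: their difference 10 lies in J
assert (a - a2) % k == 0 and (b - b2) % k == 0

# sums and products of equivalent representatives stay equivalent:
assert (a + b - (a2 + b2)) % k == 0   # [7]+[9] = [12]+[-1]
assert (a * b - a2 * b2) % k == 0     # [7][9] = [12][-1]
print("coset arithmetic is well-defined for these samples")
```

This is exactly the well-definedness argument above, run on concrete numbers.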

*********

Now polynomials are sort of like "grown-up integers"; the algebra for them is VERY similar. Instead of using an ideal generated by an integer $k$ like we did for the integers, we're going to use an ideal generated by the polynomial $X^2 + 1$.

Now there's a LOT of multiples of $X^2 + 1$, like:

$X^3 + X$
$X^3 + 2X^2 + X + 2 = (X + 2)(X^2 + 1)$ etc.

We're going to take ALL these multiples and call them....(I hope you can guess this) $J$.

So given a polynomial...any old polynomial, instead of considering the polynomial $p(X)$, we're going to look at the equivalence class:

$[p(X)] = p(X) + J$.

Well, we're going to need to write some of these equivalence classes down, in order to compute with them, and we don't want to have to use really BIG polynomials (that would be HARD). So what can we do? Hmm....

Suppose, just suppose, we divided $p(X)$ by $X^2 + 1$, and we got something like:

$p(X) = q(X)(X^2 + 1) + r(X)$.

Since $r(X)$ is a remainder, we could keep going until we got a remainder with degree less than the degree of $X^2 + 1$, that is:

$r(X) = aX + b$.

Now:

$p(X) - r(X) = q(X)(X^2 + 1)$, and guess what? $q(X)(X^2 + 1) \in J$.

This means:

$[p(X)] = [r(X)]$.

So we catch a break, here...instead of working with ARBITRARY polynomials, we can just use their REMAINDERS upon division by $X^2 + 1$, and these remainders are going to be SMALL polynomials, which makes everything a LOT easier.
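Since $X^2 + 1$ is monic of degree 2, the remainder can be computed by a simple rewrite rule: replace each $X^k$ (for $k \geq 2$) by $-X^{k-2}$, because $X^2 = -1 + (X^2 + 1)$. A Python sketch of this shortcut (illustrative only; coefficients are listed lowest degree first):

```python
def rem(p):
    """Remainder of p(X) mod X^2 + 1; p is a coefficient list, lowest
    degree first. Each X^k term folds into X^(k-2) with a sign flip."""
    p = list(p) + [0, 0]             # pad so p[:2] always exists
    for k in range(len(p) - 1, 1, -1):
        p[k - 2] -= p[k]
        p[k] = 0
    return p[:2]                     # [b, a], meaning b + aX

print(rem([2, 1, 2, 1]))  # X^3+2X^2+X+2 = (X+2)(X^2+1) → [0, 0]
print(rem([1, 2, 3, 4]))  # 1+2X+3X^2+4X^3 → [-2, -2], i.e. -2 - 2X
```

Multiples of $X^2 + 1$ reduce to 0, as they should, and everything else lands on a SMALL polynomial $b + aX$.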

Now suppose:

$[p_1(X)] = [r_1(X)] = [a + bX]$ and:
$[p_2(X)] = [r_2(X)] = [c + dX]$.

We know that:

$[p_1(X)] + [p_2(X)] = [r_1(X)] + [r_2(X)] = [a + bX] + [c + dX]$

$= [a + bX + c + dX] = [(a + c) + (b + d)X]$

so adding in this system of "equivalent" polynomials is pretty easy. What about multiplying?

We'll try to use our "simple" representatives to make the math easier:

$[p_1(X)][p_2(X)] = [r_1(X)][r_2(X)] = [a + bX][c + dX]$

$= [(a + bX)(c + dX)] = [ac + (ad)X + (bc)X + (bd)X^2]$

$= [ac + (ad + bc)X + bdX^2]$.

Huh.

Our product has an $X^2$ term, which is kinda gumming up the works. We should probably find out what the equivalence class of $X^2$ is, since it's of degree higher than 1. Ok.

Now, $X^2 = X^2 + 0 = X^2 + 1 - 1 = -1 + (X^2 + 1)$.

This tells us: $X^2 + J = -1 + J$, that is: $[X^2] = [-1]$. This is helpful.

So:

$[p_1(X)][p_2(X)] = [ac + (ad + bc)X + (bd)X^2]$

$= [ac] + [(ad + bc)X] + [(bd)X^2] = [ac] + [(ad + bc)X] + [bd][X^2]$

$= [ac] + [(ad + bc)X] + [bd][-1] = [ac] + [(ad + bc)X] + [-bd]$

$= [ac - bd] + [(ad + bc)X] = [(ac - bd) + (ad + bc)X]$.

There! now we have the "reduced" form of a product.
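The reduced product formula is precisely the rule for multiplying complex numbers, which we can spot-check in Python (identifying $[a + bX]$ with the pair $(a, b)$; the sample values are arbitrary):

```python
def qmul(u, v):
    """Product of classes [a+bX][c+dX] in R[X]/(X^2+1), in reduced form:
    [(ac - bd) + (ad + bc)X]."""
    (a, b), (c, d) = u, v
    return (a * c - b * d, a * d + b * c)

print(qmul((2, 3), (4, -1)))           # → (11, 10)
print(complex(2, 3) * complex(4, -1))  # → (11+10j), the same product in C
```

Same numbers, two notations.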

So how is this related to $\Bbb C$?

What we do is map the equivalence class:

$[p(X)] = [a + bX] \mapsto a+bi \in \Bbb C$.

This is essentially "setting all multiples of $X^2 + 1$ equal to 0" since:

$J = 0 + J = [X^2 + 1] = [0] = [0 + 0X] \mapsto 0 + 0i = 0$.

($X^2 + 1$ has a remainder of 0 when divided by itself, right?)

What happens to $[X]$ under this map?

It gets sent to the complex number $i$, and $i^2 + 1 = 0$

(in other words, the equivalence class of $X$ relative to the multiples of $X^2 + 1$ actually is a SOLUTION to the polynomial:

$X^2 + 1$

since:

$[X]^2 + [1] = [X^2 + 1] = [0]$).
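The whole story can be bundled into one illustrative Python check: define $\varphi([p(X)]) = b + ai$, where $b + aX$ is the remainder of $p$ mod $X^2 + 1$, and verify on sample polynomials that $\varphi$ respects products and sends $[X]$ to a root of $X^2 + 1$ (the names `rem`, `polymul`, `phi` and the sample data are inventions of this sketch):

```python
def rem(p):
    """Remainder of p mod X^2 + 1 (coefficients lowest degree first)."""
    p = list(p) + [0, 0]
    for k in range(len(p) - 1, 1, -1):
        p[k - 2] -= p[k]
        p[k] = 0
    return p[:2]

def polymul(u, v):
    """Ordinary polynomial product by convolving coefficients."""
    out = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += a * b
    return out

def phi(p):
    """The map [p(X)] -> b + ai, where b + aX is p's remainder."""
    b, a = rem(p)
    return complex(b, a)

# [X] is sent to i, a genuine root of X^2 + 1:
assert phi([0, 1]) ** 2 + 1 == 0

# phi turns products of classes into products of complex numbers:
p, q = [1, 2, 3, 4], [5, -1, 2]
assert phi(polymul(p, q)) == phi(p) * phi(q)
print("phi behaves like a ring isomorphism on these samples")
```

Of course, two sample checks are not a proof; the proof is the well-definedness argument given earlier in the thread.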

Geometrically (although this is not quite accurate), you can imagine the polynomial space $\Bbb R[X]$ as a big blob, that we slice into "spaghetti strings" of translates of the multiples of $X^2 + 1$. The strings containing just $[X]$ terms map to points on the $y$-axis, the strings containing just $[a]$ terms (the polynomials equivalent to constant polynomials) map to points on the $x$-axis (the reals...after all, constant term polynomials act just like the reals), so the complex numbers are like a cross-section through this mass of spaghetti strings (which are pretty thin, by the way), so that we get exactly one complex number per string. The string containing $X^2 + 1$ goes through the origin, of course.