
Sums of Ideals - R. Y. Sharp "Steps in Commutative Algebra"

Peter

In R. Y. Sharp's "Steps in Commutative Algebra", Section 2.23 on sums of ideals reads as follows:

------------------------------------------------------------------------------
2.23 SUMS OF IDEALS. Let \(\displaystyle (I_{\lambda})_{\lambda \in \Lambda} \) be a family of ideals of the commutative ring \(\displaystyle R \). We define the sum \(\displaystyle {\sum}_{\lambda \in \Lambda} I_{\lambda} \) of this family to be the ideal generated by \(\displaystyle {\cup}_{\lambda \in \Lambda}I_{\lambda} \):

Thus \(\displaystyle {\sum}_{\lambda \in \Lambda} I_{\lambda} = ( {\cup}_{\lambda \in \Lambda}I_{\lambda} ) \)

In particular if \(\displaystyle \Lambda = \emptyset \) then \(\displaystyle {\sum}_{\lambda \in \Lambda} I_{\lambda} = 0 \).

Since an arbitrary ideal of R is closed under addition and under scalar multiplication by arbitrary elements of R, it follows from 2.18 that, in the case where \(\displaystyle \Lambda \ne \emptyset \), an arbitrary element of \(\displaystyle {\sum}_{\lambda \in \Lambda} I_{\lambda} \) can be expressed in the form \(\displaystyle {\sum}_{i=1}^{n} c_{\lambda_i} \), where \(\displaystyle n \in \mathbb{N} , {\lambda}_1, {\lambda}_2, ... \ ... , {\lambda}_n \in \Lambda \) and \(\displaystyle c_{\lambda_i} \in I_{\lambda_i} \) for each \(\displaystyle i = 1,2, ... \ ... , n \).

------------------------------------------------------------------------

Now 2.18 states the following:

Let \(\displaystyle \emptyset \ne H \subseteq R \). We define the ideal of R generated by H, denoted by (H) or RH or HR, to be the intersection of the family of all ideals of R which contain H.

Then Sharp shows that ...

\(\displaystyle (H) = \{ {\sum}_{i=1}^{n} r_ih_i \ | \ n \in \mathbb{N}, r_1, r_2, ... \ ... r_n \in R, h_1, h_2, ... \ ... h_n \in H \} \)
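
For a concrete illustration of this description (an example of my own, not Sharp's): take \(\displaystyle R = \mathbb{Z} \) and \(\displaystyle H = \{6, 10\} \). Collecting the terms that use the same generator, every element of (H) has the form \(\displaystyle 6r + 10s \) with \(\displaystyle r, s \in \mathbb{Z} \), so

\(\displaystyle (H) = \{ 6r + 10s \ | \ r, s \in \mathbb{Z} \} = 2\mathbb{Z} \),

since every such combination is even and \(\displaystyle 2 = 2 \cdot 6 + (-1) \cdot 10 \).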

My problem is as follows:

Given the above, and in particular:

" ... ... an arbitrary element can be expressed in the form \(\displaystyle {\sum}_{i=1}^{n} {c_{\lambda}}_i \), where \(\displaystyle n \in \mathbb{N} , {\lambda}_1, {\lambda}_2, ... \ ... , {\lambda}_1 \in \Lambda \) and \(\displaystyle {c_{\lambda}}_i \in {I_{\lambda}}_i \) for each \(\displaystyle i = 1,2, ... \ ... , n \) ... ... "

Why must each \(\displaystyle c_{\lambda_i} \) lie in \(\displaystyle I_{\lambda_i} \)? Why, for example, could all of the \(\displaystyle c_{\lambda_i} \) not come from, say, \(\displaystyle I_{\lambda_1} \)?


To emphasize the point consider \(\displaystyle I_1 \cup I_2 \). Following the form of the expression for (H) given above we would have

\(\displaystyle (I_1 \cup I_2) = \{ {\sum}_{i=1}^{n} s_it_i \ | \ n \in \mathbb{N}, s_1, s_2, ... \ ... s_n \in R , \ t_1, t_2, ... \ ... t_n \in I_1 \cup I_2 \} \)

Now in this expression all of the \(\displaystyle t_i \in I_1 \cup I_2 \) could conceivably come from \(\displaystyle I_1 \), which again seems at odds with Sharp's claim above that

" ... ... an arbitrary element of \(\displaystyle {\sum}_{\lambda \in \Lambda} I_{\lambda} \) can be expressed in the form \(\displaystyle {\sum}_{i=1}^{n} c_{\lambda_i} \), where \(\displaystyle n \in \mathbb{N} , {\lambda}_1, {\lambda}_2, ... \ ... , {\lambda}_n \in \Lambda \) and \(\displaystyle c_{\lambda_i} \in I_{\lambda_i} \) for each \(\displaystyle i = 1,2, ... \ ... , n \) ... ... "

Can someone please clarify this issue?

Peter
 

Opalg

Peter said:

" ... ... why must each \(\displaystyle c_{\lambda_i} \) lie in \(\displaystyle I_{\lambda_i} \)? Why, for example, could all of the \(\displaystyle c_{\lambda_i} \) not come from, say, \(\displaystyle I_{\lambda_1} \)? ... ... "
If several of the elements $c_{\lambda_i}$ (or maybe all of them) come from the same ideal $I_{\lambda_1}$ then their sum will also be an element of $I_{\lambda_1}$. So you can lump them all together as a single element of $I_{\lambda_1}$. In that way, each ideal needs to occur only once in the sum \(\displaystyle {\sum}_{i=1}^{n} c_{\lambda_i} \). Sharp's notation implicitly assumes that that is the case.
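
To spell out the "lumping" step with a small illustration (mine, not from the text): suppose \(\displaystyle c_1, c_2 \in I_{\lambda_1} \) and \(\displaystyle c_3 \in I_{\lambda_2} \). Then

\(\displaystyle c_1 + c_2 + c_3 = (c_1 + c_2) + c_3, \qquad c_1 + c_2 \in I_{\lambda_1}, \ c_3 \in I_{\lambda_2}, \)

so a sum with repeated ideals collapses to one with at most one term from each distinct ideal.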

Peter said:

" ... ... To emphasize the point consider \(\displaystyle I_1 \cup I_2 \). Following the form of the expression for (H) given above we would have

\(\displaystyle (I_1 \cup I_2) = \{ {\sum}_{i=1}^{n} s_it_i \ | \ n \in \mathbb{N}, s_1, s_2, ... \ ... s_n \in R , \ t_1, t_2, ... \ ... t_n \in I_1 \cup I_2 \} \) ... ... "
In fact, there are only two ideals here, so the sum only needs to contain two terms, $s_1t_1 + s_2t_2$, where $t_1\in I_1$ and $t_2\in I_2$. Also, since $I_1$ is an ideal it follows that $s_1t_1\in I_1$, and similarly $s_2t_2\in I_2$. So if we write $c_i = s_it_i$ (for $i=1,\,2$) then $s_1t_1 + s_2t_2 = c_1+c_2$, which is of the form required by Sharp's claim.
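
A concrete instance (my own, in \(\displaystyle \mathbb{Z} \)): take \(\displaystyle I_1 = 4\mathbb{Z} \) and \(\displaystyle I_2 = 6\mathbb{Z} \). Then

\(\displaystyle I_1 + I_2 = (4\mathbb{Z} \cup 6\mathbb{Z}) = \{ c_1 + c_2 \ | \ c_1 \in 4\mathbb{Z}, \ c_2 \in 6\mathbb{Z} \} = 2\mathbb{Z}, \)

and an element such as \(\displaystyle 14 = 8 + 6 \) is indeed of the form \(\displaystyle c_1 + c_2 \) with \(\displaystyle c_1 = 8 \in I_1 \), \(\displaystyle c_2 = 6 \in I_2 \). Note that the union \(\displaystyle 4\mathbb{Z} \cup 6\mathbb{Z} \) on its own is not an ideal, since \(\displaystyle 4 + 6 = 10 \) lies in neither \(\displaystyle 4\mathbb{Z} \) nor \(\displaystyle 6\mathbb{Z} \); it is closure under addition that forces the sum.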
 

Deveno

I find it easier to keep track of these things like so:

Any ring $R$ can be viewed as an $R$-module over itself. In this view, an ideal of the ring is nothing more (or less) than an $R$-submodule.

Now $R$-modules are "pretty much like vector spaces" ($F$-modules, where $F$ is a field), except that we don't have division by scalars. So it is helpful to think of things in an ideal of $R$ as certain $R$-linear combinations of things (typically the generating elements of the ideal).

The fact that an ideal is an $R$-submodule, means that for an ideal $I$, and $a \in R, x \in I$, we have $ax \in I$ (closure under scalar multiplication). This effectively means we don't have to keep writing some scalar (that is, some element of $R$) "out in front", because ANY such element is already in $I$.

So the main thing to look out for, when we have a set $S$ that we want to check is an ideal, is that we have closure under addition, that is:

$S + S = S$.

This is why $I + J = \{x + y: x \in I, y \in J\}$, and not, for example:

$\displaystyle I + J = \{\sum_{i=1}^n a_ix_i + b_iy_i: a_i,b_i \in R,\ x_i \in I,\ y_i \in J,\ n \in \Bbb N\}$, which is what we would have to have if $I$ and $J$ were merely generating sets.
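
To see why the simpler description already suffices (just a routine check, spelled out for illustration): the set $\{x + y: x \in I, y \in J\}$ is closed under addition and under multiplication by elements of $R$, because

$\displaystyle (x + y) + (x' + y') = (x + x') + (y + y')$ and $\displaystyle r(x + y) = rx + ry$,

with $x + x',\ rx \in I$ and $y + y',\ ry \in J$; so the longer sums in the second description produce nothing new.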

I reiterate my earlier suggestion that, to get a "better feel" for what is happening in rings, it really is useful to know a bit of linear algebra, because the behavior of ideals in rings in many cases mimics the behavior of vector subspaces. That is especially true in the case where the ring is commutative, because commutative rings are $\Bbb Z$-algebras, and the underlying $\Bbb Z$-module is especially close to a vector space (Euclidean rings are "almost fields" in a certain sense).

It is a peculiar feature of algebraic objects in general that, if we want the sub-thingy that is minimal but contains sub-thingies $A$ and $B$, we usually have to go "quite a bit bigger" than $A \cup B$ (closure under a binary operation usually dictates this). On the other hand, if we want the biggest sub-thingy contained in both $A$ and $B$, usually $A \cap B$ will do the trick. The nature of the algebraic operations we have typically determines the FORM of typical elements so generated.

In rings, addition is king. It is so well-behaved (abelian groups are very nicely structured) that we often take its properties more or less for granted...it is the multiplication which gives us trouble. Not all subgroups of the additive group form ideals ($\Bbb Z$ is a notable exception...this is to be expected if one notes that $\Bbb Z$, considered as a $\Bbb Z$-module over itself, pretty much already guarantees closure under multiplication). For counter-examples to things we might "hope" are true, it is usually a good "reality check" to look at polynomial rings.

For example, the subset $S = \{ax: a\in R\}$ is an additive subgroup of $R[x]$. It is NOT an ideal, because we have $x \in S$ and $x \in R[x]$, but $x^2 = x(x) \not\in S$. So obviously, if we want the smallest ideal containing $\{x\}$, we need to include at LEAST all polynomials of the form:

$f(x) = (g(x))(x)$, where $g(x) \in R[x]$.

It turns out that for this ring, this is enough. It is actually easier to think of this ideal as:

"all polynomials with 0 constant term", which means of course that $f(0) = 0$ (evaluation maps are handy in figuring out what homomorphisms annihilate an ideal, ideals are "ring-kernels" that is, they kill stuff (send their elements to 0 in the quotient)).

It is helpful, in many realms of algebra, to think of quotient objects and surjective homomorphisms (and we can make any homomorphism surjective by restricting our co-domain to the image set) as exactly the same things. Rings (or groups, or other such beasts) are usually not studied one-by-one in a vacuum; rather, we are interested in how well they play with others. Similarly, it is helpful to think of (in many cases) sub-objects and injective homomorphisms as the same things (this is what we do when we "extend" an integral domain to its field of quotients).

This point of view takes some getting used to: often we find ourselves resorting to "typical elements" and asking: "what kind of elements does such-and-such contain?". But it is far more powerful, and productive, to instead ask: "what sort of properties does such-and-such have?".

An example: an ideal $P$ of a ring $R$ is prime if and only if $R/P$ is an integral domain. This means instead of looking at an element $ab \in P$, we look at the behavior of ALL of $R/P$, which is to say, we look at what kind of map the quotient map $R \to R/P$ is. In the quotient ring, we're not looking at "local elements" (like an $a$ or $b$ in the ring $R$) but "global properties" of an entire structure. Often, we are searching for what KIND of ideal we have to have to get a "nice" quotient, which we work in instead, because it's nicer.
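
A quick illustration of that criterion (my own example): in $\Bbb Z$, the ideal $6\Bbb Z$ is not prime, since $2 \cdot 3 \in 6\Bbb Z$ while $2, 3 \notin 6\Bbb Z$; correspondingly $\Bbb Z/6\Bbb Z$ fails to be an integral domain because $\bar{2}\cdot\bar{3} = \bar{0}$. By contrast, $5\Bbb Z$ is prime, and $\Bbb Z/5\Bbb Z$ is an integral domain (in fact a field).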
 