# Structure of R[X] and an exercise on ring adjunction

#### Peter

##### Well-known member
MHB Site Helper
I am reading R.Y. Sharp's book "Steps in Commutative Algebra"

At the moment I am trying to achieve a full understanding of the mechanics and nature of LEMMA 1.11 and am reflecting on Exercise 1.12 which follows it.

LEMMA 1.11 reads as follows: (see attachment)

---------------------------------------------------------------------------------------
Let $$\displaystyle S$$ be a subring of the ring $$\displaystyle R$$, and let $$\displaystyle \Gamma$$ be a subset of R. Then $$\displaystyle S[ \Gamma ]$$ is defined to be the intersection of all subrings of R which contain both $$\displaystyle S$$ and $$\displaystyle \Gamma$$. (There certainly is one such subring, namely $$\displaystyle R$$ itself).

Thus $$\displaystyle S[ \Gamma ]$$ is a subring of R which contains both $$\displaystyle S$$ and $$\displaystyle \Gamma$$, and it is the smallest such subring of R in the sense that it is contained in every other subring of R that contains both $$\displaystyle S$$ and $$\displaystyle \Gamma$$.

In the special case in which $$\displaystyle \Gamma$$ is a finite set $$\displaystyle \{ \alpha_1, \alpha_2, ... \ ... , \alpha_n \}$$, we write $$\displaystyle S[ \Gamma ]$$ as $$\displaystyle S[\alpha_1, \alpha_2, ... \ ... , \alpha_n]$$.

In the special case in which $$\displaystyle S$$ is commutative, and $$\displaystyle \alpha \in R$$ is such that $$\displaystyle \alpha s = s \alpha$$ for all $$\displaystyle s \in S$$ we have

$$\displaystyle S[ \alpha ] = \{ {\sum}_{i=0}^{t} s_i {\alpha}^i \ : \ t \in \mathbb{N}_0, \ s_0, s_1, ... \ ... , s_t \in S \}$$

------------------------------------------------------------------------------------
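As a concrete illustration of the last display (my own sketch, not from Sharp), take $$\displaystyle S = \mathbb{Z}$$ as a subring of $$\displaystyle \mathbb{R}$$ and $$\displaystyle \alpha = \sqrt{2}$$: since $$\displaystyle \alpha^2 = 2 \in S$$, every finite sum $$\displaystyle \sum_{i=0}^{t} s_i \alpha^i$$ collapses to the form $$\displaystyle s_0 + s_1 \alpha$$.

```python
# Elements of Z[sqrt(2)] represented exactly as pairs (a, b) <-> a + b*sqrt(2).
# Closure under addition and multiplication shows the set of such sums is
# already a subring, matching the description of S[alpha] in LEMMA 1.11.

class ZSqrt2:
    """a + b*sqrt(2) with integer a, b."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, other):
        return ZSqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, using r**2 = 2
        return ZSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

alpha = ZSqrt2(0, 1)
assert alpha * alpha == ZSqrt2(2, 0)                  # alpha**2 lands back in S
assert ZSqrt2(1, 1) * ZSqrt2(3, 2) == ZSqrt2(7, 5)    # (1+r)(3+2r) = 7 + 5r
```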

Now a couple of issues/problems ...

Issue/Problem 1

Given that $$\displaystyle S[ \Gamma ]$$ is the intersection of all subrings of $$\displaystyle R$$ which contain both $$\displaystyle S$$ and $$\displaystyle \Gamma$$, it should be equal to the subring generated by the union of $$\displaystyle S$$ and $$\displaystyle \Gamma$$ [Dummit and Foote establish this equivalence for ideals in Section 7.4 page 251 - so it should work for subrings]. Similarly, $$\displaystyle S[ \alpha ]$$ (the same situation restricted to one variable) should be equal to the subring generated by $$\displaystyle S$$ and $$\displaystyle \alpha$$.

So the subring $$\displaystyle S[ \alpha ]$$ should contain all finite sums of terms of the form $$\displaystyle s_i \alpha^i , i = 0, 1, 2, ...$$. Moreover, since $$\displaystyle S$$ is a subring, any finite product of elements of $$\displaystyle S$$ appearing as a coefficient collapses to a single element $$\displaystyle s_i \in S$$. Therefore the elements of $$\displaystyle S[ \alpha ]$$ can be expressed as $$\displaystyle \sum s_i \alpha^i$$

Can someone please confirm that the above reasoning is sound ... or not ...

Problem/Issue 2

I am trying to make a start on Exercise 1.12 which follows and is related to LEMMA 1.11, but not making any significant headway ...

Exercise 1.12 reads as follows: (see attachment)
------------------------------------------------------------------------------------

Let S be a subring of the commutative ring R, and let $$\displaystyle \Gamma, \Delta$$ be subsets of R

Show that $$\displaystyle S[\Gamma \cup \Delta] = S[\Gamma] [\Delta]$$ and

$$\displaystyle S[\Gamma] = \underset{\Omega \subseteq \Gamma , \ | \Omega | \lt \infty}{\bigcup} S[ \Omega]$$

-------------------------------------------------------------------------------------

Can someone help me to make a significant start on this exercise?

Would appreciate some help

Peter


#### Deveno

##### Well-known member
MHB Math Scholar
I am not sure what your issue #1 actually is.

In the power set of $R$, the union of $S$ and $T$, where $S,T$ are two subsets of $R$, is the smallest subset of $R$ containing $S,T$. To see this, note that if:

$r \in S \cup T$, and $S \subseteq U, T \subseteq U$, we have:

$r \in S \implies r \in U$,
$r \in T \implies r \in U$,

so since $S \cup T = \{r \in R: r\in S, \text{ or } r\in T\}$, we have in either of the two cases, $r \in U$, so that $S \cup T \subseteq U$.

This is the precise meaning of what we mean by "smallest subset containing $S$ and $T$".

Now suppose we require instead that a subset of $R$ be a subring. Then the subring generated by $S \cup T$ is the same as the subring generated by $S$ and $T$ together.

If $S$ is already a subring, it is clear that the subring generated by the SET $S$ is the subring $S$ itself.

#### Turgul

##### Member
The problem with issue #1 is in the case when $R$ is not commutative and $\alpha$ is chosen such that $\alpha s \neq s \alpha$ for some $s \in S$. Then the element $\alpha s \in S[\alpha]$ may not be able to be written as $\sum s_i\alpha^i$. That is to say there may be no way of moving the $s$ "past" the $\alpha$ to write $s \alpha$ in a way that has $\alpha$'s on the right side of each monomial with a left coefficient in $S$.
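Turgul's point can be seen in a minimal sketch (my own example, not from the thread): take $R$ to be the $2 \times 2$ integer matrices and $S$ the diagonal matrices, with a strictly upper-triangular $\alpha$.

```python
# A minimal sketch of why commutativity matters: in R = 2x2 integer matrices
# with S = diagonal matrices, alpha*s != s*alpha, so the rearrangement that
# puts all the alphas on the right of each monomial is unavailable.

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = [[1, 0], [0, 2]]        # an element of S (diagonal, hence in a subring)
alpha = [[0, 1], [0, 0]]    # does not commute with s

assert matmul(alpha, s) == [[0, 2], [0, 0]]
assert matmul(s, alpha) == [[0, 1], [0, 0]]
assert matmul(alpha, s) != matmul(s, alpha)   # cannot simply slide s past alpha
```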

#### Peter

##### Well-known member
MHB Site Helper
Re-checking what I posted re issue 1, I think I was basically looking for a confirmation regarding what I wrote ... which I must admit was pretty basic ... I was also somewhat anxious that I may be missing something ...

You then write:

"In the power set of $R$, the union of $S$ and $T$, where $S,T$ are two subsets of $R$, is the smallest subset of $R$ containing $S,T$. To see this, note ... ... etc "

I am a bit confused here (maybe I am not following you fully) ... but are you making a general point ... or are you giving me a hint regarding the exercise I asked about - that is:

Let S be a subring of the commutative ring R, and let $$\displaystyle \Gamma, \Delta$$ be subsets of R

Show that $$\displaystyle S[\Gamma \cup \Delta] = S[\Gamma] [\Delta]$$ and

$$\displaystyle S[\Gamma] = \underset{\Omega \subseteq \Gamma , \ | \Omega | \lt \infty}{\bigcup} S[ \Omega]$$

Peter

#### Peter

##### Well-known member
MHB Site Helper
Sorry Deveno, but as I was submitting the last post to you on this thread, I realised (too late) that you were actually giving me the solution to the exercise:

"Let S be a subring of the commutative ring R, and let $$\displaystyle \Gamma, \Delta$$ be subsets of R

Show that $$\displaystyle S[\Gamma \cup \Delta] = S[\Gamma] [\Delta]$$ and

$$\displaystyle S[\Gamma] = \underset{\Omega \subseteq \Gamma , \ | \Omega | \lt \infty}{\bigcup} S[ \Omega]$$"

Translating your hint into the terms of the exercise:

We have that, by Sharp's definitions:

$$\displaystyle S [ \Gamma ] [ \Delta]$$ = intersection of all subrings which contain $$\displaystyle S [ \Gamma ]$$ and $$\displaystyle \Delta$$ ... ... ... (1)

= the subring generated by the elements of $$\displaystyle S [ \Gamma ]$$ and $$\displaystyle \Delta$$, keeping in mind that $$\displaystyle S [ \Gamma ]$$ is a subring (LEMMA 1.11, Chapter 1 of Sharp's book) and, further, that the subring generated by the set $$\displaystyle S [ \Gamma ]$$ is the subring $$\displaystyle S [ \Gamma ]$$ itself.

BUT

$$\displaystyle S [ \Gamma ]$$ is the intersection of all subrings that contain $$\displaystyle S$$ and $$\displaystyle \Gamma$$ (and thus is also the subring generated by the elements of $$\displaystyle S$$ and $$\displaystyle \Gamma$$)

Therefore, (1) can be expressed as follows:

$$\displaystyle S [ \Gamma ] [ \Delta]$$ = intersection of all subrings which contain $$\displaystyle S$$ and $$\displaystyle \Gamma$$ and $$\displaystyle \Delta$$

= $$\displaystyle S[\Gamma \cup \Delta]$$

Now thinking about the second part of the exercise, namely

$$\displaystyle S[\Gamma] = \underset{\Omega \subseteq \Gamma , \ | \Omega | \lt \infty}{\bigcup} S[ \Omega]$$

Have not yet made much progress. Can you help with this part of the exercise?

Peter


#### Deveno

##### Well-known member
MHB Math Scholar
Whenever you have an equality between two sets, a typical approach is to take a "typical" element (or "arbitrary" element) on one side, and show it belongs to the other side.

Now an element of $S[\Gamma]$ is a finite (this is important) sum of elements of a certain form (I'm being deliberately vague to help jolt your thinking). Now if each TERM of the sum could be shown to be in ONE of the sets of the big union on the right, then certainly the ENTIRE sum would lie in the union. This would show that:

$$S[\Gamma] \subseteq \bigcup_{\Omega \subseteq \Gamma,|\Omega| < \infty} S[\Omega]$$

To start on the other containment, ask yourself, is it true that:

$S[\Omega] \subseteq S[\Gamma]$?

**********

Alternately, and perhaps more elegantly, can we not write:

$$\Gamma = \bigcup_{\Omega \subseteq \Gamma, |\Omega| = 1} \Omega$$ and then apply part (1)?

(The trick here is to decide if part (1) holds for ARBITRARY unions, or only FINITE unions...if all the sets in consideration are finite, one needn't worry about this distinction...you may find it helpful to consider this example:

$\Bbb Q = \Bbb Z[\Gamma]$

where: $$\Gamma = \{1/n: n \in \Bbb Z^+\} = \bigcup_{k \in \Bbb Z^+} \{1/k\}$$

this is an infinite union of singleton subsets). Is this equal to:

$$\bigcup_{k \in \Bbb Z^+} \Bbb Z[1/k]$$...?

Does any rational number have an integer denominator?)
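Deveno's closing example can be checked numerically; here is a sketch (the helper name `in_Z_adjoin_inv` is my own), using the fact that $\Bbb Z[1/k] = \{a/k^i : a \in \Bbb Z,\ i \ge 0\}$.

```python
# A sketch of the example above: a rational q lies in Z[1/k] exactly when
# q * k**i clears the denominator for some i >= 0.
from fractions import Fraction

def in_Z_adjoin_inv(q, k, max_power=10):
    """Naive membership test for q in Z[1/k] (search bounded by max_power)."""
    return any((q * k**i).denominator == 1 for i in range(max_power + 1))

q = Fraction(7, 12)
assert in_Z_adjoin_inv(q, 12)      # 7/12 = 7 * (1/12), so q lies in Z[1/12]
assert not in_Z_adjoin_inv(q, 5)   # powers of 5 never cancel the 12
# No single k captures all of Q, yet every q lands in SOME Z[1/k]: this is
# why Q is the union of the subrings Z[1/k] without equalling any one of them.
```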

#### Peter

##### Well-known member
MHB Site Helper
Thanks for the help, Deveno.

Sorry for the slow reply ... but my day job intervened. Working through your ideas now ...

By the way, I was reflecting on this part of the exercise ... do you think this result has any important ramifications/implications for polynomial or power series rings ...

Peter

#### Peter

##### Well-known member
MHB Site Helper
Hi Deveno,

In the above, early on you write:

"Now an element of $S[\Gamma]$ is a finite (this is important) sum of elements of a certain form"

But working from Sharp's definition, we have the following:

$$\displaystyle S[ \Gamma ]$$ is the intersection of all subrings of R which contain both $$\displaystyle S$$ and $$\displaystyle \Gamma$$.

Given this, I am not sure of the nature or form of the sum you refer to in the above statement. Can you help?

Further, I am not at all sure why the sum should be finite ... especially since $$\displaystyle \Gamma$$ may not be finite ...

Can you clarify this for me?

Peter


#### Deveno

##### Well-known member
MHB Math Scholar
In mathematics (in general), the construction:

The intersection of all (substructures) that contain (given set),

is held to be equivalent to the construction:

The smallest (or minimal) (substructure) containing (given set).

So, for example, suppose our ring is $R[[x]]$ the set of all formal power series in $x$ with coefficients in $R$. Is it not the case that $R[x]$ (polynomials in $x$ with coefficients in $R$) is a subring of $R[[x]]$ containing $R$ and $x$ which is strictly smaller than $R[[x]]$?

As another analogy: certainly the set of all "infinite words" in the letters $a$ and $b$ forms a semi-group, but this is NOT the free semi-group generated by the set $\{a,b\}$, which consists of all FINITE words in $a$ and $b$. Why? Because finite things are smaller than infinite things, and are still closed under binary operations (even if we have an infinite number of finite things).

Put another way: polynomials have evaluation maps, power series do not (necessarily). This means the evaluation homomorphisms guaranteed to exist for polynomials may not exist for power series (this leads to questions of convergence....we need an additional notion of when two power series are "close" to one another, which is not, strictly speaking, a purely algebraic notion).

Now..there are whole realms of algebraic topology where such questions of defining "nearness" for algebraic objects ARE dealt with...however, usually we want some meaningful definition of "nearness" on the ring $R$ itself, and this is problematic for a general ring $R$ (for certain finite rings, for example, only the discrete topology may be "consistent" with the ring operations; this is much like giving up and creating a ring out of an abelian group $G$ by declaring that $ab = 0$ for all $a,b \in G$...such a structure IS a ring, but it is a "bad ring"...everything is a zero divisor, there is no hope whatsoever of making it a field, it has no unity, and every additive subgroup is an ideal, with every quotient ring just as degenerate).
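The "bad ring" just described can be sketched in a few lines (my own illustration): take the additive group $\Bbb Z/4$ and declare every product to be $0$.

```python
# The additive group Z/4 with every product declared to be 0: the
# multiplication axioms hold trivially, every nonzero element is a zero
# divisor, and there is no unity.
n = 4
add = lambda a, b: (a + b) % n
mul = lambda a, b: 0

# distributivity: a(b + c) = 0 = ab + ac for all a, b, c
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in range(n) for b in range(n) for c in range(n))
# no unity: no e satisfies e*a == a for every nonzero a
assert not any(all(mul(e, a) == a for a in range(1, n)) for e in range(n))
```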

In the equality you have to prove, note that the cardinality of the subsets you are unioning over is taken to be finite. It is important to understand why this is done.

Suppose that $\Gamma = \Bbb N = \{0,1,2,3,\dots\}$.

If our subsets are of finite cardinality, we can order first by size, and then "alphabetically" (or lexicographically) by listing each element of each individual subset in increasing order (the usual order on the natural numbers), and declaring that for two sets of the same size:

$(a_1,a_2,\dots,a_k) < (b_1,b_2,\dots,b_k)$

if $a_1 < b_1$, or if $a_1 = b_1$, then if $a_2 < b_2$, and so on. For example:

$(1,2,3,5,7) < (1,2,3,5,8)$.

We can apply a "diagonal" argument to show then that the union is countable, so:

$$\Bbb N = \bigcup_{\Omega \subseteq \Bbb N, |\Omega| < \infty} \Omega$$

If we allow $\Omega$ to range over ARBITRARY subsets of $\Bbb N$, it turns out the right hand set is uncountable, so is strictly LARGER than $\Bbb N$ in terms of cardinality, so we cannot have equality (this is a deep result, and is basically a slimmed-down version of cantor's "second diagonal argument").
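The size-then-lexicographic ordering above can be sketched as follows (truncated to a finite universe, so it illustrates the ordering rather than proving countability):

```python
# Finite subsets of N listed first by size, then lexicographically within
# each size, as in the ordering described above. Truncated to the universe
# {0, ..., bound-1} so the enumeration is finite.
from itertools import combinations

def finite_subsets(bound=5, max_size=3):
    out = []
    for k in range(max_size + 1):
        # combinations() yields the size-k subsets in lexicographic order
        out.extend(combinations(range(bound), k))
    return out

subs = finite_subsets()
assert subs[0] == ()                                  # the empty set comes first
assert subs.index((0, 4)) < subs.index((1, 2, 3))     # size orders before lex
assert subs.index((1, 2, 4)) < subs.index((1, 3, 4))  # lex within a fixed size
```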

In other words, creating $R[[x]]$ instead of $R[x]$ gives us a ring which is WAY TOO BIG to be minimal, even if $S$ is finite. Finiteness is a sort of "minimal closure property" when speaking about algebraic operations. This does not mean infinite sums or products could not be considered, just that the resulting objects we get from allowing them are MUCH bigger than we want them to be.

Put yet another way: arbitrary (even infinite) unions of finite things behave substantially DIFFERENTLY than arbitrary unions of infinite things. If we have some "finiteness" condition somewhere, we have a way to leverage this into calculation; if not...things can get quite unmanageable.

No doubt your textbook is really not concerned with foundational issues of cardinality: you are studying algebra. However, this ugliness will rear its head again, if you ever consider the difference between infinite direct sums of $R$-modules and infinite direct products of $R$-modules.

I hope someone a little better at this than I am can come along and confirm what I've written here, to be honest, I've never been all that comfortable with considering "big bad infinities".

#### Turgul

##### Member
A note about sums (of elements): as an algebraic operation, it fundamentally only makes sense to add finitely many things together at once. In and of itself, something like $\sum_{n=0}^\infty x^n$ has no meaning in a ring. This is ultimately why $A[x]$ is a polynomial ring and not a power series ring.

Why then does a power series ring make any sense? Whenever we are thinking about infinite things (or really anything at all!!!), there is a topology lurking in the background. Sometimes the topologies are discrete and thus hold no extra data, but when we want to make sense of infinite interaction, we require the topology to really inform what is going on.

Note that $\sum_{n=1}^\infty \frac{1}{n^2}$ does not make sense algebraically, but in the real numbers, we make sense of these symbols by defining the expression to mean the limit $\lim_{m \rightarrow \infty} \sum_{n=1}^m \frac{1}{n^2}$. The series converges to $\frac{\pi^2}{6}$ and this is the actual ring element we mean by this series. Now, before we knew what the series converged to, the only way we had to represent the number was by this series. You can get to understand a lot about an element by looking at a series representation, but in the back of your head, you should always be remembering that a series is just some representation of an actual element in some ring.

With nice enough series (say absolutely convergent in the case of $\mathbb{R}$), you can manipulate them in algebraically pleasing ways; you can rewrite the order of the partial sums, if you add two such series together, you can pairwise add equally "ordered" terms (ie $\sum_{i=1}^\infty a_i + \sum_{i=1}^\infty b_i = \sum_{i=1}^\infty (a_i+b_i)$), you can even multiply the series in a nice way (the standard convolution expression; exactly as with formal power series). But none of these mean that infinite sums make sense, by themselves, algebraically. Instead, this means that all of the associated series happen to converge to the same element in the real numbers.

To return to a formal power series ring $A[[x]]$, what then do we mean by $1 + x + x^2 + \cdots$? There is an underlying topology on this ring with the feature that "$x$ is small" so that higher and higher powers becomes smaller and smaller and the infinite sum above is actually a convergent sequence of partial sums in this topology. It is important to note that the series $1 + (1+x) + (1+x)^2 + \cdots$ does not converge in this topology and thus is not an element of $A[[x]]$ (though it is in $A[[1+x]]$) despite all of the partial sums being polynomial, hence elements of $A[[x]]$. The problem is that in the native topology on $A[[x]]$, $1+x$ is not small so the size of the sum explodes.

We can manipulate elements of a formal power series ring just like we can manipulate absolutely convergent series in the real numbers so they end up getting a nice algebraic feel. Even better, unlike with $\mathbb{R}$, there is a unique(!) way of expressing any element of $A[[x]]$ as a series so most of the algebraic carelessness we've been using really is justified as everything we need ends up converging and there can be no confusion regarding whether results depended on the choice of series we used to represent a given element.
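The "$x$ is small" convergence can be made concrete by computing mod $x^N$; a stdlib-only sketch (my own illustration):

```python
# Truncated power series over the integers: keep coefficients of x**0..x**(N-1).
# Working mod x**N is exactly how x-adic convergence is used in practice:
# each coefficient of a sum or product is settled by finitely many terms.

N = 8

def mul(f, g):
    """Product of two coefficient lists of length N, truncated mod x**N."""
    h = [0] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h

one_minus_x = [1, -1] + [0] * (N - 2)
geometric = [1] * N                     # 1 + x + x**2 + ...

# (1 - x)(1 + x + x**2 + ...) == 1  mod x**N, for every N: the geometric
# series really is the inverse of 1 - x in A[[x]].
assert mul(one_minus_x, geometric) == [1] + [0] * (N - 1)
```

By contrast, the constant terms of $1 + (1+x) + (1+x)^2 + \cdots$ pile up without bound, which is exactly the failure of convergence described above.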

On the other hand, we can observe that $A[x] \subseteq A[[x]]$, so we can view $A[x]$ as a topological space with the subspace topology inherited from $A[[x]]$. Under this topology, $A[x]$ is not a complete space, but in fact $A[[x]]$ is the analytic completion of $A[x]$ under this topology, just as $\mathbb{R}$ is the completion of $\mathbb{Q}$ under the standard topology.
As a word of caution, there are many topologies I can put on $A[x]$ that are like this (I get a different one for every prime polynomial), each with different power-series-like completions.

I fear the above may have gotten complicated, but the takeaway message should be that in order to make sense of infinite sums, you have to have a notion of convergence; algebraically, all you are allowed to do is take finite sums. Thus even in $\mathbb{Z}[x_1,x_2,\ldots]$, elements will be finite sums of monomials (each a finite product of the $x_i$ with some coefficient in $\mathbb{Z}$).

-----------------

A caution regarding what Deveno said: it actually turns out that the union of subsets of $\mathbb{N}$ will always be $\mathbb{N}$, so long as each element of $\mathbb{N}$ is in at least one of the subsets of the union; the cardinality of the subsets (or the number of subsets) is irrelevant. This is because a union of things from a set is always a subset of the set.

There are nonetheless many strange things that can happen with infinite sets. Consider $\mathbb{F}_p[x]$ and $\mathbb{F}_p[[x]]$ where $\mathbb{F}_p$ is the field with $p$ elements (ie $\mathbb{Z}/p\mathbb{Z}$). For any given power of $x$, there are $p$ choices for coefficient. Hence there are $p^n$ polynomials of degree less than $n$ (things of the form $a_0 + a_1x + \cdots + a_{n-1}x^{n-1}$). Since $\mathbb{F}_p[x]$ has polynomials of arbitrarily high degree, there are infinitely many different polynomials in $\mathbb{F}_p[x]$. How many exactly? It turns out $\mathbb{F}_p[x]$ is countable (ie has the same size as $\mathbb{N}$).

What about $\mathbb{F}_p[[x]]$? Because we are allowed infinitely many nonzero coefficients, it turns out that $\mathbb{F}_p[[x]]$ has the same size as $\mathbb{R}$, a strictly larger set than $\mathbb{F}_p[x]$. This is the difference between the number of ways of picking "as many as you would like" (while finite) from a set and picking "infinitely many."
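The count in the previous paragraph can be confirmed directly for small cases (a quick sketch):

```python
# Over F_p there are p choices for each of the n coefficients a_0,...,a_{n-1},
# hence exactly p**n polynomials of degree < n. Enumerate them to confirm.
from itertools import product

p, n = 3, 4
polys = list(product(range(p), repeat=n))   # tuples (a_0, ..., a_{n-1})
assert len(polys) == p ** n                 # 3**4 == 81
```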

-----------------

Hopefully all this clears up some confusion, though I fear it may have made more. At the very least I hope it was interesting.


#### Deveno

##### Well-known member
MHB Math Scholar

Well I thought it was interesting. Yes, of course, you're right...the union is STILL $\Bbb N$; I was trying to indicate that we couldn't prove this by establishing a one-to-one correspondence between elements on the left, and subsets on the right (but then...we usually can't do that ANYWAY). Taking adjunctions, however, changes the situation in a fundamental way.

I have seen people debate: what are "formal power series" in the absence of some notion of convergence? I think that the answer lies in the fact that any desired sum or product can be evaluated term-wise (convergent or not), by a finite algorithm. As to whether or not some algebraic construction has "meaning" until we can "evaluate" it for some substitution of "constant" for "variable", is to me, a philosophical (perhaps ontological) consideration...these, by and large, do not interest me.

I suppose that ONE interpretation of formal power series would be to consider $x$ to be an "infinitesimal element of the hyperreals" (where we sum over a hyperinteger index). In this view, when we "evaluate" the series, we just get the constant term. This isn't totally satisfactory, in my view, because we would like results we prove "formally" to have implications for numbers where $x$ is real, but "small enough", such as:

$\displaystyle 2 = \frac{1}{1 - \frac{1}{2}} = 1 + \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^k} + \cdots$

Of course, one can prove this in the real numbers using limits, but one has the feeling something deeper is going on.
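The partial sums above can indeed be checked numerically (a small sketch of my own, with a hypothetical helper name):

```python
def geometric_partial_sum(r, m):
    """Partial sum 1 + r + r**2 + ... + r**m of the geometric series."""
    return sum(r**k for k in range(m + 1))

# With r = 1/2 the partial sums approach 1/(1 - 1/2) = 2.
print(geometric_partial_sum(0.5, 50))
```

The printed value agrees with 2 to within floating-point precision, which is the limit statement made rigorous in analysis.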

#### Peter

##### Well-known member
MHB Site Helper
In mathematics (in general), the construction:

The intersection of all (substructures) that contain (given set),

is held to be equivalent to the construction:

The smallest (or minimal) (substructure) containing (given set).

So, for example, suppose our ring is $R[[x]]$ the set of all formal power series in $x$ with coefficients in $R$. Is it not the case that $R[x]$ (polynomials in $x$ with coefficients in $R$) is a subring of $R[[x]]$ containing $R$ and $x$ which is strictly smaller than $R[[x]]$?

As another analogy: certainly the set of all "infinite words" in the letters $a$ and $b$ forms a semi-group, but this is NOT the free semi-group generated by the set $\{a,b\}$, which consists of all FINITE words in $a$ and $b$. Why? Because finite things are smaller than infinite things, and are still closed under binary operations (even if we have an infinite number of finite things).

Put another way: polynomials have evaluation maps, power series do not (necessarily): the evaluation homomorphisms guaranteed to exist for polynomials may not exist for power series. This leads to questions of convergence: we need an additional notion of when two power series are "close" to one another, which is not, strictly speaking, a purely algebraic notion.
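To make the contrast concrete, here is a minimal sketch (my own illustration, using Horner's rule; the function name is hypothetical) of the evaluation homomorphism for a polynomial given by its coefficient list:

```python
def evaluate(coeffs, a):
    """Evaluation homomorphism sending x to a, for the polynomial
    c_0 + c_1*x + ... + c_t*x^t given as the list [c_0, c_1, ..., c_t].
    Horner's rule: a finite computation, so it always terminates."""
    result = 0
    for c in reversed(coeffs):
        result = result * a + c
    return result

# 1 + 2x + 3x^2 evaluated at x = 2: 1 + 4 + 12 = 17
print(evaluate([1, 2, 3], 2))  # → 17
```

The same loop applied to a power series would never terminate, which is the algebraic face of the convergence problem.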

Now...there are whole realms of algebraic topology where such questions of defining "nearness" for algebraic objects ARE dealt with. However, usually we want some meaningful definition of "nearness" on the ring $R$ itself, and this is problematic for a general ring $R$: for certain finite rings, for example, only the discrete topology may be "consistent" with the ring operations. This is much like giving up and creating a ring out of an abelian group $G$ by declaring that $ab = 0$ for all $a,b \in G$. Such a structure IS a ring, but it is a "bad ring": everything is a zero divisor, there is no hope whatsoever of making it a field, it has no unity, and it has only ONE ideal, the entire ring, so any quotient ring is trivial.

In the equality you have to prove, note that the cardinality of the subsets you are unioning over is taken to be finite. It is important to understand why this is done.

Suppose that $\Gamma = \Bbb N = \{0,1,2,3,\dots\}$.

If our subsets are of finite cardinality, we can order first by size, and then "alphabetically" (or lexicographically) by listing each element of each individual subset in increasing order (the usual order on the natural numbers), and declaring that for two sets of the same size:

$(a_1,a_2,\dots,a_k) < (b_1,b_2,\dots,b_k)$

if $a_1 < b_1$; or if $a_1 = b_1$ and $a_2 < b_2$; and so on. For example:

$(1,2,3,5,7) < (1,2,3,5,8)$.

We can apply a "diagonal" argument to show then that the union is countable, so:

$$\Bbb N = \bigcup_{\Omega \subseteq \Bbb N, |\Omega| < \infty} \Omega$$
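The ordering and enumeration just described can be sketched in Python (a toy illustration of my own, restricted to subsets of a finite initial segment of $\Bbb N$; the helper name is hypothetical):

```python
from itertools import combinations

def finite_subsets(bound, max_size):
    """Finite subsets of {0, ..., bound-1} of size at most max_size,
    ordered first by size and then lexicographically, as in the text."""
    subsets = []
    for k in range(max_size + 1):
        subsets.extend(combinations(range(bound), k))
    return sorted(subsets, key=lambda s: (len(s), s))

subs = finite_subsets(4, 2)
print(subs[:6])  # → [(), (0,), (1,), (2,), (3,), (0, 1)]
# Every element of {0,...,3} already appears in some finite subset:
print(set().union(*map(set, subs)) == set(range(4)))  # → True
```

The enumeration makes the countability of the union of finite subsets plausible; allowing arbitrary subsets destroys any such listing, which is the content of Cantor's argument mentioned below.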

If we allow $\Omega$ to range over ARBITRARY subsets of $\Bbb N$, it turns out the right hand set is uncountable, so is strictly LARGER than $\Bbb N$ in terms of cardinality, so we cannot have equality (this is a deep result, and is basically a slimmed-down version of Cantor's "second diagonal argument").

In other words, creating $R[[x]]$ instead of $R[x]$ gives us a ring which is WAY TOO BIG to be minimal, even if $S$ is finite. Finiteness is a sort of "minimal closure property" when speaking about algebraic operations. This does not mean infinite sums or products could not be considered, just that the resulting objects we get from allowing them are MUCH bigger than we want them to be.

Put yet another way: arbitrary (even infinite) unions of finite things behave substantially DIFFERENTLY than arbitrary unions of infinite things. If we have some "finiteness" condition somewhere, we have a way to leverage it into calculation; if not, things can get quite unmanageable.

No doubt your textbook is really not concerned with foundational issues of cardinality: you are studying algebra. However, this ugliness will rear its head again, if you ever consider the difference between infinite direct sums of $R$-modules and infinite direct products of $R$-modules.

I hope someone a little better at this than I am can come along and confirm what I've written here, to be honest, I've never been all that comfortable with considering "big bad infinities".

Thanks so much for that post, Deveno, most interesting and enlightening ... Still working through it and reflecting on what you wrote ...

Thanks again,

Peter

#### Peter

##### Well-known member
MHB Site Helper
A note about sums (of elements): as an algebraic operation, it fundamentally only makes sense to add finitely many things together at once. In and of itself, something like $\sum_{n=0}^\infty x^n$ has no meaning in a ring. This is ultimately why $A[x]$ is a polynomial ring and not a power series ring.

Why then does a power series ring make any sense? Whenever we are thinking about infinite things (or really anything at all!!!), there is a topology lurking in the background. Sometimes the topologies are discrete and thus hold no extra data, but when we want to make sense of infinite interaction, we require the topology to really inform what is going on.

Note that $\sum_{n=1}^\infty \frac{1}{n^2}$ does not make sense algebraically, but in the real numbers, we make sense of these symbols by defining the expression to mean the limit $\lim_{m \rightarrow \infty} \sum_{n=1}^m \frac{1}{n^2}$. The series converges to $\frac{\pi^2}{6}$ and this is the actual ring element we mean by this series. Now, before we knew what the series converged to, the only way we had to represent the number was by this series. You can get to understand a lot about an element by looking at a series representation, but in the back of your head, you should always be remembering that a series is just some representation of an actual element in some ring.

With nice enough series (say absolutely convergent in the case of $\mathbb{R}$), you can manipulate them in algebraically pleasing ways: you can rearrange the terms; if you add two such series together, you can pairwise add equally "ordered" terms (ie $\sum_{i=1}^\infty a_i + \sum_{i=1}^\infty b_i = \sum_{i=1}^\infty (a_i+b_i)$); you can even multiply the series in a nice way (the standard convolution expression, exactly as with formal power series). But none of this means that infinite sums make sense, by themselves, algebraically. Instead, it means that all of the associated series happen to converge to the same element in the real numbers.

To return to a formal power series ring $A[[x]]$, what then do we mean by $1 + x + x^2 + \cdots$? There is an underlying topology on this ring with the feature that "$x$ is small", so that higher and higher powers become smaller and smaller, and the infinite sum above is actually a convergent sequence of partial sums in this topology. It is important to note that the series $1 + (1+x) + (1+x)^2 + \cdots$ does not converge in this topology and thus is not an element of $A[[x]]$ (though it is in $A[[1+x]]$), despite all of the partial sums being polynomials, hence elements of $A[[x]]$. The problem is that in the native topology on $A[[x]]$, $1+x$ is not small, so the size of the sum explodes.
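One way to see the convergent series behaving well: identities among power series can be checked coefficient-by-coefficient at every truncation order. A sketch of my own (the helper `mul_trunc` is hypothetical) verifying $(1-x)(1 + x + x^2 + \cdots) = 1$ up to any chosen degree:

```python
def mul_trunc(f, g, n):
    """Product of power series with coefficient lists f, g,
    truncated to degree < n (the standard convolution formula)."""
    return [sum(f[i] * g[k - i]
                for i in range(k + 1)
                if i < len(f) and k - i < len(g))
            for k in range(n)]

n = 8
one_minus_x = [1, -1]          # 1 - x
geometric = [1] * n            # 1 + x + x^2 + ... up to degree n-1
print(mul_trunc(one_minus_x, geometric, n))  # → [1, 0, 0, 0, 0, 0, 0, 0]
```

Each coefficient of the product is a finite computation, which is why the identity makes sense formally even before any topology is introduced.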

We can manipulate elements of a formal power series ring just like we can manipulate absolutely convergent series in the real numbers so they end up getting a nice algebraic feel. Even better, unlike with $\mathbb{R}$, there is a unique(!) way of expressing any element of $A[[x]]$ as a series so most of the algebraic carelessness we've been using really is justified as everything we need ends up converging and there can be no confusion regarding whether results depended on the choice of series we used to represent a given element.

On the other hand, we can observe that $A[x] \subseteq A[[x]]$, so we can view $A[x]$ as a topological space with the subspace topology inherited from $A[[x]]$. Under this topology, $A[x]$ is not a complete space, but in fact $A[[x]]$ is the analytic completion of $A[x]$ under this topology, just as $\mathbb{R}$ is the completion of $\mathbb{Q}$ under the standard topology.
As a word of caution, there are many topologies I can put on $A[x]$ that are like this (I get a different one for every prime polynomial), each with a different power-series-like completion.

I fear the above may have gotten complicated, but the takeaway message should be that in order to make sense of infinite sums, you have to have a notion of convergence; algebraically, all you are allowed to do is take finite sums. Thus even in $\mathbb{Z}[x_1,x_2,\ldots]$, elements will be finite sums of monomials (each a finite product of the $x_i$ with some coefficient in $\mathbb{Z}$).

-----------------

A caution regarding what Deveno said: it actually turns out that the union of subsets of $\mathbb{N}$ will always be $\mathbb{N}$, so long as each element of $\mathbb{N}$ is in at least one of the subsets of the union; the cardinality of the subsets (or the number of subsets) is irrelevant. This is because a union of things from a set is always a subset of the set.

Turgul,

Thank you for that fascinating post ... It was a real eye-opener and quite a learning experience for me ... ...

I am not sure I fully understand everything you said, however, and I am still reflecting on the contents ... Will look around to see what various texts say about these matters ... What reference do you suggest?

Peter

#### Turgul

##### Member
For yet another point of view: certainly one concrete way of viewing both $A[x]$ and $A[[x]]$ is as subsets of $A^\infty$ (really $A^\mathbb{N}$), ie infinite tuples $(a_0,a_1,a_2,\ldots)$ with $a_i \in A$, where $A[x]$ is taken to be such strings with only finitely many nonzero terms and $A[[x]]$ is taken to be all such strings. Then addition can be defined component-wise $(a_0,a_1,a_2,\ldots) + (b_0,b_1,b_2,\ldots) = (a_0+b_0,a_1+b_1,a_2+b_2,\ldots)$ and multiplication via the standard convolution formulas (exactly the formulas used to determine coefficients of products with "typical power series"). Then the additive identity is simply the element $(0,0,0,\ldots)$ and the multiplicative identity is $(1,0,0,\ldots)$. It is straightforward enough to check that these two sets are made into commutative rings under these operations. It is also easy to see these are isomorphic to our "usual" notions since the entries were defined to behave exactly like the coefficients of the "usual" rings.
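As a rough illustration of this tuple representation (my own sketch; the function names are hypothetical, and tuples are truncated to a fixed length rather than genuinely infinite):

```python
def ps_add(f, g):
    """Componentwise addition of two truncated coefficient tuples."""
    return tuple(a + b for a, b in zip(f, g))

def ps_mul(f, g):
    """Convolution product, truncated to the common length."""
    n = min(len(f), len(g))
    return tuple(sum(f[i] * g[k - i] for i in range(k + 1))
                 for k in range(n))

zero = (0, 0, 0, 0)           # additive identity (0,0,0,...)
one = (1, 0, 0, 0)            # multiplicative identity (1,0,0,...)
f = (2, 3, 0, 1)
print(ps_add(f, zero))         # → (2, 3, 0, 1)
print(ps_mul(f, one))          # → (2, 3, 0, 1)
```

Note that no symbol $x$ appears anywhere: the "variable" is just bookkeeping for positions in the tuple.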

This point of view completely removes the need to use $x$ at all, and there are no infinite sums in sight. There is no moral ambiguity here, no questions of "meaning;" these are just rings built out of big sets. This perspective also makes quite clear the isomorphism between $A[[x]]$ and the ring of functions $f: \mathbb{N} \rightarrow A$ under pointwise addition and convolution while giving $A[x]$ the nice description as the subring of compactly supported functions. In fact, from a formal logic point of view, this is likely the nicest way to view these rings. We can even use this point of view to legitimize the "naive" approach in most algebra books, as there can be no mistaking whether these rings are well defined objects (so long as you believe in set theory in general).

Unfortunately, like viewing all finite groups solely as subgroups of $S_n$, this point of view is quite cumbersome to think about exclusively (especially when you want to consider polynomial rings in several variables and you are looking at infinite tuples of infinite tuples). The informal use of $x$'s is simply more intuitive, making it easier to see what is going on.

----------------------

To make another jump into abstraction, the way that I prefer to view power series (and getting back to the ideas of my previous post a bit more) is through a tool called inverse limits. We will assume for now that we are happy with what a polynomial is (in particular we are happy with using $x$'s). We get a collection of homomorphisms $A[x]/x^n \rightarrow A[x]/x^{n-1}$ by truncating each polynomial to be degree $\leq n-1$. This gives us a string of maps

$\cdots \rightarrow A[x]/x^4 \rightarrow A[x]/x^3 \rightarrow A[x]/x^2 \rightarrow A[x]/x \cong A$

The inverse limit of these maps $\displaystyle \lim_\leftarrow A[x]/x^n$ should be some ring living "at the very left" of this sequence. But what are the things built out of $x$'s that I can truncate to arbitrary degree? Well, these are exactly the power series in $A[[x]]$. One nice feature of this description is that taking inverse limits makes sense with topological spaces too, so if I view each ring $A[x]/x^n$ as a topological space with the discrete topology, I get a natural topology on the inverse limit (creatively called the inverse limit topology) which is not discrete! So even though I built my ring out of discrete topological rings, the inverse limit has a more interesting topology (this is precisely the topology in which taking limits of partial sums $1 + x + x^2 + \cdots + x^m$ will converge as we would like).
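A small sketch of the compatibility condition defining the inverse limit (my own toy code; a power series is recorded by its images in $A[x]/x^n$, and each truncation must map onto the previous one):

```python
def truncate(coeffs, n):
    """Image of a power series in A[x]/x^n: its first n coefficients."""
    return tuple(coeffs[:n])

# The geometric series 1 + x + x^2 + ... as a compatible system of
# truncations; the map A[x]/x^n -> A[x]/x^(n-1) simply drops the
# top coefficient.
geometric = [1] * 10
system = [truncate(geometric, n) for n in range(1, 10)]
compatible = all(system[i + 1][:len(system[i])] == system[i]
                 for i in range(len(system) - 1))
print(compatible)  # → True
```

An element of the inverse limit is exactly such a compatible system, which is why it recovers $A[[x]]$.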

As a side note, doing this process with the rings $\mathbb{Z}/p^n\mathbb{Z}$ instead of $A[x]/x^n$ yields the $p$-adic integers, often denoted $\mathbb{Z}_p$ (not to be confused with the integers modulo $p$). Infinite Galois groups for infinite field extensions $K/F$ can be viewed as inverse limits of the Galois groups of the finite Galois subextensions of $K/F$ (the proof of this is closely tied to the underlying idea starting this thread in the first place); the inverse limit topology in this case has the special name of the Krull topology and it turns out that it is precisely the closed subgroups in this topology which correspond to subfields of $K$ (as opposed to all subgroups, as in the finite case), so algebraic data really is enclosed in the topology.
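A quick numerical illustration of the $p$-adic construction (my own sketch): the element $-1 \in \mathbb{Z}_5$ is the compatible system of residues $-1 \bmod 5^n$:

```python
p = 5
# -1 as a p-adic integer: the system of residues -1 mod p^n for n >= 1.
residues = [(-1) % p**n for n in range(1, 8)]
print(residues)  # [4, 24, 124, 624, ...]

# Compatibility: reducing the residue mod p^(n+1) down to mod p^n
# recovers the previous residue, just as with power series truncations.
compatible = all(residues[n] % p**n == residues[n - 1]
                 for n in range(1, len(residues)))
print(compatible)  # → True
```

So $-1$ has the $5$-adic "digit expansion" $4 + 4\cdot 5 + 4\cdot 5^2 + \cdots$, an infinite sum that converges in the inverse limit topology even though it visibly diverges in $\mathbb{R}$.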

----------------------

The question of when does a formal power series represent a "real" function is a deep one. Certainly you can evaluate any power series in $A[[x]]$ at 0. But if you want your power series $f(x)$ to be a function, this is tied to wanting an "actual function" $g(t)$ with $g(0)$ equal to the constant term of $f(x)$, $g'(0)$ equal to the coefficient of the linear term of $f(x)$, etc. Significant care must be taken to make sense of the domain of such a function as well as with what one means by derivatives of $g(t)$ if $A$ is not a ring like $\mathbb{R}$ or $\mathbb{C}$. In some sense, this is a central question in deformation theory.

----------------------

Unfortunately, I know of no book or reference which treats this material in an elementary or comprehensive fashion. My perspective is one gained after several years of studying many parts of mathematics. Surely all of these ideas are treated in some level of detail in any book on commutative algebra, but these books all assume familiarity with algebra at the level of Dummit and Foote's book, and most of these books would expect you to develop many of these ideas on your own, anyway. You could surely do worse than to work through Sharp's book or do the exercises pertaining to power series in D&F. It may not be readily apparent why it is relevant, but you might also read the section in D&F on discrete valuation rings, especially the stuff about $p$-adic numbers.

It should be apparent that there is much to be said about formal power series. While my discussion has been pretty informal, hopefully it has helped paint a multifaceted picture (if rather rough around the edges) of how they might be viewed.


#### Peter

##### Well-known member
MHB Site Helper
Thanks Turgul,

Now working carefully through this really interesting post!

Peter

#### Peter

##### Well-known member
MHB Site Helper
Hi Turgul,

You write:

"For yet another point of view: certainly one concrete way of viewing both [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Math]x[/FONT][FONT=MathJax_Main]][/FONT] and [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Math]x[/FONT][FONT=MathJax_Main]][/FONT][FONT=MathJax_Main]][/FONT] is as subsets of [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main]∞[/FONT] (really [FONT=MathJax_Math]A[/FONT][FONT=MathJax_AMS]N[/FONT]), ie infinite tuples [FONT=MathJax_Main]([/FONT][FONT=MathJax_Math]a[/FONT][FONT=MathJax_Main]0[/FONT][FONT=MathJax_Main],[/FONT][FONT=MathJax_Math]a[/FONT][FONT=MathJax_Main]1[/FONT][FONT=MathJax_Main],[/FONT][FONT=MathJax_Math]a[/FONT][FONT=MathJax_Main]2[/FONT][FONT=MathJax_Main],[/FONT][FONT=MathJax_Main]…[/FONT][FONT=MathJax_Main])[/FONT] with [FONT=MathJax_Math]a[/FONT][FONT=MathJax_Math]i[/FONT][FONT=MathJax_Main]∈[/FONT][FONT=MathJax_Math]A[/FONT], where [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Math]x[/FONT][FONT=MathJax_Main]][/FONT] is taken to be such strings with only finltely many nonzero terms and [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Math]x[/FONT][FONT=MathJax_Main]][/FONT][FONT=MathJax_Main]][/FONT] is taken to be all such strings... "

Yes, I am aware of this approach ... Joseph Rotman adopts the approach in his book Advanced Modern Algebra ...

Rotman and a number of other authors do not mention your point regarding convergence of power series but they do use the term formal power series. I take it that they are ignoring the finer points of the existence of elements of the ring of power series and simply manipulating the series as formal symbols ...

Mind you, Rotman soon defaults to the usual notation ... but the point regarding the nature of the elements of the power series and polynomial rings has been made.

Another interesting theorem/relationship you point to, which I had no idea about (but it looks a really neat and interesting result), is the following:

"his perspective also makes quite clear the isomorphism between [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Math]x[/FONT][FONT=MathJax_Main]][/FONT][FONT=MathJax_Main]][/FONT] and the ring of functions [FONT=MathJax_Math]f[/FONT][FONT=MathJax_Main]:[/FONT][FONT=MathJax_AMS]N[/FONT][FONT=MathJax_Main]→[/FONT][FONT=MathJax_Math]A[/FONT] under pointwise addition and convolution while giving [FONT=MathJax_Math]A[/FONT][FONT=MathJax_Main][[/FONT][FONT=MathJax_Math]x[/FONT][FONT=MathJax_Main]][/FONT] the nice description as the subring of compactly supported functions. "

Peter


#### Peter

##### Well-known member
MHB Site Helper
For yet another point of view: certainly one concrete way of viewing both $A[x]$ and $A[[x]]$ is as subsets of $A^\infty$ (really $A^\mathbb{N}$), ie infinite tuples $(a_0,a_1,a_2,\ldots)$ with $a_i \in A$, where $A[x]$ is taken to be such strings with only finltely many nonzero terms and $A[[x]]$ is taken to be all such strings. Then addition can be defined component-wise $(a_0,a_1,a_2,\ldots) + (b_0,b_1,b_2,\ldots) = (a_0+b_0,a_1+b_1,a_2+b_2,\ldots)$ and multiplication via the standard convolution formulas (exactly the formulas used to determine coefficients of products with "typical power series"). Then the additive identity is simiply the element $(0,0,0,\ldots)$ and the multiplicative identity is $(1,0,0,\ldots)$. It is straightforward enough to check that these two sets are made into commutative rings under these operations. It is also easy to see these are isomorphic to our "usual" notions since the entries were defined to behave exactly like the coefficients of the "usual" rings.

This point of view completely removes the need to use $x$ at all, and there are no infinite sums in sight. There is no moral ambiguity here, no questions of "meaning"; these are just rings built out of big sets. This perspective also makes quite clear the isomorphism between $A[[x]]$ and the ring of functions $f: \mathbb{N} \rightarrow A$ under pointwise addition and convolution, while giving $A[x]$ the nice description as the subring of compactly supported functions. In fact, from a formal logic point of view, this is likely the nicest way to view these rings. We can even use this point of view to legitimize the "naive" approach of most algebra books, as there can be no mistaking whether these rings are well-defined objects (so long as you believe in set theory in general).

Unfortunately, like viewing all finite groups solely as subgroups of $S_n$, this point of view is quite cumbersome to think about exclusively (especially when you want to consider polynomial rings in several variables and you are looking at infinite tuples of infinite tuples). The informal use of $x$'s is simply more intuitive, making it easier to see what is going on.

----------------------

To make another jump into abstraction, the way that I prefer to view power series (and getting back to the ideas of my previous post a bit more) is through a tool called inverse limits. We will assume for now that we are happy with what a polynomial is (in particular we are happy with using $x$'s). We get a collection of homomorphisms $A[x]/x^n \rightarrow A[x]/x^{n-1}$ by truncating each polynomial to be degree $\leq n-1$. This gives us a string of maps

$\cdots \rightarrow A[x]/x^4 \rightarrow A[x]/x^3 \rightarrow A[x]/x^2 \rightarrow A[x]/x \cong A$

The inverse limit of these maps $\displaystyle \lim_\leftarrow A[x]/x^n$ should be some ring living "at the very left" of this sequence. But what are the things built out of $x$'s that I can truncate to arbitrary degree? Well, these are exactly the power series in $A[[x]]$. One nice feature of this description is that taking inverse limits makes sense with topological spaces too, so if I view each ring $A[x]/x^n$ as a topological space with the discrete topology, I get a natural topology on the inverse limit (creatively called the inverse limit topology) which is not discrete! So even though I built my ring out of discrete topological rings, the inverse limit has a more interesting topology (this is precisely the topology in which taking limits of partial sums $1 + x + x^2 + \cdots + x^m$ will converge as we would like).
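One way to see what an element of the inverse limit is in practice: a family of truncated polynomials, one for each $n$, compatible under the truncation maps. Here is a small sketch (my own, with $A = \mathbb{Z}$ and made-up names) using the power series $1/(1-x) = 1 + x + x^2 + \cdots$:

```python
def geom(n):
    """Representative of 1 + x + x^2 + ... in A[x]/x^n, as a length-n tuple."""
    return (1,) * n

def truncate(poly, n):
    """The truncation map A[x]/x^m -> A[x]/x^n (n <= m): drop degrees >= n."""
    return poly[:n]

def mul_mod(a, b, n):
    """Convolution product in A[x]/x^n: discard all terms of degree >= n."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                c[i + j] += ai * bj
    return tuple(c)

# Compatibility under the maps in the sequence: each level truncates to the next.
for n in range(2, 6):
    assert truncate(geom(n), n - 1) == geom(n - 1)

# And at every finite level, (1 - x)(1 + x + ... + x^(n-1)) = 1 in A[x]/x^n.
assert mul_mod((1, -1), geom(5), 5) == (1, 0, 0, 0, 0)
```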

As a side note, doing this process with the rings $\mathbb{Z}/p^n\mathbb{Z}$ instead of $A[x]/x^n$ yields the $p$-adic integers, often denoted $\mathbb{Z}_p$ (not to be confused with the integers modulo $p$). The Galois groups of infinite field extensions $K/F$ can be viewed as inverse limits of the finite Galois groups of finite subextensions over $F$ (the proof of this is closely tied to the underlying idea starting this thread in the first place); the inverse limit topology in this case has the special name of the Krull topology, and it turns out that it is precisely the closed subgroups in this topology which correspond to subfields of $K$ (as opposed to all subgroups, as in the finite case), so algebraic data really is enclosed in the topology.
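The same game with $\mathbb{Z}/p^n\mathbb{Z}$ can be made concrete too. Here is a sketch (my own illustration, with a made-up function name) of an element of $\mathbb{Z}_7$: a square root of $2$, produced as a compatible family of residues mod $7^n$ by Hensel/Newton lifting from the root $3$ of $x^2 \equiv 2 \pmod 7$:

```python
def sqrt2_mod_7pow(n):
    """Lift the root x = 3 of x^2 = 2 (mod 7) to a root mod 7^n."""
    x, mod = 3, 7
    for _ in range(n - 1):
        mod *= 7
        # Newton step, valid since 2x is a unit mod 7: x <- x - (x^2 - 2)/(2x).
        # (pow with exponent -1 computes a modular inverse; Python 3.8+.)
        x = (x - (x * x - 2) * pow(2 * x, -1, mod)) % mod
    return x

roots = [sqrt2_mod_7pow(n) for n in range(1, 6)]
# Each residue squares to 2 modulo 7^n ...
assert all(r * r % 7 ** n == 2 for n, r in enumerate(roots, 1))
# ... and the family is compatible under Z/7^n -> Z/7^(n-1),
# so it defines a single element of the inverse limit Z_7.
assert all(roots[n] % 7 ** n == roots[n - 1] for n in range(1, 5))
```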

----------------------

The question of when does a formal power series represent a "real" function is a deep one. Certainly you can evaluate any power series in $A[[x]]$ at 0. But if you want your power series $f(x)$ to be a function, this is tied to wanting an "actual function" $g(t)$ with $g(0)$ equal to the constant term of $f(x)$, $g'(0)$ equal to the coefficient of the linear term of $f(x)$, etc. Significant care must be taken to make sense of the domain of such a function as well as with what one means by derivatives of $g(t)$ if $A$ is not a ring like $\mathbb{R}$ or $\mathbb{C}$. In some sense, this is a central question in deformation theory.
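For $A = \mathbb{R}$ one can at least test this numerically: truncations of a formal series define honest functions near $0$ whose value recovers the constant term. A quick sketch (my own, with made-up names) using the series for $\exp$, whose $n$-th coefficient is $1/n!$:

```python
from math import exp, factorial

# Coefficients of the formal exponential series: a_n = 1/n!.
coeffs = [1 / factorial(n) for n in range(15)]

def g(t, coeffs):
    """Evaluate the truncated series sum of a_n * t^n via Horner's scheme."""
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * t + a
    return acc

# g(0) recovers the constant term of the series ...
assert g(0.0, coeffs) == coeffs[0]
# ... and near 0 the truncation agrees closely with the "actual function" exp.
assert abs(g(0.5, coeffs) - exp(0.5)) < 1e-12
```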

----------------------

Unfortunately, I know of no book or reference which treats this material in an elementary or comprehensive fashion. My perspective is one gained after several years of studying many parts of mathematics. Surely all of these ideas are treated in some level of detail in any book on commutative algebra, but those books all assume familiarity with algebra at the level of Dummit and Foote's book, and most of them would expect you to develop many of these ideas on your own anyway. You could surely do worse than to work through Sharp's book or do the exercises pertaining to power series in D&F. It may not be readily apparent why it is relevant, but you might also read the section in D&F on discrete valuation rings, especially the material on $p$-adic numbers.

It should be apparent that there is much to be said about formal power series. While my discussion has been pretty informal, hopefully it has helped paint a multifaceted picture (if rather rough around the edges) of how they might be viewed.

#### Peter

##### Well-known member
MHB Site Helper
Hi Turgul,

You write:

"To make another jump into abstraction, the way that I prefer to view power series (and getting back to the ideas of my previous post a bit more) is through a tool called inverse limits."

The idea of inverse limits looks really interesting ... I tried to skim through some textbooks looking for material and did not find much, but Dummit and Foote have two interesting exercises (Exercises 10 and 11, Section 7.6, page 269) which develop, through multiple parts, the ideas you mention ...

Mind you, talking generally now, the value of posts like yours is that they enhance the 'big picture' of algebra and help to draw threads together ... even threads one has not yet mastered ... so they are really helpful to current and future learning ... few texts help in this way ...

Thanks again for the post!

Peter