winter85
Good day,
I am reading Stewart's Galois Theory (3rd ed). I'm up to chapter 8 where he starts tackling the issue of solubility by radicals.
The author considers independent complex variables [tex]t_1,t_2,...,t_n[/tex] and forms the polynomial [tex]F(t) = (t-t_1)(t-t_2)\cdots(t-t_n)[/tex], which he calls the general polynomial of degree n. He says on p. 95:
The reason for this name is that this polynomial has a universal property. If we can solve F(t) = 0 by radicals, then we can solve any specific complex polynomial equation of degree n by radicals. [...] The converse, however, is not obvious. We might be able to solve every specific complex polynomial equation of degree n by radicals, but using a different formula each time. Then we would not be able to deduce a radical expression to solve F(t) = 0. So the adjective "general" is somewhat misleading; "generic" would be better, and is sometimes used.
He lets [tex]L = \mathbb{C}(t_1,t_2,...,t_n)[/tex] be the field of all rational expressions in [tex]t_1,...,t_n[/tex] and defines K to be the subfield of L fixed by all permutations of the roots. He remarks that F(t) factorizes completely in L. He then defines solubility by "Ruffini radicals" as follows (he does mention that it's not a standard definition): [tex]F(t)=0[/tex] is said to be solvable by Ruffini radicals if there exists a finite tower of subfields [tex]K = K_0 \subset K_1 \subset ... \subset K_r = L[/tex] where for [tex]j = 0,...,r-1[/tex] we have:
[tex]K_{j+1} = K_j(\alpha_j)[/tex] and [tex]\alpha_j^{n_j} \in K_j[/tex] for some integer [tex]n_j \geq 2[/tex]. The rest of the chapter is devoted to proving that no such tower exists when [tex]n \geq 5[/tex].
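For concreteness (my own worked example, not from the book), here is the one-step tower for n = 2, which just recovers the quadratic formula:

```latex
% n = 2: K = \mathbb{C}(e_1, e_2) with e_1 = t_1 + t_2, e_2 = t_1 t_2.
% Adjoin a single Ruffini radical \alpha_0 = t_1 - t_2:
\alpha_0^2 = (t_1 + t_2)^2 - 4\, t_1 t_2 = e_1^2 - 4 e_2 \in K,
\qquad K_1 = K(\alpha_0) = L,
% since both roots become rational in \alpha_0 over K:
\quad t_1 = \tfrac{1}{2}(e_1 + \alpha_0),
\quad t_2 = \tfrac{1}{2}(e_1 - \alpha_0).
```

Note that here [tex]\alpha_0[/tex] is itself a polynomial in the roots, which is exactly the situation Ruffini assumed in general.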
My question is this: it seems to me that he does not answer the point raised in the paragraph I quoted above. So what is the EXACT statement of what Ruffini and Abel proved? Did they only prove that the generic quintic is not soluble by radicals, i.e., that there is no single formula that works for all quintics? Or did they prove that there are specific quintics over [tex]\mathbb{Q}[/tex] whose roots are not contained in any radical extension of [tex]\mathbb{Q}[/tex]?
I know that the second statement is true, and that it implies the first. But I want to know what Abel and Ruffini themselves proved, and whether the non-solubility of the generic quintic implies that there are quintics over [tex]\mathbb{Q}[/tex] whose roots do not lie in any radical extension of [tex]\mathbb{Q}[/tex].
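As an aside (not from Stewart, and stated in modern language he only develops later): the reason the tower fails for [tex]n \geq 5[/tex] is that the Galois group of L over K is the symmetric group [tex]S_n[/tex], and a radical tower from K to L exists precisely when [tex]S_n[/tex] is a solvable group. This is easy to sanity-check with SymPy, assuming it is installed:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

# A radical tower K = K_0 ⊂ ... ⊂ K_r = L exists iff Gal(L/K) ≅ S_n
# is a solvable group. S_n is solvable exactly for n <= 4, which is
# why formulas exist through the quartic and stop at the quintic.
for n in range(2, 7):
    print(n, SymmetricGroup(n).is_solvable)
```

This prints True for n = 2, 3, 4 and False for n = 5, 6, matching Abel–Ruffini.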
A second point: the book points out the gap in Ruffini's proof that Abel filled with the Theorem on Natural Irrationalities. The book says:
Ruffini tacitly assumed that if F(t) = 0 is soluble by radicals, then those radicals
are all expressible as rational functions of the roots [tex]t_1,..., t_n[/tex]. Indeed, this was the situation studied by his predecessor Lagrange in his deep but inconclusive research on the quintic. So Lagrange and Ruffini considered only solubility by Ruffini radicals. However, this is a strong assumption. It is entirely conceivable that a solution by radicals might exist for which the [tex]\alpha_j[/tex] constructed along the way do not lie in L, but in some extension of L.[...] However, the more we think about this possibility, the less likely it seems. Abel thought about it very hard and proved that if F(t) = 0 is soluble by radicals, then those radicals are all expressible as rational functions of the roots — they are Ruffini radicals after all. This step, historically called Abel's theorem, is more commonly referred to as the Theorem on Natural Irrationalities.
I don't understand what is meant by that, or how the definition of "solvable by Ruffini radicals" differs from the standard definition of "solvable by radicals" found in other algebra books, such as Fraleigh's and Jacobson's. If a polynomial in [tex]\mathbb{Q}[x][/tex] is solvable by radicals, how can its splitting field NOT be a radical extension? What exactly is the gap in Ruffini's proof, and how did Abel fill it?
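For what it's worth, one standard example bearing on my last question (my addition, not from Stewart): "solvable by radicals" only requires the roots to lie in SOME radical extension containing the splitting field, and the splitting field itself need not be radical. The casus irreducibilis of the cubic shows the two can genuinely differ:

```latex
% f(x) = x^3 - 3x + 1 is irreducible over \mathbb{Q}
% (no rational roots), with discriminant 81 > 0, so it has
% three real roots and its splitting field M satisfies
% M \subset \mathbb{R}, \qquad [M : \mathbb{Q}] = 3.
% If M were radical, M = \mathbb{Q}(\beta) with \beta^3 \in \mathbb{Q},
% then normality of M would force a primitive cube root of unity
% \omega \in M — impossible, since M is real. So M is not a radical
% extension of \mathbb{Q}, yet the roots do lie in a larger radical
% extension built via Cardano's formula using non-real cube roots.
```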
If anyone has Stewart's book or is familiar with this issue, I'd appreciate any help clarifying these two points. Thank you.