Question regarding root of Bring quintic not expressible with radicals

In summary, "solvable by radicals" means that a polynomial equation can be solved through a tower of normal field extensions, each obtained by adjoining a root. This corresponds to the notion of a solvable group, i.e. a group that can be broken down into a chain of normal subgroups with abelian quotients. For polynomials of degree 5 or higher, however, the group of automorphisms of the splitting field can be the full symmetric group, which is not solvable. Therefore not all polynomial equations can be solved by radicals.
  • #1
Buzz Bloom
Gold Member
2,519
467
I became curious about the following problem from a discussion in another thread:
After a bit of study I concluded that the assertion below regarding some specific real number r has the meaning which follows it.
Assertion: "r is not expressible in terms of radicals."
Meaning: r is not expressible in terms of a finite application of a collection of operators (+, -, ×, and /, together with any of the n-th roots, where n is a positive integer), where these operators are applied to integer operands, or to expressions of the same kind.​
If this meaning is incorrect, I hope someone will correct it.

The article
contains the following text:
An example of a quintic whose roots cannot be expressed in terms of radicals is ##x^5 - x + 1 = 0##.​
This simple example of a Bring-Jerrard quintic equation has one real root, with an approximate value of
r = -1.1673039783.​
This can be seen in the attached PNG file.
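For anyone who wants to check the quoted value without the attachment, here is a small Python sketch (not from the thread, just an illustrative bisection on the sign change of the polynomial):

```python
def f(x):
    # the Bring-Jerrard quintic x^5 - x + 1
    return x**5 - x + 1

# f(-2) = -29 < 0 and f(0) = 1 > 0, so the real root lies in [-2, 0];
# bisect until the bracket is tiny
lo, hi = -2.0, 0.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)

print((lo + hi) / 2)  # ≈ -1.1673039783
```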

My QUESTION IS:
What would a proof that "r is not expressible in terms of radicals" look like?​
I have no idea whatever how one would go about proving the non-radical nature of just this one example.

Any help would be much appreciated.
 

Attachments

  • Bring-Jerrard.PNG.png (53 KB)
  • #2
Buzz Bloom said:
I became curious about the following problem from a discussion in another thread:
After a bit of study I concluded that the assertion below regarding some specific real number r has the meaning which follows it.
Assertion: "r is not expressible in terms of radicals."
Meaning: r is not expressible in terms of a finite application of a collection of operators (+, -, ×, and /, together with any of the n-th roots, where n is a positive integer), where these operators are applied to integer operands, or to expressions of the same kind.​
If this meaning is incorrect, I hope someone will correct it.
It is correct. The starting point isn't an arbitrary real number ##r## but a solution of ##p(r)=0## for a polynomial ##p(x) \in \mathbb{Q}[x]##, i.e. rational numbers are allowed. Polynomials of degree ##1## are trivial, degree ##2## is what we learn at school (Vieta), and the formulas for degrees ##3## and ##4## are a bit tricky, if not to say rather unpleasant (Cardano, Ferrari). And of course one has to be a bit more careful with a formal definition of "solvable by radicals", since expressions like ##\sqrt[6]{1}## are either useless or ambiguous. The first degree at which it cannot always be done is ##5##. Of course this isn't true for all quintics, because some are solvable by radicals: we only need to multiply out ##(x-a_1)(x-a_2)(x-a_3)(x-a_4)(x-a_5)## with radical expressions ##a_i## to obtain one whose roots are radicals by construction. The point is that there is no method which works in all cases.
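To make the degree-##3## case concrete, here is a minimal Python sketch (my own illustration, assuming a depressed cubic ##x^3+px+q## whose discriminant term ##q^2/4+p^3/27## is non-negative, so there is one real root; it is not Cardano's full method):

```python
import math

def cbrt(t):
    # real cube root that also handles negative arguments
    return math.copysign(abs(t) ** (1 / 3), t)

def cardano_real(p, q):
    # real root of x^3 + p x + q = 0, assuming q^2/4 + p^3/27 >= 0:
    # x = cbrt(-q/2 + sqrt(q^2/4 + p^3/27)) + cbrt(-q/2 - sqrt(...))
    d = math.sqrt(q * q / 4 + p ** 3 / 27)
    return cbrt(-q / 2 + d) + cbrt(-q / 2 - d)

# x^3 + x - 2 = 0 has the root x = 1, built here purely from radicals
root = cardano_real(1, -2)
print(root)  # ≈ 1.0
```

The same call with ##p=6, q=-20## reproduces the classical example ##x^3+6x-20=0## with root ##2##.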
The article
contains the following text:
An example of a quintic whose roots cannot be expressed in terms of radicals is ##x^5 - x + 1 = 0##.​
This simple example of a Bring-Jerrard quintic equation has one real root, with an approximate value of
r = -1.1673039783.​
This can be seen in the attached PNG file.

My QUESTION IS:
What would a proof that "r is not expressible in terms of radicals" look like?​
I have no idea whatever how one would go about proving the non-radical nature of just this one example.

Any help would be much appreciated.
One way to prove it is:
Compute the splitting field of the given polynomial ##p(x)## with the help of complex numbers and formal roots. Then determine the (finite) group of automorphisms of this field which leave all rational numbers fixed. This is a subgroup of ##\mathcal{Sym}(\deg p)##. Finally, show that this group isn't solvable.

I'm not saying that it is the fastest method, because ##\mathcal{Sym}(5)## already has ##120## elements; it is only a way of doing it in principle. It also explains why ##5## is the lowest degree where we cannot always solve ##p(x)=0## anymore: ##A_5 \subseteq \mathcal{Sym}(5)## is the smallest non-solvable group.
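For ##\mathcal{Sym}(5)## this can even be verified by brute force: the derived series ##G \supseteq [G,G] \supseteq \ldots## of a solvable group must shrink down to the trivial group, but for ##\mathcal{Sym}(5)## it gets stuck at ##A_5##. A small Python sketch (my own illustration, not part of the argument above):

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def derived_subgroup(G):
    # closure of the set of commutators a b a^-1 b^-1 under composition
    H = {compose(compose(a, b), compose(inverse(a), inverse(b)))
         for a in G for b in G}
    while True:
        new = {compose(a, b) for a in H for b in H} - H
        if not new:
            return H
        H |= new

S5 = set(permutations(range(5)))
D1 = derived_subgroup(S5)   # this is A_5
D2 = derived_subgroup(D1)   # A_5 again: the series stops shrinking
print(len(S5), len(D1), len(D2))  # 120 60 60
```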

Given a certain polynomial, it might be faster to show that the group isn't solvable indirectly, by assuming solvability and deducing a contradiction with some known theorems.
 
  • #3
fresh_42 said:
One way to prove it is:
Compute the splitting field of the given polynomial ##p(x)## with the help of complex numbers and formal roots. Then determine the (finite) group of automorphisms of this field which leave all rational numbers fixed. This is a subgroup of ##\mathcal{Sym}(\deg p)##. Finally, show that this group isn't solvable.

That's a proof if any of those concepts can be related to the definition that Buzz Bloom gave for "solvable by radicals". It would be a service to the world if someone would offer a simple explanation of the connection!
 
  • #4
Stephen Tashi said:
That's a proof if any of those concepts can be related to the definition that Buzz Bloom gave for "solvable by radicals". It would be a service to the world if someone would offer a simple explanation of the connection!
I'm not sure I understood this correctly.

The first part is to determine what "solvable by radicals" means. Basically it's a successive adjunction of primitive roots ##\sqrt[p]{r}##. Thus one gets a series of normal field extensions which corresponds to a series of normal subgroups of ##\mathcal{Sym}(\deg f(x))##, which is the definition of a solvable group. The correspondence is proven in Galois theory. Now ##\mathcal{Sym}(n)\, , \,n \geq 5## isn't solvable anymore, so one only has to find an equation ##f(x)=0## which actually has the entire group as symmetry group of its roots (Abel), i.e. all permutations define an automorphism of the splitting field.
 
  • #5
fresh_42 said:
I'm not sure I understood this correctly.

The first part is to determine what "solvable by radicals" means. Basically it's a successive adjunction of primitive roots ##\sqrt[p]{r}##.

That is the part where the world needs a good explanation!

I don't think it's hard to explain what it means to adjoin something to a field. But how do we explain the connection between "solvable by radicals" as given above (i.e. using a restricted repertoire of operations on the coefficients of an equation) and the process of adjoining roots to a field?
 
  • #6
If we solve ##f(x)=0## by radicals, it means we look for an expression ##x=r_0 + \sqrt[p_1]{r_1 + \sqrt[p_2]{r_2}+ \ldots}## Since we don't have, e.g., ##\sqrt{2}## in ##\mathbb{Q}##, we have to adjoin it in order to solve ##x^2-2=0##. One also wants to have all choices of ##\sqrt[p]{r}## in the field, ideally also as solutions, and that the notation always means the same choice if written this way. In the end, radical is just the Latin word for root. But you might be right. I've looked up how van der Waerden explained it, and he, too, used a lot of text, examples and terms like "or similar". I start to understand what you might mean. Funnier still is how to connect compass and straightedge constructions to square roots.
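To make the adjunction of ##\sqrt{2}## concrete: elements of ##\mathbb{Q}(\sqrt{2})## can be modelled as pairs ##a+b\sqrt{2}## with rational ##a,b##. A minimal Python sketch (the class name `QSqrt2` is my own invention, not standard notation):

```python
from fractions import Fraction

class QSqrt2:
    """An element a + b*sqrt(2) of Q(sqrt(2)), with a, b rational."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return QSqrt2(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

root2 = QSqrt2(0, 1)                # the adjoined radical √2
assert root2 * root2 == QSqrt2(2)   # so x^2 - 2 = 0 becomes solvable here
```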
 
  • #7
It's easier to see how things go in the reverse direction - how the coefficients of an equation are created from the roots using a "limited repertoire" of operations.

For example the equation ##(x - r_1)(x - r_1)(x - r_2) = 0## gives ##(1)x^3 + (-2r_1 - r_2) x^2 + (r_1^2 + 2r_1r_2)x + (-r_1^2r_2) = 0 ##

So a natural question is whether we can "undo" the things created by limited repertoire of operations by using another (possibly different) repertoire of operations to recover the roots.

Treatments of Lagrange/ Galois theory that make a concrete connection to polynomial equations always explain that the coefficients of a polynomial equation are symmetric functions of the roots.

Some connection to group theory is made from the fact that the symmetric functions are invariant under "permutation of the roots". I don't find it easy to explain exactly what that means!

"Symmetric functions" result from an equation with distinct "symbolic" roots such as ## (x-a)(x-b)(x-c) = 0 ## which says ##x^3 + (-a-b-c)x^2 + (ab + ac + cb)x - abc = 0 ##.

For example, the coefficient of ##x^2##, the function defined by ##f(a,b,c) = -a-b-c##, is invariant under permutation of ##a,b,c## in the sense that for each triple of real numbers ##f(a,b,c) = f(a,c,b) = f(b,a,c) = ## etc.

By contrast, in the first example, where two equal roots ##r_1## are assumed, the coefficient ##g(r_1,r_2) = -2r_1 - r_2## is not invariant under permutations of ##r_1## and ##r_2##.
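The invariance can be checked mechanically; a quick Python sketch with arbitrary sample roots (the values are my own, purely illustrative):

```python
from itertools import permutations

def coeffs(a, b, c):
    # coefficients (c2, c1, c0) of x^3 + c2 x^2 + c1 x + c0 = (x-a)(x-b)(x-c)
    return (-(a + b + c), a * b + a * c + b * c, -a * b * c)

roots = (2, -3, 5)
# the elementary symmetric functions are invariant under every permutation
assert all(coeffs(*p) == coeffs(*roots) for p in permutations(roots))

# whereas a non-symmetric function of the roots is not invariant
g = lambda r1, r2: -2 * r1 - r2
assert g(2, -3) != g(-3, 2)
```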
 
Last edited:
  • #8
Stephen Tashi said:
Some connection to group theory is made from the fact that the symmetric functions are invariant under "permutation of the roots". I don't find it easy to explain exactly what that means!
What do you think about the following alternative (again van der Waerden, which I try to translate as close as I can)?

Given the situation ##\mathbb{K} \subseteq \mathbb{K}(\vartheta) \subseteq \mathbb{L}## and ##f(x) \in \mathbb{K}[x]## irreducible with ##f(\vartheta)=0## and ##\mathbb{L}## the splitting field of ##f(x)##.

The relative (fixing ##\mathbb{K}##) isomorphisms of ##\mathbb{K}(\vartheta)## can be indicated by their transformations of ##\vartheta## into its conjugates ##\vartheta_1 , \ldots , \vartheta_n## in ##\mathbb{L}##. Each element ##\varphi(\vartheta)=\sum a_\lambda \vartheta^\lambda## is thus transformed into ##\varphi(\vartheta_\nu) = \sum a_\lambda \vartheta_\nu^\lambda## which allows us to speak of substitutions ##\vartheta \rightarrow \vartheta_\nu## instead.

He also emphasizes that ##\vartheta## and ##\vartheta_\nu## are only auxiliaries to handle the isomorphisms, which by their nature are independent of the choice of ##\vartheta##. He proceeds by talking of substitutions instead of permutations, which I always regarded as simply a bit old-fashioned.
 
  • #9
fresh_42 said:
What do you think about the following alternative (again van der Waerden, which I try to translate as close as I can)?

I don't (yet) see it as an "alternative" - in the sense of something having the same direct relation to the solution of equations as the example of how permuting roots affects (or doesn't affect) the coefficients of a polynomial equation.

Ignoring that problem for a moment:

Given the situation ##\mathbb{K} \subseteq \mathbb{K}(\vartheta) \subseteq \mathbb{L}## and ##f(x) \in \mathbb{K}[x]## irreducible with ##f(\vartheta)=0## and ##\mathbb{L}## the splitting field of ##f(x)##.

I have (the usual!) comprehension problem with algebraic objects like ##\mathbb{K}(\vartheta)##. If we think of ##\vartheta## as "a symbol" or "an indeterminate" then we get a different structure than if we think of ##\vartheta## as something that can have more properties than a symbol. For example, if I "adjoin" the symbol ##w## to the field of rational numbers then "##1 + 3w^2##" is an element of the expanded structure and "##7##" is also an element of the expanded structure and ##7 \neq 1 + 3w^2## - i.e. there is no doubt that "##7##" and "##1 + 3w^2##" are distinct elements. However, if I "adjoin" ##\sqrt{2}## instead of the abstract symbol ##w## then we have ##7 = 1 + 3\sqrt{2}^2## and these are not distinct elements.

Is there a notational convention that distinguishes between these types of "adjoining"? - perhaps "##\mathbb{K}[\vartheta]##" vs "##\mathbb{K}(\vartheta)##" ?
The relative (fixing ##\mathbb{K}##) isomorphisms of ##\mathbb{K}(\vartheta)## can be indicated by their transformations of ##\vartheta## into its conjugates ##\vartheta_1 , \ldots , \vartheta_n## in ##\mathbb{L}##.

What would "conjugates" mean in this context? ( One guess is that a conjugate of ##\vartheta## would be ##\omega_j \vartheta## where ##\omega_j## is one of the "n-th roots of unity", which still leaves open the question of whether "##\vartheta##" is an "indeterminate" or something with more properties than an "indeterminate".)

Each element ##\varphi(\vartheta)=\sum a_\lambda \vartheta^\lambda## is thus transformed into ##\varphi(\vartheta_\nu) = \sum a_\lambda \vartheta_\nu^\lambda## which allows us to speak of substitutions ##\vartheta \rightarrow \vartheta_\nu## instead.

He also emphasizes that ##\vartheta## and ##\vartheta_\nu## are only auxiliaries to handle the isomorphisms, which by their nature are independent of the choice of ##\vartheta##.
I think I understand that, but it's not easy to say it precisely. The set of (field) automorphisms of ##\mathbb{K}(\vartheta)## that, when restricted to ##\mathbb{K}## are automorphisms on ##\mathbb{K}## can be put in 1-to-1 correspondence with set of 1-to-1 mappings from the set of conjugates of ##\vartheta## to itself. Does that convey the idea?
 
  • #10
Just for short, I'm going to read it more carefully later.

##\mathbb{K}(\vartheta)## is the quotient field of the ring ##\mathbb{K}[\vartheta]##, whether ##\vartheta## fulfills an equation or not. If it doesn't, it's simply the field of all rational functions with coefficients in ##\mathbb{K}## in one variable ##x=\vartheta##.

If there is an equation, like ##f(x) = x^2-2## with ##f(w)=0## in the example, then ##\mathbb{K}(w) \cong \mathbb{K}[x] / \left( \mathbb{K}[x]\cdot f(x)\right) = \mathbb{K}[x]/(x^2-2)##, a quotient or factor ring. So in general, if we have the polynomials ##\mathbb{K}[x]##, then ##\mathbb{K}(\vartheta)## is always a field of the form ##\mathbb{K}[x] / \left(\mathbb{K}[x]\cdot f(x)\right)##. (If ##f(x)=0##, which corresponds to the case of a "symbolic" (better: transcendental) extension, we have to take the quotient field of this ring, because the zero ideal is no longer maximal; apart from that, the construction is given by this isomorphism of rings.)
 
  • #11
A remark on the difference between ##\mathbb{K}(\vartheta)## and ##\mathbb{K}[\vartheta]##.
##\mathbb{K}(\vartheta)## usually denotes the quotient field of the integral domain (ring) ##\mathbb{K}[\vartheta]##.
In the case of complex numbers this means ##\mathbb{C}=\mathbb{R}(i)## and ##\mathbb{R}[ i ] = \mathbb{R}[x]/(x^2+1) = \mathbb{R}(i)## in this case, because ##\frac{1}{i} = -i##. The reason for this coincidence is the maximality of ##(x^2+1)## as ideal in the ring ##\mathbb{R}[x]##. As mentioned above, this is no longer true for a transcendental extension and the zero ideal.
Stephen Tashi said:
What would "conjugates" mean in this context?
It means the same as in the complex number field: its fellow roots, as ##-i## is the other root to ##i## in ##x^2+1##.
Stephen Tashi said:
The set of (field) automorphisms of ##\mathbb{K}(\vartheta)## that, when restricted to ##\mathbb{K}## are automorphisms on ##\mathbb{K}## can be put in 1-to-1 correspondence with set of 1-to-1 mappings from the set of conjugates of ##\vartheta## to itself. Does that convey the idea?
I guess so. Although if elements of ##Aut_\mathbb{K}(\mathbb{L})## are restricted to ##\mathbb{K}##, they'll be the identity (by definition) - only this one automorphism. Since all coefficients are thus fixed, such a field automorphism has to leave ##f(x)\in \mathbb{K}[x]## unchanged. In the splitting field - which might be larger than ##\mathbb{K}(\vartheta)##, where ##f(x)## can be written as ##f(x)=c_0\cdot (x-\vartheta_1)\cdot \ldots \cdot (x-\vartheta_n) = \sum c_\nu(\vartheta_1, \ldots , \vartheta_n)x^{\nu}##, the only possibility is therefore to switch between the ##\vartheta_\nu \, : \,##
$$f(x) = \varphi(f(x))= c_0 \cdot (x-\varphi(\vartheta_1))\cdot \ldots \cdot (x-\varphi(\vartheta_n))=\sum c_\nu(\varphi(\vartheta_1), \ldots , \varphi(\vartheta_n))x^{\nu}$$
and the permutation drops out for free.
 
  • #12
fresh_42 said:
A remark on the difference between ##\mathbb{K}(\vartheta)## and ##\mathbb{K}[\vartheta]##.
...

Thank you for clarifying that notation!

However, we are digressing from the question in the original post unless we can explain the connection between solving equations and the abstract algebra of field extensions.

Perhaps we need something like the discussion that begins on page 41 of this PDF:
http://pages.uoregon.edu/koch/Galois.pdf , but I don't fully comprehend it yet.
 
  • #13
I find Koch's presentation pretty detailed and good, although he avoids giving a precise meaning to "solvable by radicals", simply by defining it (p.56) as what can be proven in algebraic terms. This appears a little bit like cheating, in the sense that it leaves open the connection between the algebraic formulation and the language-theoretical formulation (using an alphabet). Nevertheless, his chapter 9 is quite illuminating.
 
  • #14
fresh_42 said:
although he avoids giving a precise meaning to "solvable by radicals", simply by defining it (p.56) as what can be proven in algebraic terms. This appears a little bit like cheating,

I agree. It's cheating! It appears to be very difficult to make a connection between the definition of "solvable by radicals" in the sense of "solvable with a limited repertoire of constants and operations" and the definitions used in contemporary algebra.

The best attempt I've found online is an exposition of Abel's work: http://fermatslasttheorem.blogspot.com/2008_08_24_archive.html and (to me) the connection is still hazy.
 
  • #15
Stephen Tashi said:
The best attempt I've found online is an exposition of Abel's work
Yes, but I guess it is pretty much straightforward, so most (all?) authors avoid the translation. I mean "solvable by radicals" as a formal expression over an alphabet ##\{+,-,* , \frac{*}{*}, \sqrt[\text{*}\,]{*},a \in \mathbb{F}\}## is only an expression with field operations and adjunction of roots, i.e. every word over this alphabet is contained in some Galois extension of ##\mathbb{F}##. The other direction is clear by the definition of a field extension. So the only critical part is that our symbol for roots isn't unique and we have to choose exactly one to be meant (as mentioned by van der Waerden).
(I think Abel lived before formal languages became a matter of mathematics, but I'm not sure.)
 
  • #16
Hi @fresh_42:

I am unsure about what
fresh_42 said:
a∈F
means. I assume F means a field. Does the "F" font you use mean any specific field? What does the "a" represent? I am guessing it represents any member of the field F, so that
a∈F​
represents the syntax to be used to say that a particular expression indicated by something that substitutes for "a" in "a∈F" is a member of the field F. Is that correct?

Regards,
Buzz
 
  • #17
Yes. ##\mathbb{F}## was meant to be the (any) field over which the expression "solvable by radicals" is defined. It is usually abbreviated by a bold ##F## (field) or a ##K## (Körper) or, in analogy to ##\mathbb{Q}, \mathbb{R}, \mathbb{C}##, as ##\mathbb{F}## or ##\mathbb{K}##. I was talking about a possible definition of this expression as a word in a formal language. Therefore all elements of the field need to be part of the alphabet, as well as the arithmetic operators and the radicals = roots.
 
  • #18
Hi to all of you who have participated in this thread:
I much appreciate all your contributions.

When I tried to look up various concepts used in various posts on the internet, mostly Wikipedia, I confess I got lost. In each article I read about a concept, it was explained in terms of other concepts I had to look up elsewhere. When the nesting of these concepts reached about seven, I gave up. At my advanced age, it was too much for me to try to keep the interrelationships of all these concepts in my head.

My original question in post #1 was:
Buzz Bloom said:
What would a proof that "r is not expressible in terms of radicals" look like?
It has taken me a while to figure out something like a sketch of a proof about how this question relates to proofs about the radical solvability of quintics and higher order equations. It goes like this:
If a fifth order equation has any root expressible in a radical form, then all the roots will be similarly expressible. This is because the single such root r can be used to reduce the quintic equation to a quartic equation with coefficients in the rational field extended by expressions using radical operations involving r. The solution of the quartic in terms of these coefficients will then also be radical.
Therefore, if one proves that a particular quintic has a radical solution, this means that all of its roots have a radical expression.
This "proof" may fail to work for sixth and higher order equations.​
If the above is incorrect, I would much appreciate an explanation regarding my error(s).

Regards,
Buzz
 
  • #19
Yes, what you've said is correct. Polynomial equations up to degree four are solvable by radicals, i.e. their roots can be expressed in the form discussed above.

Now in general, if one has a polynomial of degree ##n##, say ##p(x) = a_0 + a_1x+\ldots+a_nx^n \in \mathbb{F}[x]##, and a root ##r## in some field extension, say ##r \in \mathbb{G}##, then ##(x-r)## divides ##p(x)##, i.e. ##p(x)=(x-r)q(x)##, where ##q(x)## is of degree ##n-1## and can be written in terms of the base field and ##r##, i.e. ##q(x) \in \mathbb{G}[x]=\mathbb{F}(a)[x]##. (Often ##a=r##.)

In the special case ##n=5## we would get ##\deg q(x)=4##, which is solvable by radicals wherever ##q(x)## lives. So in summary we could solve ##p(x)=0## with the radicals from the solution of ##q(x)=0## and those needed for the coefficients of ##q(x)## in ##\mathbb{G}##. Then everything can be written by radicals.
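This reduction step can be carried out numerically for the quintic from post #1; a Python sketch (bisection plus synthetic division, purely illustrative):

```python
def f(x):
    return x**5 - x + 1

# locate the real root r by bisection on the sign change f(-2) < 0 < f(0)
lo, hi = -2.0, 0.0
while hi - lo > 1e-14:
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
r = (lo + hi) / 2

# synthetic division: x^5 - x + 1 = (x - r) q(x) + remainder
p = [1, 0, 0, 0, -1, 1]
q = [p[0]]
for c in p[1:-1]:
    q.append(c + r * q[-1])
remainder = p[-1] + r * q[-1]

print(q)          # the 5 coefficients of the quartic factor q(x)
print(remainder)  # ≈ 0, since r is a root
```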

And you're also right that this argument fails for higher degrees, because in this case ##q(x)=0## may not be solvable by radicals as it is for polynomials of degree four.
 
  • #20
Hi @fresh_42:

Thank you very much for your confirmation that my thinking was OK.

I am wondering if you know a source that shows the application of Galois theory to demonstrate in detail that any one particular chosen example of a quintic equation has no radical solution. I am unable to understand Galois theory well enough to apply the general theory to make such a demonstration on my own, but I think I might be able to understand someone else's detailed description of such a single example demonstration.

Regards,
Buzz
 
  • #21
I haven't gone into it deeply, but my https://www.amazon.com/dp/0387406247/?tag=pfamazon01-20 starts with a polynomial equation ##p(x)=x^n - u_1 x^{n-1}+u_2x^{n-2}- \ldots +(-1)^{n}u_n=0## where the ##u_i## are indeterminates adjoined to the given field ##\mathbb{F}##, and shows that the Galois group of the splitting field of ##p(x) \in \mathbb{F}(u_1,\ldots u_n)[x]## is the entire permutation group and therefore ##p(x)## isn't solvable by radicals for ##n \geq 5##.

I assume that one can choose some independent ##u_i## like ##\sqrt[p]{2}## with primes ##p## for a specific example: $$p(x)=x^5-\sqrt{2}\,x^4+\sqrt[3]{2}\,x^3-\sqrt[5]{2}\,x^2+\sqrt[7]{2}\,x-\sqrt[11]{2} \in \mathbb{F}[x]:=\mathbb{Q}(\sqrt{2},\sqrt[3]{2},\sqrt[5]{2},\sqrt[7]{2},\sqrt[11]{2})[x]$$ and consider its splitting field.
 
Last edited by a moderator:
  • #22
Buzz Bloom said:
If the above is incorrect, I would much appreciate an explanation regarding my error(s).

What you proved doesn't answer your original question. The assertion "If one root of a quintic can be found with some solution method then all roots of the quintic can be found" doesn't answer it unless you also show "Not all roots of the quintic can be found".

fresh_42 said:
i.e. every word over this alphabet is contained in some Galois extension of ##\mathbb{F}##.

So we have a mapping between solutions by radicals and some field. Is it a 1-to-1 mapping? [Edit: It is a many-to-1 mapping, so my question should be whether the mapping is "onto"]

A solution "by radicals" for a root r of ##x^5 + ax + b## could be written in the form ##r = f(a,b)## where ##f(a,b)## can be written as some explicit formula involving only the constants 1, 0, a, b (which are the coefficients of the equation ##1x^5 + 0x^4 + 0x^3 + 0x^2 + ax + b = 0##) and the operations of addition, subtraction, multiplication, division and taking positive roots.

The set of all possible expressions of that form "generates" a field of numbers that contains ##0,1,a,b##. If the expression works then the field also contains ##r##. Does every formula work for some equation?

Which brings up the question of what it means to say that "an" equation is or isn't solvable by radicals. For example does "the" equation ##x^5 + ax + b = 0## denote a set of equations whose elements are defined by specifying particular values of ##a## and ##b## ? If I say that an algorithm ##A## finds a root of "the" equation ##x^5 + ax + b = 0## then do I mean that a single algorithm ##A## finds a root of all equations of that form? (i.e. ##A## is a single algorithm that produces different answers due to the different input values for ##a,b##). Or do I mean that for each different equation there is a possibly different algorithm that finds a root of the equation and that all these different algorithms share some common property such as "using radicals" ?
 
Last edited:
  • #23
I have quite some difficulty understanding this, so my answer might miss the point.
Stephen Tashi said:
So we have a mapping between solutions by radicals and some field. Is it a 1-to-1 mapping? [Edit: It is a many-to-1 mapping, so my question should be whether the mapping is "onto"]

A solution "by radicals" for a root r of ##x^5 + ax + b## could be written in the form ##r = f(a,b)## where ##f(a,b)## can be written as some explicit formula involving only the constants 1, 0, a, b (which are the coefficients of the equation ##1x^5 + 0x^4 + 0x^3 + 0x^2 + ax + b = 0##) and the operations of addition, subtraction, multiplication, division and taking positive roots.

The set of all possible expressions of that form "generates" a field of numbers that contains ##0,1,a,b##. If the expression works then the field also contains ##r##. Does every formula work for some equation?
We are talking about finite algebraic extensions, and we may further consider simple ones, too: Every time we need a root to express a number, we can extend our field by this root. (By the way: Is the English term for it basis field, base field or ground field?) On the other hand our finite extension can be seen as successively adjoining root by root. So all comes down to steps of (normal) field extensions ##\mathbb{F} \subseteq \mathbb{F}(r)## where ##r=\sqrt[n]{c} \, , \, p(r) = 0## for a polynomial ##p(x)=c-x^n \in \mathbb{F}[x]##.

Then all elements ##s \in \mathbb{F}(r)## can be written as ##s=\frac{a_0+a_1r+\ldots+a_{k}r^{k}}{b_0+b_1r+\ldots+b_{l}r^{l}}\,##.
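Such a quotient can always be rationalized back into polynomial form; e.g. in ##\mathbb{Q}(\sqrt{2})## one has ##\frac{1}{1+\sqrt{2}}=-1+\sqrt{2}##. A quick Python check with exact pairs ##(a,b)## standing for ##a+b\sqrt{2}## (the helper names are my own):

```python
from fractions import Fraction as F

def mul(x, y):
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    # 1/(a + b√2) = (a - b√2)/(a^2 - 2b^2): division stays inside the field
    a, b = x
    n = a * a - 2 * b * b
    return (a / n, -b / n)

one_over = inv((F(1), F(1)))            # 1/(1 + √2)
assert one_over == (F(-1), F(1))        # = -1 + √2
assert mul((F(1), F(1)), one_over) == (F(1), F(0))
```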

Now we proceed by the next radical term (root) we need - as long as it can be done.
Which brings up the question of what it means to say that "an" equation is or isn't solvable by radicals. For example does "the" equation ##x^5 + ax + b = 0## denote a set of equations whose elements are defined by specifying particular values of ##a## and ##b## ? If I say that an algorithm ##A## finds a root of "the" equation ##x^5 + ax + b = 0## then do I mean that a single algorithm ##A## finds a root of all equations of that form? (i.e. ##A## is a single algorithm that produces different answers due to the different input values for ##a,b##). Or do I mean that for each different equation there is a possibly different algorithm that finds a root of the equation and that all these different algorithms share some common property such as "using radicals" ?
In my understanding it all depends on the one single polynomial equation which we want to solve. It can be "solved by radicals" if and only if all, say complex numbers ##z## (##char \,\mathbb{F}=0## should do here) for which ##f(z)=0 \, , \, f(x) \in \mathbb{F}[x]## can be written in terms of elements of ##\mathbb{F}(r_1,\ldots,r_m) \supseteq \mathbb{F}(r_1,\ldots,r_{m-1}) \supseteq \ldots \supseteq \mathbb{F}(r_1) \supseteq \mathbb{F}## and all extensions are of the form described above. Thus we have all possible expressions with radicals that are needed to write ##z##.
And back to your first question: Of course ##\mathbb{F}(r_1,\ldots,r_m)## contains more possible expressions than the ones needed for ##z## (and its conjugates), but this doesn't matter - we are not after uniqueness.

To summarize: An expression by radicals needs a tower of field extensions as described, and on the other hand ##z \in \mathbb{F}(r_1,\ldots,r_m)## allows us the desired expression.

Now the opening question, how to disprove solvability by radicals, is equivalent to the non-existence of such a tower of fields, which again corresponds to the non-existence of a normal series of automorphism groups, which means the overall automorphism group ##Aut_\mathbb{F}(\mathbb{F}(r_1,\ldots,r_m))## isn't solvable. Therefore one possible way to disprove solvability by radicals (for one given polynomial equation) is to compute this (finite) group and then to prove it isn't a solvable group.
That only leaves the question of where to take the ##r_i## from, if it's not possible to "solve" the equation. But here we know that all roots exist in the algebraic closure, e.g. ##\mathbb{C},## and we can work with symbols representing these roots (meant as solutions, not as radicals ##\sqrt[*]{*}##), for which our given polynomial equation holds. Here other polynomials than ##x^n=c## may occur.

In the general case, one has to exhibit polynomial equations whose automorphism group actually is ##\mathcal{Sym}(n)## (the maximal possible group here), which isn't solvable for ##n \geq 5##.

As I said, I'm not sure whether this is an appropriate answer or we're talking from totally different points of view.
 
  • #24
fresh_42 said:
(By the way: Is the English term for it basis field, base field or ground field?)
I don't know. (It was over 20 years ago when I took algebra.)

On the other hand our finite extension can be seen as successively adjoining root by root. So all comes down to steps of (normal) field extensions ##\mathbb{F} \subseteq \mathbb{F}(r)## where ##r=\sqrt[n]{c} \, , \, p(r) = 0## for a polynomial ##p(x)=c-x^n \in \mathbb{F}[x]##.

The question (relevant to the original post) is: Why is it necessary to extend a field in order to find a root?

If we assume a root of the equation cannot be found by an expression that corresponds to an element ##a \in \mathbb{F}##, then we have assumed the equation is "not solvable by radicals" over the smallest field that contains the coefficients. However, the question in the original post concerns how we know that we cannot solve the equation by radicals.

This might be a digression, but would it be useful to solve a "baby" version of this problem? Let the equation be ##x^2 + Ax + B = 0##. Let's restrict ourselves to the set ##S## consisting of expressions that are functions of the coefficients that involve only addition, subtraction, multiplication, and division. (i.e. We are not allowed to use radicals). Can we prove the equation (or set of equations) cannot be solved by a member of ##S##?
 
  • #25
Stephen Tashi said:
This might be a digression, but would it be useful to solve a "baby" version of this problem? Let the equation be ##x^2 + Ax + B = 0##. Let's restrict ourselves to the set ##S## consisting of expressions that are functions of the coefficients that involve only addition, subtraction, multiplication, and division. (i.e. We are not allowed to use radicals). Can we prove the equation (or set of equations) cannot be solved by a member of ##S##?
So to break it down even further, is the question:
##(1)## Why can ##x^2-4=0## be solved by means of ##S## whereas ##x^2-2=0## cannot?

The other step would be:
##(2)## Why can ##x^5-2=0## be solved by radicals (##S \cup \{\sqrt[*]{*}\}##), whereas ##x^5-\sqrt[2\,]{2}\,x^4+\sqrt[3\,]{2}\,x^3-\sqrt[5\,]{2}\,x^2+\sqrt[7\,]{2}\,x-\sqrt[11\,]{2} = 0## cannot?
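For the first equation in ##(2)##, solvability by radicals can be made explicit: the roots of ##x^5-2=0## are ##\sqrt[5]{2}\,\zeta^k## with ##\zeta = e^{2\pi i/5}##, and ##\zeta## is itself built from square roots, since ##\cos(2\pi/5)=\tfrac{\sqrt 5-1}{4}## and ##\sin(2\pi/5)=\tfrac{\sqrt{10+2\sqrt 5}}{4}##. A quick numerical check (variable names are mine):

```python
import cmath

r = 2 ** (1 / 5)                      # the real fifth root of 2
zeta = cmath.exp(2j * cmath.pi / 5)   # a primitive fifth root of unity

# All five roots of x^5 - 2 = 0 are radicals: 2^(1/5) * zeta^k.
for k in range(5):
    assert abs((r * zeta ** k) ** 5 - 2) < 1e-9

# zeta itself is expressible in square roots.
zeta_rad = ((5 ** 0.5 - 1) + 1j * (10 + 2 * 5 ** 0.5) ** 0.5) / 4
assert abs(zeta - zeta_rad) < 1e-12
```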
 
  • #26
fresh_42 said:
So to break it down even further, is the question:
##(1)## Why can ##x^2-4=0## be solved by means of ##S## whereas ##x^2-2=0## cannot?
Ok, we can use expressions in ##S## such as "4/(1+1)", "4/(1+1) + 0", "0 - 1 - 1", or "1+1" to obtain a root of equation ##(1)##, ##x^2 - 4 = 0##.

All expressions in ##S## produce rational numbers, so we can cite a proof that ##\sqrt{2}## is irrational to show that no member of ##S## produces a root of ##x^2 - 2 = 0 \iff x^2 = 2##.

Which method of proof that ##\sqrt{2}## is irrational can be generalized to provide a method for showing a particular root ##r## cannot be in a particular extension of a field?
 
  • #27
Stephen Tashi said:
Which method of proof that ##\sqrt{2}## is irrational can be generalized to provide a method for showing a particular root ##r## cannot be in a particular extension of a field?
Why shouldn't we say: as soon as we need a number with ##x^n=c## and ##c## isn't the ##n##-th power of another number, we will have to extend our field by ##r:=\sqrt[n]{c}##? Simple as that. The moment we define the field with which we start, we know which numbers are roots and which are not. In given cases, like for instance ##\mathbb{Q}(\sqrt{2},\sqrt{6},\sqrt[3]{12})## or even nastier examples, it might not be obvious whether additional roots are already in or not. In these cases we would have to define a basis of this field, e.g. over ##\mathbb{Q}##, and see whether our number can be expressed in this basis or not, i.e. we'll have to test on linear dependency.
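The last step can be made concrete for the simplest case: ##\mathbb{Q}(\sqrt 2)## is a two-dimensional ##\mathbb{Q}##-vector space with basis ##\{1, \sqrt 2\}##, so each element is a coordinate pair ##(a,b)## standing for ##a+b\sqrt 2##, and membership questions become questions about these coordinates. A minimal sketch (the class name is mine):

```python
from fractions import Fraction

class QSqrt2:
    """Elements a + b*sqrt(2) of Q(sqrt 2), in the basis {1, sqrt 2}."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, o):
        return QSqrt2(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
        return QSqrt2(self.a * o.a + 2 * self.b * o.b,
                      self.a * o.b + self.b * o.a)
    def inverse(self):
        # 1/(a + b*sqrt2) = (a - b*sqrt2)/(a^2 - 2b^2); the norm a^2 - 2b^2
        # is nonzero for (a, b) != (0, 0) because sqrt 2 is irrational.
        n = self.a * self.a - 2 * self.b * self.b
        return QSqrt2(self.a / n, -self.b / n)
    def __eq__(self, o):
        return (self.a, self.b) == (o.a, o.b)

x = QSqrt2(1, 1)                         # 1 + sqrt 2
assert x * QSqrt2(1, -1) == QSqrt2(-1)   # (1 + sqrt2)(1 - sqrt2) = -1
assert x * x.inverse() == QSqrt2(1)      # every nonzero element is invertible
```

For instance, ##\sqrt 3 \in \mathbb{Q}(\sqrt 2)## would require ##(a+b\sqrt 2)^2 = 3##, i.e. ##a^2+2b^2=3## and ##2ab=0##, which has no rational solution; that is the "test on linear dependency" carried out in coordinates.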
 
  • #28
fresh_42 said:
Why shouldn't we say: As soon as we need a number with ##x^n=c## and ##c## isn't the ##n-##th power of another number, we will have to extend our field by ##r:=\sqrt[n]{c}## ? Simple as that.
How do we know when we need the ##n##-th root of a number? (The equation might not be as simple as ##x^n - A = 0 ##)

If we have added some radicals to the list of constants that we are permitted to use then how do we know when a different radical can't be equal to one of the permitted expressions?
 
  • #29
fresh_42 said:
In these cases we would have to define a basis of this field, e.g. over ##\mathbb{Q}##, and see whether our number can be expressed in this basis or not, i.e. we'll have to test on linear dependency.
 
  • #30
Stephen Tashi said:
What you proved doesn't answer your original question. The assertion "If one root of a quintic can be found with some solution method then all roots of the quintic can be found" doesn't answer your original question unless you also show "Not all roots of the quintic can be found".
Hi Stephen:
I don't understand the point you are making in the above. My original question is:
Buzz Bloom said:
What would a proof that "r is not expressible in terms of radicals" look like?
The first step of such a proof is that Galois theory shows the equation is not solvable in radicals. This means that there is at least one root which is not expressible in radicals. The proof in post #18 shows that this implies that none of the five roots is expressible in radicals. Therefore the real root r is not expressible in radicals.

Regards,
Buzz
 
  • #31
Buzz Bloom said:
Hi Stephen:
I don't understand the point you are making in the above. My original question is:
What would a proof that "r is not expressible in terms of radicals" look like?
The first step of such a proof is that Galois theory shows the equation is not solvable in radicals.

My interpretation of what you meant by "r" was that "r" denoted an arbitrary root of ##x^5 - x + 1 = 0 ## or an arbitrary root of some arbitrary equation, but perhaps you mean "r" is the particular unique real root of that equation.

The mathematical question that I find interesting is "How do we (or Galois) show that the equation is not solvable in radicals?"

If you want to take Galois theory for granted, then I see what you are doing. It involves the interpretation of the phrase "not solvable in radicals". Your interpretation of "An equation p(x) = 0 is not solvable in radicals" is that "There exists at least one root r of p(x) = 0 such that r cannot be expressed as a function of the coefficients of the equation using only arithmetic operations and radicals". Another interpretation of "not solvable in radicals" would be "No root of p(x) = 0 can be expressed as a function...etc.".

When we say "Galois theory shows p(x) = 0 is not solvable in radicals" which of those interpretations is correct? At the moment, I don't know.

The problem for me is to interpret the abstract formulation of "solvable" in terms of "there exists a tower of field extensions such that...". Does "solvable" mean that there exists one root of p(x) = 0 that is contained in such a tower of field extensions? Or does "solvable" mean that there exists a single tower of field extensions that contains all the roots of p(x) = 0? Or does "solvable" mean for each root r of p(x) = 0 there exists a tower of field extensions that contains r - but possibly there are different towers for different roots?
 
  • #32
Hi Stephen:
Stephen Tashi said:
perhaps you mean "r" is the particular unique real root of that equation.
Yes, that is what I meant by
Buzz Bloom said:
This simple example of a Bring-Jerrard quintic equation has one real root, with an approximate value of
r = -1.16730397783.

Stephen Tashi said:
When we say "Galois theory shows p(x) = 0 is not solvable in radicals" which of those interpretations is correct? At the moment, I don't know.
The meaning is that at least one root is not expressible in radicals.
fresh_42 said:
And you're also right that this argument fails for higher degrees, because in this case ##q(x)=0## may not be solvable by radicals as it is for polynomials of degree four.
Here fresh_42 confirms that this is correct: for sixth- and higher-degree equations my proof does not hold, and such equations may well have some, but not all, roots that fail to be expressible in radicals.

Hope this clarifies the issues we were discussing.

Stephen Tashi said:
The problem for me is to interpret the abstract formulation of "solvable" in terms of "there exists a tower of field extensions such that...".
I had previously hoped to understand the process of using Galois theory to prove just the single example
Buzz Bloom said:
##x^5 - x + 1 = 0##
is not solvable in radicals. However, I have given up on that.
Buzz Bloom said:
I am wondering if you know a source that shows the application of Galois theory to demonstrate in detail that any one particular chosen example of a quintic equation has no radical solution. I am unable to understand Galois theory well enough to apply the general theory to make such a demonstration on my own, but I think I might be able to understand someone else's detailed description of such a single-example demonstration.

Regards,
Buzz
 
  • #33
Buzz Bloom said:
I had previously hoped to understand the process of using Galois theory to prove just the single example ##x^5-x+1=0## is not solvable in radicals. However, I have given up on that.
One could start with a solution ##\sigma_0##. Then ##x^5-x+1=(x-\sigma_0)(x^4+\sigma_0x^3+\sigma_0^2x^2+\sigma_0^3x+(\sigma_0^4-1))##. We now know that this polynomial in ##\mathbb{Q}(\sigma_0)[x]## can be factored into ##(x-\sigma_1)(x-\sigma_2)(x-\sigma_3)(x-\sigma_4)## where ##\sigma_1,\sigma_2,\sigma_3,\sigma_4## can be expressed by radicals of ##\mathbb{Q}(\sigma_0)##.
Unfortunately the equations aren't as pleasant as they are for quadratic polynomials and it won't show us whether ##\sigma_0## can be expressed by radicals, too, or not.

What we can do is gather the equations that ##x^4+\sigma_0x^3+\sigma_0^2x^2+\sigma_0^3x+(\sigma_0^4-1)=(x-\sigma_1)(x-\sigma_2)(x-\sigma_3)(x-\sigma_4)## gives us:
  • ##\sigma_1\sigma_2\sigma_3\sigma_4=\sigma_0^4-1=(\sigma_0+i)(\sigma_0-i)(\sigma_0+1)(\sigma_0-1)##
  • ##\sigma_1+\sigma_2+\sigma_3+\sigma_4=-\sigma_0##
  • ##\sigma_1\sigma_2+\sigma_1\sigma_3+\ldots+\sigma_3\sigma_4=\sigma_0^2##
  • ##\sigma_1\sigma_2\sigma_3+\sigma_1\sigma_2\sigma_4+\sigma_1\sigma_3\sigma_4+\sigma_2\sigma_3\sigma_4=-\sigma_0^3##
Now ##\mathcal{Sym}(\{\sigma_i\}) \twoheadrightarrow Aut_\mathbb{Q}(\mathbb{Q}(\sigma_0,\ldots,\sigma_4))##, i.e. every permutation of the ##\sigma_i## induces an automorphism of ##\mathbb{Q}(\sigma_0,\ldots,\sigma_4)##, and the problem is reduced to the question of the kernel of this homomorphism. If it's injective, or the kernel is at least no bigger than ##\mathbb{Z}_2##, we're done. If not, we should look for a normal series, which would show us the way the solutions can be found.
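None of this decides whether ##\sigma_0## is a radical, but the factorization itself is easy to sanity-check numerically, using ##\sigma_0^5 = \sigma_0 - 1##. A stdlib-only sketch (variable names are mine):

```python
def f(x):
    return x ** 5 - x + 1

# Locate the real root sigma_0 by bisection: f(-2) < 0 < f(-1).
lo, hi = -2.0, -1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
s = (lo + hi) / 2  # sigma_0, approx -1.1673

def quartic(x):
    # the cofactor of (x - sigma_0) from the factorization above
    return x ** 4 + s * x ** 3 + s ** 2 * x ** 2 + s ** 3 * x + (s ** 4 - 1)

# Check x^5 - x + 1 = (x - sigma_0) * quartic(x) at a few sample points.
for x in (-2.0, -0.5, 0.0, 1.3, 2.7):
    assert abs(f(x) - (x - s) * quartic(x)) < 1e-9
print(round(s, 6))  # prints -1.167304
```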
 
  • #34
fresh_42 said:
kernel of this homomorphism.
Hi @fresh_42:
I appreciate your effort to educate me. I am stuck right away on the above.

I am guessing that to actually do the necessary work, one has to write down the kernel of this homomorphism.
First: what is the definition of the underlined term?
From https://en.wikipedia.org/wiki/Kernel_(algebra):
The definition of kernel takes various forms in various contexts. But in all of them, the kernel of a homomorphism is trivial (in a sense relevant to that context) if and only if the homomorphism is injective. The fundamental theorem on homomorphisms (or first isomorphism theorem) is a theorem, again taking various forms, that applies to the quotient algebra defined by the kernel.​
I confess this does not help me at all. So, maybe you can tell me, is the homomorphism injective for the particular quintic, therefore leading to a "trivial" case?
From https://en.wikipedia.org/wiki/Injective_function
In mathematics, an injective function or injection or one-to-one function is a function that preserves distinctness: it never maps distinct elements of its domain to the same element of its codomain.​
I confess this does not help me at all. Furthermore, when I keep digging down through more and more definitions, I get lost.

So for now, perhaps you can tell me whether this homomorphism is injective for the particular quintic equation I started with. If so, then perhaps you can also write down the trivial kernel. Then I may be able to ask another question.

Regards,
Buzz
 
  • #35
Stephen Tashi said:
How do we know when we need the ##n##-th root of a number? (The equation might not be as simple as ##x^n - A = 0 ##)

If we have added some radicals to the list of constants that we are permitted to use then how do we know when a different radical can't be equal to one of the permitted expressions?

fresh_42 said:
In these cases we would have to define a basis of this field, e.g. over ##\mathbb{Q}##, and see whether our number can be expressed in this basis or not, i.e. we'll have to test on linear dependency.

I agree, but the poorly explained aspect of Galois theory is how "test on linear dependency" has something to do with groups and field automorphisms.

A typical "test on linear dependency" on vectors involves trying to solve a linear vector equation, which is interpreted as a set of simultaneous linear equations.

Your post #33 indicates that the simultaneous equations we are trying to solve in Galois theory are non-linear. This suggests that the theory of groups and field automorphisms is useful in reasoning about the solutions of special types of non-linear equations.

The exposition of Galois theory would be clearer if it first explained how group theory is useful in solving certain special types of simultaneous multivariate algebraic equations. It is not necessary to begin by explaining how these equations arise from reasoning about the roots of polynomial equations.
 
