Math Challenge - March 2021

So I guess you are right, the intention was to ask for four non-isomorphic rings. (The other five structures are all isomorphic to ##\mathbb{Z}_p## as rings.)
  • #36
bpet said:
Which bit would you like me to explain further? Keep in mind I’m tapping this out on a phone here.

Cheers
Well, you have the correct answers, but I have no way to check how you got there. You could at least have named the algorithm and its main step. And there are not 3 irreducible components.
 
  • #37
Office_Shredder said:
For 3

Proof by induction on n, the dimension of the space.

It's clearly true for n=1, just pick the minimum point in T. I'll do an explicit proof for n=2, because it's easier to visualize. Let ##(a,b)## be an arbitrary point in T, and let's include it in S. Every point in T that's not greater than or equal to ##(a,b)## lies in one of the following hyperplanes (i.e. lines in two dimensions), either ##x=k## for some ##k=1,2,...,a-1## or ##y=k## for some ##k=1,2,...,b-1##. Let ##V## be an arbitrary such hyperplane. Then ##T\cap V## is a subset of a copy of ##\mathbb{Z}_+## and so by the inductive hypothesis there is a subset ##S_V \subset T\cap V## that is finite and every point in ##T\cap V## is greater than or equal to a point in ##S_V##. Then let ##S= \left( \bigcup_{V} S_V \right) \cup \{(a,b)\}##. This is finite as all the sets in the union are finite and there are finitely many of them, and given any point ##t\in T## we either have ##(a,b)\leq t## or ##t\in T\cap V## for some ##V##, and then ##s \leq t## for some ##s\in S_V##.

For the full inductive proof, if we know the statement is true for ##\mathbb{Z}^{n-1}##, then we pick some arbitrary ##z=(z_1,...,z_n)\in T## to include in our set ##S##. We then select ##S_V## for ##V## any of the ##(z_1-1) + (z_2-1) + ... + (z_n-1)## hyperplanes of the form ##x_i=k## for some ##k<z_i##. By induction we can construct these ##S_V## to be a finite subset of ##T\cap V## such that for any ##t\in T\cap V##, there exists ##s\in S_V## such that ##s\leq t##. Then let ##S## be the union of all the ##S_V## and the original point ##z##; any point in ##T## is then either greater than or equal to ##z## or contained in one of the subspaces ##V##, where we can find a point of the corresponding ##S_V## below it. By induction each ##S_V## is finite, and there are finitely many of them, so ##S## is finite.
Another proof is possible by using Hilbert's basis theorem. By that theorem the monomial ideal ##\langle x^\alpha\,|\,\alpha\in T\rangle## is generated by finitely many of its elements; since each of these is a polynomial combination of finitely many ##x^\alpha,## the ideal is already generated by a set of the form ##\{x^\alpha\,|\,\alpha\in S\}## for a finite subset ##S\subseteq T.## As a generating set of the ideal, ##S## has the required property: every monomial ##x^\alpha## with ##\alpha\in T## is divisible by some ##x^\beta## with ##\beta\in S,## i.e. ##\beta\leq\alpha.## The Gordan-Dickson Lemma is therefore a corollary of Hilbert's basis theorem.
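As a small illustration of the statement (not part of either proof above): for a finite sample of points in ##\mathbb{Z}_+^n## the set ##S## can be taken to be the componentwise-minimal elements, which a few lines of Python compute by brute force. The sample data and helper names below are my own choices, purely for illustration.

```python
def minimal_elements(points):
    """Return the componentwise-minimal elements of a finite set of tuples.

    For a finite sample T this gives a finite S such that every point of T
    is >= (in every coordinate) some element of S -- the set the lemma asks for.
    """
    def dominates(p, q):  # p >= q in every coordinate
        return all(pi >= qi for pi, qi in zip(p, q))

    pts = list(set(points))
    return [p for p in pts if not any(dominates(p, q) for q in pts if q != p)]

# toy sample in Z_+^2
T = [(3, 1), (5, 2), (1, 4), (2, 2), (6, 6), (1, 5)]
S = minimal_elements(T)
print(sorted(S))   # [(1, 4), (2, 2), (3, 1)]
assert all(any(all(ti >= si for ti, si in zip(t, s)) for s in S) for t in T)
```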
 
  • Informative
Likes nuuskur
  • #38
fresh_42 said:
Another proof is possible by using Hilbert's basis theorem. By that theorem the monomial ideal ##\langle x^\alpha\,|\,\alpha\in T\rangle## is generated by finitely many of its elements; since each of these is a polynomial combination of finitely many ##x^\alpha,## the ideal is already generated by a set of the form ##\{x^\alpha\,|\,\alpha\in S\}## for a finite subset ##S\subseteq T.## As a generating set of the ideal, ##S## has the required property: every monomial ##x^\alpha## with ##\alpha\in T## is divisible by some ##x^\beta## with ##\beta\in S,## i.e. ##\beta\leq\alpha.## The Gordan-Dickson Lemma is therefore a corollary of Hilbert's basis theorem.

I figured you must have had something fancier in mind, but I really enjoyed the simplicity and combinatorial nature of the induction.
 
  • Like
Likes fresh_42
  • #39
Office_Shredder said:
I figured you must have had something fancier in mind, but I really enjoyed the simplicity and combinatorial nature of the induction.
I had an induction, too, along the last component, but that's only a minor difference.
 
  • #40
fresh_42 said:
Well, you have the correct answers, but I have no way to check how you got there. You could at least have named the algorithm and its main step. And there are not 3 irreducible components.
OK, so I used Buchberger's algorithm (https://en.m.wikipedia.org/wiki/Buchberger's_algorithm). Only one polynomial was added to the basis, as described in the previous post, and it reduced to x+1; the original polynomials then reduced to 0 and y^2-1 trivially.

As for the irreducible components, I would guess two: one corresponding to x=-1 and y=1 and one for x=-1 and y=-1.

Is this right?
Thanks
 
  • Like
Likes fresh_42
  • #41
bpet said:
OK, so I used Buchberger's algorithm (https://en.m.wikipedia.org/wiki/Buchberger's_algorithm). Only one polynomial was added to the basis, as described in the previous post, and it reduced to x+1; the original polynomials then reduced to 0 and y^2-1 trivially.

As for the irreducible components, I would guess two: one corresponding to x=-1 and y=1 and one for x=-1 and y=-1.

Is this right?
Thanks
I'll add the detailed solution in case someone wants to learn a bit of algebraic geometry:

The monomials of ##\mathbb{R}[x,y]## are ordered by a monomial order with ##x\prec y##, according to which we define ##LT(f)## as the leading term of the polynomial ##f\in \mathbb{R}[x,y]## and ##LC(f)## as the leading coefficient of ##f.## A Gröbner basis of ##I## is a generating system ##G=(g_1,\ldots,g_n)## of polynomials such that for all ##f\in I-\{0\}## there is a ##g\in G## whose leading term divides the one of ##f\, : \,LT(g)\,|\,LT(f).## A Gröbner basis is called minimal, if for all ##g\in G##
$$
LT(g) \notin \langle LT(G-\{g\})\rangle \, \wedge \,LC(g)=1.
$$
and reduced if, for every ##g\in G,## no monomial of ##g## is an element of ##\langle LT(G-\{g\})\rangle ## and ##LC(g)=1.## Reduced Gröbner bases are automatically minimal. They are also unique, whereas minimal ones need not be.

Gröbner bases can be found by the Buchberger algorithm. We define for two polynomials ##p,q \in I-\{0\}## the S-polynomial
$$
S(p,q):=\dfrac{lcm(LT(p),LT(q))}{LT(p)}\cdot p - \dfrac{lcm(LT(p),LT(q))}{LT(q)}\cdot q
$$
Then Buchberger's algorithm can be written as
\begin{align*}
\text{INPUT:}\;\; &F=\{f_1,\ldots,f_n\}\;\;\text{with}\;\; I=\langle F\rangle\\
\text{OUTPUT:}\;\;& \text{Gröbner basis}\;\; G =(g_{1},\dots ,g_{m})\\
\text{INIT:}\;\; &G:=F\\
1. & \;\;\text{DO} \\
2. & \;\;\quad G':=G \\
3. & \;\;\quad \text{FOREACH}\;\;p,q\in G' \, , \,p\neq q \\
4. & \;\;\quad \quad \quad s=remainder(S(p,q),G) \\
5. & \;\;\quad \quad \quad \text{IF}\;\;s\neq 0 \;\;\text{THEN}\;\;G:=G\cup \{s\}\\
6. & \;\;\quad \text{NEXT} \\
7. & \;\;\text{UNTIL} \;\;G=G'
\end{align*}
We start with ##f_1(x,y)=x^2y+xy\, , \,f_2(x,y)=xy^2+1## and compute
\begin{align*}
S(f_1,f_2)&=\dfrac{lcm(x^2y,xy^2)}{x^2y}f_1-\dfrac{lcm(x^2y,xy^2)}{xy^2}f_2\\
&=yf_1-xf_2=xy^2-x=1\cdot f_2-x-1\\
G'&=G\cup \{f_3:=-x-1\}=\{f_1,f_2,f_3\}\\
S(f_1,f_3)&=\dfrac{lcm(x^2y,x)}{x^2y}f_1-\dfrac{lcm(x^2y,x)}{-x}f_3\\
&=f_1+xyf_3=x^2y+xy+xy(-x-1)=0\\
S(f_2,f_3)&=\dfrac{lcm(xy^2,x)}{xy^2}f_2-\dfrac{lcm(xy^2,x)}{-x}f_3\\
&=f_2+y^2f_3=xy^2+1+y^2(-x-1)=-y^2+1\\
G'&=G\cup \{f_4:=-y^2+1\}=\{f_1,f_2,f_3,f_4\}\\
S(f_1,f_4)&=\dfrac{lcm(x^2y,y^2)}{x^2y}f_1-\dfrac{lcm(x^2y,y^2)}{-y^2}f_4=yf_1+x^2f_4\\
&=x^2y^2+xy^2-x^2y^2+x^2=xy^2+1+x^2-1\\
&=f_2-(x-1)(-x-1)=f_2-xf_3+f_3 \equiv 0 \mod G\\
S(f_2,f_4)&=\dfrac{lcm(xy^2,y^2)}{xy^2}f_2-\dfrac{lcm(xy^2,y^2)}{-y^2}f_4=f_2+xf_4\\
&= xy^2+1-xy^2+x=x+1=-f_3\equiv 0 \mod G\\
S(f_3,f_4)&=\dfrac{lcm(x,y^2)}{-x}f_3-\dfrac{lcm(x,y^2)}{-y^2}f_4=-y^2f_3+xf_4\\
&=y^2(x+1)-xy^2+x=y^2+x\\
&=-f_2-y^2f_3-f_3\equiv 0 \mod G
\end{align*}
Hence we get a Gröbner basis ##\{x^2y+xy,xy^2+1,-x-1,-y^2+1\}## of ##I.##
\begin{align*}
LT(f_1)&=x^2y=(-xy)\cdot (-x)=(-xy)\cdot LT(f_3)\\
LT(f_2)&=xy^2=(-x)\cdot (-y^2)=(-x)\cdot LT(f_4)
\end{align*}
means that ##\{x+1,y^2-1\}## is a minimal Gröbner basis, which is already reduced, because we cannot omit another leading term and the leading coefficients are normalized to ##1.## The vanishing variety thus consists of the points ##\{(-1,-1),(-1,1)\},## which are two separate points, i.e. two irreducible components.
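For anyone who wants to double-check the hand computation by machine, here is a small sketch using SymPy (assuming SymPy is installed; I pass the generators as x, y with lex order, which here yields the same reduced basis as the ##x\prec y## convention above).

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')
f1 = x**2*y + x*y
f2 = x*y**2 + 1

# reduced Groebner basis of the ideal <f1, f2>
G = groebner([f1, f2], x, y, order='lex')
print(G)   # expected to contain x + 1 and y**2 - 1

# the vanishing set: two isolated points, i.e. two irreducible components
print(solve([x + 1, y**2 - 1], [x, y]))   # [(-1, -1), (-1, 1)]
```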
 
  • #42
fresh_42 said:
I'll add the detailed solution in case someone wants to learn a bit of algebraic geometry:

The monomials of ##\mathbb{R}[x,y]## are ordered by a monomial order with ##x\prec y##, according to which we define ##LT(f)## as the leading term of the polynomial ##f\in \mathbb{R}[x,y]## and ##LC(f)## as the leading coefficient of ##f.## A Gröbner basis of ##I## is a generating system ##G=(g_1,\ldots,g_n)## of polynomials such that for all ##f\in I-\{0\}## there is a ##g\in G## whose leading term divides the one of ##f\, : \,LT(g)\,|\,LT(f).## A Gröbner basis is called minimal, if for all ##g\in G##
$$
LT(g) \notin \langle LT(G-\{g\})\rangle \, \wedge \,LC(g)=1.
$$
and reduced if, for every ##g\in G,## no monomial of ##g## is an element of ##\langle LT(G-\{g\})\rangle ## and ##LC(g)=1.## Reduced Gröbner bases are automatically minimal. They are also unique, whereas minimal ones need not be.

Gröbner bases can be found by the Buchberger algorithm. We define for two polynomials ##p,q \in I-\{0\}## the S-polynomial
$$
S(p,q):=\dfrac{lcm(LT(p),LT(q))}{LT(p)}\cdot p - \dfrac{lcm(LT(p),LT(q))}{LT(q)}\cdot q
$$
Then Buchberger's algorithm can be written as
\begin{align*}
\text{INPUT:}\;\; &F=\{f_1,\ldots,f_n\}\;\;\text{with}\;\; I=\langle F\rangle\\
\text{OUTPUT:}\;\;& \text{Gröbner basis}\;\; G =(g_{1},\dots ,g_{m})\\
\text{INIT:}\;\; &G:=F\\
1. & \;\;\text{DO} \\
2. & \;\;\quad G':=G \\
3. & \;\;\quad \text{FOREACH}\;\;p,q\in G' \, , \,p\neq q \\
4. & \;\;\quad \quad \quad s=remainder(S(p,q),G) \\
5. & \;\;\quad \quad \quad \text{IF}\;\;s\neq 0 \;\;\text{THEN}\;\;G:=G\cup \{s\}\\
6. & \;\;\quad \text{NEXT} \\
7. & \;\;\text{UNTIL} \;\;G=G'
\end{align*}
We start with ##f_1(x,y)=x^2y+xy\, , \,f_2(x,y)=xy^2+1## and compute
\begin{align*}
S(f_1,f_2)&=\dfrac{lcm(x^2y,xy^2)}{x^2y}f_1-\dfrac{lcm(x^2y,xy^2)}{xy^2}f_2\\
&=yf_1-xf_2=xy^2-x=1\cdot f_2-x-1\\
G'&=G\cup \{f_3:=-x-1\}=\{f_1,f_2,f_3\}\\
S(f_1,f_3)&=\dfrac{lcm(x^2y,x)}{x^2y}f_1-\dfrac{lcm(x^2y,x)}{-x}f_3\\
&=f_1+xyf_3=x^2y+xy+xy(-x-1)=0\\
S(f_2,f_3)&=\dfrac{lcm(xy^2,x)}{xy^2}f_2-\dfrac{lcm(xy^2,x)}{-x}f_3\\
&=f_2+y^2f_3=xy^2+1+y^2(-x-1)=-y^2+1\\
G'&=G\cup \{f_4:=-y^2+1\}=\{f_1,f_2,f_3,f_4\}\\
S(f_1,f_4)&=\dfrac{lcm(x^2y,y^2)}{x^2y}f_1-\dfrac{lcm(x^2y,y^2)}{-y^2}f_4=yf_1+x^2f_4\\
&=x^2y^2+xy^2-x^2y^2+x^2=xy^2+1+x^2-1\\
&=f_2-(x-1)(-x-1)=f_2-xf_3+f_3 \equiv 0 \mod G\\
S(f_2,f_4)&=\dfrac{lcm(xy^2,y^2)}{xy^2}f_2-\dfrac{lcm(xy^2,y^2)}{-y^2}f_4=f_2+xf_4\\
&= xy^2+1-xy^2+x=x+1=-f_3\equiv 0 \mod G\\
S(f_3,f_4)&=\dfrac{lcm(x,y^2)}{-x}f_3-\dfrac{lcm(x,y^2)}{-y^2}f_4=-y^2f_3+xf_4\\
&=y^2(x+1)-xy^2+x=y^2+x\\
&=-f_2-y^2f_3-f_3\equiv 0 \mod G
\end{align*}
Hence we get a Gröbner basis ##\{x^2y+xy,xy^2+1,-x-1,-y^2+1\}## of ##I.##
\begin{align*}
LT(f_1)&=x^2y=(-xy)\cdot (-x)=(-xy)\cdot LT(f_3)\\
LT(f_2)&=xy^2=(-x)\cdot (-y^2)=(-x)\cdot LT(f_4)
\end{align*}
means that ##\{x+1,y^2-1\}## is a minimal Gröbner basis, which is already reduced, because we cannot omit another leading term and the leading coefficients are normalized to ##1.## The vanishing variety thus consists of the points ##\{(-1,-1),(-1,1)\},## which are two separate points, i.e. two irreducible components.
Thanks for writing up the details.
Note that only one S-polynomial needs to be computed if we perform the reductions on all polynomials immediately.
 
  • Like
Likes fresh_42
  • #43
Here's the straightforward method for 10b

First we show ##M## is a subgroup of ##U##. Since ##0\in I## for any ideal, ##1-1\in I## and hence ##1\in M##.

If ##u\in M##, then since ##I## is an ideal and ##u-1\in I##, so is ##1-u## and ##u^{-1}(1-u) \in I##. But this is ##u^{-1}-1## and hence ##u^{-1}\in M##.

Suppose ##u,v\in M##. We have to show ##uv-1\in I.## We know that ##u-1## and ##v-1## are both in ##I##, and since it's an ideal so is ##(u-1)v=uv-v##. Since ##I## is closed under addition, we also get ##(uv-v)+(v-1)=uv-1\in I##.

So we have shown that ##M## is a subgroup of ##U##. To show that it is normal, suppose ##m\in M## and ##u\in U##. We need to show that ##umu^{-1} \in M##. From the definition of ##M## we know ##m-1\in I##. Since ##I## is a left ideal, ##u(m-1)\in I## as well. Since ##I## is a right ideal, ##u(m-1)u^{-1}\in I## also. But distributing the multiplication yields ##umu^{-1}-1\in I## which is what we needed.

And a second method:
the projection ##\phi:R\to R/I## is a group homomorphism ##U\to U/I##. ##\phi(u)=1## by definition is true if and only if ##u-1\in I## so ##M## is the kernel of this map. Kernels of group homomorphisms are always normal subgroups.
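For readers who like to see it concretely, here is a brute-force check of both claims in a small noncommutative example; the ring (upper-triangular ##2\times 2## matrices over ##\mathbb{Z}/3##), the ideal (strictly upper-triangular matrices) and the encoding are my own toy choices, not part of the problem.

```python
from itertools import product

P = 3  # arithmetic mod 3

def mul(A, B):
    """Multiply upper-triangular matrices stored as (a, b, c) = [[a, b], [0, c]]."""
    a1, b1, c1 = A
    a2, b2, c2 = B
    return ((a1 * a2) % P, (a1 * b2 + b1 * c2) % P, (c1 * c2) % P)

ONE = (1, 0, 1)
R = list(product(range(P), repeat=3))   # all 27 elements of the ring
I = {(0, b, 0) for b in range(P)}       # strictly upper-triangular: a two-sided ideal

inv = {}                                # two-sided inverses, found by brute force
for u in R:
    for v in R:
        if mul(u, v) == ONE and mul(v, u) == ONE:
            inv[u] = v
            break
U = list(inv)                           # the unit group U(R)

# M = { u in U : u - 1 in I }
M = [u for u in U if ((u[0] - 1) % P, u[1] % P, (u[2] - 1) % P) in I]

assert ONE in M                                                   # contains 1
assert all(mul(m, n) in M for m in M for n in M)                  # closed under products
assert all(inv[m] in M for m in M)                                # closed under inverses
assert all(mul(mul(u, m), inv[u]) in M for u in U for m in M)     # normal in U
print(len(U), len(M))   # 12 units, 3 of them in M
```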
 
  • #44
Office_Shredder said:
Here's the straightforward method for 10b

First we show ##M## is a subgroup of ##U##. Since ##0\in I## for any ideal, ##1-1\in I## and hence ##1\in M##.

If ##u\in M##, then since ##I## is an ideal and ##u-1\in I##, so is ##1-u## and ##u^{-1}(1-u) \in I##. But this is ##u^{-1}-1## and hence ##u^{-1}\in M##.

Suppose ##u,v\in M##. We have to show ##uv-1\in I.## We know that ##u-1## and ##v-1## are both in ##I##, and since it's an ideal so is ##(u-1)v=uv-v##. Since ##I## is closed under addition, we also get ##(uv-v)+(v-1)=uv-1\in I##.

So we have shown that ##M## is a subgroup of ##U##. To show that it is normal, suppose ##m\in M## and ##u\in U##. We need to show that ##umu^{-1} \in M##. From the definition of ##M## we know ##m-1\in I##. Since ##I## is a left ideal, ##u(m-1)\in I## as well. Since ##I## is a right ideal, ##u(m-1)u^{-1}\in I## also. But distributing the multiplication yields ##umu^{-1}-1\in I## which is what we needed.

And a second method:
the projection ##\phi:R\to R/I## is a group homomorphism ##U\to U/I##. ##\phi(u)=1## by definition is true if and only if ##u-1\in I## so ##M## is the kernel of this map. Kernels of group homomorphisms are always normal subgroups.
I would have said that the projection ##\phi:R\to R/I## is a ring homomorphism which induces a group homomorphism ##U(R)\to U(R/I)##, but those were the methods I had in mind, too.
 
  • #45
You're right, I could have worded that better.
 
  • #46
##a+b+c=2 \Rightarrow a = 2 - (b+c)##. Substituting for ##a## in the other given condition, ##ab + bc + ac = 1##, gives ##(2 - (b+c))(b+c) + bc =1##, which on expansion and rearrangement is equivalent to:

##b^2 + b(c-2) + (c^2 - 2c + 1) = 0 \Rightarrow b^2 + b(c-2) + (c - 1)^2 = 0##

For a fixed value of ##c##, the above expression can be viewed as a quadratic equation in variable ##b##, the solution of which is:

##b = \dfrac {-(c-2) \pm \sqrt {(c-2)^2 - 4(c-1)^2}} {2}##

For the above solution to be real-valued, the expression under the square root must be non-negative, i.e. ##(c-2)^2 - 4(c-1)^2 \geq 0##. This condition simplifies to ##-c(3c-4) \geq 0 \Rightarrow c(3c-4) \leq 0##.
  1. If ##c \lt 0##, then ##(3c - 4) \lt 0## and therefore ##c(3c-4) \gt 0##, hence the above condition is not met
  2. If ##c \geq 0##, then for the above condition to be met, we must also have ##(3c - 4) \leq 0 \Rightarrow 3c \leq 4 \Rightarrow c \leq \dfrac {4} {3}##
From cases (1) and (2), it is clear that ##b## has a real-valued solution only if ##0 \leq c \leq \dfrac {4} {3}##.

Since the original expressions given in the question are symmetric in ##a, b## and ##c##, the above finding on the allowed range of ##c## is also applicable to ##a## and ##b## and can be derived in an analogous manner with similar quadratic equations that treat ##a## and ##b## respectively as the variable.
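A quick numerical spot-check of the derivation (the sample values of ##c## are my own choice): inside ##[0, 4/3]## the discriminant is non-negative and the reconstructed triple satisfies both constraints, while just outside the interval the discriminant turns negative.

```python
from math import isclose, sqrt

def discriminant(c):
    return (c - 2)**2 - 4*(c - 1)**2        # = -c*(3c - 4)

for c in [0, 0.3, 2/3, 1.0, 4/3]:
    d = discriminant(c)
    assert d >= -1e-12                      # real roots exist
    b = (-(c - 2) + sqrt(max(d, 0.0))) / 2  # one root of the quadratic in b
    a = 2 - b - c
    assert isclose(a + b + c, 2) and isclose(a*b + b*c + c*a, 1)

assert discriminant(-0.01) < 0 and discriminant(4/3 + 0.01) < 0
print("0 <= c <= 4/3 confirmed on the sample points")
```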
 
  • Like
Likes fresh_42
  • #47
Let ##a=x-y##. Then the 2nd equation is equivalent to $$2-a = \sqrt{18+a} \Rightarrow (2-a)^{2} = 18+a
\Rightarrow (a-7)(a+2) = 0$$

Thus the possible values for ##a## are -2 and 7.

The 1st equation is equivalent to ##5 = \sqrt{1+x+y} + \sqrt{2+a}##. We substitute the possible values for ##a## and try to solve for ##x+y## and then for ##x, y##.

For ##a=-2##, the 1st equation becomes ##5 = \sqrt{1+x+y} + \sqrt{2-2} \Rightarrow x+y = 24##. Since ##a=-2## implies ##x-y = -2##, we now have 2 linear equations on ##x, y##, solving which we get ##x=11, y=13##.

For ##a=7##, the 1st equation becomes ##5 = \sqrt{1+x+y} + \sqrt{2+7} \Rightarrow x+y = 3##. Since ##a=7## implies ##x-y = 7##, we again have 2 equations on ##x, y##, solving which we get ##x=5, y=-2##.

Thus the possible solutions for the given pair of equations are ##x=11, y=13## and ##x=5, y=-2##.
 
  • #48
Not anonymous said:
Let ##a=x-y##. Then the 2nd equation is equivalent to $$2-a = \sqrt{18+a} \Rightarrow (2-a)^{2} = 18+a
\Rightarrow (a-7)(a+2) = 0$$

Thus the possible values for ##a## are -2 and 7.

The 1st equation is equivalent to ##5 = \sqrt{1+x+y} + \sqrt{2+a}##. We substitute the possible values for ##a## and try to solve for ##x+y## and then for ##x, y##.

For ##a=-2##, the 1st equation becomes ##5 = \sqrt{1+x+y} + \sqrt{2-2} \Rightarrow x+y = 24##. Since ##a=-2## implies ##x-y = -2##, we now have 2 linear equations on ##x, y##, solving which we get ##x=11, y=13##.

For ##a=7##, the 1st equation becomes ##5 = \sqrt{1+x+y} + \sqrt{2+7} \Rightarrow x+y = 3##. Since ##a=7## implies ##x-y = 7##, we again have 2 equations on ##x, y##, solving which we get ##x=5, y=-2##.

Thus the possible solutions for the given pair of equations are ##x=11, y=13## and ##x=5, y=-2##.
Have you checked whether your necessary conditions are also sufficient? In short: did you check whether these are actually solutions?
 
  • #49
fresh_42 said:
Have you checked whether your necessary conditions are also sufficient? In short: did you check whether these are actually solutions?

Yes, I checked. For ##x=11, y=13##, I see that the 2 equality conditions are satisfied. ##2-x+y = 2-11+13 = 4## and ##\sqrt{18+x-y} = \sqrt{18+11-13} = 4##, so the 2nd equality is satisfied. And ##\sqrt{1+x+y} + \sqrt{2+x-y}## becomes ##\sqrt{1+11+13} + \sqrt{2+11-13} = \sqrt{25} + \sqrt{0} = 5##, so the first equality is also met.

For ##x=5, y=-2##: ##2-x+y = 2-5-2 = -5## and ##\sqrt{18+x-y} = \sqrt{18+5+2} = \pm 5##, so the 2nd equality is satisfied (for square-root value of -5). And ##\sqrt{1+x+y} + \sqrt{2+x-y}## becomes ##\sqrt{1+5-2} + \sqrt{2+5+2} = \sqrt{4} + \sqrt{9} = 5##, so the first equality is also met.

I am not feeling too well right now - perhaps there is something obvious and simple that I am missing?
 
  • #50
Not anonymous said:
Yes, I checked. For ##x=11, y=13##, I see that the 2 equality conditions are satisfied. ##2-x+y = 2-11+13 = 4## and ##\sqrt{18+x-y} = \sqrt{18+11-13} = 4##, so the 2nd equality is satisfied. And ##\sqrt{1+x+y} + \sqrt{2+x-y}## becomes ##\sqrt{1+11+13} + \sqrt{2+11-13} = \sqrt{25} + \sqrt{0} = 5##, so the first equality is also met.

For ##x=5, y=-2##: ##2-x+y = 2-5-2 = -5## and ##\sqrt{18+x-y} = \sqrt{18+5+2} = \pm 5##, so the 2nd equality is satisfied (for square-root value of -5). And ##\sqrt{1+x+y} + \sqrt{2+x-y}## becomes ##\sqrt{1+5-2} + \sqrt{2+5+2} = \sqrt{4} + \sqrt{9} = 5##, so the first equality is also met.

I am not feeling too well right now - perhaps there is something obvious and simple that I am missing?
No. ##\sqrt{25}=5## only. If there is no sign, then the root is automatically the positive one, not both. To get both, one must write ##\pm\sqrt{25}=\pm 5##. Thus the pair ##(5,-2)## does not solve the second equation. It's a subtlety, but important to learn.
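A tiny numerical check makes the point visible (with the convention that the square root returns the non-negative root): plugging both candidate pairs into the original system shows that ##(11,13)## satisfies both equations while ##(5,-2)## fails the second one.

```python
from math import sqrt, isclose

def check(x, y):
    eq1 = isclose(sqrt(1 + x + y) + sqrt(2 + x - y), 5)
    eq2 = isclose(2 - x + y, sqrt(18 + x - y))   # sqrt(...) is the non-negative root
    return eq1, eq2

print(check(11, 13))   # (True, True)  -> genuine solution
print(check(5, -2))    # (True, False) -> 2 - x + y = -5, but sqrt(25) = +5
```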
 
  • #51
Given ##f(f(f(x))) =x##, suppose there exists ##x_1## such that ##f(x_1) \neq x_1##. Say ##f(x_1) = x_2##, where ##x_2 \neq x_1##. We see that ##f(x_2)## cannot be ##x_2## since that would imply ##f(f(f(x_1))) = x_2## and the given condition won't be met. So we must have ##f(x_2) = x_3## where ##x_3## is different from ##x_1, x_2##, and we must have ##f(x_3) = x_1## for the given condition to be met.

Without loss of generality we can assume ##x_1## to be the smallest of the 3 values. Two possibilities arise.
1. ##x_1 \lt x_2 \lt x_3##. This means ##f(x_1) = x_2 \lt f(x_2) = x_3## and ##f(x_2) = x_3 \gt f(x_3) = x_1##, so ##f(x_3) \lt f(x_1)##. So the function ##f## must be increasing at least for a sub-interval of ##(x_1, x_2)## and decreasing for a sub-interval of ##(x_2, x_3)## and since it is continuous, there must be some ##x_4 \in (x_2, x_3)## such that ##f(x_4) = x_2##. This is because a continuous function takes every value between ##f(x_2)## and ##f(x_3)## for ##x \in (x_2, x_3)##, and ##f(x_2) = x_3 \gt x_2 \gt x_1 = f(x_3)##. But this introduces a contradiction because ##f(f(f(x_4))) = f(f(x_2)) = f(x_3) = x_1 \neq x_4##, i.e. the given condition is not met for some real-valued ##x##. Hence the assumption that ##f(x) \neq x## for some ##x## must be wrong.

2. ##x_1 \lt x_3 \lt x_2##. Here too, we can argue as in case (1) that the assumption ##f(x) \neq x## for some ##x## leads to a contradiction.

Since both cases lead to a violation of the given condition, it follows that ##f(x) =x## for all ##x## is a necessary condition for ##f(f(f(x)))=x## to hold for all ##x##.
 
  • Like
Likes fresh_42
  • #52
Not anonymous said:
So the function ##f## must be increasing at least for a sub-interval of ##(x_1, x_2)## and decreasing for a sub-interval of ##(x_2, x_3)## and since it is continuous, there must be some ##x_4 \in (x_2, x_3)## such that ##f(x_4) = x_2##.
You can drop this "at least". From ##f(a)=f(b)## follows ##a=f(f(f(a)))=f(f(f(b)))=b##, i.e. ##f## is injective (one-to-one). So it can only be either monotone decreasing or monotone increasing on the entire real number line.
 
  • #53
Question 13:
$$\begin{align*}
Q(n)&=\prod_{i=4}^n\frac{i^2-9}{i^2-4}\\
&=\prod_{i=4}^n\frac{(i+3)(i-3)}{(i+2)(i-2)}\\
&=\prod_{i=4}^n\frac{i+3}{i+2}\prod_{i=4}^n\frac{i-3}{i-2}\\
&=\frac{\frac{(n+3)!}{6!}}{\frac{(n+2)!}{5!}}\frac{(n-3)!}{(n-2)!}\\
&=\frac16\frac{n+3}{n-2}
\end{align*}$$
a) ##Q(4)=\frac{7}{12}## and ##\lim_{n\to\infty}Q(n)=\frac16##.
$$\left(\frac{x+3}{x-2}\right)'=\frac{(x-2)-(x+3)}{(x-2)^2}=-\frac{5}{(x-2)^2}<0$$
So ##Q(n)## starts at ##\frac7{12}## for ##n=4## and decreases to ##\frac16## as ##n## gets bigger:
$$\frac16<Q(n)\leq\frac7{12}$$
b) No. Since ##0.1667>\frac16## and ##Q(n)\to\frac16##, there must be an ##N>4## such that ##Q(n)\leq0.1667## for every ##n\geq N##.
Another way to see it is by solving ##Q(n)=\frac16\,\frac{n+3}{n-2}=0.1667## for ##n##. This gives ##n=25002\geq4##, and the inequality fails because ##Q(25003)<Q(25002)=0.1667##.
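A short numerical check of both the closed form and the counterexample in b) (the test range and values are my own choice):

```python
from math import prod, isclose

def Q_product(n):
    return prod((i*i - 9) / (i*i - 4) for i in range(4, n + 1))

def Q_closed(n):
    return (n + 3) / (6 * (n - 2))

# the closed form agrees with the defining product
assert all(isclose(Q_product(n), Q_closed(n)) for n in range(4, 200))
assert isclose(Q_closed(4), 7/12)

# part b): Q(25002) = 0.1667 exactly, and Q(n) drops below 0.1667 from n = 25003 on
print(Q_closed(25002))            # 0.1667
print(Q_closed(25003) > 0.1667)   # False
```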
 
  • #54
I have tried 4), but got ##y_1(t)=y_2(t)=0## when the initial conditions are as stated. Is that wrong?
I started by solving for ##y_2## in the first equation, differentiating, then substituting ##\dot y_2## from the second equation and ##y_2## from the first equation. This gave me a second-order linear ODE in ##\ddot y_1,\,\dot y_1## and ##y_1##.
 
  • #55
archaic said:
I have tried 4), but got ##y_1(t)=y_2(t)=0## when the initial conditions are as stated. Is that wrong?
I started by solving for ##y_2## in the first equation, differentiating, then substituting ##\dot y_2## from the second equation and ##y_2## from the first equation. This gave me a second-order linear ODE in ##\ddot y_1,\,\dot y_1## and ##y_1##.
No, as far as part (a) is concerned. But this stable solution is only the warm up.
 
  • #56
(a) The inequality can be proven by induction. We first notice that the LHS is equal to ##P_{n} \equiv \prod_{i=4}^n \frac {(i-3)(i+3)} {(i-2)(i+2)}##. For the induction, we claim that this product is equal to ##\left(\frac {1} {6}\right) \left(\frac {n+3} {n-2}\right)##. The base case is ##n=4##, and the claim holds true there since ##\prod_{i=4}^4 \dfrac {(4-3)(4+3)} {(4-2)(4+2)} = \left(\frac {1} {6}\right) \left(\frac {4+3} {4-2}\right)##.

Now assume that the claim is true for all ##n \in \{4, 5, \ldots, k\}## for some positive integer ##k \geq 4##. Now consider the product up to ##k+1##, i.e. ##P_{k+1} = P_{k} \times \left(\frac {(k+1-3)(k+1+3)} {(k+1-2)(k+1+2)}\right)##. Substituting for ##P_{k}## based on the inductive claim, this product becomes $$P_{k+1} = \left(\frac {1} {6}\right) \left(\frac {k+3} {k-2}\right) \left(\frac {(k-2)(k+4)} {(k-1)(k+3)}\right) = \left(\frac {1} {6}\right) \left(\frac {k+4} {k-1}\right) \equiv \left(\frac {1} {6}\right) \left(\frac {(k+1)+3} {(k+1)-2}\right),$$ so the inductive claim holds true for ##k+1## too, proving the original claim for all ##n \geq 4##.
Clearly ##P_{n} = \left(\frac {1} {6}\right) \left(\frac {n+3} {n-2}\right) \gt \frac {1} {6}## since ##\frac {n+3} {n-2} \gt 1## for any ##n \gt 2##.

(b) No, the statement isn't valid if the RHS of the inequality is replaced by ##0.1667##, because ##\frac {n+3} {n-2}## tends to 1 as ##n## tends to infinity. In other words, ##P_{n} \rightarrow \frac {1} {6}## as ##n \rightarrow \infty## (approaching from above, so always greater than ##\frac {1} {6}##), meaning ##P_{n}## can get arbitrarily close to ##\frac {1} {6}## from above. A simple counterexample is ##n=100002##: ##P_{100002} = \frac {1} {6} \cdot \frac {100005} {100000} = 0.166675 \lt 0.1667##.
 
  • #57
Re #15: Assume ##f:\mathbb{R}\to\mathbb{R}## is continuous and for every ##x## there exists ##n \geq 1## such that the ##n##-th iteration of ##f## equals ##x## at ##x##, i.e. ##f(f(\cdots f(x)\cdots)) = x## with ##n## iterates. In particular, ##n## can depend on ##x##. Prove, or disprove: for all ##x##, ##f(x) = x##.

Re #2: I can think of three: 1) the quotient group ##\mathbb{Z}/p\mathbb{Z}##; 2) ##\mathbb{Z}## localized at the multiplicative set of integers not divisible by ##p##; and 3) the ##p##-adic integers. 4) ? I guess I could cheat and say also the quotient ring ##\mathbb{Z}/p\mathbb{Z}##!
 
