Math Challenge - March 2020

In summary, the conversation includes 10 math problems and their corresponding solutions, ranging from proving inequalities and evaluating integrals to showing properties of Lie algebras and solving equations in modular arithmetic. The participants in the conversation are identified by their usernames and the type of problem they solved. Some of the key concepts discussed include continuity, linear mappings, eigenvalues, submultiplicative matrix norms, and coprime numbers. The conversation also includes a discussion about proof techniques and the importance of being accurate.
  • #36
archaic said:
Incomplete attempt:
Let ##p/q## and ##r/s## be in ##\mathbb{Q}## such that ##f(p/q)=f(r/s)##.
$$\begin{align*}
f\left(\frac{p}{q}\right)=f\left(\frac{r}{s}\right)&\Leftrightarrow\left(\frac{p}{q}\right)^3-2\frac{p}{q}=\left(\frac{r}{s}\right)^3-2\frac{r}{s}\\
&\Leftrightarrow \left(\frac{p}{q}\right)^3-\left(\frac{r}{s}\right)^3-2\left(\frac{p}{q}-\frac{r}{s}\right)=0\\
&\Leftrightarrow\left(\frac{p}{q}-\frac{r}{s}\right)\left(\left(\frac{p}{q}\right)^2+\frac{p}{q}\frac{r}{s}+\left(\frac{r}{s}\right)^2\right)-2\left(\frac{p}{q}-\frac{r}{s}\right)=0\\
&\Leftrightarrow\left(\frac{p}{q}-\frac{r}{s}\right)\left(\left(\frac{p}{q}\right)^2+\frac{p}{q}\frac{r}{s}+\left(\frac{r}{s}\right)^2-2\right)=0\\
\end{align*}$$
$$\frac{p}{q}=\frac{r}{s}\text{ or }\left(\frac{p}{q}\right)^2+\frac{p}{q}\frac{r}{s}+\left(\frac{r}{s}\right)^2=2$$
$$\begin{align*}
\left(\frac{p}{q}\right)^2+\frac{p}{q}\frac{r}{s}+\left(\frac{r}{s}\right)^2=2&\Leftrightarrow\left(\frac{p}{q}+\frac{r}{s}\right)^2-\frac{p}{q}\frac{r}{s}=2
\end{align*}$$
$$\left(\frac{p}{q}\right)^2+\frac{p}{q}\frac{r}{s}+\left(\frac{r}{s}\right)^2=2\Leftrightarrow (ps)^2+pqrs+(qr)^2=2(qs)^2$$
We will analyse ##(ps)^2+pqrs+(qr)^2=2(qs)^2##:
For the LHS to be even (and it must be, since the RHS is even), exactly two of the three terms must be odd, or all three must be even.
1) If both ##(ps)^2## and ##(qr)^2## are odd then ##p,\,s,\,q## and ##r## are all odd, and so ##pqrs## cannot be even. Thus, this configuration doesn't work.
2) That all terms are even: //to be finished.
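Not part of the attempt above, but as a quick sanity check on the claim being worked toward (that ##f(x)=x^3-2x## is injective on ##\mathbb{Q}##), a brute-force search over small rationals can be run; the bound ##N=12## is an arbitrary choice:

```python
from fractions import Fraction
from itertools import combinations

def f(x):
    # the function from the problem: f(x) = x^3 - 2x
    return x**3 - 2*x

# All fractions p/q with |p| <= N and 1 <= q <= N (N is arbitrary).
N = 12
rationals = sorted({Fraction(p, q) for p in range(-N, N + 1)
                    for q in range(1, N + 1)})

# Any collision f(x) = f(y) with x != y would force
# x^2 + xy + y^2 = 2, as derived above.
collisions = [(x, y) for x, y in combinations(rationals, 2) if f(x) == f(y)]
print(collisions)  # no collisions among these rationals
```

The empty search result is consistent with injectivity, though of course it proves nothing by itself.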
 
  • #37
fresh_42 said:
15. Division of an integer by a prime number ##p## leaves us with the possible remainders ##C:=\{\,0,1,2,\ldots ,p-1\,\}\,##. We can define an addition and a multiplication on ##C## if we wrap it around ##p##, i.e. we identify ##0=p=2p=3p= \ldots \, , 1=1+p=1+2p=1+3p=\ldots ,\ldots## This is called modular arithmetic (modulo ##p##).

Show that for any given numbers ##a,b\in C## the equations ##a+x=b## and ##a\cdot x =b## (##a\neq 0##) have a unique solution.
Is this still true if we drop the requirement that ##p## is prime? (FR)

Remark: This problem is about proof techniques, so be as accurate (not long) as possible, i.e. note which property or condition you use at each step.

We first prove the claim for the equation ##a+x=b##. We only need to prove that there exists a unique ##x## within the set ##C## that meets the condition ##a+x=b## (modular arithmetic modulo ##p##): if there were any integer ##y \notin C## for which ##a + y = b##, it could be rewritten as ##a + kp + r = b##, where ##k, r## are the quotient and remainder of ##y## when divided by ##p##; then ##r## belongs to the set ##C## and ##a + kp + r = a + r## by definition of modular arithmetic.

It is obvious that $$x_1 = \begin{cases} (b-a) & \text{if } a \leq b \\
(p-a+b) & \text{if } a \gt b \end{cases}$$
is a valid solution for ##x## and that ##x_1 \in C##. Suppose this solution is not unique. Then there must be some ##x_2 \in C## such that ##x_1 \neq x_2## and ##a+x_2 = b##. Subtracting one expression from the other, we get ##(x_2 - x_1) = 0 \mod p##. For simplicity and without loss of generality, we may assume ##x_2 \gt x_1##. Since ##x_1, x_2 \in C##, even in normal, non-modular arithmetic, ##0 \leq (x_2 - x_1) \leq (p-1)##, and therefore ##(x_2 - x_1) = 0 \mod p## implies that ##x_2 - x_1 = 0## in normal arithmetic too and therefore ##x_2## must be equal to ##x_1##, a contradiction. Hence ##a+x=b## has a unique solution with ##x \in C##.
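The case formula for ##x_1## above can be checked exhaustively for small moduli. A minimal sketch (my addition, not part of the proof), which also anticipates the later remark that additive solutions stay unique even when the modulus is composite:

```python
def additive_solutions(a, b, p):
    # all x in C = {0, 1, ..., p-1} with a + x ≡ b (mod p)
    return [x for x in range(p) if (a + x) % p == b]

for p in (7, 12):  # one prime modulus, one composite
    for a in range(p):
        for b in range(p):
            x1 = b - a if a <= b else p - a + b  # the closed form from the proof
            assert additive_solutions(a, b, p) == [x1]
print("unique additive solution, matching the closed form, for p = 7 and 12")
```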

Proof of existence and uniqueness of ##x## for the multiplication equation ##a\cdot x=b##, given ##a \neq 0##:
Here too, we only need to prove that there exists a unique ##x## within the set ##C## that meets the condition: if there were an integer ##y \notin C## such that ##a\cdot y = b \mod p##, then, just as before, we can write ##y = kp + r##, so ##a(kp + r) = b \Rightarrow (akp + ar) = b \Rightarrow a\cdot r = b## by definition of modular arithmetic, and since ##r## is the remainder of ##y## divided by ##p##, ##r \in C##.

Consider the modulo ##p## values of the first ##p## non-negative multiples of ##a##, i.e. ##0, a, 2a, 3a, \ldots, (p-1)a##. All these values must be distinct, giving ##p## distinct values from ##C##. Otherwise, we would have ##x_1 a = x_2 a \mod p## for some pair of integers ##0 \leq x_1 \lt x_2 \leq (p-1)##, implying ##(x_2 - x_1) a = 0 \mod p \Rightarrow ya = kp## where ##y = (x_2 - x_1)## and ##k## is some positive integer, but this would imply that ##p## has a prime factor smaller than itself, since ##1 \leq a, y \lt p##, which is a contradiction given ##p## is prime. Since the ##p## products have ##p## distinct modulo ##p## values, every element of ##C## matches exactly one of the ##p## products. Since ##b \in C##, there is exactly one ##x \in \{0, 1, 2, \ldots, p-1\} = C## such that ##ax = b \mod p##. This proves that there exists a unique solution for ##a\cdot x = b## in modular multiplication.

Is this still true if we drop the requirement that p is prime?
If ##p## is not prime, existence of a unique solution is still true for modular addition, but not for modular multiplication, since in the proof of uniqueness for the multiplication equation we relied on the fact that ##p## is prime. As an example of non-uniqueness of ##a\cdot x = b## when ##p## is not prime, consider ##a=2, b=6, p=8##. Then we have two values of ##x \in C## for which ##a\cdot x = b##, namely ##x=3## and ##x=7##.
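The contrast between prime and composite moduli in the multiplicative case can be verified directly. A small sketch (mine, not from the post), using the example ##a=2,\ b=6,\ p=8##:

```python
def multiplicative_solutions(a, b, p):
    # all x in C = {0, 1, ..., p-1} with a*x ≡ b (mod p)
    return [x for x in range(p) if (a * x) % p == b]

# Prime modulus: exactly one solution for every a != 0 and every b.
assert all(len(multiplicative_solutions(a, b, 7)) == 1
           for a in range(1, 7) for b in range(7))

# Composite modulus: the counterexample from the text.
print(multiplicative_solutions(2, 6, 8))  # -> [3, 7], two solutions
# ...and some equations then have no solution at all:
print(multiplicative_solutions(2, 5, 8))  # -> []
```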
 
  • #38
QuantumQuest said:
Your solution is correct. In order for your solution to be perfect, just correct the small typo in the second-to-last line of your LaTeX.
I'm unable to edit now, but I see what you mean :) was typing too fast I guess.
 
  • #39
Antarres said:
I'm unable to edit now, but I see what you mean :) was typing too fast I guess.
Done.
 
  • #40
fresh_42 said:
1. Prove the inequality ##\cos(\theta)^p \leq \cos(p\theta)## for ##0\leq\theta\leq\pi/2## and ##0 < p < 1##. (IR)

The question got an accepted answer before I could post what I had worked out on paper. I don't think my proof would be considered rigorous enough, because I make some inferences based on known properties that I am not sure how to express as formulae. But this is the first non-high-school-level question from the past several challenges that I found approachable with techniques I studied in school, so I am eager to post what I hope at least comes close to a correct proof.

Given ##0\leq\theta\leq\pi/2## and ##0 < p < 1##, the following two conditions are immediate:
1. ##0 \leq \cos(\theta) \leq 1 \Rightarrow 0 \leq \cos(\theta)^p \leq 1##
2. ##0 \leq p\theta \leq \pi/2 \Rightarrow 0 \leq \cos(p\theta) \leq 1##

Since both expressions are positive for ##\theta \in [0, \pi/2)## (at ##\theta=\pi/2## the inequality holds trivially, as ##\cos(\pi/2)^p = 0 \leq \cos(p\pi/2)##), we may take logarithms of both and use the fact that ##\log (x)## is a monotonically increasing function of ##x## to compare the two values, i.e. ##\log (x_1) \geq \log (x_2) \Rightarrow x_1 \geq x_2##.

Let ##f(x) = \log ({\cos(x)^p}) = p \log (\cos(x))## and ##g(x) = \log (\cos(px))##. Now consider the slopes of these 2 functions.
$$
f'(x) = p \frac {-\sin(x)} {\cos(x)} = -p \tan(x) \\
g'(x) = p\, \frac {-\sin(px)} {\cos(px)} = -p \tan(px)
$$

When ##0 \leq px \leq x \lt \pi/2##, we have ##0 \leq \tan (px) \leq \tan(x) \Rightarrow -p \tan(x) \leq -p \tan(px) \leq 0 \Rightarrow f'(x) \leq g'(x) \leq 0##. In other words, ##f(x)## and ##g(x)## are decreasing, continuous functions of ##x## on ##[0, \pi/2)##, and ##f(x)## decreases at least as rapidly as ##g(x)## on that interval. Since ##f(0) = g(0) = \log (\cos(0)) = 0##, this means that ##f(\theta) \leq g(\theta)## for any given ##\theta \in [0,\pi/2)##. By the stated property of logarithms, the original expressions to which the logarithm was applied obey the same inequality, therefore ##\cos(\theta)^p \leq \cos(p\theta)##.
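As a quick numerical check of the inequality on a grid (my addition, not part of the proof; the grid size and the sampled values of ##p## are arbitrary):

```python
import math

# check cos(theta)^p <= cos(p*theta) for theta in [0, pi/2)
# and a few values of p in (0, 1)
violations = []
for i in range(200):
    theta = (math.pi / 2) * i / 200
    for p in (0.1, 0.25, 0.5, 0.75, 0.9):
        if math.cos(theta) ** p > math.cos(p * theta) + 1e-12:
            violations.append((theta, p))
print(violations)  # -> [] : no violations on the sampled grid
```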
 
  • #41
Not anonymous said:
We first prove the claim for the equation ##a+x=b##. We only need to prove that there exists a unique ##x## within the set ##C## that meets the condition ##a+x=b## (modular arithmetic modulo ##p##): if there were any integer ##y \notin C## for which ##a + y = b##, it could be rewritten as ##a + kp + r = b##, where ##k, r## are the quotient and remainder of ##y## when divided by ##p##; then ##r## belongs to the set ##C## and ##a + kp + r = a + r## by definition of modular arithmetic.
This is true but not necessary. The moment we defined ##C## we were restricted to this set. There are simply no integers outside which are under consideration.
It is obvious that $$x_1 = \begin{cases} (b-a) & \text{if } a \leq b \\
(p-a+b) & \text{if } a \gt b \end{cases}$$
is a valid solution for ##x## and that ##x_1 \in C##.
Correct. Best would have been to show that ##-a## and ##0## exist within ##C##, such that ##-a+a=0## and that ##C## is closed under addition, so that ##x:=b+(-a) \in C##. The order is problematic, as it only holds for integers, not for elements of ##C##. I should have written the elements of ##C## as ##[0],[1],\ldots,[p-1]## to distinguish them. My fault.
Suppose this solution is not unique. Then there must be some ##x_2 \in C## such that ##x_1 \neq x_2## and ##a+x_2 = b##. Subtracting one expression from the other, we get ##(x_2 - x_1) = 0 \mod p##. For simplicity and without loss of generality, we may assume ##x_2 \gt x_1##. Since ##x_1, x_2 \in C##, even in normal, non-modular arithmetic, ##0 \leq (x_2 - x_1) \leq (p-1)##, and therefore ##(x_2 - x_1) = 0 \mod p## implies that ##x_2 - x_1 = 0## in normal arithmetic too and therefore ##x_2## must be equal to ##x_1##, a contradiction. Hence ##a+x=b## has a unique solution with ##x \in C##.
Correct. You should in general try to avoid contradictions. In the case above, we can positively conclude ##a+x_1=b \,\wedge \, a+x_2=b \Longrightarrow x_1=x_2##.
\begin{align*}
a+x_1=b \,&\wedge \, a+x_2=b \Longrightarrow a+x_1=a+x_2\\
&\Longrightarrow (-a)+(a+x_1)=(-a)+(a+x_2) \quad \text{ existence of additive inverse }\\
&\Longrightarrow (-a+a)+x_1 = (-a+a)+x_2 \quad \text{ associativity of addition inherited from the integers }\\
&\Longrightarrow 0+x_1=0+x_2 \quad \text{ existence of neutral element of addition }\\
&\Longrightarrow x_1 = x_2
\end{align*}
I only mentioned what I had thought of. Although the elements of ##C## are written as integers, they carry another structure, and one has to be careful about what is inherited from the integers and what is not.
Proof for existence of ##x## for the multiplication equation ##a.x=b## given ##a \neq 0##:
Here too, we only need to prove that there exists a unique ##x## within the set ##C## that meets the condition: if there were an integer ##y \notin C## such that ##a\cdot y = b \mod p##, then, just as before, we can write ##y = kp + r##, so ##a(kp + r) = b \Rightarrow (akp + ar) = b \Rightarrow a\cdot r = b## by definition of modular arithmetic, and since ##r## is the remainder of ##y## divided by ##p##, ##r \in C##.

Consider the modulo ##p## values of the first ##p## non-negative multiples of ##a##, i.e. ##0, a, 2a, 3a, \ldots, (p-1)a##. All these values must be distinct, giving ##p## distinct values from ##C##. Otherwise, we would have ##x_1 a = x_2 a \mod p## for some pair of integers ##0 \leq x_1 \lt x_2 \leq (p-1)##, implying ##(x_2 - x_1) a = 0 \mod p \Rightarrow ya = kp## where ##y = (x_2 - x_1)## and ##k## is some positive integer, but this would imply that ##p## has a prime factor smaller than itself, since ##1 \leq a, y \lt p##, which is a contradiction given ##p## is prime. Since the ##p## products have ##p## distinct modulo ##p## values, every element of ##C## matches exactly one of the ##p## products. Since ##b \in C##, there is exactly one ##x \in \{0, 1, 2, \ldots, p-1\} = C## such that ##ax = b \mod p##. This proves that there exists a unique solution for ##a\cdot x = b## in modular multiplication.
Same as before. We need associativity, a neutral element, inverse elements and closure under multiplication on ##C##. With these properties we can solve the equation.
Is this still true if we drop the requirement that p is prime?
If ##p## is not prime, existence of a unique solution is still true for modulo addition, but not for modulo multiplication, since in the proof for uniqueness in solution to multiplication equation, we relied on the fact that ##p## is prime.
The correct definition of a prime number ##p## is: ##p \text{ has no multiplicative inverse and } p|ab \text{ implies } p|a \text{ or }p|b##.
As an example of non-uniqueness of ##a\cdot x = b## when ##p## is not prime, consider ##a=2, b=6, p=8##. Then we have two values of ##x \in C## for which ##a\cdot x = b##, namely ##x=3## and ##x=7##.
Yes, primality is necessary in the multiplicative case. From your example we get ##2\cdot 3 = 2 \cdot 7 = 6 \mod 8## which leads to ##2\cdot (7-3)=2\cdot 4 = 0##. But if ##2\cdot 4= 0## and ##2\cdot 0 = 0## then we are in trouble. ##2## has no inverse element, since otherwise we would have ##4=0##.
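The point about invertibility can be made concrete: ##a\cdot x=b## is uniquely solvable for every ##b## exactly when ##a## has a multiplicative inverse, i.e. when ##\gcd(a,p)=1##. A small sketch of my own:

```python
from math import gcd

def units(n):
    # elements of {1, ..., n-1} that have a multiplicative inverse mod n
    return [a for a in range(1, n) if gcd(a, n) == 1]

print(units(7))  # -> [1, 2, 3, 4, 5, 6] : every nonzero residue, since 7 is prime
print(units(8))  # -> [1, 3, 5, 7]       : 2 has no inverse mod 8
```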
 
  • #42
@Not anonymous Your argument is good! Just to clarify the part that I assume you're unsure about: Since ##f'(x)\leq g'(x),## the function ##h(x)=f(x)-g(x)## satisfies ##h'(x)\leq 0##, so ##h## is decreasing. Since ##h(0)=f(0)-g(0)=0-0=0##, this means ##h(x)=f(x)-g(x)\leq 0## on ##[0,\pi/2]##, i.e. ##f(x)\leq g(x)##.
 
  • #43
I had a go at #10.

We have the identity:

\begin{align}
\left( e^{t {(A+B) \over n}} \right)^n - \left( e^{t {A \over n}} \cdot e^{t {B \over n}} \right)^n & = \left[ e^{t {(A+B) \over n}} - e^{t {A \over n}} \cdot e^{t {B \over n}} \right] \left( e^{t {A \over n}} \cdot e^{t {B \over n}} \right)^{n-1}
\nonumber \\
& + e^{t {(A+B) \over n}} \left[ e^{t {(A+B) \over n}} - e^{t {A \over n}} \cdot e^{t {B \over n}} \right] \left( e^{t {A \over n}} \cdot e^{t {B \over n}} \right)^{n-2}
\nonumber \\
& + \cdots + \left( e^{t {(A+B) \over n}} \right)^{n-1} \left[ e^{t {(A+B) \over n}} - e^{t {A \over n}} \cdot e^{t {B \over n}} \right]
\nonumber
\end{align}

where there are ##n## terms on the RHS. For given ##A##, ##B##, and fixed ##t## there exists an integer ##N## such that for ##n \geq N##

##
\left\| {t A \over n} \right\| \leq 1 \; \text{ and } \; \left\| {t B \over n} \right\| \leq 1 .
##

Then

\begin{align}
\left\| e^{t (A+B)} - \left( e^{t {A \over n}} \cdot e^{t {B \over n}} \right)^n \right\| & \leq \left\| e^{t {(A+B) \over n}} - e^{t {A \over n}} \cdot e^{t {B \over n}} \right\| \left( \left\| e^{t {A (n-1)\over n}} \right\| \left\| e^{t {B (n-1)\over n}} \right\| \right.
\nonumber \\
& + \left\| e^{t {(A+B) \over n}} \right\| \left\| e^{t {A (n-2)\over n}} \right\| \left\| e^{t {B (n-2)\over n}} \right\|
\nonumber\\
& + \cdots + \left. \left\| e^{t {(A+B) (n-1)\over n}} \right\| \right)
\nonumber\\
& \leq {6 e^2 |t|^2 \over n^2} \left\| [A,B] \right\|
\left( \left\| e^{t {A (n-1)\over n}} \right\| \left\| e^{t {B (n-1)\over n}} \right\| \right.
\nonumber \\
& + \left\| e^{t {(A+B) \over n}} \right\| \left\| e^{t {A (n-2)\over n}} \right\| \left\| e^{t {B (n-2)\over n}} \right\|
\nonumber\\
& + \cdots + \left. \left\| e^{t {(A+B) (n-1)\over n}} \right\| \right)
\nonumber\\
& = {6 e^2 |t|^2 \over n} \left\| [A,B] \right\| \sum_{k=0}^{n-1} \left\| e^{t {A(n-1-k)\over n}} \right\| \left\| e^{t {B (n-1-k)\over n}} \right\| \left\| e^{t {(A+B) k\over n}} \right\| {1 \over n}
\quad *
\nonumber
\end{align}

where we have used the triangle inequality, submultiplicativity, and the estimate in #9.

Is

##
g(x) := \left\| e^{t A (1-x)} \right\|
##

a continuous function on the closed and bounded interval ##[0,1]##? Consider:

\begin{align}
\| e^{t A (1-x_0 \mp \delta)} - e^{t A (1-x_0)} \| & = \| e^{t A (1-x_0)} (e^{\mp t A \delta} - \mathbb{1}) \|
\nonumber\\
& \leq \| e^{t A (1-x_0)} \| \| e^{\mp t A \delta} - \mathbb{1} \|
\nonumber\\
& = \left\| \sum_{k=0}^\infty {t^k A^k (1-x_0)^k \over k!} \right\| \left\| \sum_{l=1}^\infty {t^l (\mp \delta)^l A^l \over l!} \right\|
\nonumber\\
& \leq \sum_{k=0}^\infty {|t|^k \| A^k \| (1-x_0)^k \over k!} \sum_{l=1}^\infty {|t \delta|^l \| A^l \| \over l!}
\nonumber\\
& \leq \sum_{k=0}^\infty {|t|^k \| A \|^k (1-x_0)^k \over k!} \sum_{l=1}^\infty {|t|^l \delta^l \| A \|^l \over l!}
\nonumber\\
& = e^{|t| \| A \| (1-x_0)} ( e^{|t| \| A \| \delta} - 1 ) \quad **
\nonumber
\end{align}

where we have used submultiplicativity, the triangle inequality + homogeneity, and then submultiplicativity again. Since ##D e^{C x}##, where ##D## and ##C## are constants, is a continuous function, for any ##\epsilon > 0## there exists a ##\delta > 0## such that:

##
|D e^{C (x_0 \pm \delta)} - D e^{C x_0}| < \epsilon .
##

Using this in ** we obtain that for any ##\epsilon > 0## there exists ##\delta > 0## such that

\begin{align}
\| e^{t A (1-x_0 \mp \delta)} - e^{t A (1-x_0)} \| & < \epsilon .
\nonumber
\end{align}

It follows from the triangle inequality that:

##
\|M - M' \| \geq \left| \; \|M \| - \| M' \| \; \right| .
##

Using this, we obtain that for any ##\epsilon > 0## there exists ##\delta > 0## such that

##
\left| \| e^{t A (1-x_0 \mp \delta)} \| - \| e^{t A (1-x_0)}\| \right| \leq
\| e^{t A (1-x_0 \mp \delta)} - e^{t A (1-x_0)} \|
< \epsilon .
##

Hence ##g(x)## is a continuous function on the closed and bounded interval ##[0,1]##. By similar considerations, the functions:

##
\left\| e^{t B (1-x)} \right\| \text{ and } \left\| e^{t (A+B) x} \right\|
##

are also continuous on the closed and bounded interval ##[0,1]##. It follows that the function:

##
f(x) := \left\| e^{t A (1-x)} \right\| \left\| e^{t B (1-x)} \right\| \left\| e^{t (A+B) x} \right\|
##

is continuous on the closed and bounded interval ##[0,1]##. It follows (by known results) that it is Riemann-integrable, meaning that the sum in * in the limit where ##n \rightarrow \infty## is equal to ##\int_0^1 f(x) dx##.

So that by * we have:

\begin{align}
& \lim_{n \rightarrow \infty} \left\| e^{t (A+B)} - \left( e^{t {A \over n}} \cdot e^{t {B \over n}} \right)^n \right\|
\nonumber\\
&
\leq \lim_{n \rightarrow \infty} {6 e^2 |t|^2 \over n} \| [A,B] \| \times
\int_0^1 dx \left\| e^{t A (1-x)} \right\| \left\| e^{t B (1-x)} \right\| \left\| e^{t (A+B) x} \right\|
\nonumber\\
& = 0
\nonumber
\end{align}

which is the desired result.
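Not part of the proof, but the ##O(1/n)## convergence shown above can be observed numerically. The sketch below uses a plain Taylor-series matrix exponential (adequate here, since all the norms involved are small) and random ##3\times 3## matrices normalized to spectral norm ##1##, with ##t=1##:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via the Taylor series; fine for moderate norms."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A /= np.linalg.norm(A, 2)
B = rng.standard_normal((3, 3)); B /= np.linalg.norm(B, 2)

target = expm(A + B)
errors = [np.linalg.norm(target
                         - np.linalg.matrix_power(expm(A / n) @ expm(B / n), n), 2)
          for n in (10, 20, 40, 80)]
print(errors)  # decreasing roughly like 1/n: each doubling of n about halves the error
```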
 
  • #44
julian said:
I had a go at #10.

...

which is the desired result.
Well done. I think from (*) on it can be shortened by an estimation with ##e^{\|tA\|},e^{\|tB\|}##, and in general by putting ##t## into the matrices until the very end.
 
  • #45
2. Let ##F:\mathbb{R}^n\to\mathbb{R}^n## be a continuous function such that ##||F(x)-F(y)||\geq ||x-y||## for all ##x,y\in\mathbb{R}^n##. Show that ##F## is a homeomorphism. (IR)

A function between topological spaces is a homeomorphism iff it is a continuous bijection with a continuous inverse. Given that ##F## is continuous, we prove that ##F## is injective, so that it has a left inverse ##G##; we then prove that ##G## is continuous, and finally that the image of ##F## is all of ##\mathbb{R}^n##. First, note that if ##F(x)=F(y)##, then ##0=\|F(x)-F(y)\|\geq\|x-y\|\Rightarrow x=y##. Thus, ##F## is injective and has a left inverse ##G:\mathrm{im}(F)\rightarrow\mathbb{R}^n##.

Now we prove that ##G## is continuous (in fact, it is uniformly continuous). The ##\varepsilon,\delta##-definition of uniform continuity in a metric space is the following: $$(\forall\varepsilon>0)(\exists\delta>0)(\forall x,y\in\mathrm{im}(F))\quad\|x-y\|<\delta\Rightarrow\|G(x)-G(y)\|<\varepsilon.$$ Given ##\varepsilon##, we choose ##\delta(\varepsilon)=\varepsilon##. Since ##x,y\in\mathrm{im}(F)##, there exist ##x',y'\in\mathbb{R}^n## with ##F(x')=x,\ F(y')=y##. Then ##\|G(x)-G(y)\|=\|x'-y'\|## since ##G## is a left inverse of ##F##. By the inequality specified in the problem statement, we have ##\|G(x)-G(y)\|=\|x'-y'\|\leq\|F(x')-F(y')\|=\|x-y\|##. By transitivity, ##\|x-y\|<\delta(\varepsilon)=\varepsilon\Rightarrow\|G(x)-G(y)\|<\varepsilon##. We have proved that ##G## is continuous.

We have proven that ##F## is a homeomorphism onto its image. We now prove that the image of ##F## is all of ##\mathbb{R}^n## as follows: prove that ##\mathrm{im}(F)## is both open and closed in ##\mathbb{R}^n##, and since the only clopen sets in a connected space are the empty set and the whole space, the image of ##F## must be all of ##\mathbb{R}^n##.

Openness of ##\mathrm{im}(F)## follows from continuity of ##G##, so we need only show that the image is closed. In a complete metric space, this is equivalent to proving that the limit of a Cauchy sequence in the image is also contained in the image. Suppose ##c_n\rightarrow c## with ##c_n\in\mathrm{im}(F)##. Thus there exists a sequence ##x_n\in\mathbb{R}^n## with ##F(x_n)=c_n##. The definition of a Cauchy sequence states that $$(\forall\varepsilon>0)(\exists N\in\mathbb{N})(\forall m,n>N)\quad\|c_m-c_n\|<\varepsilon\\\Rightarrow\|x_m-x_n\|\leq\|F(x_m)-F(x_n)\|=\|c_m-c_n\|<\varepsilon$$ by the inequality in the problem statement. Thus, ##x_n## is also Cauchy with limit ##x_n\rightarrow x\in\mathbb{R}^n##. By continuity of ##F##, ##\|F(x)-c_n\|=\|F(x)-F(x_n)\|\rightarrow 0##. Since the limit of a sequence is unique, it must be true that ##c=F(x)##, proving that ##c\in\mathrm{im}(F)##.

We have shown that the image of ##F## is clopen. Since a nonempty clopen set in a connected space is the whole space, the image of ##F## is all of ##\mathbb{R}^n##. Altogether, we have shown that ##F## is a continuous bijection with a continuous inverse, proving that ##F## is a homeomorphism. ##\square##
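To illustrate the key estimate ##\|G(x)-G(y)\|\leq\|x-y\|## numerically, here is a one-dimensional toy example of my own (not from the post): ##F(x)=x+x^3## satisfies ##|F(x)-F(y)| = |x-y|\,(1+x^2+xy+y^2)\geq|x-y|##, and its inverse, computed by bisection, is ##1##-Lipschitz exactly as the proof predicts:

```python
import random

def F(x):
    # expanding map: F(x) - F(y) = (x - y)(1 + x^2 + xy + y^2), second factor >= 1
    return x + x**3

def G(y, lo=-100.0, hi=100.0, iters=100):
    """Inverse of F via bisection; F is strictly increasing."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if F(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(1)
pts = [random.uniform(-50.0, 50.0) for _ in range(60)]
inv = {u: G(u) for u in pts}
assert all(abs(inv[u] - inv[v]) <= abs(u - v) + 1e-9
           for u in pts for v in pts)
print("G is 1-Lipschitz on the sample, matching the proof")
```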

9. Let ##A,B\in \mathbb{M}(m,\mathbb{R})## and ##\|A\|,\|B\|\leq 1## with a submultiplicative matrix norm, then $$\left\|\,e^{A+B}-e^A\cdot e^B\,\right\|\leq 6e^2\cdot \left\|\,[A,B]\,\right\|$$(FR)

This problem took me 3 days, so hopefully I didn't mess up o_O

I expect that there is almost certainly a much easier way to do this problem, which I could not find. But I did manage to obtain a much better bound, so I suppose there's that.

We use the following result from https://www.researchgate.net/publication/318418316_The_Non-Commutative_Binomial_Theorem: in any unital Banach algebra, we have $$e^{X+Y}=e^Xe^Y+\left(\sum_{n=0}^\infty\frac{D_n(Y,X)}{n!}\right)e^Y$$ where $$D_{n+1}(Y,X)=[Y,X^n+D_n(Y,X)]+X\cdot D_n(Y,X)$$ is defined by a recurrence relation, with ##D_0(Y,X)=D_1(Y,X)=0##. From this it is clear that ##D_2(Y,X)=[Y,X]##.

Assume that ##X,Y\in\mathcal{A}## where ##\mathcal{A}## is a unital Banach algebra. Our strategy is as follows: factor the non-commuting part as ##\gamma([Y,X])##, where ##\gamma## is some element of the operator algebra ##B(\mathcal{A})## dependent on ##X## and ##Y##, then obtain bounds on its operator norm. Firstly, let us work in the enveloping algebra of ##\mathcal{A}##. Let us denote ##\mathrm{ad}_X=L_X-R_X##. We can rewrite ##D_n## in the following manner: $$D_{n+1}(Y,X)=[Y,X^n]+(L_X+\mathrm{ad}_Y)(D_n(Y,X)).$$ Our goal is to rewrite ##D_n## as a linear combination of previous elements in the sequence, for reasons that will become clear. Observe that ##[Y,X^n]=[Y,X^{n-1}]X+X^{n-1}[Y,X]##. Since ##D_{n+1}-(L_X+\mathrm{ad}_Y)D_n=[Y,X^n]##, we have $$[Y,X^n]=[Y,X^{n-1}]X+X^{n-1}[Y,X]=\left(D_n-(L_X+\mathrm{ad}_Y)(D_{n-1})\right)X+X^{n-1}[Y,X].$$ This implies that we can rewrite ##D_n## in the following manner: $$D_{n+2}=(D_{n+1}-(L_X+\mathrm{ad}_Y)(D_{n}))X+X^n[Y,X]+(L_X+\mathrm{ad}_Y)(D_{n+1})\\=X^n[Y,X]+(L_X+R_X+\mathrm{ad}_Y)(D_{n+1})-R_X(L_X+\mathrm{ad}_Y)(D_n).$$ Now, suppose that there exists ##\Gamma_n\in B(\mathcal{A})## for ##n<k## such that ##\Gamma_n([Y,X])=D_n(Y,X)##. Then, provided that ##k\geq 2##, one can find ##\Gamma_k## such that ##\Gamma_k([Y,X])=D_k(Y,X)##, using the second-order equation for ##D_n##: $$\Gamma_k=L_{X^{k-2}}+(L_X+R_X+\mathrm{ad}_Y)\Gamma_{k-1}-R_X(L_X+\mathrm{ad}_Y)\Gamma_{k-2}.$$ Letting ##\Gamma_0=\Gamma_1=0##, we have ##k=2## and the formula holds for all ##\Gamma_n## by induction. (In particular, ##\Gamma_2=\mathrm{id_{\mathcal{A}}}##.) Now let ##\gamma=\sum_{n=0}^\infty\frac{\Gamma_n}{n!}## and we have the formula ##e^{X+Y}-e^Xe^Y = \gamma([Y,X])\cdot e^Y##.

Now we obtain bounds on the operator norm of ##\gamma##, for ##\|e^{X+Y}-e^Xe^Y\|\leq\|\gamma\|_{\mathrm{op}}\,\|[Y,X]\|\,\|e^Y\|##. First note that
$$\|\gamma\|_{\mathrm{op}}\leq\sum_{n=0}^\infty\frac{\|\Gamma_n\|_{\mathrm{op}}}{n!}$$ by subadditivity. From the defining recurrence relation of ##\Gamma_n##, we have the following inequality: $$\|\Gamma_{n+2}\|\leq\|L_X\|^n+(\|L_X\|+\|R_X\|+\|\mathrm{ad}_Y\|)\,\|\Gamma_{n+1}\|+\|R_X\|(\|L_X\|+\|\mathrm{ad}_Y\|)\,\|\Gamma_n\|$$ by subadditivity and submultiplicativity of the operator norm. We can make substitutions based on the facts that ##\|L_X\|_{\mathrm{op}},\|R_X\|_{\mathrm{op}}\leq\|X\|_{\mathcal{A}}## and ##\|\mathrm{ad}_Y\|_{\mathrm{op}}\leq 2\|Y\|_{\mathcal{A}}##. Suppose ##\|X\|,\|Y\|\leq C##. Then we have the inequality $$\|\Gamma_{n+2}\|\leq C^n+4C\|\Gamma_{n+1}\|+3C^2\|\Gamma_n\|.$$ Roughly, this suggests that ##\|\Gamma_n\|=O(C^{n-2})## for fixed ##n##, which we will now make more precise. Let $$a_{n+2}=1+4a_{n+1}+3a_n;\quad a_0=a_1=0.$$ If we have ##\|\Gamma_n\|\leq a_n C^{n-2}## for ##n<k##, then one has $$\|\Gamma_k\|\leq C^{k-2}+4C\|\Gamma_{k-1}\|+3C^2\|\Gamma_{k-2}\|\\\leq C^{k-2}+4C\cdot C^{k-3}a_{k-1}+3C^2\cdot C^{k-4}a_{k-2}\\=C^{k-2}(1+4a_{k-1}+3a_{k-2})=C^{k-2}a_k$$ which is valid, provided that ##k\geq 4##. Observing that the inequality holds for ##\|\Gamma_2\|=1## and ##\|\Gamma_3\|\leq 5C##, the inequality holds for all ##n\in\mathbb{N}## by induction.

We now return to the operator ##\gamma##: applying the inequality for ##\|\gamma\|## in terms of ##\Gamma_n##, we have that $$\|\gamma\|\leq\sum_{n=2}^\infty\frac{C^{n-2}a_n}{n!}$$ The right-hand side equals ##f(C)/C^2##, where ##f## is the unique solution to the differential equation ##f''(t)=e^t+4f'(t)+3f(t)## with ##f(0)=0, f'(0)=0##. It can be written as $$f(t)=\frac{1}{84}\left((7-\sqrt{7})e^{(2+\sqrt{7})t}+(7+\sqrt{7})e^{(2-\sqrt{7})t}-14e^t\right).$$ We now have the bound $$\|e^{X+Y}-e^Xe^Y\|\leq\frac{f(C)}{C^2}\|[Y,X]\|\,\|e^Y\|\\\leq\frac{f(C)}{C^2}\|[Y,X]\|e^{\|Y\|}\\\leq\frac{f(C)}{C^2}e^C\|[Y,X]\|.$$ In the case that ##\|X\|,\|Y\|\leq 1##, one sets ##C = 1##: $$\|e^{X+Y}-e^Xe^Y\|\leq f(1)e\|[Y,X]\|\approx 5.01e\|[Y,X]\|\leq 6e^2\|[Y,X]\|.$$ Thus, we have improved significantly on the original inequality.
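As a numerical spot-check of the final bound (my addition; it assumes the derivation above is correct): the closed form gives ##f(1)\approx 5.01##, and random matrices with spectral norm at most ##1## can be tested against ##f(1)\,e\,\|[Y,X]\|##:

```python
import math
import numpy as np

def f(t):
    # closed-form solution of f'' = e^t + 4 f' + 3 f with f(0) = f'(0) = 0
    s = math.sqrt(7)
    return ((7 - s) * math.exp((2 + s) * t)
            + (7 + s) * math.exp((2 - s) * t)
            - 14 * math.exp(t)) / 84

print(f(1))  # about 5.01, so f(1)*e is well below 6*e^2 ≈ 44.3

def expm(M, terms=40):
    # Taylor-series matrix exponential; adequate for these small norms
    E = np.eye(M.shape[0]); T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

rng = np.random.default_rng(2)
for _ in range(100):
    X = rng.standard_normal((3, 3)); X /= np.linalg.norm(X, 2)
    Y = rng.standard_normal((3, 3)); Y /= np.linalg.norm(Y, 2)
    lhs = np.linalg.norm(expm(X + Y) - expm(X) @ expm(Y), 2)
    rhs = f(1) * math.e * np.linalg.norm(Y @ X - X @ Y, 2)
    assert lhs <= rhs + 1e-9
print("improved bound holds on all random samples")
```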
 
  • #46
suremarc said:
This problem took me 3 days, so hopefully I didn't mess up o_O

I expect that there is almost certainly a much easier way to do this problem, which I could not find. But I did manage to obtain a much better bound, so I suppose there's that.

We use the following result from https://www.researchgate.net/publication/318418316_The_Non-Commutative_Binomial_Theorem: in any unital Banach algebra, we have $$e^{X+Y}=e^Xe^Y+\left(\sum_{n=0}^\infty\frac{D_n(Y,X)}{n!}\right)e^Y$$ where $$D_{n+1}(Y,X)=[Y,X^n+D_n(Y,X)]+X\cdot D_n(Y,X)$$ is defined by a recurrence relation, with ##D_0(Y,X)=D_1(Y,X)=0##. From this it is clear that ##D_2(Y,X)=[Y,X]##.

Assume that ##X,Y\in\mathcal{A}## where ##\mathcal{A}## is a unital Banach algebra. Our strategy is as follows: factor the non-commuting part as ##\gamma([Y,X])##, where ##\gamma## is some element of the operator algebra ##B(\mathcal{A})## dependent on ##X## and ##Y##, then obtain bounds on its operator norm. Firstly, let us work in the enveloping algebra of ##\mathcal{A}##. Let us denote ##\mathrm{ad}_X=L_X-R_X##. We can rewrite ##D_n## in the following manner: $$D_{n+1}(Y,X)=[Y,X^n]+(L_X+\mathrm{ad}_Y)(D_n(Y,X)).$$ Our goal is to rewrite ##D_n## as a linear combination of previous elements in the sequence, for reasons that will become clear. Observe that ##[Y,X^n]=[Y,X^{n-1}]X+X^{n-1}[Y,X]##. Since ##D_{n+1}-(L_X+\mathrm{ad}_Y)D_n=[Y,X^n]##, we have $$[Y,X^n]=[Y,X^{n-1}]X+X^{n-1}[Y,X]=\left(D_n-(L_X+\mathrm{ad}_Y)(D_{n-1})\right)X+X^{n-1}[Y,X].$$ Hence we can rewrite ##D_n## as follows: $$D_{n+2}=(D_{n+1}-(L_X+\mathrm{ad}_Y)(D_{n}))X+X^n[Y,X]+(L_X+\mathrm{ad}_Y)(D_{n+1})\\=X^n[Y,X]+(L_X+R_X+\mathrm{ad}_Y)(D_{n+1})-R_X(L_X+\mathrm{ad}_Y)(D_n).$$ Now, suppose that there exists ##\Gamma_n\in B(\mathcal{A})## for ##n<k## such that ##\Gamma_n([Y,X])=D_n(Y,X)##. Then, provided that ##k\geq 2##, one can find ##\Gamma_k## such that ##\Gamma_k([Y,X])=D_k(Y,X)##, using the second-order equation for ##D_n##: $$\Gamma_k=L_{X^{k-2}}+(L_X+R_X+\mathrm{ad}_Y)\Gamma_{k-1}-R_X(L_X+\mathrm{ad}_Y)\Gamma_{k-2}.$$ Letting ##\Gamma_0=\Gamma_1=0## settles the base case ##k=2##, and the formula holds for all ##\Gamma_n## by induction. (In particular, ##\Gamma_2=\mathrm{id_{\mathcal{A}}}##.) Now let ##\gamma=\sum_{n=0}^\infty\frac{\Gamma_n}{n!}## and we have the formula ##e^{X+Y}-e^Xe^Y = \gamma([Y,X])\cdot e^Y##.

Now we obtain bounds on the operator norm of ##\gamma##, for ##\|e^{X+Y}-e^Xe^Y\|\leq\|\gamma\|_{\mathrm{op}}\,\|[Y,X]\|\,\|e^Y\|##. First note that
$$\|\gamma\|_{\mathrm{op}}\leq\sum_{n=0}^\infty\frac{\|\Gamma_n\|_{\mathrm{op}}}{n!}$$ by subadditivity. From the defining recurrence relation of ##\Gamma_n##, we have the following inequality: $$\|\Gamma_{n+2}\|\leq\|L_X\|^n+(\|L_X\|+\|R_X\|+\|\mathrm{ad}_Y\|)\,\|\Gamma_{n+1}\|+\|R_X\|(\|L_X\|+\|\mathrm{ad}_Y\|)\,\|\Gamma_n\|$$ by subadditivity and submultiplicativity of the operator norm. We can make substitutions based on the facts that ##\|L_X\|_{\mathrm{op}},\|R_X\|_{\mathrm{op}}\leq\|X\|_{\mathcal{A}}## and, since ##\mathrm{ad}_Y=L_Y-R_Y##, ##\|\mathrm{ad}_Y\|_{\mathrm{op}}\leq 2\|Y\|_{\mathcal{A}}##. Suppose ##\|X\|,\|Y\|\leq C##. Then we have the inequality $$\|\Gamma_{n+2}\|\leq C^n+4C\|\Gamma_{n+1}\|+3C^2\|\Gamma_n\|.$$ Roughly, this suggests that ##\|\Gamma_n\|=O(C^{n-2})## when ##n## is held constant, which we will now make more precise. Let $$a_{n+2}=1+4a_{n+1}+3a_n;\quad a_0=a_1=0.$$ If we have ##\|\Gamma_n\|\leq C^{n-2}a_n## for ##n<k##, then one has $$\|\Gamma_k\|\leq C^{k-2}+4C\|\Gamma_{k-1}\|+3C^2\|\Gamma_{k-2}\|\\\leq C^{k-2}+4C\cdot C^{k-3}a_{k-1}+3C^2\cdot C^{k-4}a_{k-2}\\=C^{k-2}(1+4a_{k-1}+3a_{k-2})=C^{k-2}a_k$$ which is valid provided that ##k\geq 4##. Observing that the inequality holds for ##\|\Gamma_2\|=1## and ##\|\Gamma_3\|\leq 5C##, it holds for all ##n\in\mathbb{N}## by induction.

We now return to the operator ##\gamma##: applying the inequality for ##\|\gamma\|## in terms of ##\Gamma_n##, we have that $$\|\gamma\|\leq\sum_{n=2}^\infty\frac{C^{n-2}a_n}{n!}.$$ The right-hand side equals ##f(C)/C^2##, where ##f## is the unique solution to the differential equation ##f''(t)=e^t+4f'(t)+3f(t)## with ##f(0)=0, f'(0)=0##. It can be written as $$f(t)=\frac{1}{84}\left((7-\sqrt{7})e^{(2+\sqrt{7})t}+(7+\sqrt{7})e^{(2-\sqrt{7})t}-14e^t\right).$$ We now have the bound $$\|e^{X+Y}-e^Xe^Y\|\leq\frac{f(C)}{C^2}\|[Y,X]\|\,\|e^Y\|\\\leq\frac{f(C)}{C^2}\|[Y,X]\|e^{\|Y\|}\\\leq\frac{f(C)}{C^2}e^C\|[Y,X]\|.$$ In the case that ##\|X\|,\|Y\|\leq 1##, one takes ##C=1##: $$\|e^{X+Y}-e^Xe^Y\|\leq f(1)e\|[Y,X]\|\approx 5.01e\|[Y,X]\|\leq 6e^2\|[Y,X]\|.$$ Thus, we have improved significantly on the original inequality.
This will take me some time to check, although I'm pretty sure it is right. You have used some big cannons in there! My solution only uses Bubble Sort :wink:

The result is used to prove H.F. Trotter's formula (problem 10), where the exact factor is irrelevant. I liked these two formulas because they are probably useful for physicists.
 
  • Like
Likes suremarc
  • #47
fresh_42 said:
13. Let ##f:\mathbb{Q}\to\mathbb{Q}## be the function ##f(x)=x^3-2x##. Show that ##f## is injective. (IR)

I think there should be a solution simpler than whatever I could get so far, but I guess such a solution would use some property or identity that I am unfamiliar with.
To prove that the function is injective on the domain ##\mathbb{Q}##, we need to show that there cannot be 2 distinct rational numbers that are mapped to the same value by the function ##f##. Suppose ##f## is not injective on the rational numbers; then there must be at least 2 distinct rational numbers, say ##x, y##, such that ##f(x) = f(y)##. Without loss of generality, we may write ##y## as ##y=x+z## where ##z > 0## and ##z \in \mathbb{Q}## (it is clear that ##\mathbb{Q}## is closed under addition and subtraction).

Therefore,
$$
f(x) = f(x+z) \Rightarrow f(x+z) - f(x) = 0 \Rightarrow (x+z)^3 - 2(x+z) - x^3 + 2x = 0 \\
\Rightarrow z\left(x^2 + (x+z)^2 + x(x+z)\right) - 2z = 0 \Rightarrow z\left(3x^2 + 3zx + z^2 - 2\right) = 0 \Rightarrow 3x^2 + 3zx + z^2 - 2 = 0
$$
with the last step arising because ##z## cannot be ##0## by definition.

We solve the above quadratic equation considering ##x## as the variable and obtain solutions in terms of ##z##:
##x = \dfrac {-3z \pm \sqrt{9z^2 - 12(z^2 - 2)}} {6} = \dfrac {-3z \pm \sqrt{24 - 3z^2}} {6}##

Since rational numbers are closed under addition, multiplication and division (except division by zero), if we want ##x## to be rational as per the original assumption, we need all terms on the RHS of the above equation to be rational, and therefore ##\sqrt{24 - 3z^2}## must be a rational number. Since ##z## is a positive rational number, it can be written as ##a/b## where ##a,b## are coprime positive integers. Therefore,

##\sqrt{24 - 3z^2} \in \mathbb{Q} \Rightarrow \dfrac {\sqrt{24b^2 - 3a^2}} {b} \in \mathbb{Q} \Rightarrow {\sqrt{24b^2 - 3a^2}} \in \mathbb{Q}##

Since ##{24b^2 - 3a^2}## is also an integer, the condition ##{\sqrt{24b^2 - 3a^2}} \in \mathbb{Q}## requires ##(24b^2 - 3a^2)## to be a perfect square (the square root of a non-negative integer is rational only if that integer is a perfect square). Since ##(24b^2 - 3a^2) = 3(8b^2 - a^2)##, for this to be a perfect square ##(8b^2 - a^2)## must have an odd exponent of 3 in its prime factorization (otherwise the product ##3(8b^2 - a^2)## would have an odd exponent of 3 and so could not be a perfect square). In particular, ##(8b^2 - a^2) \bmod 3 = 0##.

We now consider various possible combinations of modulo 3 values for ##a## and ##b## and show that none of them lead to ##(8b^2 - a^2) \mod 3 = 0##. We do not consider the modulo 3 combination ##(0, 0)## since that would mean both ##a,b## are multiples of 3, violating the assumption that they are coprime.

| ##a \bmod 3## | ##b \bmod 3## | ##(8b^2 - a^2) \bmod 3## |
|---|---|---|
| 0 | 1 | 2 |
| 0 | 2 | 2 |
| 1 | 0 | 2 |
| 1 | 1 | 1 |
| 1 | 2 | 1 |
| 2 | 0 | 2 |
| 2 | 1 | 1 |
| 2 | 2 | 1 |

Thus, there exists no pair of coprime positive integers ##(a,b)## which satisfies the criterion ##(8b^2 - a^2) \bmod 3 = 0##. This implies that we cannot find a coprime integer pair ##(a,b)## for which ##24b^2 - 3a^2## is a perfect square. Hence, by the quadratic formula for ##x##, there is no rational-valued solution for ##x## if ##z \in \mathbb{Q}##, i.e. ##x## and ##z## cannot both be rational, i.e. the requirement for the function ##f## to be non-injective cannot be met. Hence ##f## must be an injective function.
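Both claims in this argument are easy to spot-check with exact rational arithmetic. The sketch below (not part of the proof; the search bounds are arbitrary choices) brute-forces injectivity of ##f## over small rationals and confirms that ##24b^2-3a^2## is never a positive perfect square for coprime ##a,b##:

```python
from fractions import Fraction
from math import gcd, isqrt

# brute-force injectivity of f(x) = x^3 - 2x over small rationals p/q
f = lambda x: x**3 - 2 * x
seen = {}
for q in range(1, 16):
    for p in range(-30, 31):
        if gcd(p, q) != 1:
            continue
        x = Fraction(p, q)
        assert seen.setdefault(f(x), x) == x  # no two distinct inputs collide

# 24*b^2 - 3*a^2 is never a positive perfect square for coprime a, b
for b in range(1, 60):
    for a in range(1, 60):
        if gcd(a, b) != 1:
            continue
        n = 24 * b * b - 3 * a * a
        assert n <= 0 or isqrt(n) ** 2 != n
```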
 
  • Like
Likes PeroK and Infrared
  • #48
@suremarc Your solution is good, but could you elaborate on why ##F(\mathbb{R}^n)## is open?
suremarc said:
Openness of ##\mathrm{im}(F)## follows from continuity of ##G##
Keep in mind that a function that is a homeomorphism onto its image does not always have open image (e.g. embed ##\mathbb{R}\hookrightarrow\mathbb{R}^2## in the obvious way).

@Not anonymous Nice work! There is also a solution using parity arguments along the lines of post 14, but this argument is good too.
 
  • #49
High School :-
Question 13. Let ##f : \mathbb Q \rightarrow \mathbb Q## be the function ## f(x) = x^3 -2x##. Show that ##f## is injective.

https://drive.google.com/file/d/1uLMRllrzimUW65IzqwOL9JIdOuG3RfTz/view?usp=sharing
 
  • #50
@Adesh You argue that for ##x_2## to be rational, ##\sqrt{8-3x_1^2}## must be rational, so ##8-3x_1^2## must be a perfect square. However, this only means that ##8-3x_1^2## is the square of a rational, not necessarily of an integer, so you can't conclude that it has to be ##4## or ##1##.

@Not anonymous gave a proof with the same idea (quadratic formula) in post 47 you could look at.
 
  • #51
Infrared said:
@suremarc Your solution is good, but could you elaborate on why ##F(\mathbb{R}^n)## is open?

Keep in mind that a function that is a homeomorphism onto its image does not always have open image (e.g. embed ##\mathbb{R}\hookrightarrow\mathbb{R}^2## in the obvious way).
D’oh. This turned out to be more complicated than I thought. However, when ##\mathbb{R}^n## is the domain and codomain, one can apply invariance of domain to show that the image is open.

In both your problem and @fresh_42’s, I wanted to avoid assumptions based on Euclidean space. I guess my nature compels me to formulate things as generally as possible. However, it seems that in this case the solution depends on a nontrivial property of ##\mathbb{R}^n##. I have much to learn when it comes to topology...
 
Last edited:
  • #52
suremarc said:
This problem took me 3 days, so hopefully I didn't mess up o_O

[Full solution quoted in post #46; snipped for brevity.]
You must have nightmares with BCH! I don't see a mistake, although I haven't checked the details of your last recursion which led to ##5.01##.

The proof I have is essentially the same, only a bit lazier with the commutators and the estimation. I add it here, for the sake of completeness - and because it is not really different.

For a vector ##\alpha \in \{\,0,1\,\}^n## we define
$$
S(\alpha) := \prod_{k=1}^n A^{1-\alpha_k}B^{\alpha_k}
$$
Each of these vectors can be sorted into ascending order in at most ##n^2## steps, e.g. with Bubble sort, until ##S(\alpha_{Sort})=A^{n-|\alpha|}B^{|\alpha|}##, where every commutation of adjacent ##A,B## results in an additional term ##[A,B]=AB-BA\,:##
$$
S(BA)-S(AB) = BA - AB = -[A,B]
$$
or with an example which needs more steps:
\begin{align*}
\begin{bmatrix}\alpha_0=(1,0,1,0)\\S(\alpha_0)=BABA \end{bmatrix} &\longrightarrow \begin{bmatrix}\alpha_1=(0,1,1,0)\\S(\alpha_1)=ABBA \end{bmatrix} \\
S(\alpha_0)-S(\alpha_1)&=(BA-AB)BA=-[A,B]BA \\
\begin{bmatrix}\alpha_1=(0,1,1,0)\\S(\alpha_1)=ABBA \end{bmatrix} &\longrightarrow \begin{bmatrix}\alpha_2=(0,1,0,1)\\S(\alpha_2)=ABAB \end{bmatrix} \\
S(\alpha_1)-S(\alpha_2)&=AB(BA-AB)=-AB[A,B] \\
\begin{bmatrix}\alpha_2=(0,1,0,1)\\S(\alpha_2)=ABAB \end{bmatrix} &\longrightarrow \begin{bmatrix}\alpha_3=(0,0,1,1)\\S(\alpha_3)=AABB \end{bmatrix} \\
S(\alpha_2)-S(\alpha_3)&=A(BA-AB)B =-A[A,B]B
\end{align*}
So every sorting step produces a factor ##[A,B]##, which combined with the assumption ##\|A\|,\|B\|\leq 1## means ##\|S(\alpha_k)-S(\alpha_{k+1})\| \leq \left\|\,[A,B]\,\right\|## and
$$
\|S(\alpha)-S(\alpha_{Sort})\| \leq \sum_{k=0}^{n^2-1} \|S(\alpha_k)-S(\alpha_{k+1})\| \leq n^2\cdot \|\,[A,B]\,\|
$$
\begin{align*}
e^{A+B} &= \sum_{n=0}^\infty \dfrac{1}{n!} (A+B)^n=\sum_{n=0}^\infty \dfrac{1}{n!} \sum_{\alpha\in \{0,1\}^n} S(\alpha)\\
e^A\cdot e^B&= \left(\sum_{n=0}^\infty \dfrac{A^n}{n!}\right)\cdot\left(\sum_{n=0}^\infty \dfrac{B^n}{n!}\right)= \sum_{n=0}^\infty \sum_{k=0}^n \dfrac{A^{n-k}}{(n-k)!}\cdot \dfrac{B^k}{k!}\\
&=\sum_{n=0}^\infty \dfrac{1}{n!} \sum_{k=0}^n \binom{n}{k}A^{n-k}B^k = \sum_{n=0}^\infty \dfrac{1}{n!} \sum_{\alpha\in \{0,1\}^n} A^{n-|\alpha|}B^{|\alpha|}\\
&= \sum_{n=0}^\infty \dfrac{1}{n!} \sum_{\alpha\in \{0,1\}^n} S(\alpha_{Sort})
\end{align*}
With these equations we get
\begin{align*}
\|\;e^{A+B}-e^A \cdot e^B\;\| &\leq \sum_{n=0}^\infty \dfrac{1}{n!} \sum_{\alpha\in \{0,1\}^n} \|\;S(\alpha)-S(\alpha_{Sort})\;\|\\
& \leq \sum_{n=0}^\infty \dfrac{2^n}{n!} \cdot n^2\cdot \|\,[A,B]\,\|\\
&= 6e^2 \cdot \|\,[A,B]\,\|
\end{align*}
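The key sorting estimate above — each adjacent ##BA\to AB## swap changes the product by at most ##\|[A,B]\|## when ##\|A\|,\|B\|\leq 1##, and Bubble sort needs exactly one swap per inversion of the word — can be spot-checked numerically. Here is a small sketch (the random matrices and the sample word are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
A = rng.standard_normal((d, d)); A /= np.linalg.norm(A, 2)  # ||A|| <= 1
B = rng.standard_normal((d, d)); B /= np.linalg.norm(B, 2)  # ||B|| <= 1

def word_product(alpha):
    # S(alpha): factor A where alpha_k = 0, factor B where alpha_k = 1
    P = np.eye(d)
    for bit in alpha:
        P = P @ (B if bit else A)
    return P

alpha = [1, 0, 1, 1, 0, 0, 1, 0]
# number of adjacent swaps Bubble sort needs = number of inversions
inversions = sum(1 for i in range(len(alpha))
                   for j in range(i + 1, len(alpha))
                   if alpha[i] == 1 and alpha[j] == 0)
sorted_alpha = sorted(alpha)  # all A's first, then all B's

lhs = np.linalg.norm(word_product(alpha) - word_product(sorted_alpha), 2)
rhs = inversions * np.linalg.norm(A @ B - B @ A, 2)
assert lhs <= rhs + 1e-12
```

Note that the inversion count is at most ##n(n-1)/2##, consistent with the trivial ##n^2## bound used above.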
 
  • Like
Likes julian and suremarc
  • #53
fresh_42 said:
You must have nightmares with BCH!
I apologize, but I’m not sure what you mean by this 😅 Applying the BCH formula was my first instinct, but I didn’t know enough about the higher-order terms to extract anything useful. So instead I found that paper on the so-called “non-commutative binomial theorem” which helped me get a foothold on the problem.
 
  • Like
Likes fresh_42
  • #54
Infrared said:
@Adesh You argue that for ##x_2## to be rational, ##\sqrt{8-3x_1^2}## must be rational, so ##8-3x_1^2## must be a perfect square. However, this only means that ##8-3x_1^2## is the square of a rational, not necessarily of an integer, so you can't conclude that it has to be ##4## or ##1##.

@Not anonymous gave a proof with the same idea (quadratic formula) in post 47 you could look at.
I have used Rolle’s Theorem for proving it.

Let’s assume we draw the graph of ##f(x) = x^3 -2x## on the coordinate system whose x and y axes consist of only rational numbers (well that’s what we are given) .

If in an interval ##\left [ a,b \right] ## we have a condition $$ f(a) = f(b)$$ then according to Rolle’s Theorem there exists a ## c \in \left(a , b \right)## such that $$ f’(c) = 0 $$.

Well, let’s assume that there exists such a ##c##. Then $$ f’(c) =0 \\
3c^2 -2 =0 \\
c = \pm\sqrt{\frac{2}{3}} $$ But this ##c## doesn’t belong to ##\left( a,b\right)##, because this ##c## is irrational and all the points between ##a## and ##b## are rational.

Therefore, when only rational inputs are allowed we cannot get ##f(a) = f(b)## for distinct ##a## and ##b## and hence ##f## is injective.
 
  • #55
@suremarc Great, invariance of domain certainly does the trick. I'm not sure if there's an easy, direct way that avoids using some topology.

@Adesh Nice solution! I hadn't thought of using Rolle's theorem like that. [Edit: As @archaic noted, the argument isn't sound]
 
Last edited:
  • #56
Adesh said:
I have used Rolle’s Theorem for proving it.

Let’s assume we draw the graph of ##f(x) = x^3 -2x## on the coordinate system whose x and y axes consist of only rational numbers (well that’s what we are given) .

If in an interval ##\left [ a,b \right] ## we have a condition $$ f(a) = f(b)$$ then according to Rolle’s Theorem there exists a ## c \in \left(a , b \right)## such that $$ f’(c) = 0 $$.

Well, let’s assume that there exists such a ##c##. Then $$ f’(c) =0 \\
3c^2 -2 =0 \\
c = \pm\sqrt{\frac{2}{3}} $$ But this ##c## doesn’t belong to ##\left( a,b\right)##, because this ##c## is irrational and all the points between ##a## and ##b## are rational.

Therefore, when only rational inputs are allowed we cannot get ##f(a) = f(b)## for distinct ##a## and ##b## and hence ##f## is injective.
I don't see why the nature of ##c## influences that of ##a## and ##b##. For ##f(x)=(x-1)(x-2)(x-3)##, we have ##f'(x)=0## for ##x=\frac{6\pm\sqrt 3}{3}##, both irrational, with ##\frac{6-\sqrt 3}{3}\in(1,\,2)## but not in the rational interval ##(1,\,2)##, and ##f(1)=f(2)##.
 
  • #57
@archaic Good catch! @Adesh there is no reason why ##c## has to be rational, so I don't think the contradiction works.
 
  • Like
Likes archaic
  • #58
Infrared said:
@archaic Good catch! @Adesh there is no reason why ##c## has to be rational, so I don't think the contradiction works.
It's because if the x-axis consists of only rational numbers (which is our domain), and if our function ever makes a U-turn, then at some point on the x-axis it has to have a derivative of zero.

If ##c## doesn't lie between ##a## and ##b##, that means ##c## doesn't lie on the x-axis and hence the function doesn't make a U-turn.

I'm always available to give all kinds of justifications.
 
  • #59
Those classical theorems of calculus (IVT, MVT,...) aren't true if you restrict the domain to be only rational numbers.

For example, consider the function ##f(x)=x^3-x##. Then ##f(0)=f(1)=0##, but ##f'(x)=3x^2-1## does not have rational roots.
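This counterexample can be verified mechanically: by the rational root theorem, a rational root ##p/q## of ##3x^2-1## in lowest terms would need ##p\mid 1## and ##q\mid 3##, so only four candidates exist. A quick sketch:

```python
from fractions import Fraction

# f(x) = x^3 - x vanishes at the rational points 0 and 1...
f = lambda x: x**3 - x
assert f(Fraction(0)) == f(Fraction(1)) == 0

# ...but f'(x) = 3x^2 - 1 has no rational root: by the rational root
# theorem, a root p/q in lowest terms needs p | 1 and q | 3
candidates = [Fraction(p, q) for p in (1, -1) for q in (1, 3)]
assert all(3 * x**2 - 1 != 0 for x in candidates)
```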
 
  • Like
Likes archaic
  • #60
archaic said:
I don't see why the nature of ##c## influences that of ##a## and ##b##. For ##f(x)=(x-1)(x-2)(x-3)##, we have ##f'(x)=0## for ##x=\frac{6\pm\sqrt 3}{3}##, both irrational, with ##\frac{6-\sqrt 3}{3}\in(1,\,2)## but not in the rational interval ##(1,\,2)##, and ##f(1)=f(2)##.
Because ##c## doesn't belong to the domain of ##f## but ##a ## and ##b## do.

Let me explain it with a similar example, I'm going to quote from SL Loney's The Elements of Coordinate Geometry, article 153, page 121:

Point of intersection , in general, of the straight line , $$ y = mx +c$$ with the circle $$ x^2 + y ^2 =a^2$$
The coordinates of points in which the straight line meets the circle satisfy both the equations, therefore
$$ x^2 + (mx+c)^2 = a^2 \\
x^2 (1+m^2) + 2mxc + c^2 - a^2 = 0 $$
The roots of this equation may be real, coincident or imaginary according as $$ (2mc)^2 -4(1+m^2) (c^2-a^2) ~ \textrm{is positive, zero or negative, i.e., according as } \\
a^2 (1+m^2) -c^2 ~\textrm{is positive, zero or negative}$$
...
When ##c^2## is greater than ##a^2(1+m^2)##, the line doesn't meet the circle at all, or rather this is better expressed by saying that it meets the circle in imaginary points
 
  • #61
Infrared said:
Those classical theorems of calculus (IVT, MVT,...) aren't true if you restrict the domain to be only rational numbers.

For example, consider the function ##f(x)=x^3-x## Then ##f(0)=f(1)=0##, but ##f'(x)=3x^2-1## does not have rational roots.
Yes, I agree with that. My proof of question 13 by Rolle's Theorem is flawed. Thank you @archaic and @Infrared
 
  • Like
Likes archaic
  • #62
One last try for question 13.

I have just remastered that quadratic proof which was flawed.
Here is my last try.
 
  • #63
fresh_42 said:
You must have nightmares with BCH! I don't see a mistake, although I haven't checked the details of your last recursion which led to ##5.01##.

[Bubble-sort proof snipped; quoted in full in post #52.]

So this Bubble sort thing... Say there are ##n## positions for the ##0##'s and ##1##'s. A first cycle of comparisons, where you compare every pair of adjacent elements and swap their positions:

\begin{align*}
(\underbrace{1,0},0,1 \dots, 0,0) & \rightarrow (0,1,0,1 \dots, 0,0)\\
(0, \underbrace{1,0},1 \dots, 0,0) & \rightarrow (0,0,1,1 \dots, 0,0)\\
(0, 0,\underbrace{1,1} \dots, 0,0) & \rightarrow (0,0,1,1 \dots, 0,0)\\
\vdots \\
(0, 0,1,1, \dots, \underbrace{1,0}) & \rightarrow (0,0,1,1, \dots, 0,1)
\end{align*}

inevitably leads to a ##1## in the last position after ##n-1## comparisons (unless they are all ##0##). Meaning in a second cycle we need only consider ##n-2## comparisons. In a third cycle we need only consider ##n-3## comparisons. And so on. The total number of comparisons needed is then ##(n-1)+ (n-2) + \cdots + 1 = {n(n-1) \over 2}##. You can then say:

\begin{align*}
\|\;e^{A+B}-e^A \cdot e^B\;\| & \leq \sum_{n=0}^\infty \dfrac{2^n}{n!} \cdot {n(n-1) \over 2} \cdot \|\,[A,B]\,\|\\
&= 2e^2 \cdot \|\,[A,B]\,\| .
\end{align*}

Is this right?
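Both series constants are easy to confirm numerically: writing ##n^2=n(n-1)+n## gives ##\sum_n \frac{2^n n^2}{n!}=6e^2##, while ##\sum_n \frac{2^n}{n!}\cdot\frac{n(n-1)}{2}=2e^2##. A quick sanity check of the two sums (truncation depth is an arbitrary choice):

```python
from math import e, factorial

# partial sums of sum_{n>=0} 2^n/n! * n(n-1)/2  and  sum_{n>=0} 2^n/n! * n^2
s_half = sum(2**n / factorial(n) * n * (n - 1) / 2 for n in range(60))
s_sq = sum(2**n / factorial(n) * n**2 for n in range(60))

assert abs(s_half - 2 * e**2) < 1e-9  # julian's sharper constant
assert abs(s_sq - 6 * e**2) < 1e-9    # the n^2 bound used earlier
```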
 
Last edited:
  • #64
Seems right, but as mentioned earlier, this was only a step towards Trotter's formula in problem 10. For its proof we only need some finite value, so there isn't any need to optimize the sorting algorithm: ##n^2## is a trivial upper bound, not the least upper bound.
 
  • #65
fresh_42 said:
5. Find the area of the shape ##T## which is surrounded by the line ##|a_1 x + b_1 y + c_1| + |a_2 x + b_2 y + c_2| = m , m \gt 0## (given that ##a_1 b_2 - a_2 b_1 \neq 0##). (QQ)

The solution set of the equation ##|a_1 x + b_1 y + c_1| + |a_2 x + b_2 y + c_2| = m,\ m \gt 0##, consists of points lying on 4 line segments that form a parallelogram. Writing ##u = a_1 x + b_1 y + c_1## and ##v = a_2 x + b_2 y + c_2##, we have ##|u| + |v| = \max(|u+v|, |u-v|)##, so the sides lie on the four lines ##u + v = \pm m## and ##u - v = \pm m##, and the vertices are the intersections of adjacent pairs of these lines. That is, the enclosed shape ##T## is a parallelogram formed by the finite-area intersection of 2 infinite-area regions: region 1, consisting of all points between the parallel lines ##u + v = m## and ##u + v = -m##, and region 2, consisting of all points between the parallel lines ##u - v = m## and ##u - v = -m##.

The area of this parallelogram can be found using the absolute value of the cross product of 2 vectors that correspond to a pair of adjacent edges of the parallelogram and have a common source point. The value I get for the area of ##T## by calculating this way is ##\left| \dfrac {2m^2} {a_2 b_1 - a_1 b_2}\right|##.

Some details on how I found that the enclosed shape is a parallelogram. Solution to ##|a_1 x + b_1 y + c_1| + |a_2 x + b_2 y + c_2| = m## is the set made of all possible solutions to the pair of equations ##a_1 x + b_1 y + c_1 = z, a_2 x + b_2 y + c_2 = m - |z|## and all possible solutions to the pair of equations ##a_1 x + b_1 y + c_1 = z, a_2 x + b_2 y + c_2 = |z| - m##, both pairs evaluated for all ##z \in [-m, m]##.

For the 1st pair, the solution for any fixed value of ##z## is of the form ##y = \dfrac {a_1 c_2 - a_2 c_1 - a_1 m + a_2 z + a_1 |z|} {a_2 b_1 - a_1 b_2}##, ##x = \dfrac {-b_2 z + b_2 c_1 - b_1 c_2 + b_1 m - b_1 |z|} { a_2 b_1 - a_1 b_2}##.
Considering separately values of ##z \in [0,m]## and ##z \in [-m, 0)##, we find that the overall solution for that equation pair consists of all points (in ##X\text{-}Y## coordinates) on the line segment between ##p_1## and ##p_2##, and all points on the line segment between ##p_1## and ##p_3##, where ##p_1= \left( \dfrac {b_2 c_1 - b_1 c_2 + b_1 m} {a_2 b_1 - a_1 b_2}, \dfrac {a_1 c_2 - a_2 c_1 - a_1 m} {a_2 b_1 - a_1 b_2} \right)##, ##p_2 = \left( \dfrac{-b_2 m + b_2 c_1 - b_1 c_2} {a_2 b_1 - a_1 b_2} , \dfrac {a_1 c_2 - a_2 c_1 + a_2 m} {a_2 b_1 - a_1 b_2} \right)## and ##p_3 = \left( \dfrac {b_2 m + b_2 c_1 - b_1 c_2} {a_2 b_1 - a_1 b_2}, \dfrac {a_1 c_2 - a_2 c_1 - a_2 m} {a_2 b_1 - a_1 b_2} \right)##. The 3 points correspond to the values ##0, m \text{ and } -m## of ##z##.

Similarly, we can calculate the solutions for the 2nd equation pair, and we find that it too consists of points on 2 line segments, each of which includes either the point ##p_2## or the point ##p_3## obtained for the 1st equation pair. That the 4 line segments form a parallelogram can be confirmed by looking at their slopes: they consist of 2 pairs of parallel line segments, and the pairs intersect each other.
If we compute ##\vec A## corresponding to vector from ##p_1## to ##p_2## and ##\vec B## corresponding to vector from ##p_1## to ##p_3## and calculate the cross product ##\vec A \times \vec B##, we get the area of the enclosed parallelogram to be ##\left| \dfrac {2m^2} {a_2 b_1 - a_1 b_2}\right|##
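The area formula can be sanity-checked by computing the four vertices explicitly and applying the shoelace formula. In ##(u,v)##-coordinates the vertices of ##|u|+|v|=m## are ##(\pm m, 0)## and ##(0, \pm m)##; solving the linear system recovers them in ##(x,y)##. The coefficients below are an arbitrary sample, not from the thread:

```python
import numpy as np

# sample coefficients with a1*b2 - a2*b1 = -7 != 0 (arbitrary choice)
a1, b1, c1 = 1.0, 2.0, -1.0
a2, b2, c2 = 3.0, -1.0, 2.0
m = 5.0
M = np.array([[a1, b1], [a2, b2]])

# vertices of |u| + |v| = m, where (u, v) = (a1 x + b1 y + c1, a2 x + b2 y + c2)
verts = [np.linalg.solve(M, np.array([u - c1, v - c2]))
         for (u, v) in [(m, 0), (0, m), (-m, 0), (0, -m)]]

# shoelace formula over the vertices in cyclic order
area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1])))
assert abs(area - 2 * m**2 / abs(a1 * b2 - a2 * b1)) < 1e-9
```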
 
Last edited:
  • Like
Likes QuantumQuest
  • #67
Pony said:
I don't like how the solution of the 2nd problem points to https://www.physicsforums.com/threads/math-challenge-march-2020.984960/page-2#post-6306971 . That answer is very misleading. It only contains the trivial part of the problem and it does not even notice that this problem may be very very very difficult. @suremarc can you edit your answer?
I agree that the answer is misleading, since applying invariance of domain means you need only prove injectivity; in that case the proof can be stated in two lines...

Unfortunately, I can’t edit my post anymore, since editing is disabled after a certain period of time.
 
  • #68
Pony said:
I don't like how the solution of the 2nd problem points to https://www.physicsforums.com/threads/math-challenge-march-2020.984960/page-2#post-6306971 . That answer is very misleading.
suremarc said:
I agree that the answer is misleading, since applying invariance of domain means you need only prove injectivity; in that case the proof can be stated in two lines...

Unfortunately, I can’t edit my post anymore, since editing is disabled after a certain period of time.
I split the link to both posts.
 
  • #69
For problem 5,
##\text{Area}=(\frac{m}{a_1b_2-a_2b_1})^2\sqrt{(A_1^2+B_1^2)(A_2^2+B_2^2)-(A_1A_2+B_1B_2)^2}##
where ##A_1=a_1+a_2## and ##A_2=a_2-a_1## and likewise for B in terms of b
Sorry I didn’t simplify all the way or explain because it was tedious and I think I might’ve made an error.
 
  • #70
@archaic You have an algebra mistake. You should have ##3x_1^2=\frac{8q^2-p^2}{q^2}##. The approach is definitely fine though.
 
