# Thread: What is the Riemann Hypothesis?

1. Throughout this note I'll give a brief, informal introduction to the Riemann Hypothesis (RH): the statement of the conjecture, the difficulties of approaching it, and some notable consequences of RH in number theory. I hope the readers will enjoy this short survey and find inspiration to read further. As the author (which is me) is himself an amateur explaining RH through his very limited understanding, he will appreciate any feedback from the readers. Corrections and suggestions for improvements are more than welcome.

[The Riemann $\zeta$]

The central object of this conjecture is a special function, called the $\zeta$ function. It is defined as the sum

$$\zeta(s) = \sum_{n = 1}^\infty \frac1{n^s}$$

Exercise : Prove that this series is divergent for $\Re[s] \leq 1$, so that the zeta function is defined only on the domain $\Re[s] > 1$.
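Before proving it, one can watch the dichotomy numerically. A minimal Python sketch of my own (the helper name `zeta_partial` is just for illustration), contrasting $s = 2$ with $s = 1$:

```python
import math

def zeta_partial(s, N):
    """Partial sum of the defining series, sum_{n=1}^{N} 1/n^s."""
    return sum(1.0 / n ** s for n in range(1, N + 1))

# At s = 2 the partial sums settle near zeta(2) = pi^2/6 = 1.6449...,
# while at s = 1 (the harmonic series) they keep growing like log N.
for N in (10 ** 2, 10 ** 4, 10 ** 6):
    print(N, zeta_partial(2, N), zeta_partial(1, N))
```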

Much of this function's real and complex analytic properties are described in the integration tutorial of the calculus forum. However, this function is also of number theoretic interest and is connected to primes. This is explained rather elementarily from the Euler product form :

$$\zeta(s) = \prod_{p} \left ( 1 - \frac1{p^s} \right )^{-1}$$

Where the product is taken over all primes $p$. This is proved by noting that $\displaystyle \zeta(s) \prod_{q \leq p} \left ( 1 - \frac1{q^s}\right) = \sum_{\gcd(n,\, p\#) = 1} \frac1{n^s}$, where $p\#$ is the product of all primes up to $p$, often called the primorial of $p$, and the sum runs over the $n$ coprime to $p\#$. By the fundamental theorem of arithmetic, every natural number factorizes uniquely into primes. Thus, if there are finitely many primes, it is possible to continue the above process until we reach the largest prime; let $p$ be that prime. Then the last sum evaluates to $1$, since the only natural number coprime to every prime is $n = 1$, and we are done. On the other hand, if there are infinitely many primes, one can continue this process forever, ending up with an infinite product instead. In both cases, the product formula holds.

Exercise : Complete the above proof by showing that the product $\prod_p \left ( 1 - \frac1{p^s} \right)$ converges for $\Re[s] > 1$ under the hypothesis of infinitude of primes. If possible, make use of the convergence of $\zeta(s)$ in the domain of interest.
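To make the convergence tangible, here is a quick Python sketch of my own (`primes_upto` and `euler_product` are illustrative names): truncating the product at the primes up to $10^4$, at $s = 2$, already lands very close to $\zeta(2) = \pi^2/6$.

```python
import math

def primes_upto(N):
    """All primes p <= N by a simple sieve of Eratosthenes."""
    is_p = [True] * (N + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, N + 1, p):
                is_p[m] = False
    return [p for p in range(N + 1) if is_p[p]]

def euler_product(s, N):
    """Truncated Euler product: prod_{p <= N} (1 - p^{-s})^{-1}."""
    prod = 1.0
    for p in primes_upto(N):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

print(euler_product(2, 10 ** 4), math.pi ** 2 / 6)  # nearly equal
```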

The above relation is definitely of interest. In fact, we'll shortly see how quickly fiddling with this relation leads to the whole theory of the distribution of primes. The trick, however, is to get our hands dirty and look at the edges of the domain of $\zeta$, where the function behaves not-so-smoothly.

$\zeta(s)$ diverges at $s = 1$: as almost all of our readers who once went through a course of calculus can see, this is the so-called harmonic series $1 + 1/2 + 1/3 + 1/4 + 1/5 + \cdots$. Now let us take the product formula we have just derived, and let $s$ tend to $1$ from the right.

$$\lim_{s \to 1^{+}}\zeta(s) = \lim_{s \to 1^{+}} \prod_{p} \left ( 1 - \frac1{p^s} \right )^{-1} = \prod_{p} \left ( 1 - \frac1{p} \right)^{-1}$$

However, the limit on the left side diverges. If we assume that there are finitely many primes, this can never happen, as the product on the right side would then have finitely many terms and a finite value. This is a contradiction, thus there are infinitely many primes.

Exercise : Produce a proof of the infinitude of primes by considering $\zeta(s)$ and the product formula at $s = 2$, under the hypothesis of the irrationality of $\pi^2$ [You are given $\zeta(2) = \pi^2/6$. For a proof, see - there are six of them, choose whichever you like!] (Another interesting proof is mentioned on Math StackExchange)

This seems to be an interesting way to obtain the infinitude. Is it possible that if one lets $s$ wander around more singularities of $\zeta(s)$, more information about primes would be revealed? What if, say, we observe the behavior of this function at the very boundary of its domain, namely $\Re[s] = 1$? Of course we are just hypothesizing here, but it turns out that $\Re[s] = 1$ is indeed the right place to look when it comes to the connection with primes.

[The Prime Number Theorem]

Define $\pi(x)$ to be the number of primes up to some real $x$. The question of estimating $\pi(x)$ efficiently has always been an object of study in elementary analytic number theory. Let's try a probabilistic approach to make a plausible guess for a good estimate.

Pick a random integer $n$ from the interval $(1, x)$. The probability of $n$ being prime is $P(X) = \pi(x)/x$. Let $X_p$ be the event that $n$ is *not* divisible by the prime $p \leq \sqrt{x}$. Assuming that these events are independent,

$$P(X) = P \bigg(\bigcap X_p \bigg) = \prod_{p \leq \sqrt{x}} P(X_p) = \prod_{p \leq \sqrt{x}} \left ( 1 - \frac1{p} \right)$$

The last product looks like $\prod_p ( 1 - 1/p) = 1/\zeta(1)$, so one should expect that the product grows as $\ll 1/H_{\sqrt{x}}$, where $H_k$ is the $k$-th partial sum of the harmonic series. But then $H_k \sim \log(k)$ by comparing with the integral, thus $\prod_{p \leq \sqrt{x}} (1 - 1/p) \sim c/\log(x)$ for some constant $c$. It turns out with some computational effort that $c = 1$ is a good choice here, which implies the expected estimate $\pi(x) \sim x/\log(x)$.

Exercise : Let $\pi_2(x) = \#\{p \leq x : p, p+2 \in \mathcal{P}\}$ be the analogous twin-prime counting function. Derive, on probabilistic grounds, that $\pi_2(x) \sim 2C_2 \cdot x/\log^2(x)$ where $C_2 = \prod_q (1 - (q-1)^{-2})$, the product running over the odd primes. [Whether $\pi_2(x) \to \infty$ as $x \to \infty$ is still an open problem in number theory]

| $x$ | $\pi(x)$ | $\approx x/\log(x)$ |
| --- | --- | --- |
| $10^2$ | $25$ | $21.71$ |
| $10^3$ | $168$ | $144.76$ |
| $10^4$ | $1229$ | $1085.73$ |
| $10^6$ | $78498$ | $72382.41$ |
| $10^8$ | $5761455$ | $5428681.02$ |
| $10^{12}$ | $37607912018$ | $36191206825.27$ |
| $10^{16}$ | $279238341033925$ | $271434051189532.39$ |
| $10^{17}$ | $2623557157654233$ | $2554673422960304.86$ |
| $10^{18}$ | $24739954287740860$ | $24127471216847323.75$ |
| $10^{19}$ | $234057667276344607$ | $228576043106974646.13$ |
| $10^{20}$ | $2220819602560918840$ | $2171472409516259138.25$ |
| $10^{21}$ | $21127269486018731928$ | $20680689614440563221.48$ |
| $10^{22}$ | $201467286689315906290$ | $197406582683296285295.96$ |
| $4 \cdot 10^{22}$ | $783964159847056303858$ | $768592742555118350978.96$ |
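The first few rows are easy to reproduce; here's a minimal Python sieve (my own illustration, not from the thread):

```python
import math

def prime_pi(x):
    """pi(x): count of primes up to x, via a sieve of Eratosthenes."""
    n = int(x)
    is_p = bytearray([1]) * (n + 1)
    is_p[0] = is_p[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if is_p[p]:
            is_p[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(is_p)

for x in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 6):
    print(x, prime_pi(x), round(x / math.log(x), 2))
```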

This is the celebrated conjecture of Gauss, now known as the prime number theorem (PNT). Almost a hundred years later, de la Vallée Poussin and Hadamard came up with proofs of this phenomenon, both using an observation related to the zeta function. Only once have I tried to work through this proof from scratch, which took me something like 5-6 hours. I won't sketch the whole argument here since I am not mental, though I'll convey the general idea involved behind the scenes. Readers who are interested in a complete proof are encouraged to look one up in a standard text.

We suspected in #2 that inspecting the zeta function at $\Re[s] = 1$ might give some fruitful results. The precise result that we need here (we'll see why) is that $\zeta(s)$ has no zero on that line. Or, since the series does not define zeta on that line, more precisely: $\zeta(\sigma + it)$ never tends to $0$ as $\sigma \to 1^+$. This is proved (à la de la Vallée Poussin) like this : begin with the Euler product and start manipulating, the goal being to transform it into a function which doesn't take the value $0$ anywhere in the complex plane. An example is $\exp(z)$, which is the most convenient function we can go for.

$$|\zeta(s)| = \left | \exp\left (-\log \prod_p \left ( 1 - \frac1{p^s}\right) \right ) \right | = \left | \exp\sum_p\sum_{n \geq 1} \frac{1}{np^{ns}} \right | = \exp\sum_p\sum_{n \geq 1} \frac{\cos(nt\log p)}{np^{n\sigma}}$$

Recall that $s = \sigma + it$. The standard approach from here is to note that $3 + 4\cos(\theta) + \cos(2\theta) = 2(1 + \cos(\theta))^2 \geq 0$ and hence

$$\zeta(\sigma)^3 \cdot |\zeta(\sigma + it)|^4 \cdot |\zeta(\sigma + 2it)| = \exp\sum_p\sum_{n \geq 1} \frac{3 + 4\cos(nt\log(p)) + \cos(2nt\log(p))}{np^{n\sigma}} \geq 1$$

As $\zeta(s)$ has a simple pole at $s = 1$, $\zeta(\sigma) = O((\sigma - 1)^{-1})$ as $\sigma \to 1^+$; if $1 + it$ were a zero of $\zeta$, that would mean $\zeta(\sigma + it) = O(\sigma-1)$. The last factor is $O(1)$. Thus $\zeta(\sigma)^3 \cdot |\zeta(\sigma + it)|^4 \cdot |\zeta(\sigma + 2it)| = O(\sigma - 1)$, but that is impossible by our inequality, which is the desired contradiction.

The connection with PNT is yet to be explained, however.

Theorem : The prime number theorem is equivalent to the equality $\displaystyle \sum_{n \geq 1} \frac{\mu(n)}{n} = 0$

For those who don't know, $\mu(n)$ is the Moebius function, defined to be $1$ when $n$ is square-free with an even number of prime factors, $-1$ when $n$ is square-free with an odd number of prime factors, and $0$ otherwise.
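As an aside, one can watch the partial sums $\sum_{n \leq x} \mu(n)/n$ drift toward $0$ numerically. A Python sketch of my own (the sieve below computes $\mu$ for all $n$ up to a bound; `mobius_upto` is an illustrative name):

```python
def mobius_upto(N):
    """mu(n) for all 1 <= n <= N: flip the sign once per prime factor,
    then zero out anything divisible by a square."""
    mu = [1] * (N + 1)
    is_p = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_p[p]:
            for m in range(2 * p, N + 1, p):
                is_p[m] = False
            for m in range(p, N + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

mu = mobius_upto(10 ** 5)
s = sum(mu[n] / n for n in range(1, 10 ** 5 + 1))
print(s)  # small, and it creeps toward 0 as the bound grows
```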

This is a very interesting equivalence. From the definition of $\mu(n)$ along with the Euler product of $\zeta(s)$, it is almost immediate that $\sum_{n \geq 1} \mu(n)/n^s = 1/\zeta(s)$ for $\Re[s] > 1$. The nontrivial part is to extend this equality to $\Re[s] = 1$, which is sufficient to prove the above version of PNT.

Delicate results of complex analysis are needed here to go through the whole proof in full rigor. Ideally, one uses Perron's formula to obtain

$$\sum_{n < x} \frac{\mu(n)}{n^s} = \frac1{2\pi i} \int_{c - iT}^{c + iT} \frac1{\zeta(s+\omega)} \frac{x^\omega}{\omega} d\omega + O\left ( \frac{x^c}{Tc}\right) + O\left ( \frac{\log x}{T} \right)$$

For $s = 1 + it$, some $c > 0$ and some sufficiently large real $T$. One moves the contour to the dent $[c - iT, -\delta - iT], [-\delta - iT, -\delta + iT], [-\delta + iT, c + iT]$ for some small $\delta > 0$ for which $\zeta(s + \omega)$ has no zeros with $\Re[\omega] \geq - \delta$; picking up the residue at $\omega = 0$ and letting $x \to \infty$ after appropriate estimations gives the desired result [though the calculations are more complicated than they seem at a glance!]

[The Theory of Analytic Continuation]

So far so good. We have seen that the closer one gets to the singularities of $\zeta$, the more of the connection with primes is unveiled. We have already worked on the boundary $\Re[s] = 1$ and revealed a great deal of information about prime numbers. It is then somewhat natural to believe that if we can somehow shift into $\{s : \Re[s] < 1\}$, the connection with primes will become more significant. So the question is: how to make sense of $\zeta$ in that region? This should stand as motivation for rethinking the complex analytic properties of $\zeta$.

It is assumed that readers of this section are familiar with fundamentals of complex analysis. If I ever have the time, I'll add it to the addendum later some time.

$$f_1(z) = 1 + z + z^2 + z^3 + \cdots \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; f_2(z) = \int_0^\infty \exp(-t(1-z)) \, \mathrm{d}t$$

The domain $D_1$ of $f_1(z)$ is $\{z \in \Bbb C : |z| < 1\}$ whereas the domain $D_2$ of $f_2(z)$ is $\{z \in \Bbb C : \Re[z] < 1\}$. These two functions agree on the region $D_1 \cap D_2 = D_1$ of the complex plane, so the former function can be considered a restriction of the latter. Further still, the function

$$f(z) = \frac1{1-z}$$

which is defined on $\Bbb C\setminus \{1\}$ matches with $f_2$ on the region $D_2$, so that $f_2$ can be considered as a restriction of this function.

However, note that this in no way implies that $f_1(2) = 1 + 2 + 4 + 8 + \cdots = f(2) = -1$. The domain in which they match is strictly $|z| < 1$.

Theorem : Holomorphic functions are complex analytic, i.e., expandable in a Taylor series in a neighborhood of each point of holomorphy.

Let $f(z)$ be a function defined on a domain $D$ inside the complex plane, and let $D_1$ and $D_2$ be two domains whose intersection $D_1 \cap D_2$ contains $D$. Consider holomorphic functions $f_1(z)$ and $f_2(z)$ on $D_1$ and $D_2$ such that $f_1 = f$ and $f_2 = f$ throughout $D$. Consider the function $F = f_1 - f_2$ on $D_1 \cap D_2$, which is holomorphic there, as both $f_1$ and $f_2$ are. Using the above theorem, we expand $F$ in a Taylor series around some $c \in D$, valid in a disk $|z-c| < R$ :

$$F(z) = f_1(z) - f_2(z) = \sum_{n = 0}^\infty a_n (z - c)^n$$

If $f_1$ and $f_2$ don't agree on $D_1 \cap D_2$, i.e., $F(z) \not\equiv 0$, then there exists a smallest $k \geq 0$ such that $a_k \neq 0$. Thus,

$$|F(z)| = |z - c|^k \left | a_k + (z - c)\cdot a_{k+1} + \cdots \right | \geq |z-c|^k \left ( |a_k| - \frac{K \cdot |z-c|}{(r-c)^{k+1}} - \frac{K \cdot |z-c|^2}{(r-c)^{k+2}} - \cdots \right )$$

For some $r$ inside the convergence disk, and a sufficiently large $K$ such that $K \geq |a_n| (r-c)^n$ for all $n$ (the existence is straightforward from the convergence of the series).

Note that the last sum is a geometric series, thus applying the closed form one gets

$$|F(z)| \geq |z-c|^k \left ( |a_k| - \frac{K \cdot |z-c|}{(r-c)^k(r-c-|z-c|)}\right)$$

One can choose $|z-c|$ sufficiently small so that the RHS is positive, which implies that $F(z)$ has no zeros in a small punctured disk around $z = c$. But $c \in D$, and $F = f_1 - f_2$ vanishes identically on $D$, so this is a contradiction. Hence the coefficients of the Taylor series of $F(z)$ must all vanish, resulting in $F(z) = 0$ and thus $f_1(z) = f_2(z)$ identically on $D_1 \cap D_2$.

The result above really proves that analytic continuations are unique, in this sense. Note that this happens only because the continuation is _analytic_. There are lots of ways to extend a function beyond its domain (set-theoretically, a whole beth number of extensions of the Riemann $\zeta$, for example), but such arbitrary extensions are in no way unique.

Exercise : Prove that $f(z) = \sum_{n = 0}^\infty z^{3^n}$ cannot be analytically continued beyond its disk of convergence $|z| < 1$.

There, we have an excellent tool at hand to bring our dreams to life : we can finally extend $\zeta$ beyond its domain of convergence, and, more than anything, extend it uniquely!

Exercise : Prove that $\zeta(s)$ is analytic on the region $\Re[s] > 1$.

We will now analytically continue $\zeta(s)$ to the left of $\Re[s] = 1$. First, note that for any function $f$ with a continuous derivative on $[a, b]$,

$$\sum_{a < n \leq b} f(n) = \int_a^b f(x) \mathrm{d}x + \int_a^b \left(x - \lfloor x \rfloor -\frac12 \right) f'(x) \mathrm{d}x + \left(a - \lfloor a \rfloor - \frac12\right)f(a) - \left(b - \lfloor b \rfloor - \frac12\right)f(b)$$

The proof is pretty straightforward : since the formula is additive with respect to the interval, break $[a, b]$ up into short intervals $[k, k+1]$. The details are left to readers familiar with the fundamentals of calculus. Substituting $f(n) = 1/n^s$, with $a$ and $b$ positive integers, gives

$$\sum_{a < n \leq b} \frac1{n^s} = -s\int_a^b \frac{x - \lfloor x \rfloor - 1/2}{x^{s+1}} \mathrm{d}x - \frac{a^{1-s} - b^{1-s}}{1-s} - \frac1{2} \left ( \frac1{a^s} - \frac1{b^s} \right)$$

Letting $b \to +\infty$ and $a = 1$ results in

$$\zeta(s) = s\int_1^\infty \frac{\lfloor x \rfloor - x + 1/2}{x^{s+1}} \mathrm{d}x + \frac1{s-1} + \frac12$$

Exercise : Prove that this integral converges for $\Re[s] > -1$, and thus defines an analytic continuation of $\zeta(s)$ to that extended region.
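A pleasant feature of this representation is that it is directly computable: on each interval $[k, k+1]$ the numerator is linear, so the integral has a closed form. A Python sketch of my own (`zeta_via_integral` is an illustrative name, and the truncation at $K = 10^5$ intervals is an arbitrary choice of mine):

```python
import math

def zeta_via_integral(s, K=10 ** 5):
    """Evaluate s * int_1^oo (floor(x) - x + 1/2) / x^(s+1) dx + 1/(s-1) + 1/2
    for real s > -1, s != 1, integrating exactly over each interval [k, k+1]."""
    total = 0.0
    for k in range(1, K + 1):
        # int_k^{k+1} (k + 1/2 - x) * x^(-s-1) dx, in closed form
        a = (k + 0.5) * (k ** -s - (k + 1) ** -s) / s
        b = ((k + 1) ** (1 - s) - k ** (1 - s)) / (1 - s)
        total += a - b
    return s * total + 1 / (s - 1) + 0.5

print(zeta_via_integral(2.0), math.pi ** 2 / 6)  # matches zeta(2)
print(zeta_via_integral(0.5))                    # zeta(1/2), below the old domain
```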

However, note that for $-1 < \Re[s] < 0$

\begin{align}\zeta(s) &= s\int_1^\infty \frac{\lfloor x \rfloor - x + 1/2}{x^{s+1}} \mathrm{d}x + \frac1{s-1} + \frac12 \\ &= s\int_1^\infty \frac{\lfloor x \rfloor - x + 1/2}{x^{s+1}} \mathrm{d}x + s\int_0^1 \frac{\lfloor x \rfloor - x + 1/2}{x^{s+1}} \mathrm{d}x \\ &= s\int_0^\infty \frac{\lfloor x \rfloor - x + 1/2}{x^{s+1}} \mathrm{d}x \end{align}

Now, from a little Fourier analysis, we see that

$$\lfloor x \rfloor - x + 1/2 = \sum_{n = 1}^\infty \frac{\sin(2n\pi x)}{n\pi}$$

Hence

\begin{align}\zeta(s) = s\int_0^\infty \frac{\lfloor x \rfloor - x + 1/2}{x^{s+1}} \mathrm{d}x & = s\int_0^\infty \sum_{n = 1}^\infty \frac{\sin(2n\pi x)}{x^{s+1} n\pi} \mathrm{d}x \\ &= \frac{s}{\pi} \sum_{n = 1}^\infty \frac{(2n\pi)^s}{n} \int_0^\infty \frac{\sin(t)}{t^{s+1}} \mathrm{d}t \\ &= \frac{s}{\pi} (2 \pi)^s \zeta(1 - s) \left ( -\Gamma(-s) \right)\sin(\pi s/2)\end{align}

This is a functional equation whose reflection axis is $\Re[s] = 1/2$. Since we already have efficient ways of computing zeta for $\Re[s] > 0$, this functional equation can be used to produce an analytic continuation of $\zeta$ to $\Re[s] < 0$. From this equation we immediately find a bunch of zeros of $\zeta$ on the real axis, at the negative even integers $s = -2k$, coming from the vanishing factor $\sin(\pi s/2) = \sin(-k\pi) = 0$. These points are called the trivial zeros of zeta.
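The reflection formula is easy to sanity-check at negative real $s$, where every factor is a standard-library call; at $s = -1$ it yields the famous value $\zeta(-1) = -1/12$. (A Python sketch of my own; the second argument supplies $\zeta(1-s)$, which for these checks is a known constant.)

```python
import math

def zeta_reflect(s, zeta_one_minus_s):
    """RHS of the functional equation
    zeta(s) = (s/pi) * (2*pi)^s * zeta(1-s) * (-Gamma(-s)) * sin(pi*s/2),
    for real s < 0 so that math.gamma applies."""
    return (s / math.pi) * (2 * math.pi) ** s * zeta_one_minus_s \
        * (-math.gamma(-s)) * math.sin(math.pi * s / 2)

print(zeta_reflect(-1, math.pi ** 2 / 6))   # -1/12
print(zeta_reflect(-3, math.pi ** 4 / 90))  # 1/120
print(zeta_reflect(-2, 1.2020569))          # ~0: a trivial zero, up to roundoff
```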

All the tools needed to understand the Riemann Hypothesis have now been collected throughout these 3 posts. We are prepared to state the conjecture.

[The RH and the music of Primes]

Statement of the conjecture : The Riemann zeta function, or rather the analytic continuation of zeta to $\Bbb C\setminus \{1\}$, has all its nontrivial zeros on the line $\Re[s] = 1/2$.

$\small{Z(t) = \zeta(1/2+it) \cdot e^{i\vartheta(t)}}$
$\small {\vartheta(t) = -t/2 \cdot \log \pi + \text{arg}\, \Gamma(1/4+it/2)}$

As we have seen before, $\zeta(s)$ sprouts a bunch of information about primes while $s$ is in the neighborhood of the line $\Re[s] = 1$. We also hypothesized that $\zeta(s)$ might sprout more information about primes if $s$ is hurled right into the region $\Re[s] < 1$, where the series is undefined. It turns out that $\Re[s] = 1/2$ is exactly the place where a lot of information about primes is revealed by zeta. (The region of most interest is $0 \leq \Re[s] \leq 1$ : the reflection formula derived above maps $\Re[s] > 1$ onto $\Re[s] < 0$, while it maps the critical strip bounded by $\Re[s] = 0$ and $\Re[s] = 1$ to itself.)

Again, it takes a little bit of complex analysis to explain the connection in full. One uses another version of the prime number theorem, which is left as an exercise for the readers :

Exercise : Define the von Mangoldt function $\Lambda(n)$ as $\log(p)$ when $n = p^k$ for some prime $p$, and $0$ otherwise. Prove that the prime number theorem is equivalent to the estimate $\sum_{n \leq x} \Lambda(n) \sim x$. This sum is called the summatory von Mangoldt function and is denoted by $\psi(x)$.

But then note that, since $\zeta'(s) = -\sum_{n \geq 1} \log(n)/n^s$, for $\Re[s] > 1$

$$-\frac{\zeta'(s)}{\zeta(s)} = \sum_{a = 1}^\infty \frac{\log(a)}{a^s} \cdot \sum_{b = 1}^\infty \frac{\mu(b)}{b^s} = \sum_{a = 1}^\infty \sum_{b = 1}^\infty \log(a) \mu(b) \cdot \frac1{(ab)^s} \stackrel{n = ab}{=\! =} \sum_{n = 1}^\infty \left ( \sum_{ab = n} \mu(a) \log(b) \right) \frac1{n^s}$$

However,

$$\sum_{ab = n} \mu(a) \log(b) = \sum_{d|n} \mu(d) \log(n/d) = \sum_{d|n} \mu(d) \log(n) - \sum_{d|n} \mu(d) \log(d) = -\sum_{d|n} \mu(d)\log(d)$$

Here we used $\sum_{d|n} \mu(d) = 0$ for $n > 1$ (for $n = 1$ everything vanishes anyway). Now, by the fundamental theorem of arithmetic, $n$ can be written as a finite product of prime powers. Since, by the definition of $\mu$, the terms with non-square-free $d$ vanish, only the square-free divisors built from the distinct prime factors of $n$ contribute. If $n$ has at least two distinct prime factors, the contributions cancel : for instance, if $p$ and $q$ both divide $n$, the divisors $1, p, q, pq$ contribute $-\log(p) - \log(q) + \log(pq) = 0$, and the same cancellation happens in general. However, if $n = p^k$ for some integer $k \geq 1$, then $\sum_{d|n} \mu(d)\log(d) = \mu(1)\log(1) + \mu(p)\log(p) = -\log(p)$. Thus, from the definition of $\Lambda(n)$, we deduce that

$$\Lambda(n) = -\sum_{d|n} \mu(d)\log(d) \Longrightarrow -\frac{\zeta'(s)}{\zeta(s)} = \sum_{n = 1}^\infty \frac{\Lambda(n)}{n^s}$$
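The convolution identity is easy to spot-check by brute force. A Python sketch of my own (`mobius` and `von_mangoldt` are naive trial-division implementations, fine for small $n$):

```python
import math

def mobius(n):
    """mu(n) by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # a square divides n
            result = -result
        d += 1
    return -result if n > 1 else result

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of the prime p, else 0."""
    if n < 2:
        return 0.0
    p = 2
    while n % p:
        p += 1            # p is now the smallest prime factor of n
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

for n in range(1, 500):
    conv = -sum(mobius(d) * math.log(d) for d in range(1, n + 1) if n % d == 0)
    assert abs(conv - von_mangoldt(n)) < 1e-9
print("Lambda(n) = -sum_{d|n} mu(d) log(d) checked for n < 500")
```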

Using Perron's formula once again, now at $s = 0$, we derive after appropriate estimation that for some $c > 0$, sufficiently large real $T$ and non-integer $x$,

$$\frac1{2\pi i}\int_{c-iT}^{c+iT} \frac{-\zeta'(\omega)}{\zeta(\omega)} \frac{x^\omega}{\omega} \, \mathrm{d}\omega = \psi(x) + O\left(\frac{x^c}{Tc}\right) + O\left ( \frac{\log^2x}{T}\right)$$

Where $\psi(x) = \sum_{n \leq x} \Lambda(n)$ is the summatory von Mangoldt function mentioned before. If one now moves the contour to the dent $[c-iT, -\Delta - iT], [-\Delta - iT, -\Delta + iT], [-\Delta + iT, c + iT]$ for some $\Delta > 1/2$, going one step further than before, then one picks up a residue $-x^\rho/\rho$ at each zero $\rho$ of $\zeta$ (trivial and nontrivial, counted with multiplicity, since $-\zeta'/\zeta$ has residue $-m$ at a zero of order $m$), a residue $x$ at the pole $\omega = 1$, and a residue $-\zeta'(0)/\zeta(0)$ at $\omega = 0$ coming from the factor of $1/\omega$. Adding up the residues gives, after a careful estimation of the integral on the boundaries of the dent,

$$\psi(x) = x - \sum_{\rho} \frac{x^\rho}{\rho} - \frac{\zeta'(0)}{\zeta(0)} + O \left ( \frac{x\log^2 x}{T} \right) + O \left ( \frac{x^c \log^2 T}{T \log c}\right) + O \left ( \frac{\log^3 T}{\sqrt{x}}\right) + O( x^{-\Delta})$$

Where $\rho$ runs through all the zeros of zeta (trivial and nontrivial alike) in the box bounded by $\Im[s] = T$, $\Im[s] = -T$, $\Re[s] = -\Delta$ and $\Re[s] = 1$. Letting $\Delta \to \infty$ and $T \to \infty$, and arranging the sum over zeros in increasing order of $|\Im(\rho)|$, gives the explicit formula

$$\psi(x) = x - \sum_{\rho} \frac{x^\rho}{\rho} - \frac{\zeta'(0)}{\zeta(0)}$$

It is immediately seen how groundbreaking this result is. The equality essentially implies (if you have finished the exercise a couple of paragraphs back!) that if you know the positions of the zeros of $\zeta$, you can compute exactly how many primes there are up to any given bound.

Intuitively, if RH is true then that should mean that we would "know" where the primes are "sitting" inside $\Bbb N$. Moreover, we can give tight estimates to the prime counting function assuming RH. First, integrate the explicit formula with respect to $x$ :

$$\int_0^x \psi(t) \, \mathrm{d}t = \frac{x^2}{2} - \sum_{\rho} \frac{x^{\rho+1}}{\rho(\rho+1)} - \frac{\zeta'(0)}{\zeta(0)}x + O(1)$$

However, it takes a bit more manipulation (just manipulation, though) to make this identity rigorous : we haven't proved that the sum over zeros can be integrated termwise.

\begin{align} \psi(x) \leq \int_x^{x+1} \psi(t) \, \mathrm{d}t = \int_0^{x+1} \psi(t) \, \mathrm{d}t - \int_0^x \psi(t) \, \mathrm{d}t &= \left | \frac{(x + 1)^2}{2} - \frac{x^2}{2} + \sum_\rho \frac{(x + 1)^{\rho+1} - x^{\rho+1}}{\rho(\rho+1)} + O \left ( \frac1{x^3} \right) + O(1) \right | \\ & \leq x + \left | \sum_\rho \frac{(x + 1)^{\rho+1} - x^{\rho+1}}{\rho(\rho+1)} \right | + O(1)\end{align}

\begin{align}\psi(x) \geq \int_{x - 1}^x \psi(t) \, \mathrm{d}t = \int_0^x \psi(t) \, \mathrm{d}t - \int_0^{x-1} \psi(t) \, \mathrm{d}t \! &= \left | \frac{x^2}{2} - \frac{(x-1)^2}{2} - \sum_\rho \frac{x^{\rho+1} - (x - 1)^{\rho+1}}{\rho(\rho+1)} + O \left ( \frac1{x^3} \right) + O(1) \right | \\ & \geq x - \left | \sum_\rho \frac{x^{\rho+1} - (x - 1)^{\rho+1}}{\rho(\rho+1)} \right | + O(1)\end{align}

Note that the trivial zeros have already been absorbed into the $O(x^{-3})$ term, so assuming RH the remaining zeros are all of the form $\rho = 1/2 + i \beta$. Using the estimates (for $|\beta| \leq x$ and $|\beta| \geq x$, respectively)

$$\left | \frac{(x + 1)^{\rho+1} - x^{\rho+1}}{\rho(\rho+1)} \right | = \frac1{|\rho|} \left | \int_x^{x+1} t^\rho \, \mathrm{d}t \right | \leq \frac{(x + 1)^{1/2}}{|\beta|} \leq \frac{(2x)^{1/2}}{| \beta |}$$

$$\left | \frac{(x + 1)^{\rho+1} - x^{\rho+1}}{\rho(\rho+1)} \right |\leq \frac{2(x + 1)^{\Re[\rho]+1}}{|\rho(\rho + 1)|} \leq \frac{2(2x)^{3/2}}{\beta^2}$$

One gets $\psi(x) \leq x + O(x^{1/2} \log^2 x)$ and $\psi(x) \geq x - O(x^{1/2} \log^2 x)$. These are the superbly tight bounds found by von Koch, and to this day it is an open problem to deduce them unconditionally.

[A Moebius walk : Denjoy's probabilistic interpretation of RH]

Now that we have a great deal of information about the connection between zeta and primes, we will turn our heads towards the proof of the theorem quoted earlier, which, as we will see, gives rise to an interesting connection with RH.

Proof of theorem :

First, let us estimate $\sum_{n \leq x} \tau(n)$. Note that $\tau(n) = \sum_{ab = n} 1$. Thus, by inclusion-exclusion principle,

\begin{aligned}\sum_{ab \leq x} 1 = \sum_{a \leq \sqrt{x}} \sum_{b \leq x/a} 1 + \sum_{b \leq \sqrt{x}} \sum_{a \leq x/b} 1 - \sum_{a \leq \sqrt{x}} \sum_{b \leq \sqrt{x}} 1 \, &= \sum_{a \leq \sqrt{x}} \left ( \frac{x}{a} + O(1)\right ) + \sum_{b \leq \sqrt{x}} \left ( \frac{x}{b} + O(1)\right ) - \left ( \sqrt{x} + O(1) \right )^2 \\ &= 2x \sum_{k \leq \sqrt{x}} \frac1{k} + O(\sqrt{x}) - x - O(\sqrt{x}) - O(1) \\ &= 2x \left ( \log \sqrt{x} + \gamma + O(x^{-1/2}) \right ) - x + O(\sqrt{x}) \\ & = x\log x + (2\gamma - 1)x + O(\sqrt{x})\end{aligned}
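This estimate is cheap to verify numerically, since $\sum_{n \leq x} \tau(n) = \sum_{a \leq x} \lfloor x/a \rfloor$. A Python sketch of my own illustration:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def divisor_sum(x):
    """sum_{n <= x} tau(n), computed as sum_{a <= x} floor(x/a)."""
    n = int(x)
    return sum(n // a for a in range(1, n + 1))

for x in (10 ** 3, 10 ** 4, 10 ** 5):
    exact = divisor_sum(x)
    approx = x * math.log(x) + (2 * GAMMA - 1) * x
    # the error, measured in units of sqrt(x), stays bounded
    print(x, exact, round(approx, 1), round((exact - approx) / math.sqrt(x), 3))
```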

This trick is usually called the hyperbola method, and we will soon use a similar one to get our desired result. Now consider the function

$$\Delta(x) = \sum_{n \leq x} \left ( \log n - \tau(n) + 2\gamma\right)$$

From the above estimate, combined with Stirling's estimate, we get $\Delta(x) = O(\sqrt{x})$. From $\sum_{ab = n} \mu(a)\log(b) = \Lambda(n)$, as we proved before, and $\sum_{ab = n} \mu(a) \tau(b) = 1$, which shouldn't be hard to verify from the equality $\tau(n) = \sum_{dk = n} 1$, we have

$$\sum_{n \leq x} \sum_{ab = n} \mu(a) \left ( \log b - \tau(b) + 2\gamma\right) = \sum_{n \leq x} \left ( \Lambda(n) - 1 + 2\gamma \delta_{n, 1} \right ) = \psi(x) - \lfloor x \rfloor + 2\gamma$$

Where $\delta_{i, j}$ is the Kronecker delta. On the other hand,

$$\sum_{n \leq x} \sum_{ab = n} \mu(a) \left ( \log b - \tau(b) + 2\gamma\right) = \sum_{b \leq K} \left ( \log b - \tau(b) + 2\gamma \right) M\left ( \frac{x}{b} \right ) + \sum_{a \leq x/K} \mu(a) \left ( \Delta(x/a) - \Delta(K)\right )$$

For some $K$ inside $[1, x]$, where $M(x) = \sum_{n \leq x} \mu(n)$ is the Mertens function. Assuming $M(x) = o(x)$, the first part is $o(x)$, while the second is $O(x/\sqrt{K})$ by simple estimates using $\Delta(x) = O(\sqrt{x})$. Thus, letting $K$ grow slowly with $x$, the last sum is $o(x)$ as well, leaving $\psi(x) - x = o(x)$. By a previous exercise, this implies the PNT.

Exercise : Prove that $M(x) = o(x)$ implies $\sum_{n \geq 1} \mu(n)/n = 0$, and vice-versa.

Interestingly, RH is connected to a Moebius estimate too, as one might expect from the fact that PNT is equivalent to zero-freeness on $\Re[s] = 1$. This is derived using the Abel summation formula with $\phi(x) = 1/x^s$, $a_n = \mu(n)$ and $A(x) = M(x)$ :

$$\sum_{n \leq x} \frac{\mu(n)}{n^s} = \frac{M(x)}{x^s} + s \int_1^x \frac{M(t)}{t^{1+s}} \, \mathrm{d}t$$

If $\Re[s] > 1$, then $M(x)/x^s$ vanishes as $x \to \infty$, since $|M(x)| = |\sum_{n \leq x} \mu(n)| \leq \sum_{n \leq x} 1 \leq x$. Thus,

$$\sum_{n \geq 1} \frac{\mu(n)}{n^s} = \frac1{\zeta(s)} = s\int_1^\infty \frac{M(t)}{t^{1+s}} \, \mathrm{d}t$$

The convergence of the integral depends on the growth of $M(x)$. If $M(x) = O(x^\varepsilon)$ for some $\varepsilon > 0$, then the integral converges for $\Re[s] > \varepsilon$, hence provides an analytic continuation of $1/\zeta(s)$ to this domain. However, as $1/\zeta(s)$ has poles on the line $\Re[s] = 1/2$ assuming the Riemann Hypothesis, the integral cannot converge there, so necessarily $\varepsilon \geq 1/2$. Conversely, a sufficient condition for RH is that $M(x) = O(x^{1/2 + \epsilon})$ for all $\epsilon > 0$ : then $1/\zeta(s)$ is analytic on $\Re[s] > 1/2$, so $\zeta$ has no zeros to the right of the critical line, and the functional equation rules out zeros to its left. The converse implication (RH implies this bound on $M$) can be derived with delicate results in complex analysis, but we won't need it at the moment.
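It is instructive to watch $M(x)/\sqrt{x}$ numerically: it stays remarkably small in this range (which proves nothing, of course, but matches the $O(x^{1/2+\epsilon})$ picture). A Python sketch of my own, reusing a $\mu$-sieve (`mertens_values` is an illustrative name):

```python
import math

def mertens_values(N, checkpoints):
    """M(x) = sum_{n <= x} mu(n), reported at the given checkpoints."""
    mu = [1] * (N + 1)
    is_p = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_p[p]:
            for m in range(2 * p, N + 1, p):
                is_p[m] = False
            for m in range(p, N + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    out, M = {}, 0
    for n in range(1, N + 1):
        M += mu[n]
        if n in checkpoints:
            out[n] = M
    return out

table = mertens_values(10 ** 5, {10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5})
for x in sorted(table):
    print(x, table[x], table[x] / math.sqrt(x))
```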

[Probability of RH]

Let a fair coin be tossed repeatedly. Associate to the tosses the random walk $X_n$, which returns $+1$ if the $n$-th toss is a head and $-1$ for a tail. Then, using basics of probability theory, $E \left ( \left | \sum_{n \leq N} X_n \right | \right ) \ll \sqrt{N}$. Denjoy hypothesized that $\mu(n)$, for large square-free $n$, "almost" behaves like such a random variable.
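The $\sqrt{N}$ growth of the coin-toss walk is easy to see by simulation (a seeded Python sketch of my own; the trial count is an arbitrary choice). The ratio $E|\sum X_n|/\sqrt{N}$ hovers near $\sqrt{2/\pi} \approx 0.8$:

```python
import math
import random

random.seed(12345)

def mean_abs_walk(N, trials=2000):
    """Monte Carlo estimate of E|X_1 + ... + X_N| for fair +-1 coin flips."""
    total = 0
    for _ in range(trials):
        total += abs(sum(random.choice((-1, 1)) for _ in range(N)))
    return total / trials

for N in (100, 400, 1600):
    est = mean_abs_walk(N)
    print(N, est, est / math.sqrt(N))
```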

That would imply that $M(x) = \sum_{n \leq x} \mu(n)$ also grows like $\sqrt{x}$, which is a stronger hypothesis than $M(x) = O(x^{1/2+\epsilon})$. However, this asymptotic is false, as Odlyzko and te Riele showed when they disproved the Mertens conjecture $|M(x)| \leq \sqrt{x}$.

There is a slight correction of this version using the de Moivre-Laplace limit theorem, which states that if a fair coin is flipped $N$ times, the probability that the number of heads deviates from its expectation $N/2$ by at most $K N^{1/2}$ is approximately $\int^{(2K^2/\pi)^{1/2}}_{-(2K^2/\pi)^{1/2}} \exp(-\pi x^2) \mathrm{d}x$, where $K$ is the constant of the asymptotic growth. Thus, the probability that the deviation is $\ll N^{1/2+\epsilon}$ is approximately $\int_{-N^\epsilon (2/\pi)^{1/2}}^{N^\epsilon (2/\pi)^{1/2}} \exp(-\pi x^2) \mathrm{d}x$, which tends to $1$ as $N$ grows towards $+\infty$.

However, since $\mu(n)$ acts almost like a random variable (i.e., the event of $i$ having an even or odd number of prime factors is expected to be independent of the corresponding event for $j$), we can state that $M(x)$ deviates from $0$ by at most $x^{1/2+\epsilon}$ with probability $1$, i.e., $M(x) = O(x^{1/2 + \epsilon})$, and thus RH is true with probability $1$.

While these kinds of arguments are of probabilistic importance, in general they can't be made rigorous.

[Some computations]

We will first calculate some nontrivial zeros off-hand.

Let us introduce a very convenient function (well, it has already been mentioned below a plot) : define the Z-function as

$$Z(t) = \exp(i \vartheta(t)) \cdot \zeta \! \left(\frac12 + it\right )$$

Where $\vartheta(t) = -t \log(\pi)/2 + \text{arg} \, \Gamma (1/4 + it/2 )$. The fun thing about this function is that it turns the zeta function on $\Re[s] = 1/2$ into a real-valued function of the real variable $t$, so you can study its behavior directly on the real line instead. We will first derive an approximation formula for the Riemann zeta which will help greatly in the computation of zeros. First, by the residue theorem,

$$\zeta(s) - \sum_{n \leq x} \frac1{n^s} = \sum_{n > x} \frac1{n^s} = \frac1{2i} \int_{x - i\infty}^{x + i\infty} \frac{\cot \pi t}{t^s} \, \mathrm{d}t$$

Splitting the integral up into $[x - i\infty, x]$ and $[x, x + i\infty]$, one gets

$$-\frac1{2i} \int_{x - i\infty}^x \frac{\cot(\pi t) - i}{t^s} \, \mathrm{d}t - \frac1{2i} \int_x^{x + i\infty} \frac{\cot(\pi t) + i}{t^s} \, \mathrm{d}t - \frac{x^{1-s}}{1 - s}$$

The two integrals converge throughout the complex plane, and thus provide an analytic continuation of the expression above. Note, however, that on the upper segment $t = x + ir$ we have $|\cot(\pi t) + i| = 2(1 + \exp(2\pi r))^{-1} < 2\exp(-2\pi r)$ and $|t^{-s}| = |t|^{-\Re[s]} \exp(\Im[s] \, \text{arg}(t))$ $< x^{-\Re[s]} \exp(|\Im[s]| r/x)$, with the symmetric bounds on the lower segment. Thus,

$$\left | \zeta(s) - \sum_{n \leq x} \frac1{n^s} \right | \leq 2 x^{-\Re[s]} \int_{0}^\infty \exp(-2\pi r + |\Im[s]|r/x) \, \mathrm{d}r + \left | \frac{x^{1-s}}{1-s} \right | \leq \frac{x^{1-\Re[s]}}{|1-s|} + \frac{x^{-\Re[s]}}{\pi - |\Im[s]|/(2x)}$$

Now multiplying out by $\exp(i\vartheta(t))$ at $s = 1/2 + it$ and taking the real part gives, for $x > t > 0$,

$$\left | Z(t) - \sum_{n \leq x} \frac{\cos(t\log n - \vartheta(t))}{n^{1/2}} \right | \leq \frac{x^{1/2}}{t} + \frac{2x^{1/2}}{2\pi x - t}$$

Picking $x = t^2$, the error bound becomes $1 + 1/(\pi t - 1/2)$, which tends to $1$ as $t$ grows. For example, say we want to evaluate $Z(17)$. By Stirling's formula, $\vartheta(t) = t/2 \cdot \log \, (t/{2\pi}) - t/2 - \pi/8 + O (1/t)$. Ignoring the asymptotic term, $\vartheta(17) \approx -0.43$. Running the GP snippet $\colorbox{LightGray}{$\texttt{sum(n=1,17^2,cos(17*log(n)+0.43)/sqrt(n))}$}$, one gets the approximation $Z(17) \approx 2.68377$ (correct up to an error of $\approx 0.54$). Since $Z(0) = \zeta(1/2)$ is a negative constant, there is a sign change of $Z(t)$ in the interval $[0, 17]$. Running the snippet $\colorbox{LightGray}{$\texttt{solve(t=0,17,real(exp(I*(arg(gamma(1/4+I*t/2))-t/2*log(Pi)))*zeta(1/2+I*t)))}$}$, GP returns $t \approx 14.1347251417346937904572519$. That is precisely the imaginary part of our first zero on the 1/2-line, correct up to 24 decimal digits.
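For readers without GP at hand, the same hunt can be sketched in standard-library Python. Since there's no built-in zeta, the sketch below (my own; the truncation depth $N = 25$ and the Stirling expansion of $\vartheta$ are choices I'm making, not anything from the thread) evaluates $\zeta(1/2+it)$ by Euler-Maclaurin summation and then bisects the sign change of $Z$:

```python
import cmath
import math

def zeta_em(s, N=25):
    """zeta(s) via Euler-Maclaurin: a truncated Dirichlet series plus
    tail corrections at N (plenty accurate for moderate |Im s|)."""
    total = sum(cmath.exp(-s * math.log(n)) for n in range(1, N))
    Ns = cmath.exp(-s * math.log(N))                   # N^(-s)
    total += Ns * N / (s - 1) + Ns / 2                 # integral tail + midpoint
    total += s * Ns / (12 * N)                                        # B_2 term
    total -= s * (s + 1) * (s + 2) * Ns / (720 * N ** 3)              # B_4 term
    total += s * (s + 1) * (s + 2) * (s + 3) * (s + 4) * Ns / (30240 * N ** 5)
    return total

def theta(t):
    """Stirling approximation of the Riemann-Siegel theta function."""
    return t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 + 1 / (48 * t)

def Z(t):
    """Hardy's Z-function; real, with the same zeros as zeta on the 1/2-line."""
    return (cmath.exp(1j * theta(t)) * zeta_em(complex(0.5, t))).real

# Bisect the sign change of Z on [14, 14.2] to pin down the first zero.
lo, hi = 14.0, 14.2
for _ in range(60):
    mid = (lo + hi) / 2
    if Z(lo) * Z(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo)  # ~ 14.1347251417...
```

The same bisection applied to later sign changes recovers the rest of the list that the GP one-liner produces.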

Let's get a bit serious about this and really try calculating, say, a few hundred zeros of zeta using what we have found. This takes a little bit of trickery though : we have already shown that the worst error in our approximation through that oscillating cosine series for $x = t^2$ is not more than $1$, i.e.,

$$Z(t) \sim \sum_{n \leq t^2} \frac{\cos(t\log n - \vartheta(t))}{n^{1/2}}$$

Recall that $\cos(x-k\pi) = (-1)^k \cos(x)$. So one thing we can try is finding $t_n$s such that $\vartheta(t_n) = n\pi$ (i.e., $\zeta(1/2+it)$ is real there). From our asymptotics, $Z(t)$ is expected to change sign in $[t_k, t_{k + 1}]$, so there is an expected zero of zeta with imaginary part in that interval. The GP code

$$\colorbox{LightGray}{\begin{array}{l} \texttt{g(k)=2*Pi*exp(1)*exp(lambertw(1/exp(1)*(k+1/8)));} \\ \texttt{for(k=1,100,p=solve(t=g(k),g(k+1),real(exp(I*(arg(gamma(1/4+I*t/2))-t/2*log(Pi)))*zeta(1/2+I*t)));} \\ \texttt{print(1/2+I*p))} \end{array}}$$

returns the first hundred zeros of zeta.

However, this superbly convenient expectation fails just a bit beyond the zeros we have calculated. It happens that the error of the asymptotics (as we've seen, almost always nearly $0.5$) makes a zeta zero jump right outside its expected interval $[t_k, t_{k+1}]$. The first counterexample is $[t_{125}, t_{126}]$ :

i.e., $Z(t)$ has no sign change in that interval. This is the failure of Gram's law which, had it always been true, would have produced a very easy method for computing large zeta zeros. The fun thing is that I never knew the intuition behind Gram's law before posting all this : I was just about to calculate some very small zeros with the asymptotic formula and then it just clicked. So, well, I learned something today.

As an addendum, it'd also be fun to check the explicit formula against the data we have. Take $x = 200.1$ (not an integer, so as to avoid the jump discontinuities). We know that $\psi(x) = \sum_{n \leq x} \Lambda(n)$, so coding it out, $\colorbox{LightGray}{$\texttt{sum(n=1,floor(200.1),-sumdiv(n, d,moebius(d)*log(d)))}$}$ returns $\approx 206.145$. Then, making a vector $\texttt{v}$ of the zeros we computed before and running $\colorbox{LightGray}{$\texttt{x-sum(i=1,#v,x^(v[i])/v[i])-zeta'(0)/zeta(0)}$}$ returns real part $201.113$. The numbers get astonishingly close when $x$ and $T$ are both large. For example, keeping $T = 100$, let $x=1000.1$. Going through the same process, one finds $\psi(x) \approx 996.68$, and for the sum, a real part of $\approx 996.352$. Riemann's original version was directly about $\pi(x)$, and knowing enough zeros one could exactly compute $\pi(x)$ for a given large $x$. If this does not destroy the myth about there being no explicit/computationally helpful formula for counting primes, I don't understand what one means by a "formula".
