Solving Theorem 4.2.1 from Joel Spencer & Noga Alon's "The Probabilistic Method"

In summary: the thread works through a step in the proof of Theorem 4.2.1 from Chapter 4 of "The Probabilistic Method" by Noga Alon and Joel Spencer, namely the well-known estimate $$\sum_{p\leq x}\frac{1}{p} = \ln\ln x + O(1),$$ which is Mertens' second theorem. The discussion shows how it follows from Abel summation together with Mertens' first theorem, $\sum_{p\leq x}\frac{\ln p}{p} = \ln x + O(1)$, and touches on where Stirling's formula comes in.
  • #1
Aryth1
I am in an independent study working through probabilistic graph theory and I am stuck on part of a theorem from chapter 4 of The Probabilistic Method by Joel Spencer and Noga Alon (specifically theorem 4.2.1).

In this context, $p$ is a prime number.

The part where I am confused comes from a remark he makes about an identity for a series. He says, and I quote: "... where here we used the well-known fact that $\sum_{p\leq x}\left(\frac{1}{p}\right) = \ln\ln x + O(1)$". He claims that one can use Stirling's formula and Abel summation to arrive at the result. I can certainly see how Abel summation is used, and I tried the following:

Let $$a_n = \begin{cases} \ln n &\text{if } n \text{ is prime} \\ 0 &\text{otherwise} \end{cases}$$

and let $\phi (n) = \frac{1}{n\ln n}$. Then $\sum_{p\leq x}\left(\frac{1}{p}\right) = \sum_{2\leq n\leq x} a_n\phi (n)$, and we can apply Abel summation to get:

$\sum_{p\leq x}\left(\frac{1}{p}\right) = A(x)\phi (x) - \int_2^x A(u)\phi '(u) ~du$

where $A(x) := \sum_{n=1}^x a_n = \sum_{p\leq x} \ln p$. Then

$\sum_{p\leq x}\left(\frac{1}{p}\right) = \frac{A(x)}{x\ln x} + \int_2^x \frac{A(u)(\ln u +1)}{(u \ln u)^2} ~du$.

But I'm stuck here. I did try other ways to do this, but they were equally unsuccessful. Any help is greatly appreciated!
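As a quick numerical sanity check of the estimate being proved, here is a small Python sketch (standard library only, using a basic sieve); the printed differences should settle near a constant (the Meissel-Mertens constant, roughly $0.2615$), which is consistent with $\sum_{p\leq x}\frac{1}{p} = \ln\ln x + O(1)$:

```python
import math

def prime_reciprocal_sum(limit: int) -> float:
    """Sum of 1/p over primes p <= limit, using a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # Mark every multiple of i from i*i onward as composite.
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return sum(1.0 / p for p in range(2, limit + 1) if sieve[p])

for x in (10**3, 10**4, 10**5, 10**6):
    s = prime_reciprocal_sum(x)
    # The difference should hover near the Meissel-Mertens constant (~0.2615).
    print(f"x = {x:>7}: sum 1/p = {s:.4f}, difference from ln ln x = {s - math.log(math.log(x)):.4f}")
```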
 
  • #2
Bacterius
I can see a way to prove that by using the prime number theorem and the much simpler functions:
$$a(x) = \begin{cases}1 ~ ~ &\text{if} ~ x ~ \text{is prime} \\ 0 &\text{otherwise}\end{cases}$$
Such that, of course:
$$A(x) = \sum_{p \leq x} 1 = \pi(x)$$
And the continuously differentiable function given by:
$$\phi(x) = \frac{1}{x} ~ ~ ~ \text{and} ~ ~ ~ \phi'(x) = - \frac{1}{x^2}$$
It requires a bit of care to show that the asymptotic terms resulting from the use of the prime number theorem in the integral are absorbed by the $O(1)$ term (I assume) but it seems to work. I don't know how to prove it otherwise, though - not sure what Stirling's formula (for factorials?) has to do with this :confused:
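For concreteness, here is roughly how that computation goes (a sketch, not from the thread): it assumes the quantitative form of the prime number theorem, $\pi(u) = \frac{u}{\ln u} + O\!\left(\frac{u}{\ln^2 u}\right)$; the bare asymptotic $\pi(u)\sim u/\ln u$ alone leaves an error of size $o(\ln\ln x)$, which is exactly the "bit of care" mentioned above.

$$\begin{aligned}
\sum_{p\le x}\frac{1}{p}
  &= \frac{\pi(x)}{x} + \int_2^x \frac{\pi(u)}{u^2}\,du
  &&\text{(Abel summation with the choices above)}\\
  &= O(1) + \int_2^x \frac{du}{u\ln u} + O\!\left(\int_2^x \frac{du}{u\ln^2 u}\right)
  &&\text{(quantitative PNT, and } \pi(x)\le x)\\
  &= \ln\ln x + O(1).
\end{aligned}$$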
 
  • #3
Aryth1
Bacterius said:
I can see a way to prove that by using the prime number theorem and the much simpler functions:
$$a(x) = \begin{cases}1 ~ ~ &\text{if} ~ x ~ \text{is prime} \\ 0 &\text{otherwise}\end{cases}$$
Such that, of course:
$$A(x) = \sum_{p \leq x} 1 = \pi(x)$$
And the continuously differentiable function given by:
$$\phi(x) = \frac{1}{x} ~ ~ ~ \text{and} ~ ~ ~ \phi'(x) = - \frac{1}{x^2}$$
It requires a bit of care to show that the asymptotic terms resulting from the use of the prime number theorem in the integral are absorbed by the $O(1)$ term (I assume) but it seems to work. I don't know how to prove it otherwise, though - not sure what Stirling's formula (for factorials?) has to do with this :confused:

This was actually along the lines of my first attempt at the problem. I'm glad you had the same idea.

It does seem like using Stirling's formula (and I'm not sure which form I'm supposed to use) is a bit unnecessary... although I've heard there is a weaker form that is stated as a sum of logarithms. Something like this, I think:

$$\sum_{n\leq x} \ln n = x\ln x - x + O(\ln x)$$
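An aside on where Stirling's formula presumably enters (a standard argument, not something quoted from the book): the weak form above, combined with Legendre's formula for the exponent of each prime in $\lfloor x\rfloor !$, yields Mertens' first theorem, which is exactly the input the Abel summation needs. A sketch, assuming Chebyshev's bound $\sum_{p\leq x}\ln p = O(x)$:

$$\begin{aligned}
\sum_{n\le x}\ln n &= \ln\lfloor x\rfloor! = x\ln x - x + O(\ln x)
  &&\text{(weak Stirling)}\\
\ln\lfloor x\rfloor! &= \sum_{p\le x}\sum_{k\ge 1}\left\lfloor\frac{x}{p^k}\right\rfloor\ln p
  = x\sum_{p\le x}\frac{\ln p}{p} + O(x)
  &&\text{(Legendre's formula)}\\
\text{so}\quad \sum_{p\le x}\frac{\ln p}{p} &= \ln x + O(1).
\end{aligned}$$

Here dropping the floors costs at most $\sum_{p\leq x}\ln p = O(x)$, and the $k\geq 2$ terms contribute $O(x)$ as well, since $\sum_{p}\frac{\ln p}{p(p-1)}$ converges.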
 
  • #4
Bacterius
Indeed, it's still not quite clear to me how it would be useful here, though. I worked a bit with the idea in your first post, but I don't see how it leads to a simpler solution; it seems you ultimately need to know about the density of the primes to make your integral come out to what you need (the other term is clearly $O(1)$), the reason being that:
$$\int_{2}^x \frac{\ln{u} + 1}{u \ln{u}} ~ \mathrm{d} u = \ln{x} + \ln{\ln{x}} + O(1) \sim \ln{x} \tag{1}$$
Which is what I think you would eventually derive (up to asymptotic terms) without using the prime density, whereas:
$$\int_{2}^x \frac{\ln{u} + 1}{u \ln^2{u}} ~ \mathrm{d} u = \ln{\ln{x}} - \frac{1}{\ln{x}} + O(1) = \ln{\ln{x}} + O(1)$$
Where you notice the extra $\ln{u}$ factor in the denominator which accounts for the density of the primes, coming from the fact that:
$$\frac{A(u)}{u \ln{u}} = \frac{1}{u} \sum_{p \leq u} \frac{\ln{p}}{\ln{u}} \leq \frac{\pi(u)}{u} \sim \frac{1}{\ln{u}}$$
And you (eventually) get the desired result, though not as easily, whereas not using the density of primes only gives you the weak upper bound:
$$\frac{A(u)}{u \ln{u}} \leq 1$$
Which as seen above in $(1)$ is insufficient to conclude :confused:
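For completeness, both integrals above are evaluated by splitting the integrand and integrating term by term:

$$\begin{aligned}
\int_{2}^{x}\frac{\ln u + 1}{u\ln u}\,du
  &= \int_{2}^{x}\left(\frac{1}{u} + \frac{1}{u\ln u}\right)du
   = \Big[\ln u + \ln\ln u\Big]_{2}^{x}
   = \ln x + \ln\ln x + O(1),\\
\int_{2}^{x}\frac{\ln u + 1}{u\ln^{2}u}\,du
  &= \int_{2}^{x}\left(\frac{1}{u\ln u} + \frac{1}{u\ln^{2}u}\right)du
   = \left[\ln\ln u - \frac{1}{\ln u}\right]_{2}^{x}
   = \ln\ln x + O(1).
\end{aligned}$$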
 
  • #5
Aryth1
Bacterius said:
Indeed, it's still not quite clear to me how it would be useful here, though. I worked a bit with the idea in your first post, but I don't see how it leads to a simpler solution; it seems you ultimately need to know about the density of the primes to make your integral come out to what you need (the other term is clearly $O(1)$), the reason being that:
$$\int_{2}^x \frac{\ln{u} + 1}{u \ln{u}} ~ \mathrm{d} u = \ln{x} + \ln{\ln{x}} + O(1) \sim \ln{x} \tag{1}$$
Which is what I think you would eventually derive (up to asymptotic terms) without using the prime density, whereas:
$$\int_{2}^x \frac{\ln{u} + 1}{u \ln^2{u}} ~ \mathrm{d} u = \ln{\ln{x}} - \frac{1}{\ln{x}} + O(1) = \ln{\ln{x}} + O(1)$$
Where you notice the extra $\ln{u}$ factor in the denominator which accounts for the density of the primes, coming from the fact that:
$$\frac{A(u)}{u \ln{u}} = \frac{1}{u} \sum_{p \leq u} \frac{\ln{p}}{\ln{u}} \leq \frac{\pi(u)}{u} \sim \frac{1}{\ln{u}}$$
And you (eventually) get the desired result, though not as easily, whereas not using the density of primes only gives you the weak upper bound:
$$\frac{A(u)}{u \ln{u}} \leq 1$$
Which as seen above in $(1)$ is insufficient to conclude :confused:

Yeah, now that I've seen it worked out, it makes sense. Mine was certainly not leading to a simpler solution; I was just curious how they were so sure that Stirling's formula and Abel summation were used to prove it. In any case, I think the density of the prime numbers is required, as you said.

I did some pretty extensive googling yesterday and found the theorems that prove these (although I have yet to find a proof that uses both Stirling's formula and Abel summation). The identity I need is apparently Mertens' second theorem, and it is proved using Mertens' first theorem.

Mertens' First Theorem

$$\sum_{p\leq x}\frac{\ln p}{p} = \ln x + O(1)$$

Mertens' Second Theorem

$$\sum_{p\leq x} \left(\frac{1}{p}\right) = \ln\ln x + C_1 + O\left(\frac{1}{\ln x}\right)$$

So I may have gotten the functions mixed up. Maybe with a choice of

$$a_n = \begin{cases} \frac{\ln n}{n} &\text{if } n \text{ is prime} \\ 0 &\text{otherwise} \end{cases}$$

and

$$\phi(x) = \frac{1}{\ln x}$$

the solution may come out a bit more easily than with my previous choices?

Then we'd get

$$A(x) = \sum_{p\leq x}\frac{\ln p}{p} = \ln x + r(x)$$ (by Mertens' First Theorem)

where $|r(x)|\leq c$ for some constant $c$, and so

$$\sum_{p\leq x}\left(\frac{1}{p}\right) = \frac{A(x)}{\ln x} + \int_2^x \frac{A(u)}{u\ln^2 u} ~du$$

$$ = 1 + \frac{r(x)}{\ln x} + \int_2^x\frac{1}{u\ln u} ~du + \int_2^x\frac{r(u)}{u\ln^2 u}~du$$

Is this any better? I'm still kind of new to number theory so I'm not sure how I should proceed.
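For reference, here is the routine finish that the reply below alludes to: evaluate the first integral exactly and bound the second using $|r(u)|\leq c$.

$$\begin{aligned}
\int_2^x \frac{du}{u\ln u} &= \ln\ln x - \ln\ln 2 = \ln\ln x + O(1),\\
\left|\int_2^x \frac{r(u)}{u\ln^2 u}\,du\right|
  &\le c\int_2^{\infty}\frac{du}{u\ln^2 u} = \frac{c}{\ln 2} = O(1),
\end{aligned}$$

so altogether $\sum_{p\leq x}\left(\frac{1}{p}\right) = 1 + \frac{r(x)}{\ln x} + \ln\ln x + O(1) = \ln\ln x + O(1)$, as required.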
 
  • #6
Yep, that absolutely works and gives you the right answer. Of course, Mertens' first theorem, which you are using for $A(x)$, plays much the same role as the prime number theorem for our purposes ;) (specifically, it is what gives you the required asymptotic density of the primes, $1/\ln n$), but the fact that it is already stated in asymptotic notation makes it easy to prove your original expression: just evaluate the integrals, bound the error terms, and you're done.
 

