What Determines the Probability of Playing Coin Flips Indefinitely?

  • Thread starter Office_Shredder
  • Tags
    Challenge
In summary, the condition for the man to have a non-zero probability of playing forever is np>1, where n is the number of new coins introduced when the coin lands on heads and p is the probability of the coin landing on heads. This is a harder challenge than it may appear, as it involves understanding concepts such as convergence in probability and almost sure convergence in probability theory.
  • #1
Office_Shredder
Staff Emeritus
Science Advisor
Gold Member
A man has a single coin in front of him that lands on heads with probability p and tails with probability 1-p. He begins flipping coins with his single coin. If a coin lands on heads, it is removed from the game and n new coins are placed on his stack of coins; if it lands on tails, it is removed from the game and no new coins are introduced.

What is the condition on n and p for the man to have a non-zero probability of playing forever?

For an easier challenge check out Challenge 12a.

This is a harder challenge than it may appear at first glance; points may be awarded for a mildly non-rigorous solution (and bonus points for a fully rigorous solution).
 
Last edited:
  • #2
This is purely empirical, but it appears the condition is ##np>1##.
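This empirical observation can be reproduced with a short Monte Carlo sketch (a rough check of my own, not from the thread; the `max_flips` cutoff is an arbitrary stand-in for "forever"):

```python
import random

def survives(n, p, max_flips=2000):
    """Play one game; True if the stack is still non-empty after
    max_flips flips (a crude stand-in for 'playing forever')."""
    stack = 1
    for _ in range(max_flips):
        if stack == 0:
            return False
        # Heads: the flipped coin is removed and n new coins are added.
        # Tails: the flipped coin is removed and nothing replaces it.
        stack += (n - 1) if random.random() < p else -1
    return stack > 0

def survival_rate(n, p, trials=1000):
    return sum(survives(n, p) for _ in range(trials)) / trials

# np > 1 gives a clearly positive survival rate; np < 1 gives a rate near zero.
print(survival_rate(2, 0.7), survival_rate(2, 0.3))
```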
 
  • #3
Let n = 1 and p > 0 (clearly if n < 1 the game will last one round). If the time for one round is t then after time T he will have played T / t rounds. He will still be playing if and only if he has flipped heads every time with probability ## p^{T/t} > 0 ##. So the man may still be playing after any finite time T (i.e. for ever) with non-zero probability for any n ≥ 1 and p > 0.

But I'm not sure that's the question you meant to ask - I think you want the man still to be playing after an infinite number of rounds.

Can I suggest for 12c you allow 1 minute for the first round, 30s for the second, then 15s etc. What are the conditions on n and p that give a non-zero probability that the man has any coins left when the game is finished after 2 minutes?
 
Last edited:
  • #4
D H said:
This is purely empirical, but it appears the condition is ##np>1##.

It seems clear that ##np\ge 1## is a necessary condition to have a non-zero probability of playing forever.

Consider the situation as a drunkard's walk. At each step he has a probability p of taking n-1 steps to the right/positive and a probability 1-p of taking one step to the left/negative.

After some number of steps, this will result in a distribution of possible positions with an easily calculable mean and standard deviation. This distribution is roughly normal. If the expectation of each trial is negative, then the expectation of the distribution becomes increasingly negative. This is linear in the number of trials. But the standard deviation increases only as the square root of the number of trials. It would then follow that the tail of the distribution that is positive represents a probability that becomes zero in the limit.

It follows that the expectation of each trial must be non-negative to preserve a finite probability of playing forever.

This is all ignoring the fact that a stack with negative coins is impossible. Or, equivalently, that there is a cliff to the left. That fact can only act to reduce the probability that the drunkard stays on the good side of the cliff. So it remains necessary that the expectation of each trial be non-negative.

The expectation of each trial is (n-1)*p - (1-p) = np - 1. This is positive when np > 1.

I am unable to make much progress on demonstrating that this is a sufficient condition, however.
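The drift-versus-spread argument above is easy to check numerically if the cliff at zero is ignored (a sketch of my own; the parameter values are arbitrary): the mean position after k steps should be (np-1)k, while the standard deviation should grow only like the square root of k.

```python
import math
import random
import statistics

def walk_position(n, p, steps):
    """Unreflected walk: +(n-1) with probability p, -1 otherwise (cliff ignored)."""
    pos = 0
    for _ in range(steps):
        pos += (n - 1) if random.random() < p else -1
    return pos

random.seed(0)  # fixed seed for reproducibility
n, p, steps, trials = 2, 0.4, 400, 3000
finals = [walk_position(n, p, steps) for _ in range(trials)]

mean_theory = (n * p - 1) * steps                        # drift is linear in steps
var_step = (n - 1) ** 2 * p + (1 - p) - (n * p - 1) ** 2
std_theory = math.sqrt(var_step * steps)                 # spread grows like sqrt(steps)

print(statistics.mean(finals), mean_theory)   # both should be near -80
print(statistics.stdev(finals), std_theory)   # both should be near 19.6
```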
 
  • #5
MrAnchovy said:
Let n = 1 and p > 0 (clearly if n < 1 the game will last one round). If the time for one round is t then after time T he will have played T / t rounds. He will still be playing if and only if he has flipped heads every time with probability ## p^{T/t} > 0 ##. So the man may still be playing after any finite time T (i.e. for ever) with non-zero probability for any n ≥ 1 and p > 0.

But I'm not sure that's the question you meant to ask - I think you want the man still to be playing after an infinite number of rounds.

Can I suggest for 12c you allow 1 minute for the first round, 30s for the second, then 15s etc. What are the conditions on n and p that give a non-zero probability that the man has any coins left when the game is finished after 2 minutes?

I suspect we need the limit probability that the game lasts n rounds as n goes to infinity to be greater than 0.
 
  • #6
jbriggs444 said:
Consider the situation as a drunkard's walk. At each step he has a probability p of taking n-1 steps to the right/positive and a probability 1-p of taking one step to the left/negative. After some number of steps, this will result in a distribution of possible positions with an easily calculable mean and standard deviation.
Easily calculable if you ignore the cliff at 0. Taking that cliff into account makes the probability distribution much harder. This is more of a gambler's ruin problem than a drunkard's walk.
 
  • #7
D H said:
Easily calculable if you ignore the cliff at 0. Taking that cliff into account makes the probability distribution much harder. This is more of a gambler's ruin problem than a drunkard's walk.

Yes indeed, and the solution to gamblers' ruin is well known - the probability of eventual ruin converges almost surely to 1. However the solution does not say that ruin will happen in a finite time with probability 1 (or alternatively that ruin may not take forever).

The problem here lies in the subtle differences between convergence in probability, almost sure convergence and the kind of limits we are used to dealing with in analysis. When we translate the words of a problem to a statement of probability theory so that we can prove or disprove that statement we need to be very sure that the translation is correct.
 
Last edited:
  • #8
Oh my. This exercise cost me a few hours of my life that I'd like back.

PeroK said:
I suspect we need the limit probability that the game lasts n rounds as n goes to infinity to be greater than 0.
I can't quite parse that. What we need is that the probability of going bust (playing and losing the last coin in the stack) is less than 1.

Given a value of n, the only rolls on which the player will "go bust" are roll #1, roll #n+1, roll #2n+1, …, roll #kn+1, … where k is a non-negative integer. Denote ##P_n(k)## as the probability of going bust on roll #kn+1. Then the condition Office_Shredder is looking for is the relation between n and p that makes [itex]\sum_{k=0}^{\infty} P_n(k) < 1[/itex]. Easy!

Going bust on roll #kn+1 means getting k wins and (n-1)k+1 losses without having gone bust on some previous roll. There are lots of ways to obtain this sum of wins and losses, each with probability ##p^k(1-p)^{(n-1)k+1}##. One has to be careful in counting the possible paths. For example, the only way to go bust on roll #3 with n=2 is to win on the first roll and lose on the next two. The sequences lose-win-lose and lose-lose-win don't count because these sequences mean the player already busted out on roll #1. Without derivation, the probability of going bust on roll #kn+1 is
[tex]P_n(k) = \frac 1 {(n-1)k+1} \binom{nk}k p^k (1-p)^{(n-1)k+1}[/tex]
Summing these individual probabilities to yield the total probability of going bust after an infinite number of rolls yields
[tex]P_n = \sum_{k=0}^{\infty} \frac 1 {(n-1)k+1} \binom{nk}k p^k (1-p)^{(n-1)k+1}[/tex]
Once again without derivation, this sum is exactly one if [itex]np\le 1[/itex] and is less than one if [itex]np>1[/itex].

The condition to have a non-zero probability of playing forever is [itex]np>1[/itex].
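The series above can be summed numerically (my own sketch, not from the thread; it uses `math.lgamma` to evaluate the binomial coefficients in log space so they don't overflow). The partial sums sit at 1 when np ≤ 1 and strictly below 1 when np > 1:

```python
from math import exp, lgamma, log

def bust_probability(n, p, terms=5000):
    """Partial sum of P_n = sum_k C(nk,k)/((n-1)k+1) p^k (1-p)^((n-1)k+1),
    the total probability of eventually going bust (0 < p < 1, n >= 2)."""
    total = 0.0
    for k in range(terms):
        # log of the k-th term: log C(nk,k) - log((n-1)k+1) + k log p + ((n-1)k+1) log(1-p)
        log_term = (lgamma(n * k + 1) - lgamma(k + 1) - lgamma((n - 1) * k + 1)
                    - log((n - 1) * k + 1)
                    + k * log(p) + ((n - 1) * k + 1) * log(1 - p))
        total += exp(log_term)
    return total

print(bust_probability(2, 0.4))   # np = 0.8 -> sum is 1 (ruin certain)
print(bust_probability(2, 0.7))   # np = 1.4 -> (1-p)/p, about 0.4286
```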
 
Last edited:
  • #9
D H said:
Summing these individual probabilities to yield the total probability of going bust after an infinite number of rolls...

If the player goes bust after an infinite number of rolls (taking an equal amount of time), how long has he played for?
 
  • #10
MrAnchovy said:
Yes indeed, and the solution to this is well known - the probability of eventual ruin converges almost surely to 1. However the solution does not say that ruin will happen in a finite time with probability 1.
The challenge says absolutely nothing about time. It asks for the probability of playing forever be non-zero. If the probability of ruin is almost surely one, the probability of playing forever is almost surely zero. In other words, it is not non-zero.
 
  • #11
D H said:
The challenge says absolutely nothing about time. It asks for the probability of playing forever be non-zero. If the probability of ruin is almost surely one, the probability of playing forever is almost surely zero. In other words, it is not non-zero.

How can you say that "playing forever" says nothing about time? Surely it means "playing for longer than any finite amount of time"?

However if you want to, you could interpret "playing forever with probabilty greater than zero" to mean "playing for more rounds than any finite number of rounds with probability greater than zero". However what (I argue) you CANNOT do is equate "playing forever with probability greater than zero" with "play eventually terminating with probability almost surely converging to a value less than 1" - this is a much stronger criterion.
 
  • #12
An anchovy taking on a big fish!
 
  • #13
The intended question is this: if ##P_k## is the probability of busting on the kth flip, find
[tex] \sum_{k\geq 1} P_k [/tex] and whether this number is equal to 1 or not.

I don't think I understand your subtlety MrAnchovy. I thought this was equivalent to your suggestion for a 12c problem.

DH, that was definitely not how I expected the problem to be solved. How do you calculate that sum?
 
Last edited:
  • #14
Let's cover the boring cases first:
With n=0 or p=0, the man will always get rid of coins and stop with probability 1.
With n=1 and p=1 the man will always stay at one coin and play forever.
With n=1 and p<1, the man cannot gain coins and will eventually lose his coin. He stops with probability 1.
With p=1 and n>0, the man cannot get rid of coins and will play forever.

For the general case, consider n>1 and 0<p<1:
Let r(k) be the probability of running out of coins with an initial stack of k coins. Then ##r(k)=\left(r(1)\right)^k##, as we must "run out of coins" for each coin on the stack, and this happens independently for each coin.

Clearly r(1) stays the same if we consider the result after one round. Therefore,
$$r(1)=(1-p)+pr(n)=(1-p)+pr(1)^n$$
Define R:=r(1):
$$0=pR^n-R+(1-p)$$
To find solutions, let
$$f(R)=pR^n-R+(1-p)$$
Clearly R=1 is a root of f, and f(0)=1-p > 0.
##f''(R)=pn(n-1)R^{n-2} > 0## for R>0,
so f is convex on the positive axis and cannot have more than two positive roots. We already found one; for the second root to lie between 0 and 1, we need ##f'(1)=np-1 > 0##.

Therefore, to have a non-zero probability of playing forever (R<1), the condition is np>1.
This agrees with the idea that the expectation value of the game should be positive, as we have to "invest" 1 coin per round.

By the way: in addition to answering "is the probability non-zero?", this formula allows one to calculate the probability.
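The fixed-point equation above can also be solved numerically (a sketch of mine, not from the thread): iterating R ← (1-p) + pR^n from R = 0 is monotone and converges to the smallest non-negative root, which is the ruin probability — the same device used for extinction probabilities of branching processes.

```python
def ruin_probability(n, p, iterations=10000):
    """Iterate R = (1-p) + p*R**n starting from 0; the iteration is monotone
    increasing and converges to the smallest non-negative root, the ruin probability."""
    R = 0.0
    for _ in range(iterations):
        R = (1 - p) + p * R ** n
    return R

print(ruin_probability(2, 0.7))   # np > 1: ruin probability (1-p)/p, about 0.4286
print(ruin_probability(2, 0.4))   # np < 1: ruin is certain, R = 1
```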
 
Last edited:
  • #15
mfb said:
Let's cover the boring cases first:
With n=0 or p=1, the man will always get rid of coins and stop with probability 1 … the condition is n(1-p)>1.
You have the sense of p backwards. p is the probability of getting heads, which is what the player wants. p=1 means the player always wins, so he definitely plays forever. After fixing this, your condition is the same as mine: np>1.
 
  • #16
Office_Shredder said:
The intended question is that if Pk is the probability of busting on the kth flip, finding
[tex] \sum_{k\geq 1} P_k [/tex] and whether this number is equal to 1 or not.
That's exactly what I calculated. I noticed that ##P_k## is identically zero unless ##k=rn+1## for some non-negative integer r, and calculated ##P_r## (which I called ##P_n(k)##).


DH, that was definitely not how I expected the problem to be solved. How do you calculate that sum?
You can't. What I wrote was nonsense (now corrected). I had a LaTeX typo in my expressions. Here are the corrected expressions:

The probability of going bust on roll #kn+1 is
[tex]P_n(k) = \frac 1 {(n-1)k+1} \binom{nk}k p^k (1-p)^{(n-1)k+1}[/tex]
Summing these individual probabilities to yield the total probability of going bust after an infinite number of rolls yields
[tex]P_n = \sum_{k=0}^{\infty} \frac 1 {(n-1)k+1} \binom{nk}k p^k (1-p)^{(n-1)k+1}[/tex]
 
  • #17
D H said:
You have the sense of p backwards. p is the probability of getting heads, which is what the player wants. p=1 means the player always wins, so he definitely plays forever. After fixing this, your condition is the same as mine: np>1.
Oh, you are right. I misread heads/tails.
 
  • #18
Office_Shredder said:
I don't think I understand your subtlety MrAnchovy. I thought this was equivalent to your suggestion for a 12c problem.

Put simply my subtlety is that although the game may eventually terminate with probability 1 you may have to wait for more than any finite number of rounds (i.e. forever) for that to happen.

Restating the problem as "What is the condition on n and p for the man to have a non-zero probability of not eventually losing all his coins?" avoids this problem but introduces an awkward double-negative, perhaps "What is the condition on n and p for the man to have a probability less than one of eventually losing all his coins?" is better.

Edit: or (to match 12a) "Once he begins flipping, what is the condition on n and p such that the probability that he ever runs out of coins is less than one?"
 
  • #19
MrAnchovy said:
Put simply my subtlety is that although the game may eventually terminate with probability 1 you may have to wait for more than any finite number of rounds (i.e. forever) for that to happen.
You are making a mountain out of a molehill, an infinitesimally small molehill at that. Office_Shredder's ##\sum_{k=1}^{\infty} P_k=1## is shorthand for ##\forall \epsilon >0 \, \exists N \in \mathbb{N} : \forall n>N \,\,|1 - \sum_{k=1}^n P_k|<\epsilon##. Pick a positive epsilon. No matter how close to zero you pick your epsilon, you will find a finite N such that the partial sums are within this epsilon of one. If this sum is one, the player almost surely will not play forever. The probability of an almost sure event is one. Not "almost one", one, period. The point of this challenge is to find the relationship between Office_Shredder's n and p that marks the boundary between this series having a sum of one versus a sum less than one.
 
  • #20
mfb said:
Clearly r(1) stays the same if we consider the result after one round. Therefore,
$$r(1)=(1-p)+pr(n)=(1-p)+pr(1)^n$$
Define R:=r(1):
$$0=pR^n-R+(1-p)$$
To find solutions, let
$$f(R)=pR^n-R+(1-p)$$
Clearly R=1 is a root of f and f(0)=1-p > 0.
f''=n(n-1)R^(n-2) > 0 for R>0
f cannot have more than two positive roots and we already found one, to have the second root between 0 and 1 we need f'(1)=np-1 > 0.

As you have observed R=1 is a root, so you need to check that this is not the probability of playing forever (which is the tricky part I mention in the opening post).

MrAnchovy, are you suggesting that he might run out of coins on flip [itex]\omega+1[/itex] (with ##\omega## the order type of the natural numbers) or later? If you want to attack the problem where he takes more than [itex]\omega[/itex] flips then go for it; it wasn't what I intended, but it is a valid enough interpretation that I am interested in seeing what comes up for a proof.
 
  • #21
The cases n=0 and n=1 are a bit pathological. In the case that n=0, this is easily dealt with. This is the "heads, you lose; tails I win" case. Regardless of the value of p, the player will lose, guaranteed, on the first flip.

In the case that n=1, there are only two outcomes: Playing forever with a stack of size 1, or almost surely losing in a finite number of flips. The former happens if p is identically equal to one. The latter happens for all other p (i.e., 0≤p<1). Note that this case changes the constraint from np>1 to np≥1. This is the only case where the strictly less than changes to less than or equal.
Office_Shredder said:
D H said:
[tex]P_n = \sum_{k=0}^{\infty} \frac 1 {(n-1)k+1} \binom{nk}k p^k (1-p)^{(n-1)k+1}[/tex]
DH, that was definitely not how I expected the problem to be solved. How do you calculate that sum?
Wolfram Alpha can evaluate this sum for small n. With a little help getting rid of the superfluous nonsense WA is wont to add to expressions,
 
Last edited:
  • #22
Office_Shredder said:
As you have observed R=1 is a root, so you need to check that this is not the probability of playing forever (which is the tricky part I mention in the opening post).
R is the probability of getting ruined, not the probability of playing forever. R=1 as a solution to the equation is a mathematical artifact (if I get ruined with every number of coins, the equation is still satisfied), but the correct root has ##f'\le 0##.
 
  • #23
D H said:
You are making a mountain out of a molehill, an infinitesimally small molehill at that.

I agree that the molehill is infinitesimally small, however the challenge simply asks for a non-zero probability not a probability that is greater than some finite ## \epsilon ##.

D H said:
Office_Shredder's ##\sum_{k=1}^{\infty} P_k=1## is shorthand for ##\forall \epsilon >0 \, \exists N \in I : \forall n>N \,\,|1 - \sum_{k=1}^n P_k|<\epsilon##. Pick a positive epsilon. No matter how close to zero you pick your epsilon, you will find a finite N such that the partial sums are within this epsilon of one.

Well in order to be rigorous it would be nice to first justify the use of a summation by showing that the sample space is at most countably infinite.

D H said:
If this sum is one, the player almost surely will not play forever.

I agree that if this sum is one, the player almost surely (i.e. with probability 1) will run out of coins. However I do not agree that this implies that the player will, with probability one, play for a finite number of rounds. Indeed I demonstrated in post #3 that there is a non-zero probability that the player will still be playing after any finite number of rounds.

D H said:
The point of this challenge is to find the relationship between Office_Shredder's n and p such that marks the boundary between this series having a sum of one versus a sum less than one.

I agree that this was the intention of the challenge, as was confirmed by Office_Shredder in post #, however that is not how the challenge was originally posed.

Office_Shredder said:
MrAnchovy, are you suggesting that he might run out of coins on flip [itex]\omega+1[/itex] (with omega the ordinality of the natural numbers) or later? If you want to attack the problem where he takes more than [itex] \omega[/itex] flips then go for it.

This is not what I am suggesting. There is an isomorphism between flips and the natural numbers so there are at most ## \omega ## flips. What I am suggesting is that if flips are counted separately then it will take forever to perform ## \omega ## flips.

You see by summing over a (countably) infinite number of terms you are by definition including the possibility that there will be a countably infinite number of flips, and there is an isomorphism between this number and the number of flips you can perform in "forever".

Office_Shredder said:
it wasn't what I intended but is a valid enough interpretation I am interested in seeing what comes up for a proof.

Let ## n = 1 ## and ## p > 0 ##. The probability that he is still playing after k rounds is ## p^k > 0 ##. So for any finite k there is a non-zero probability that he is still playing.

However there is a problem with this argument: if ## n=1 ## he must roll heads on every throw. There is therefore only one pattern among the uncountably infinite number of possibilities that will lead to him playing forever, and the probability of this is clearly zero.

I think I will modify my answer to ## n > 1 ## and ## p > 0 ##. There are now uncountably many ways to keep playing for ever, and the probability that he is still playing after k rounds is ##\ge p^k + k p^{k-1}(1-p) + \binom k 2 p^{k-2}(1-p)^2 + \dots > 0##.
 
  • #24
MrAnchovy said:
Well in order to be rigorous it would be nice to first justify the use of a summation by showing that the sample space is at most countably infinite.
Seriously? Office_Shredder's Pk is the probability of busting out on the kth flip. It is indexed by a positive integer, the coin flip number. You either flip the coin or you don't. There are no halfsies on coin flips, let alone irrational number of coin flips. Of course it's countable.


I agree that if this sum is one, the player almost surely (i.e. with probability 1) will run out of coins. However I do not agree that this implies that the player will, with probability one, play for a finite number of rounds.
I'll modify the rules of the game a bit to make it easier to analyze. What I want is a way to make sense of continuing to flip coins after the player has busted out.

Denote ##s_k## as the size of the player's stack after the kth coin flip, with an initial value of ##s_0=1##. If ##s_k=0##, then ##s_{k+1}=0## regardless of the outcome of the flip. If ##s_k>0##, then ##s_{k+1}=s_k+n-1## if the outcome of the flip was heads and ##s_{k+1}=s_k-1## if it was tails.

Now for some events. Denote ##b_k## as the set of all coin toss sequences of length k that result in going bust on exactly the kth flip. Denote ##B_k## as the set of all coin toss sequences of length k that result in going bust before the kth flip. Finally, denote ##A_k## as the set of all coin toss sequences of length k that result in ##s_k>0##.

Clearly, ##B_k##, ##b_k##, and ##A_k## are mutually exclusive events that collectively exhaust the sample space ##\Omega_k##, the set of all possible coin toss sequences of length k. In other words, ##P(A_k)=1-(P(B_k)+P(b_k))##. If ##P(B_k)+P(b_k)\to 1## as ##k\to\infty##, then ##P(A_k)## must become vanishingly small as ##k\to\infty##.

It should be equally clear that ##b_{k_1}## and ##b_{k_2}## are also mutually exclusive events for all ##k_1\ne k_2##. Thus ##P(B_k)=\sum_{r<k}P(b_r)##. Note that Office_Shredder's ##P_k## is just ##P(b_k)##, and also note that his sum is valid because of the mutual exclusivity of going bust on flip number ##k_1## versus going bust on some other flip number ##k_2##.


I agree that this was the intention of the challenge, as was confirmed by Office_Shredder in post #, however that is not how the challenge was originally posed.
Yes, it is. You just read stuff into it that wasn't there.


I think I will modify my answer to ## n > 1 ## and ## p > 0 ##. There are now uncountably many ways to keep playing for ever, and the probability that he is still playing after k rounds is ## ≥ p^k + k p^{k-1}(1-p) + \binom 2 k p^{k-2}(1-p)^2 ... > 0 ##.
There are two things wrong here:
1. There are only a countable number of ways to keep playing forever.
2. Your probability is wrong.

I'll look at item #2 first. You are clearly using a binomial expansion. That is not correct. Look at what your expression says for the probability that the player is still playing after one round. Your expression evaluates to one. It evaluates to one for all k! You are ignoring the fact that the player stops playing after going bust (or, in my alteration of the game, the player idly watches meaningless coin flips after going bust).

Regarding the countability of the sample space, first off, it doesn't matter if the coin toss sequences that form my sets ##b_k##, ##B_k##, and ##A_k## become uncountable in the limit ##k\to\infty##. The sets themselves are mutually exclusive, and the set of all ##\{b_k, B_k, A_k\}## is countable. Secondly, there are only a countable number of infinitely long coin toss sequences that lead to the player playing forever. The sample space in the original problem is countable.

I'll use an ordered string comprising the symbols h and t to label a sequence of coin tosses of length k. For example, the string t designates tails (and hence going bust) on the very first toss, while h designates getting heads (and hence continuing the game). Note that the only string that starts with t is the string t itself. The string th is not a valid string in any game with reward n. The string terminates when the player goes bust. In a valid string S, the number of ts in every initial proper substring s of S is at most n-1 times the number of hs in s. (An initial proper substring s of a string S is a substring that starts at the first character of S and ends before S ends.) What this means is that you can't use a diagonalization argument to prove uncountability. If the kth character of the kth string is an h, flipping that h to a t might mean game over. You have to keep that h if you want to continue your diagonalization argument, and now you can't guarantee that your generated string will be different from every string in my enumerated list.
 
  • #25
mfb said:
R is the probability of getting ruined, not the probability of playing forever. R=1 as a solution to the equation is a mathematical artifact (if I get ruined with every number of coins, the equation is still satisfied), but the correct root has ##f'\le 0##.

I'm not sure I understand mathematically why you can claim that R=1 is not the correct solution.
 
  • #26
Office_Shredder said:
I'm not sure I understand mathematically why you can claim that R=1 is not the correct solution.
The solution R=1 is not just a mathematical artifact. The expression ##pR^n-R+1-p=0## has either two or zero positive solutions when p>0. Since R=1 is always a solution, this expression must have two positive solutions when p>0. One cannot reject the solution R=1 out of hand because this is the correct solution when p<1/n. In the case p<1/n, the other solution to ##pR^n-R+1-p=0## is greater than one, which obviously is incorrect. When p=1/n, the solution R=1 is a double root. When p>1/n, the other solution is between 0 and 1.

Note that in the extreme case of p=1, the two solutions are R=0 and R=1. The R=0 solution is obviously correct in this extreme case. If the coin always comes up heads, the probability of ruin is 0, not 1.

This plus continuity suggests a way around the dilemma: The correct solution is the lesser of the two solutions to ##pR^n-R+1-p=0##.

This is consistent with my unwieldy sums. Those sums can be expressed in closed form for small n:
  • n=2: ##R=\frac{1-|2p-1|}{2p}##, or
    [tex]R=\begin{cases}
    1 & p\le \frac 1 2 \\[4pt]
    \frac{1-p} p & p > \frac 1 2
    \end{cases}[/tex]
  • n=3: ##R = \frac{2\sin(\frac 1 3 \sin^{-1}(\frac 3 2 (1-p)\sqrt{3p}))}{\sqrt{3p}}##, or
    [tex]R=\begin{cases}
    1 & p\le \frac 1 3 \\[4pt]
    \frac{\sqrt{(4-3p)p}-p} {2p} & p > \frac 1 3
    \end{cases}[/tex]
 
  • #27
OK, to make clear what my protest was: the solution R=1 is always a root and, without further analysis, seems like it could be the feasible one; I do not believe it is clear without further analysis that when np > 1 the other root is the correct solution. Showing by other means that for n=3 and p large enough the R=1 root is not the correct answer (and, by direct comparison, that R=1 is also not correct for that value of p and larger n), plus continuity of the solution, does seem good enough for me. I think mfb and you have combined to form a solution there.

From how I read mfb's post however he seemed to be arguing that the derivative of his polynomial could tell us which root was the correct answer, and I don't understand that part.
 
  • #28
Office_Shredder said:
OK, to make clear what my protest was: The solution R=1 always is a continuous solution that seems like it could be feasible without further analysis; I do not believe that it is clear without further analysis that when n > 1/p the other root is the solution.
I understand your concern, particularly when it's obvious that the R=1 solution is not just an artifact. It is an essential part of the equation. R=1 is the correct solution in the case np<1, so rejecting it in the case np>1 is an interesting problem.


The rest of this post describes how I got those multipliers ##\frac 1 {(n-1)k+1} \binom{nk}k## in my unwieldy sums.

Those multipliers result from counting the number of permutations of a sequence of flips that result in losing. Here's a graph of the number of permutations for the case n=2. The nodes in the graph are labeled by the number of permutations. The terminal nodes go nowhere. Each non-terminal node (stack size > 0) goes to two nodes, downward and to the left representing a tails on a coin flip, downward and to the right for a heads.

Code:
      |    stack size
flip# | 0 1 2 3 4 5 6 7 8 
--------------------------
    0 |   1   
      |  / \ 
    1 | 1   1   
      |    / \ 
    2 |   1   1   
      |  / \ / \ 
    3 | 1   2   1   
      |    / \ / \ 
    4 |   2   3   1   
      |  / \ / \ / \ 
    5 | 2   5   4   1   
      |    / \ / \ / \ 
    6 |   5   9   5   1   
      |  / \ / \ / \ / \ 
    7 | 5   14  14  6   1   
      |    / \ / \ / \ / \

The above graph is a bit problematic. There are holes in the graph (e.g., there's no node at flip#=1, stack size=1). Those holes get bigger as n increases. This makes it hard to generate and hard to analyze. It helps to reorganize the tree so that vertical represents the number of heads and horizontal the number of tails. With this, we can delete the edges, and we can delete the terminal nodes. There's an implied vertical edge from a node to the node below, and except for the stack size=1 nodes, there's an implied horizontal edge to the node to the right. Here are the starts of these reoriented graphs for n=2, n=3, and n=4:

Code:
  N=2  |            #tails
#heads |   0   1   2   3   4   5   6   7   
----------------------------------------
     0 |   1   
     1 |   1   1   
     2 |   1   2   2   
     3 |   1   3   5   5   
     4 |   1   4   9  14  14  
     5 |   1   5  14  28  42  42  
     6 |   1   6  20  48  90 132 132 
     7 |   1   7  27  75 165 297 429 429

Code:
  N=3  |                   #tails
#heads |   0   1   2   3   4   5   6   7   8   9  10  
----------------------------------------------------
     0 |   1   
     1 |   1   1   1   
     2 |   1   2   3   3   3   
     3 |   1   3   6   9  12  12  12  
     4 |   1   4  10  19  31  43  55  55  55  
     5 |   1   5  15  34  65 108 163 218 273 273 273

Code:
  N=4  |                         #tails
#heads |   0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  
------------------------------------------------------------------------
     0 |   1   
     1 |   1   1   1   1   
     2 |   1   2   3   4   4   4   4   
     3 |   1   3   6  10  14  18  22  22  22  22  
     4 |   1   4  10  20  34  52  74  96 118 140 140 140 140 
     5 |   1   5  15  35  69 121 195 291 409 549 689 829 969 969 969 969

The first table (graph), n=2, is Catalan's triangle. The set of numbers comprising the last number in each row of the first table are the Catalan numbers. The other tables and their terminal numbers are extensions of the concept of Catalan's triangle and the Catalan numbers. All tables are generated the same way: ##C_{h,t}=C_{h-1,t}+C_{h,t-1}##, with implied zeros for entries that aren't in the table. The only difference between the tables is the number of entries on each row, with the last entry on a row at ##t=(n-1)h##. It's those terminal numbers (the last entry on a row) that are of concern. These represent the number of paths leading to a stack size of one. The value of that last entry is ##\frac 1 {(n-1)h+1} \binom{nh}h##.
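The triangle recurrence and the terminal-entry formula can be checked mechanically (a sketch of mine, not from the thread; the name `catalan_triangle` is my own):

```python
from math import comb

def catalan_triangle(n, rows):
    """Build the generalized Catalan triangle C[h][t] = C[h-1][t] + C[h][t-1],
    where row h (the number of heads) has entries t = 0 .. (n-1)*h."""
    table = []
    for h in range(rows):
        row = []
        for t in range((n - 1) * h + 1):
            if h == 0 and t == 0:
                row.append(1)           # single starting path
                continue
            # implied zeros for entries that fall outside the table
            above = table[h - 1][t] if h > 0 and t < len(table[h - 1]) else 0
            left = row[t - 1] if t > 0 else 0
            row.append(above + left)
        table.append(row)
    return table

# Terminal entry of row h should equal comb(n*h, h) // ((n-1)*h + 1);
# for n=2 these are the Catalan numbers 1, 1, 2, 5, 14, 42, ...
for h, row in enumerate(catalan_triangle(3, 6)):
    print(row, comb(3 * h, h) // (2 * h + 1))
```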
 

