
Consider a sequence of Bernoulli trials with success probability $p$. Fix a positive integer $r$ and let $\mathcal{E}$ denote the event that a run of $r$ successes is observed; recall that we do not allow overlapping runs. We use a recurrence relation for $u_n$, the probability that $\mathcal{E}$ occurs on the $n$th trial, to derive the generating function $U(s)$. Consider $n \ge r$ and the event that trials $n, n-1, \ldots, n-r+1$ all result in success. This event has probability $p^r$.

On the other hand, if this event occurs then event $\mathcal{E}$ must occur on trial $n-k$ ($0 \le k \le r-1$) and then the subsequent $k$ trials must result in success. Thus we derive

$$u_n + u_{n-1}p + \cdots + u_{n-r+1}p^{r-1} = p^r, \quad \text{for } n \ge r.$$
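For what it's worth, I did convince myself that the identity is *true* by computing the $u_n$ exactly with a small dynamic program and checking the sum (the function `exact_u` and the state encoding are just my own sketch, not from the book):

```python
def exact_u(p, r, N):
    """P(a non-overlapping run of r successes completes at trial n), for n = 1..N."""
    # dist[j] = P(current success streak since the last completed run has length j)
    dist = [0.0] * r
    dist[0] = 1.0
    u = [0.0] * (N + 1)               # u[n], 1-indexed; u[0] unused
    for n in range(1, N + 1):
        new = [0.0] * r
        new[0] = (1 - p) * sum(dist)  # a failure resets the streak
        for j in range(r - 1):        # a success extends any streak shorter than r-1
            new[j + 1] += p * dist[j]
        u[n] = p * dist[r - 1]        # success at streak r-1 completes a run...
        new[0] += u[n]                # ...and counting restarts (runs don't overlap)
        dist = new
    return u

# check  u_n + u_{n-1} p + ... + u_{n-r+1} p^{r-1} = p^r  for all n >= r
p, r, N = 0.5, 3, 20
u = exact_u(p, r, N)
assert all(abs(sum(u[n - k] * p**k for k in range(r)) - p**r) < 1e-12
           for n in range(r, N + 1))
```

So numerically everything checks out; it's the probabilistic argument behind the left-hand side that I can't follow.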

Everything is fine until "On the other hand...". How do we know that event $\mathcal{E}$ must occur on some trial $n-k$ with $0 \le k \le r-1$? And why does the LHS of the above equation equal the RHS? I understand the RHS, the probability of $r$ consecutive successes, but I'm struggling with the LHS. It seems to me that a term like $u_{n-1}$ also counts outcomes where successes occurred outside the run in question, so the events being summed should overlap. I'm clearly wrong, but I don't see why.

Please, could anyone try to explain it to me?