Expected value of random sums with dependent variables

In summary: if N and the X_k's are not independent, you need the conditional expectations E(X_k|N=n), giving E(sum_{k=1}^N X_k) = sum_n P{N=n} sum_{k=1}^n E(X_k|N=n). When N is a stopping time for i.i.d. X_k's, Wald's equation restores the simple form E(sum_{k=1}^N X_k) = E(N)E(X); it can be proved with the indicator variables I_i = 1 if i <= N, 0 otherwise, and Ross's Introduction to Probability Models gives a concise, easy-to-follow derivation.
  • #1
agarwalv
Hi all,

I have a question of computing the expectation of random sums.

E(sum_{k=1}^N X_k) = E(N)E(X) holds if N and X_1, X_2, ... are independent and the X_k's are i.i.d. Here both N and the X_k's are random variables.

But in many cases the condition that N and X_1, X_2, ... are independent does not hold.

How would you compute E(sum_{k=1}^N X_k) if N and X_1, X_2, ... are not independent (even weakly dependent)?

Can we use the law of iterated expectations? I am not sure what E(sum_{k=1}^N X_k) would equal.

Please help...

Thank you
Regards

Agrawal V
 
  • #2
agarwalv said:
...
How would you compute E(sum_{k=1}^N X_k) if N and X_1, X_2, ... are not independent (even weakly dependent)?

Can we use the law of iterated expectations?

Yes. Condition on the value of N, and then take the expectation with respect to N. If X_1, X_2, ... are identically distributed (but not necessarily independent), you still get E(X)E(N).

EDIT #1: if N is not independent of the X's, then [tex]E(\sum_{k=1}^N X_k) = E(N)E(X)[/tex] is not generally true. Instead it would be something like

[tex]E\left(\sum_{k=1}^N X_k\right) = \sum_{n} nP\{N=n\}E(X|N=n)[/tex]

So you would need to know the conditional expectation E(X|N=n).

EDIT #2: And even that may not be quite general enough. It may happen that even though the X's are all identically distributed, E(X_i | N=n) does not equal E(X_j | N=n) for i and j different. So it would be

[tex]E\left(\sum_{k=1}^N X_k\right) = \sum_{n} P\{N=n\} \left( \sum_{k=1}^n E(X_k|N=n) \right)[/tex]
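Here is a quick Monte Carlo sketch of this (my own toy setup, not anything standard): the X_k are i.i.d. Exponential(1), so E(X) = 1, but N is allowed to peek at a future value, N = 5 if X_5 > 1 and N = 2 otherwise. The conditional-expectation formula gives the right answer; E(N)E(X) does not.

[code]
import math
import random

# Monte Carlo check: X_1, X_2, ... i.i.d. Exponential(1), so E(X) = 1.
# N peeks at a *future* value: N = 5 if X_5 > 1, else N = 2, so N is not
# independent of the X's (and is not a stopping time either).

random.seed(0)
TRIALS = 200_000

sum_total = 0.0
n_total = 0
for _ in range(TRIALS):
    xs = [random.expovariate(1.0) for _ in range(5)]
    n = 5 if xs[4] > 1.0 else 2
    sum_total += sum(xs[:n])
    n_total += n

# Exact value from the conditional formula: P{N=5} = e^{-1}, and given N=5,
# E(X_k|N=5) = 1 for k < 5 while E(X_5|N=5) = E(X|X>1) = 2 by memorylessness.
exact = math.exp(-1) * (4 * 1 + 2) + (1 - math.exp(-1)) * 2

print(f"E(sum X_k) estimate : {sum_total / TRIALS:.3f}")
print(f"conditional formula : {exact:.3f}")            # ~3.471, matches
print(f"E(N)E(X)            : {n_total / TRIALS:.3f}")  # ~3.104, does not
[/code]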
 
  • #3
Hi techmologist

Thanks for the reply. In my case, I'm considering N to be a stopping time, and the X_i's form a renewal process, i.e., each X_i is replaced by another X_j having a common distribution function F. So I was thinking more along the lines of renewal processes and stopping times.

I came across Wald's equation, where N depends on the X_i's up to X_{n-1} and is independent of X_n, X_{n+1}, ..., because at X_n the stopping condition is satisfied... which gives a similar expression, E(sum_{k=1}^N X_k) = E(N)E(X). Do you think this will address the issue of dependence between N and the X_i's?

Also, can I take the expectation with respect to N of this term, as per the law of iterated expectations? Please suggest:
[tex]
\sum_{n} nP\{N=n\}E(X|N=n)
[/tex]

Thank you
 
  • #4
Hi Agrawal,

I had to look up Wald's equation, and I think now I see what you are getting at. By the way, the current Wikipedia article on Wald's equation is very confusing. I would give that article time to "stabilize" before I paid any attention to it. Instead of that, I used Sheldon Ross's book Introduction to Probability Models, 8th edition. On pages 462-463, he talks about Wald's equation and stopping times.

So in the case you are talking about, the X_i 's are independent identically distributed random variables for a renewal process. To take an example from Ross's book, X could represent the time between arrivals of customers at a bank. But as you say, the stopping time N may depend on the X_i's. In the above example, the sequence could stop with the first customer to arrive after the bank has been open for an hour. Thus, if the twentieth customer arrived at 0:59:55, and the twenty-first customer arrived at 1:03:47, the stopping "time" would be N=21 and the sum of the waiting times would be 1:03:47.

Note: Ross's definition of stopping time is that the event N=n is independent of X_{n+1}, X_{n+2},..., but generally depends on X_1, ..., X_n. It might be that he is labelling the X_i's differently than you. In his book, X_i is the waiting time between the (i-1)st and the ith event.

I no longer think that conditioning on N is the way to do it, although it may be possible. That is what you meant by using the law of iterated expectations, right? In practice, finding E(X_i | N=n) is very difficult. Ross uses indicator variables to prove Wald's equation:

[tex]I_i=\left\{\begin{array}{ll}1, & \mbox{if } i\leq N\\ 0, & \mbox{if } i>N\end{array}\right.[/tex]

Now note that I_i depends only on X_1, ..., X_{i-1}. You have observed the first i-1 events, and if you have stopped then N<i. If you have not stopped, then N is at least i.

[tex]E\left( \sum_{i=1}^N X_i \right) = E\left(\sum_{i=1}^{\infty}X_iI_i\right) = \sum_{i=1}^{\infty}E(X_iI_i)[/tex]

Since I_i is determined by X_1, ..., X_{i-1} alone, it is independent of X_i, so E(X_iI_i) = E(X_i)E(I_i):

[tex]E\left( \sum_{i=1}^N X_i\right) = \sum_{i=1}^{\infty}E(X_i)E(I_i) = E(X)\sum_{i=1}^{\infty}E(I_i)[/tex]

Now use the fact that: [tex]\sum_{i=1}^{\infty}E(I_i) = E\left( \sum_{i=1}^{\infty}I_i\right) = E(N)[/tex]
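If it helps, here is a small simulation of the bank example above (my own sketch; the mean interarrival time of 3 minutes is an assumed value, not from Ross's book), checking that the two sides of Wald's equation agree:

[code]
import random

# Simulation of the bank example above, as a rough check on Wald's equation.
# Interarrival times X_i ~ Exponential with mean 3 minutes (assumed value);
# N is the index of the first customer to arrive after the 60-minute mark.
# Wald: E(X_1 + ... + X_N) should equal E(N) * E(X).

random.seed(0)
MEAN_X = 3.0
TRIALS = 100_000

sum_total = 0.0
n_total = 0
for _ in range(TRIALS):
    t, n = 0.0, 0
    while t <= 60.0:              # stop with the first arrival past one hour
        t += random.expovariate(1.0 / MEAN_X)
        n += 1
    sum_total += t
    n_total += n

print(f"E(S_N) estimate : {sum_total / TRIALS:.3f}")
print(f"E(N)*E(X)       : {MEAN_X * n_total / TRIALS:.3f}")
# Both come out near 63: by memorylessness the overshoot past 60 averages 3.
[/code]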
 
  • #5
Thank you techmologist...
 
  • #6
You are welcome. I got to learn something out of it, too. Wald's equation helped me solve a problem I had been wondering about for a while. Suppose Peter and Paul bet one dollar on successive flips of a coin until one of them is ahead $5. How many flips, on average, will it take for their game to end? At least I think my approach using Wald's equation will work... it involves taking a limit.
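In case it is useful, here is a quick simulation of that game (my addition, not part of the limit argument). The exact answer also follows from Wald's second equation, E(S_N^2) = Var(X)E(N): the game ends with S_N = ±5 and Var(X) = 1, so E(N) = 25.

[code]
import random

# Simulate Peter and Paul: S_n is Peter's net winnings after n fair $1 flips,
# and the game stops at the first n with |S_n| = 5.

random.seed(0)
TRIALS = 100_000

flips_total = 0
for _ in range(TRIALS):
    s, n = 0, 0
    while abs(s) < 5:
        s += random.choice((-1, 1))
        n += 1
    flips_total += n

print(f"average number of flips: {flips_total / TRIALS:.2f}")  # close to 25
[/code]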
 

Related to Expected value of random sums with dependent variables

What is the expected value of a random sum with dependent variables?

In general it is not simply the product E(N)E(X). If N depends on the X_k, one must condition on N: E(sum_{k=1}^N X_k) = sum_n P{N=n} sum_{k=1}^n E(X_k|N=n). When N is a stopping time and the X_k are i.i.d., this collapses to Wald's equation, E(sum_{k=1}^N X_k) = E(N)E(X).

How do you calculate the expected value of a random sum with dependent variables?

Condition on the number of terms: for each n, find the conditional expectations E(X_k|N=n) for k = 1, ..., n, sum them, weight by P{N=n}, and add over n. When N is a stopping time for i.i.d. X_k, this reduces to computing just E(N) and E(X) and multiplying, by Wald's equation.
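For a toy illustration of this recipe (a made-up example): let X_1 and X_2 be independent fair 0/1 coin flips and let N = 1 if X_2 = 1, else N = 2. Since N looks ahead at X_2, it is not a stopping time, and conditioning on N is required:

[code]
from itertools import product

# Exact computation by enumerating all four equally likely outcomes.
# X_1, X_2 are independent fair 0/1 coins; N peeks ahead: N = 1 if X_2 == 1,
# else N = 2. N is NOT a stopping time, so Wald's equation need not hold.
expected = 0.0
for xs in product((0, 1), repeat=2):   # each outcome has probability 1/4
    n = 1 if xs[1] == 1 else 2
    expected += sum(xs[:n]) / 4

print(expected)      # 0.5   (the conditioning formula gives the same)
print(1.5 * 0.5)     # 0.75  E(N)*E(X), which is wrong here
[/code]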

What is the significance of the expected value in relation to random sums with dependent variables?

The expected value is an important measure in random sums with dependent variables as it represents the average value that can be expected from the sum. It helps in understanding the overall behavior and potential outcomes of the sum.

Can the expected value of a random sum with dependent variables be negative?

Yes, the expected value of a random sum with dependent variables can be negative. This can happen if one or more of the individual variables have negative expected values. However, it is also possible for the expected value to be positive or zero.

How does the dependence between variables affect the expected value of a random sum?

For a fixed number of terms, dependence among the X_k themselves does not change the expected value of the sum, by linearity of expectation. What matters for a random sum is the dependence between N and the X_k: if N tends to be large exactly when the terms are large (positive dependence), the expected sum exceeds E(N)E(X); if N tends to be large when the terms are small, it falls below E(N)E(X). When N is a stopping time, the dependence is of the benign kind covered by Wald's equation.
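A small simulation illustrating both directions (my own toy setup): X_1, ..., X_4 are i.i.d. Uniform(0,1) with E(X) = 0.5, and the fourth term is included exactly when it is large (positive dependence between N and the X's) or exactly when it is small (negative dependence). In both cases E(N) = 3, so E(N)E(X) = 1.5:

[code]
import random

# X_1..X_4 i.i.d. Uniform(0,1), E(X) = 0.5, and E(N) = 3 in both schemes,
# so an "independent" N would give E(sum) = E(N)*E(X) = 1.5.

random.seed(0)
TRIALS = 200_000

pos = neg = 0.0
for _ in range(TRIALS):
    xs = [random.random() for _ in range(4)]
    pos += sum(xs[:4] if xs[3] > 0.5 else xs[:2])  # N large when X_4 large
    neg += sum(xs[:2] if xs[3] > 0.5 else xs[:4])  # N large when X_4 small

print(f"positive dependence: {pos / TRIALS:.3f}")  # ~1.625, above 1.5
print(f"negative dependence: {neg / TRIALS:.3f}")  # ~1.375, below 1.5
[/code]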
