16universes
Consider two independent random variables A and B. Is A/B independent of B?
My real problem stems from a debate between a colleague and me about using the arithmetic mean vs. the geometric mean. My colleague used a dice-rolling example to discredit the arithmetic mean and support the geometric mean. I think the example was flawed. It went something like this:
"Suppose you have two 6-sided dice, A and B. Consider the ratio A/B over all possible throws. Since the dice are independent, each combination of A and B has the same probability (1/36). If you take the arithmetic mean of the ratios you get approximately 1.43, which would mean that, on average, A is larger than B. But both dice have the same distribution, so we expect this ratio to be 1 on average. If we take the geometric mean of the ratios A/B, we get the right result, so the geometric mean is the appropriate mean to use."
This sounds great, but it is flawed. Intuitively, I would think you should take the ratio of the averages of A and B, i.e., ##\frac{\overline A}{\overline B}##. Doing this gives the expected ratio of 1. But is this right?
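Both of those claims can be checked by brute force. A short Python sketch of my own (not the colleague's calculation), using exact fractions so nothing is lost to rounding:

```python
import math
from fractions import Fraction

faces = range(1, 7)
ratios = [Fraction(a, b) for a in faces for b in faces]

# Geometric mean of the 36 equally likely ratios: every a/b pairs off
# against b/a, so the product is exactly 1 and so is the geometric mean.
gm = float(math.prod(ratios)) ** (1 / 36)

# Ratio of the arithmetic means: E(A)/E(B) = 3.5 / 3.5 = 1.
ratio_of_means = (sum(faces) / 6) / (sum(faces) / 6)

print(gm, ratio_of_means)  # 1.0 1.0
```

So the geometric mean of the ratios and the ratio of the means both come out to exactly 1; the disagreement is entirely about the arithmetic mean of the ratios.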
I know that the "correct" way is to look at expectation values. For a discrete random variable X, the expectation value is given by:
$$ E(X) = \sum \limits_{i=1} ^n \left(x_i \cdot P(x_i)\right) $$
where the ##x_i## are the possible values of X, each with probability ##P(x_i)##. For a single 6-sided die we get:
$$E(A) = \sum \limits_{i=1}^6 a_i \cdot P(a_i) = \frac{1}{6} \sum \limits_{i=1} ^6 a_i = 3.5$$
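As a quick sanity check (my own sketch, assuming a fair die):

```python
from fractions import Fraction

# Each face a = 1..6 has probability 1/6, so E(A) = (1/6) * 21 = 7/2
E_A = sum(Fraction(a, 6) for a in range(1, 7))
print(E_A)  # 7/2
```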
Let's apply this to A/B. Let R = A/B:
$$E(R) = \sum \limits_{i=1} ^n r_i \cdot P(r_i)$$
If you carry out this calculation over all 36 equally likely outcomes, you get ##E(R) \approx 1.43## (exactly 343/240). Meaning that, on average, A will be about 43% greater than B. This is supposed to be the "correct" way. But how can this be? A and B were chosen arbitrarily, and each has the same distribution.
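Enumerating all 36 equally likely (a, b) pairs gives the exact value (a minimal Python check of my own, not the original spreadsheet):

```python
from fractions import Fraction

faces = range(1, 7)
# Arithmetic mean of A/B over all 36 equally likely (a, b) pairs
E_R = sum(Fraction(a, b) for a in faces for b in faces) / 36
print(E_R, float(E_R))  # 343/240 ≈ 1.4292
```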
I know that in general:
$$E(A/B) \neq \frac{E(A)}{E(B)}$$
I feel like the only way for this dice example to make sense is if this inequality holds. In fact, equality would hold if B were independent of A/B: then ##E(A) = E\left(\frac{A}{B} \cdot B\right) = E\left(\frac{A}{B}\right) \cdot E(B)##, which gives ##E(A/B) = E(A)/E(B)##. How would one prove that B and A/B are dependent?
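One direct way to exhibit the dependence: if R = A/B were independent of B, the conditional expectation E(R | B = b) could not depend on b. Enumerating (my own sketch):

```python
from fractions import Fraction

faces = range(1, 7)
# Conditional expectation of R = A/B given B = b: average a/b over a = 1..6.
# It equals E(A)/b = 3.5/b, so it varies with b -- hence R is NOT independent of B.
cond = {b: sum(Fraction(a, b) for a in faces) / 6 for b in faces}
for b, e in cond.items():
    print(f"E(A/B | B={b}) = {e}")
```

The conditional mean runs from 7/2 down to 7/12 as b goes from 1 to 6, so knowing B tells you a great deal about A/B.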
In terms of two independent dice throws A and B, what is the meaning of E(A)/E(B), and how does it differ from that of E(A/B)? Also, for a single die, what is the meaning of E(1/A)? I feel that my result for E(A/B) above has a totally different meaning than the one I am giving it. This might be the source of the confusion.
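For the one-die question, E(1/A) is the average of the reciprocal face values, not the reciprocal of the average face value, and since 1/x is convex, Jensen's inequality forces E(1/A) > 1/E(A). A quick check (my own sketch):

```python
from fractions import Fraction

faces = range(1, 7)
E_inv_A = sum(Fraction(1, 6 * a) for a in faces)   # average of the reciprocals
inv_E_A = 1 / sum(Fraction(a, 6) for a in faces)   # reciprocal of the average
print(E_inv_A, float(E_inv_A))   # 49/120 ≈ 0.4083
print(inv_E_A, float(inv_E_A))   # 2/7 ≈ 0.2857
```

This gap (0.4083 vs. 0.2857) is exactly what pushes E(A/B) above 1 even though the dice are identical.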
Also, if A and B are independent, then this SHOULD be true:
$$E(A/B) = E(A) \cdot E(1/B)$$
Running the numbers exactly, I find:
E(A/B) = 343/240 ≈ 1.43
E(A) · E(1/B) = 343/240 ≈ 1.43
So the product identity checks out, yet both differ from E(A)/E(B) = 1.
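Exact rational arithmetic (avoiding any spreadsheet rounding) confirms the product identity for fair, independent dice; again this is my own check, not the original Excel sheet:

```python
from fractions import Fraction

faces = range(1, 7)
E_A      = sum(Fraction(a, 6) for a in faces)                      # 7/2
E_inv_B  = sum(Fraction(1, 6 * b) for b in faces)                  # 49/120
E_AoverB = sum(Fraction(a, b) for a in faces for b in faces) / 36  # 343/240

# Independence gives E(A/B) = E(A) * E(1/B) exactly
print(E_AoverB == E_A * E_inv_B)  # True
```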
Can anyone shed some light on this problem? It's driving me crazy! Thanks in advance.