kidsasd987
"Let X be a Bernoulli random variable. That is, P(X = 1) = p and P(X = 0) = 1 − p. Then E(X) = 1 × p + 0 × (1 − p) = p. Why does this definition make sense? By the law of large numbers, in n independent Bernoulli trials where n is very large, the fraction of 1’s is very close to p, and the fraction of 0’s is very close to 1 − p. So, the average of the outcomes of n independent Bernoulli trials is very close to 1 × p + 0 × (1 − p)."
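The law-of-large-numbers claim in the quote can be checked with a quick simulation (a sketch in Python; the values of `p` and `n` here are arbitrary choices for illustration):

```python
import random

random.seed(0)

p = 0.3          # success probability (arbitrary choice)
n = 100_000      # number of independent Bernoulli trials

# Each trial is 1 with probability p, and 0 with probability 1 - p.
outcomes = [1 if random.random() < p else 0 for _ in range(n)]

# The sample average is (number of 1's) / n, i.e. the fraction of successes,
# which the LLN says should be close to p for large n.
average = sum(outcomes) / n

print(average)                # close to p
print(1 * p + 0 * (1 - p))   # the expectation E(X) = p
```

Running this, the sample average lands very close to `p`, matching the formula `E(X) = 1 × p + 0 × (1 − p)`.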
I don't understand why this gives an average of 1 × p + 0 × (1 − p).

We are given n independent trials in total. Say we get k successes and n − k failures. Then I would think the success contribution is 1 × p × k and the failure contribution is 0 × (1 − p) × (n − k). Averaging over the n trials, that gives pk/n.

How do we end up with 1 × p + 0 × (1 − p) as the average?