bobby2k
This question is only about formal definitions, i.e. about how we define things. It is probably very easy even though it is kind of long.
Most of the time when I have seen a random variable, it is used as follows. We have a probability space (Ω,F,P) and a measurable space (E,ε), and the random variable X is a measurable function
X: Ω→E. Now, Ω usually does not contain any information about previous values of X. For example, if we flip a coin and say that you win 10 USD if we get tails, the probability space is
({heads,tails},
{{heads},{tails},{heads,tails},∅},
{({heads},0.5),({tails},0.5),({heads,tails},1),(∅,0)})
And the measurable space where X can take values:
({0,10},{{0},{10},{0,10},∅})
And X = {({heads},0),({tails},10)}.
Now the point here is that the sample space Ω contains no information about X.
But in Markov chains, is this not the case? I have not seen how they define it in any book or on Wikipedia.
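The coin-flip setup above can be sketched in code (a minimal sketch of my own; the encoding of outcomes as strings and payouts as integers is just for illustration):

```python
import random

# Probability space: Omega = {"heads", "tails"}, each outcome with P = 0.5.
# Measurable space: E = {0, 10}.
# The random variable X is a measurable function Omega -> E.
def X(omega):
    return {"heads": 0, "tails": 10}[omega]

# Draw one outcome omega from Omega, then apply X to it.
omega = random.choice(["heads", "tails"])
payout = X(omega)
```

Note that X itself is deterministic; all the randomness lives in how omega is drawn from Ω, and Ω says nothing about any "previous" values of X.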
Wikipedia defines a stochastic process by assuming we have a probability space
(Ω,F,P) and a measurable space (E,ε), where E is called the state space. The stochastic process is then a set of random variables T = {X_t, t [itex]\in[/itex] I}, where I is an index set.
Now I can finally explain my problem. In a Markov chain, the probability of the next step depends on which state you are in. How does this fit with the formal definition? It confuses me that we only have one common probability measure for all the states. Is it solved by letting the sample space Ω contain a lot more information? So in this case, must the sample space Ω contain values from the measurable state space E?
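One way to picture the idea in the last paragraph (this is my own sketch; the two state names and the transition probabilities are made up): take Ω to be the set of whole paths of states, so a single outcome ω is an entire trajectory, and each X_t is just the coordinate projection ω ↦ ω[t].

```python
import random

# Hypothetical two-state chain; state names and transition matrix are illustrative.
states = ["A", "B"]
P_trans = {"A": {"A": 0.9, "B": 0.1},
           "B": {"A": 0.5, "B": 0.5}}

def sample_path(n, start="A"):
    """Draw one outcome omega from Omega = E^(n+1): a whole path of states."""
    path = [start]
    for _ in range(n):
        cur = path[-1]
        # The next state is drawn according to the row of the transition matrix
        # for the current state -- this is where the Markov dependence lives.
        nxt = "A" if random.random() < P_trans[cur]["A"] else "B"
        path.append(nxt)
    return tuple(path)

# The random variable X_t is the coordinate projection omega -> omega[t].
def X(t, omega):
    return omega[t]

omega = sample_path(5)        # one omega, i.e. one whole trajectory
values = [X(t, omega) for t in range(6)]
```

On this picture there is still only one probability measure P, defined on paths; the state-dependent transition probabilities are baked into how P weights the different paths, not into the X_t themselves.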