Dustinsfl said:
If a Markov transition matrix's columns add up to 1, the matrix is a normal Markov transition matrix. Since it is normal, the Markov process must converge to a steady-state vector.
Is this correct?
hgfalling said:Well, no. For example, consider the following Markov transition matrix:
[tex]
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
[/tex]
The result of this process is that at each step the state vector is flipped, i.e., (1, 0) becomes (0, 1), and so on. So the Markov process itself doesn't converge to a steady-state vector, although its long-term time average does converge. However, if the Markov transition matrix is regular (that is, some power of the matrix has all entries strictly positive, not merely non-negative), then the Markov process will converge.
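A quick numerical sketch of this contrast (my own example matrices, not from the thread): iterating the flip matrix oscillates forever, while a matrix with all entries positive settles down.

```python
# Sketch: contrast the oscillating "flip" matrix with a regular one.
# Matrices are column-stochastic, matching the thread's convention.

def step(A, x):
    """One Markov step, x_{n+1} = A x_n, for a 2x2 matrix A."""
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def iterate(A, x, n):
    for _ in range(n):
        x = step(A, x)
    return x

flip = ((0.0, 1.0), (1.0, 0.0))     # permutes the two states each step
regular = ((0.9, 0.2), (0.1, 0.8))  # all entries positive, hence regular

x0 = (1.0, 0.0)
print(iterate(flip, x0, 10))      # (1.0, 0.0): even number of flips
print(iterate(flip, x0, 11))      # (0.0, 1.0): odd number -- no convergence
print(iterate(regular, x0, 200))  # approaches the steady state (2/3, 1/3)
```

The steady state of the regular matrix solves A v = v with entries summing to 1, which here gives (2/3, 1/3).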
Dustinsfl said:The steady-state vector is the "long-term" vector, since to obtain the steady-state vector we take the limit as [itex]n\rightarrow\infty[/itex] of
[tex]\mathbf{x}_n=A^n\mathbf{x}_0=XD^nX^{-1}\mathbf{x}_0[/tex]
where [itex]\mathbf{x}_0[/itex] is the initial state vector
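The diagonalization route can be sketched numerically. The example below uses a matrix and eigendecomposition of my own choosing (worked out by hand, not from the thread): for A = [[0.9, 0.2], [0.1, 0.8]], the eigenvalues are 1 and 0.7 with eigenvectors (2, 1) and (1, -1), so the 0.7^n term vanishes in the limit.

```python
# Sketch: evaluate x_n = X D^n X^{-1} x_0 for a regular column-stochastic A.
# A = [[0.9, 0.2], [0.1, 0.8]]; eigenvalues 1 and 0.7, eigenvectors (2, 1)
# and (1, -1), computed by hand.

X = ((2.0, 1.0), (1.0, -1.0))       # eigenvectors as columns
X_inv = ((1/3, 1/3), (1/3, -2/3))   # inverse of X
eigvals = (1.0, 0.7)

def matvec(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def state_at(n, x0):
    """x_n = X D^n X^{-1} x_0; D^n just raises each eigenvalue to the n."""
    y = matvec(X_inv, x0)
    y = (eigvals[0] ** n * y[0], eigvals[1] ** n * y[1])
    return matvec(X, y)

x0 = (1.0, 0.0)
print(state_at(1, x0))    # one step: (0.9, 0.1)
print(state_at(100, x0))  # the 0.7^n term dies off: ~(2/3, 1/3)
```

As n grows, D^n tends to diag(1, 0), so the limit depends only on the eigenvalue-1 eigenvector, which is exactly the steady-state vector.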
hgfalling said:Yes, but the Markov process I described does not converge to that vector. Only the time-average does.
A "normal" Markov transition matrix, as used in this thread, is a square matrix that represents the probabilities of transitioning from one state to another in a Markov chain. The sum of each column in the matrix adds up to 1, making it a (column-)stochastic matrix.
As the number of iterations in a Markov chain with a regular transition matrix increases, the probability vector (initially set to represent the starting state) converges to a steady-state vector. This steady-state vector gives the long-term probabilities of being in each state of the chain.
A Markov chain with an irreducible transition matrix is one in which every state is reachable from every other state. Irreducibility together with aperiodicity ensures that the chain converges to a steady-state vector; the flip matrix above is irreducible but periodic, which is why it oscillates instead of converging.
A regular Markov transition matrix has exactly one steady-state vector. This vector gives the long-term probabilities of being in each state, and it is unique for each such transition matrix.
The steady-state vector can be calculated by finding the eigenvector associated with the eigenvalue 1 of the transition matrix, for example via power iteration or eigendecomposition.
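Power iteration is simple to sketch: repeatedly apply the transition matrix to a probability vector until it stops changing. The matrix, tolerance, and iteration cap below are my own illustrative choices.

```python
# Sketch: power iteration for the steady-state vector (the eigenvalue-1
# eigenvector) of a column-stochastic matrix.

def power_iteration(A, tol=1e-12, max_iter=10_000):
    n = len(A)
    x = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        nxt = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt
    return x

A = [[0.9, 0.2], [0.1, 0.8]]
print(power_iteration(A))  # ~[0.6667, 0.3333]
```

Because A is stochastic, applying it preserves the total probability, so no renormalization step is needed here; for a general matrix, power iteration would rescale the vector at each step.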