Normal Markov Transition Matrix: Converging to Steady-State Vector

In summary: it is not always the case that a Markov transition matrix whose columns add up to 1 is a regular Markov transition matrix, even for a 2-by-2 matrix. If the matrix is regular, however, the Markov process converges to a steady-state vector. The steady-state vector is the "long-term" vector obtained by taking the limit as n approaches infinity of the expression XD^nX^{-1}\mathbf{x}_0, where X is the matrix of eigenvectors and D is the diagonal matrix of eigenvalues. This limit does not always exist, as the example of a stoplight that alternates between red and green shows.
  • #1
Dustinsfl
If a Markov transition matrix's columns add up to 1, the matrix is a normal Markov transition matrix. Since it is normal, the Markov process must converge to a steady-state vector.

Is this correct?
 
  • #2
Well, no. For example, consider the following Markov transition matrix:

[tex]
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
[/tex]

The result of this process is that at each step the result vector is flipped, ie (1 0) becomes (0 1), etc. So the Markov process itself doesn't converge to a steady state vector. However, the long-term time average does converge. However, if the Markov transition matrix is regular (that is, all the entries are positive instead of just being non-negative), then the Markov process will converge.
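The contrast hgfalling describes is easy to check numerically. Below is a minimal NumPy sketch (not part of the thread; the regular matrix B is a made-up illustration) showing that the swap matrix oscillates forever while a regular matrix converges:

```python
import numpy as np

# Column-stochastic swap matrix from the example: it flips the two states.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x = np.array([1.0, 0.0])

# Iterating just flips the vector, so A^n x never settles down.
for _ in range(5):
    x = A @ x
# after an odd number of steps, x is (0, 1); after an even number, (1, 0)

# A regular matrix (all entries strictly positive) does converge.
B = np.array([[0.9, 0.2],
              [0.1, 0.8]])
y = np.array([1.0, 0.0])
for _ in range(200):
    y = B @ y
# y approaches the steady-state vector (2/3, 1/3)
```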
 
  • #3
hgfalling said:
Well, no. For example, consider the following Markov transition matrix:

[tex]
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
[/tex]

The result of this process is that at each step the result vector is flipped, ie (1 0) becomes (0 1), etc. So the Markov process itself doesn't converge to a steady state vector. However, the long-term time average does converge. However, if the Markov transition matrix is regular (that is, all the entries are positive instead of just being non-negative), then the Markov process will converge.

The steady-state vector is the "long-term" vector, since to obtain it we take the limit as [itex]n\rightarrow\infty[/itex] of

[tex]A^n\mathbf{x}_0=XD^nX^{-1}\mathbf{x}_0[/tex]

where [itex]\mathbf{x}_0[/itex] is the initial state vector
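The diagonalization identity above can be verified numerically. A minimal sketch (the matrix and initial vector below are made-up illustrations, not from the thread):

```python
import numpy as np

# For diagonalizable A = X D X^{-1}, we have A^n x0 = X D^n X^{-1} x0.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x0 = np.array([0.5, 0.5])

evals, X = np.linalg.eig(A)          # eigenvalues and eigenvector matrix
n = 50
via_eig = X @ np.diag(evals**n) @ np.linalg.inv(X) @ x0
via_power = np.linalg.matrix_power(A, n) @ x0
# the two computations agree, and both are close to the
# steady-state vector (2/3, 1/3) because A is regular
```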
 
  • #4
Dustinsfl said:
The steady-state vector is the "long-term" vector, since to obtain it we take the limit as [itex]n\rightarrow\infty[/itex] of

[tex]A^n\mathbf{x}_0=XD^nX^{-1}\mathbf{x}_0[/tex]

where [itex]\mathbf{x}_0[/itex] is the initial state vector

But this limit does not always exist.

Consider the matrix in my previous post, with x0 = (1/3,2/3). Now if n is even, the expression
[tex]XD^nX^{-1}\mathbf{x}_0 = (1/3,\,2/3)[/tex]
while if n is odd,
[tex]XD^nX^{-1}\mathbf{x}_0 = (2/3,\,1/3)[/tex]

so the limit doesn't exist.

Putting this in more concrete terms, suppose there is a stoplight that can be either red or green. Every minute, it changes color. What is the steady-state vector? It's not a steady state -- at any particular point in the future it will be either deterministically red or green. However, the long-term time average:

[tex] \lim_{N\rightarrow\infty} \frac{1}{N} \sum_{n=1}^N A^n \mathbf{x}_0 [/tex]

will converge to (1/2,1/2).
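The time average above can be checked directly; here is a short NumPy sketch (not part of the thread) computing the Cesàro average for the stoplight example:

```python
import numpy as np

# Swap matrix and the initial vector from the stoplight example.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x0 = np.array([1/3, 2/3])

# Time (Cesaro) average of A^n x0 for n = 1..N.
N = 1000
total = np.zeros(2)
x = x0.copy()
for _ in range(N):
    x = A @ x
    total += x
avg = total / N
# A^n x0 oscillates between (2/3, 1/3) and (1/3, 2/3),
# but the average settles at (1/2, 1/2)
```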
 
  • #5
The probability vector

[tex]\begin{bmatrix}
\frac{1}{2}\\
\frac{1}{2}
\end{bmatrix}[/tex] is a steady state vector.
 
  • #6
Yes, but the Markov process I described does not converge to that vector. Only the time-average does.
 
  • #7
hgfalling said:
Yes, but the Markov process I described does not converge to that vector. Only the time-average does.

We are dealing with a stochastic process, so none of the [itex]a_{ij}[/itex] can be negative. We don't have a stochastic matrix in your example.
 
  • #8
Dustinsfl said:
If a Markov transition matrix's columns add up to 1, the matrix is a normal Markov transition matrix. Since it is normal, the Markov process must converge to a steady-state vector.

Is this correct?

Also, I wrote that the columns add up to 1, not -1, in the transition matrix.
 
  • #9
[tex]
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
[/tex]

has columns that add up to 1, and the entries are non-negative.
 
  • #10
"In general, if A is an n×n stochastic matrix, then [tex]\lambda_1=1[/tex] is an eigenvalue of A and the remaining eigenvalues satisfy [tex]|\lambda_j|<|\lambda_1|[/tex]" (Leon, 2010, pp. 313-314).

Leon, S. (2010). Linear algebra with applications. Upper Saddle River, NJ: Pearson.
 
  • #11
[tex]1\nless 1[/tex]
 
  • #12
I don't have a copy of that book, but unless there are additional qualifiers, that statement just isn't right. *REGULAR* stochastic matrices satisfy that condition, but the inequality has to be weakened to "less than or equal to" to apply to all stochastic matrices in general.

For example:
http://en.wikipedia.org/wiki/Perron–Frobenius_theorem

Stochastic matrices

A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. Strictly speaking the theorem cannot be applied directly to such matrices because they need not be irreducible. If A is row-stochastic then the column vector with each entry 1 is clearly an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. However it may not be the only eigenvalue on the unit circle and the associated eigenspace can be multi-dimensional.
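The last point is exactly what the swap matrix illustrates. A quick numerical check (not part of the thread) shows both of its eigenvalues lie on the unit circle, so the strict inequality quoted earlier fails:

```python
import numpy as np

# The swap matrix is stochastic but not regular; its eigenvalues
# are 1 and -1, both of modulus 1, so |lambda_j| < |lambda_1| fails.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
evals = np.linalg.eigvals(A)
moduli = sorted(np.abs(evals))
# moduli is [1.0, 1.0]
```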
 

Related to Normal Markov Transition Matrix: Converging to Steady-State Vector

1. What is a Normal Markov Transition Matrix?

A Markov transition matrix is a square matrix that represents the probabilities of transitioning from one state to another in a Markov chain. It is a stochastic matrix: in the column convention used in this thread, each column consists of non-negative entries that sum to 1.

2. How does a Normal Markov Transition Matrix converge to a steady-state vector?

If the transition matrix is regular, then as the number of iterations in the Markov chain increases, the probability vector (initially set to represent the starting state) converges to a steady-state vector. This steady-state vector represents the long-term probabilities of being in each state in the Markov chain.

3. What does it mean for a Normal Markov Transition Matrix to be irreducible?

A Markov chain with an irreducible transition matrix is one in which every state is reachable from every other state. Irreducibility guarantees a unique steady-state vector, but it does not by itself guarantee convergence: a periodic chain, such as the swap-matrix example in this thread, oscillates forever.

4. Can a Normal Markov Transition Matrix have more than one steady-state vector?

If the transition matrix is regular (or, more generally, irreducible), the steady-state vector is unique. A general stochastic matrix, however, can have more than one: the eigenspace for the eigenvalue 1 may be multi-dimensional, as the Perron–Frobenius discussion above notes.

5. How is the steady-state vector calculated from a Normal Markov Transition Matrix?

The steady-state vector can be calculated by finding the eigenvector associated with the eigenvalue of 1 for the transition matrix. This can be done using methods such as power iteration or eigendecomposition.
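The eigendecomposition approach can be sketched as follows (the matrix below is a made-up illustration, not from the thread):

```python
import numpy as np

# Example column-stochastic matrix; find the eigenvector for eigenvalue 1
# and normalize it so its entries sum to 1 (a probability vector).
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
evals, evecs = np.linalg.eig(A)
k = np.argmin(np.abs(evals - 1.0))   # index of the eigenvalue closest to 1
v = np.real(evecs[:, k])
steady = v / v.sum()                 # normalization also fixes the sign
# steady is approximately (2/3, 1/3)
```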
