  1. caffeinemachine (MHB Journeyman, MHB Math Scholar)
    #1
    Let $X=X_0, X_1, X_2, \ldots$ be a Markov chain on a finite state space $S$, and let $P$ denote the transition matrix.
    Assume that there is an $\varepsilon>0$ such that whenever $\mu_0$ and $\nu_0$ are point distributions on $S$ (in other words, $\mu_0$ and $\nu_0$ are Dirac masses) we have
    $$\|\mu_0P-\nu_0P\|_{TV}\leq \varepsilon$$

    Now let $Y=Y_0, Y_1, Y_2, \ldots$ be an independent copy of $X$.

    Question. What can we say about the magnitude of $P[X_1\neq Y_1|X_0=x_0, Y_0=y_0]$, where $x_0$ and $y_0$ are two different states in $S$?

    I intuitively think that we should be able to bound the above quantity by $\varepsilon$, up to multiplication by an absolute constant, but I am unable to prove it.

    We have that
    $$P[X_1\neq Y_1|X_0=x_0, Y_0=y_0] = \sum_{x\in S}\sum_{y\in S, y\neq x}P[X_1=x, Y_1=y|X_0=x_0, Y_0=y_0]$$
    which is equal to
    $$\sum_{x\in S} p(x_0, x)(1-p(y_0, x))$$
    but I am unable to make progress.
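    For concreteness, here is a minimal numerical sketch of the two quantities involved; the 4-state transition matrix is made up purely for illustration and is not part of the question. It computes the worst-case one-step total variation distance $\varepsilon$ over point masses and the mismatch probability $\sum_{x\in S} p(x_0, x)(1-p(y_0, x))$ above.

[CODE]
# Illustration only: the 4-state transition matrix below is made up.
# Computes
#   eps = max over point masses mu_0, nu_0 of ||mu_0 P - nu_0 P||_TV
# and the mismatch probability
#   P[X_1 != Y_1 | X_0 = x_0, Y_0 = y_0] = sum_x p(x_0, x) * (1 - p(y_0, x)).
import numpy as np

P = np.array([
    [0.10, 0.40, 0.30, 0.20],
    [0.25, 0.25, 0.25, 0.25],
    [0.05, 0.45, 0.30, 0.20],
    [0.15, 0.35, 0.30, 0.20],
])
n = P.shape[0]

def tv(p, q):
    """Total variation distance: one half of the L1 norm of the difference."""
    return 0.5 * np.abs(p - q).sum()

# A point mass at state i pushed one step is exactly row i of P,
# so eps is the worst-case TV distance between any two rows of P.
eps = max(tv(P[i], P[j]) for i in range(n) for j in range(n))

def mismatch_prob(x0, y0):
    """P[X_1 != Y_1 | X_0 = x0, Y_0 = y0] for independent copies X and Y."""
    return sum(P[x0, x] * (1.0 - P[y0, x]) for x in range(n))

print("epsilon                  =", eps)
print("P[X1 != Y1 | X0=0, Y0=2] =", mismatch_prob(0, 2))
[/CODE]

    For this particular matrix the script prints $\varepsilon = 0.25$ and a mismatch probability of $0.685$ for starting states $0$ and $2$.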

  2. steep (MHB Apprentice)
    #2
    A standard exercise/result for countable-state random variables (with emphasis on Markov chains) is to show the following, where for notational convenience it is understood in your particular case that the starting states $X_0 = x_0$ and $Y_0 = y_0$ are fixed:

    $P[X_1\neq Y_1|X_0=x_0, Y_0=y_0] = P[X_1\neq Y_1] \geq d_{TV}(X_1,Y_1)= \|\mu_0P-\nu_0P\|_{TV}$
    with $\mu_0 := \mathbf e_{x_0}$ and $\nu_0 := \mathbf e_{y_0}$,

    and the inequality is attained with equality in the case of a maximal coupling. This is somewhat trivial here but useful for other couplings, e.g. if $\nu_0$ were the stationary distribution.

    (nit: there are a couple of different formulations of total variation distance; I am using the one-half-$L^1$-norm of the difference between probability vectors, for countable-state r.v.s)

    The inequality you're trying to prove 'points in the wrong way', so to speak. More context / background information is needed to make sense of what you're trying to do. E.g. if you don't know what a coupling is, that is a big deal and my post won't make any sense.
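    To make the coupling point concrete, here is a minimal numerical sketch; the two probability vectors $p$ and $q$ are made up (think of them as the one-step laws $\mu_0 P$ and $\nu_0 P$). Every coupling satisfies $P[X\neq Y]\geq d_{TV}(p,q)$; the independent coupling used in the original post typically overshoots, while a maximal coupling attains equality.

[CODE]
# Sketch of the coupling bound (p and q are hypothetical; think of them
# as the one-step laws mu_0 P and nu_0 P).
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # law of X_1 given X_0 = x_0 (made up)
q = np.array([0.2, 0.3, 0.5])   # law of Y_1 given Y_0 = y_0 (made up)

d_tv = 0.5 * np.abs(p - q).sum()

# Independent coupling (as in the original post): P[X != Y] = 1 - sum_x p(x) q(x)
mismatch_independent = 1.0 - np.dot(p, q)

# Maximal coupling: match X and Y on the overlap min(p, q), so that
# P[X != Y] = 1 - sum_x min(p(x), q(x)) = d_TV(p, q).
mismatch_maximal = 1.0 - np.minimum(p, q).sum()

print("d_TV(p, q)           =", d_tv)                   # 0.30
print("independent coupling =", mismatch_independent)   # 0.71
print("maximal coupling     =", mismatch_maximal)       # 0.30, equals d_TV
[/CODE]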

  3. caffeinemachine (MHB Journeyman, MHB Math Scholar)
    #3 Thread Author
    Quote Originally Posted by steep View Post
    The inequality you're trying to prove 'points in the wrong way', so to speak. [...]
    I realize now that the statement I want to prove does not hold for the simple random walk on a large complete graph $K_n$: there, any two distinct point distributions pushed one step are within $\varepsilon=\frac{1}{n-1}$ of each other in total variation, yet $P[X_1\neq Y_1|X_0=x_0, Y_0=y_0]=1-\frac{1}{n-1}+\frac{1}{(n-1)^2}$, which is close to $1$.
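    A quick numerical check of this counterexample (the choice $n=100$ is arbitrary):

[CODE]
# Simple random walk on the complete graph K_n (n = 100 is arbitrary):
# epsilon = 1/(n-1) is tiny, yet P[X_1 != Y_1 | X_0 = x_0, Y_0 = y_0] is
# close to 1, so a bound of the form "mismatch <= C * epsilon" cannot hold.
import numpy as np

n = 100
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)   # uniform on the other n-1 vertices

def tv(p, q):
    return 0.5 * np.abs(p - q).sum()

x0, y0 = 0, 1
eps = tv(P[x0], P[y0])                                        # 1/(n-1) ~ 0.0101
mismatch = sum(P[x0, x] * (1 - P[y0, x]) for x in range(n))   # ~ 0.9900

print("epsilon  =", eps)
print("mismatch =", mismatch)
[/CODE]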

