Probability of Finding a Random Walker in D Dimensions After N Steps

In summary: this problem considers a random walker in D dimensions, where D is a natural number. The walker can step only to neighboring sites, with equal probability 1/(2D) for each adjacent site. The question asks for the probability of finding the walker at a particular site \vec r after N steps, starting from the origin. Two approaches are requested: describing the walk as a Markov chain in D dimensions, and using the distribution of the total displacement \vec r = \vec r_1 + ... + \vec r_N after N independent steps. The discussion works through several flawed attempts; the key one-dimensional ingredient, the number of n-step paths from 0 to x, is eventually identified as C(n, (n+x)/2), following Feller.
  • #1
fluidistic

Homework Statement


Hi guys, I'm absolutely desperate on the following problem:
Consider a random walker who can make steps only to neighbor sites in D dimensions, where D is an arbitrary natural number. Assume that the distance between two adjacent sites is the same for all sites and that the probability for the walker to choose any given adjacent site is the same. Let [itex]\vec r[/itex] be a particular site and N the number of steps the walker has taken.
What is the probability to find the random walker at [itex]\vec r[/itex] after N steps if he starts initially at the origin?
Answer to this question using:
1)the assumption that the walk can be described as a Markov chain in D dimensions.
2)the distribution function of the displacement of the walker after N independent steps, [itex]\vec r = \vec r _1 + ... + \vec r _N[/itex].

Homework Equations


Probably a lot! Not really sure.

The Attempt at a Solution


I tried part 1) so far but did not go far at all.
I know that the probability to choose any adjacent site is [itex]\frac{1}{2D}[/itex] where D is the dimension of the space the walker walks in.
So they are asking me [itex]P_N(\vec r)[/itex]. I'll use the notation [itex]\vec r=r_1 \hat x_1 +... + r_D \hat x_D[/itex]; that's the position of the walker after N steps.
I have the initial condition that [itex]P_0(\vec 0)=1[/itex] (and [itex]P_0(\vec r)=0[/itex] for [itex]\vec r \neq \vec 0[/itex]). Since I'm given an initial condition on [itex]P_N(\vec r )[/itex] and I'm asked to find [itex]P_N (\vec r )[/itex], I can "smell" that I'll have to solve a difference equation or something like that, but I've no idea how to find it.
Now I know that for a Markovian process, the probability [itex]P _N(\vec r)[/itex] does not depend on the walker's past but only on its present, namely only on the "state" [itex]P_ {N-1} (\vec l )[/itex], where [itex]\vec l[/itex] is the site of the walker just before [itex]\vec r[/itex].
If I'm not wrong, then, I guess [itex]P_N (\vec r)=\frac{1}{2D} P_{N-1} (\vec l )[/itex].
But then if I take [itex]P_{N-1} (\vec l )[/itex] it will depend only on the previous state. In the end I get the wrong result that [itex]P_N ( \vec r ) = \left ( \frac{1}{2D} \right ) ^{N} P_0 (0)=\left ( \frac{1}{2D} \right ) ^{N}[/itex].
I know this result is wrong but I don't know what I'm doing wrong. Any help will be appreciated.
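Before trusting any closed-form attempt, the distribution can be estimated numerically. A minimal Monte Carlo sketch (Python; the function name and parameters are mine, assuming unit steps on the integer lattice):

```python
import random
from collections import Counter

def walk_endpoints(D, N, trials, seed=0):
    """Simulate `trials` N-step walks on the D-dimensional integer
    lattice (each step picks one of the 2D neighbours uniformly) and
    return a Counter of endpoint frequencies."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        pos = [0] * D
        for _ in range(N):
            axis = rng.randrange(D)           # pick a dimension
            pos[axis] += rng.choice((-1, 1))  # step one site along it
        counts[tuple(pos)] += 1
    return counts

# D=1, N=2: the exact probabilities are 1/4, 1/2, 1/4 for x = -2, 0, +2.
trials = 20000
c = walk_endpoints(D=1, N=2, trials=trials)
p0 = c[(0,)] / trials
```

For D=1, N=2 the estimate p0 should land near 1/2, and sites of the wrong parity (e.g. x=1) are never reached; any candidate formula for [itex]P_N(\vec r)[/itex] can be checked against such frequencies.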
 
  • #2
Your method didn't work because it only considered one possible prior state. There are 2D to sum over.
You could try assessing the number of ways N can be partitioned into the D dimensions, then how many ways each of those partitions can lead to the target coordinates. Or (almost the same?) get a recurrence relation based on number of steps available, number of dimensions, and remaining vector for those dimensions. I.e. knock off one dimension at a time.
 
  • #3
First of all, thanks for the reply.
haruspex said:
Your method didn't work because it only considered one possible prior state. There are 2D to sum over.
Hmm I don't really understand. Since it's a Markov process, the current state only depends on the last one, not on the previous ones.

You could try assessing the number of ways N can be partitioned into the D dimensions, then how many ways each of those partitions can lead to the target coordinates. Or (almost the same?) get a recurrence relation based on number of steps available, number of dimensions, and remaining vector for those dimensions. I.e. knock off one dimension at a time.
Hmm, I see, but would that be for part 2)? I don't see what it has to do with Markov chains (I'm explicitly asked to solve the problem by treating the process as a Markov chain in part 1).

I made a mistake in my first post: I reach [itex]P(\vec r ,N \mid \vec 0, 0) = \frac{1}{2D}[/itex] instead of [itex]\left ( \frac{1}{2D} \right ) ^{N}[/itex]. That's still wrong because there's no dependence on [itex]\vec r[/itex], while there should be.

Edit: Apparently my result is wrong because I assumed the Markov chain is homogeneous, while it seems it isn't.
A homogeneous Markov chain has the property that [itex]P(X_k=i_k | X_{k-1} =i_{k-1} )[/itex] does not depend on k. Well, I'm confused: in the problem the Markov chain seems homogeneous... yet I reach a nonsensical answer.
 
  • #4
Attempt for 2):
Let [itex]\vec r = \vec x_1 +... + \vec x_D[/itex], where [itex]x_i[/itex] is the component of the vector [itex]\vec r[/itex] along the i-th dimension.
For any dimension i, there are [itex]{x_ i \choose N}[/itex] ways for the walker to get to [itex]\vec x_i[/itex] in N steps. And the probability to choose one particular way is [itex]\left ( \frac{1}{2D} \right ) ^N[/itex].
Since the position of the walker in any dimension has no influence whatsoever on its position along the other dimensions, I can safely write [itex]P _N ( \vec r ) = {x_1 \choose N} ... {x_D \choose N} \left ( \frac{1}{2D} \right ) ^N = \left ( \frac{1}{2D} \right ) ^N \prod _{i=1}^D \frac{N!}{(N-x_i )!x_i!}[/itex]. Does this look correct? I tried D=1 and it looked right, at least to me. This would be my answer for the probability to find the walker at the position [itex]\vec r[/itex] after N steps.
 
  • #5
fluidistic said:
Attempt for 2):
Let [itex]\vec r = \vec x_1 +... + \vec x_D[/itex], where [itex]x_i[/itex] is the component of the vector [itex]\vec r[/itex] along the i-th dimension.
For any dimension i, there are [itex]{x_ i \choose N}[/itex] ways for the walker to get to [itex]\vec x_i[/itex] in N steps. And the probability to choose one particular way is [itex]\left ( \frac{1}{2D} \right ) ^N[/itex].
Since the position of the walker in any dimension has no influence whatsoever on its position along the other dimensions, I can safely write [itex]P _N ( \vec r ) = {x_1 \choose N} ... {x_D \choose N} \left ( \frac{1}{2D} \right ) ^N = \left ( \frac{1}{2D} \right ) ^N \prod _{i=1}^D \frac{N!}{(N-x_i )!x_i!}[/itex]. Does this look correct? I tried D=1 and it looked right, at least to me. This would be my answer for the probability to find the walker at the position [itex]\vec r[/itex] after N steps.

This is incorrect: for N = x_1 there is just 1 way to get from 0 to x_1 in N steps; for N = x_1 + 1, there is no way to get from 0 to x_1 in N steps; for N = x_1 + 2, there are 2 ways to get from 0 to x_1 in N steps, etc.

RGV
 
  • #6
Ray Vickson said:
This is incorrect: for N = x_1 there is just 1 way to get from 0 to x_1 in N steps; for N = x_1 + 1, there is no way to get from 0 to x_1 in N steps; for N = x_1 + 2, there are 2 ways to get from 0 to x_1 in N steps, etc.

RGV
Whoops, I've got some LaTeX typos. All my [itex]{n \choose k}[/itex] should be [itex]{k \choose n}[/itex]. Does this fix the problem?
 
  • #7
fluidistic said:
Since it's a Markov process, the current state only depends on the last one, not on the previous ones.
Quite so, but you seem to be confusing "the probability of being at a certain point after N steps given it was at a certain neighbour after N-1" with "the probability of being at a certain point after N steps (given only that it was at the origin at step 0)".
Isn't it clear that [itex]P_{N}(\vec{r})=\sum_{\vec{l} \in A(\vec{r})}P_{N-1}(\vec{l})/2D[/itex], where [itex]A(\vec{r})[/itex] is the set of neighbours of r?
In the OP, you don't say you have to "use" Markov chains, merely that you can assume it behaves like one. I don't know why they tell you that since it clearly does from the preceding description.

Wrt your later post, I don't understand how you got [itex]\left(^{x_i}_N\right)[/itex]. Maybe you meant [itex]\left(_{x_i}^N\right)[/itex], but I still don't see it. To reach xi in the i-direction there must be s steps in the 'wrong' direction and s+xi in the 'right' direction, for some s ≥ 0 with 2s + xi ≤ N. But the larger s is for one dimension, the fewer steps are left for the other dimensions, so it gets quite messy.

Try this: start by considering the case where N is only just large enough to reach r, i.e. no steps in the 'wrong' direction in any dimension. That's fairly easy. Then figure out the multiplier that arises from having spare steps, i.e. a larger N for the same r. Note that an odd number of spares will give you zero!
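The recurrence [itex]P_{N}(\vec{r})=\sum_{\vec{l} \in A(\vec{r})}P_{N-1}(\vec{l})/2D[/itex] above can also be run directly as a dynamic program to produce exact reference values. A short Python sketch (the function name is mine; probabilities are stored sparsely in a dict):

```python
def exact_probabilities(D, N):
    """Exact site probabilities after N steps, computed by propagating
    P_k(r) = (1/2D) * sum of P_{k-1} over the 2D neighbours of r.
    Equivalently, as done here: each site passes p/(2D) to each of
    its 2D neighbours at every step."""
    probs = {(0,) * D: 1.0}  # P_0: the walker starts at the origin
    for _ in range(N):
        nxt = {}
        for site, p in probs.items():
            share = p / (2 * D)  # equal weight for every neighbour
            for axis in range(D):
                for step in (-1, 1):
                    nb = list(site)
                    nb[axis] += step
                    key = tuple(nb)
                    nxt[key] = nxt.get(key, 0.0) + share
        probs = nxt
    return probs

# D=2, N=2: 4 of the 16 equally likely paths return to the origin.
p = exact_probabilities(2, 2)
```

This gives exact numbers to test any closed form against: for D=2, N=2 the return probability is 4/16 = 1/4, and sites of the wrong parity never appear.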
 
  • #8
haruspex said:
Quite so, but you seem to be confusing "the probability of being at a certain point after N steps given it was at a certain neighbour after N-1" with "the probability of being at a certain point after N steps (given only that it was at the origin at step 0)".
Isn't it clear that [itex]P_{N}(\vec{r})=\sum_{\vec{l} \in A(\vec{r})}P_{N-1}(\vec{l})/2D[/itex], where [itex]A(\vec{r})[/itex] is the set of neighbours of r?
In the OP, you don't say you have to "use" Markov chains, merely that you can assume it behaves like one. I don't know why they tell you that since it clearly does from the preceding description.

Wrt your later post, I don't understand how you got [itex]\left(^{x_i}_N\right)[/itex]. Maybe you meant [itex]\left(_{x_i}^N\right)[/itex], but I still don't see it. To reach xi in the i-direction there must be s steps in the 'wrong' direction and s+xi in the 'right' direction, for some s ≥ 0 with 2s + xi ≤ N. But the larger s is for one dimension, the fewer steps are left for the other dimensions, so it gets quite messy.

Try this: start by considering the case where N is only just large enough to reach r, i.e. no steps in the 'wrong' direction in any dimension. That's fairly easy. Then figure out the multiplier that arises from having spare steps, i.e. a larger N for the same r. Note that an odd number of spares will give you zero!
Ok thanks guys. I'll concentrate for now on the 2) part.
I totally overlooked that taking more steps in one dimension leaves fewer for the others, since the total must be N...
 
  • #9
fluidistic said:
Whoops, I've got some LaTeX typos. All my [itex]{n \choose k}[/itex] should be [itex]{k \choose n}[/itex]. Does this fix the problem?

Sorry: what I said above is incorrect. For n = x+2 there are not just 2 ways to get from 0 to x in n steps; there are C(n,n-1) = C(n,1) = n ways. In general, the number of ways of getting from 0 to x in n steps is
[tex] N_{n,x} = C(n, (n+x)/2) = {n \choose (n+x)/2}, [/tex]
where it is understood that this equals 0 unless (n+x)/2 is an integer between 0 and n inclusive.
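This count is easy to verify against brute-force enumeration for small n; a Python sketch (function names are mine):

```python
from math import comb
from itertools import product

def ways_1d(n, x):
    """Number of n-step (+1/-1) walks on the line from 0 to x:
    C(n, (n+x)/2), or 0 when n+x is odd or |x| > n."""
    if (n + x) % 2 != 0 or abs(x) > n:
        return 0
    return comb(n, (n + x) // 2)

def ways_brute(n, x):
    """Count the same walks by enumerating all 2^n step sequences."""
    return sum(1 for steps in product((-1, 1), repeat=n) if sum(steps) == x)
```

For n = x + 2 this gives C(n, n-1) = n, matching the correction above; e.g. ways_1d(5, 3) == 5.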

All this is treated beautifully in the marvelous book by W. Feller "An Introduction to Probability Theory and its Applications", Vol. I (Wiley, 1968), pp. 73-75 (an oldie, but an unbeatable classic).

RGV
 
  • #10
Ray Vickson said:
Sorry: what I said above is incorrect. For n = x+2 there are not just 2 ways to get from 0 to x in n steps; there are C(n,n-1) = C(n,1) = n ways. In general, the number of ways of getting from 0 to x in n steps is
[tex] N_{n,x} = C(n, (n+x)/2) = {n \choose (n+x)/2}, [/tex]
where it is understood that this equals 0 unless (n+x)/2 is an integer between 0 and n inclusive.

All this is treated beautifully in the marvelous book by W. Feller "An Introduction to Probability Theory and its Applications", Vol. I (Wiley, 1968), pp. 73-75 (an oldie, but an unbeatable classic).

RGV
Ok, thank you very much for the information. I'm going to take a look at this reference and post back within the next few days.
 

Related to Probability of Finding a Random Walker in D Dimensions After N Steps

What is a random walker?

A random walker is a concept in mathematics and physics that refers to a mathematical model of a particle or object that moves in a random or unpredictable pattern. This model is often used to study diffusion and other random processes.

What is a Markov chain?

A Markov chain is a mathematical model that describes a sequence of events where the probability of each event depends only on the state of the previous event. It is used to analyze and predict the behavior of systems that exhibit random behavior over time.

How are random walkers and Markov chains related?

Random walkers can be described using a Markov chain, where the state of the walker at each step is determined by the state of the previous step. This allows for the analysis of the random walker's behavior using mathematical techniques and models.

What are the applications of random walkers and Markov chains?

Random walkers and Markov chains have a wide range of applications in various fields, including physics, biology, economics, and computer science. They can be used to model and study diffusion, financial markets, genetic mutations, and more.

What are the limitations of random walkers and Markov chains?

Although random walkers and Markov chains are useful models for studying random processes, they have limitations. They assume that the future behavior of a system is only dependent on the present state and do not take into account external factors or long-term dependencies. Additionally, these models may not accurately represent complex systems with multiple interacting components.
