What is the probability associated with a Dirac delta-like distribution?

In summary: the pdf in question is not a delta function. A density that diverges at a point still assigns zero probability to that exact value; only a genuine point mass (like the event X=6 in the coin example below) carries finite probability at a single point.
  • #1
coolnessitself
Hi all,
I have a question about the probability associated with a density p(r) that is infinite at r=0.
I realize that such a p(r) might only be a distribution that exists under an integral, and might not be able to represent a pdf. My p(r) is a radially symmetric Laplace distribution in 2D, centered at the origin:
[tex]
p(r) = \frac{1}{2\pi} K_0\left(r\right)
[/tex]

where [tex]K_0(\cdot)[/tex] is a modified Bessel function of the second kind (recall that [tex]K_0(0)=\infty[/tex]). This exact distribution isn't really my question, it's just one with nonzero probability around r=0 and infinite 'probability' at 0.
All moments are defined, and
[tex]
2\pi \int\limits_0^{\infty} p(r) rdr = 1.
[/tex]
But if I sample from this distribution, will I always get r=0, since p(0) is infinitely more likely than any other r? I see the relation to a Dirac delta, but in that case [tex]p(r)=0[/tex] for all r other than r=0. Here, [tex]p(r\ne 0)>0[/tex]. Does that make any difference?

In the end, I'm trying to assign a probability to any r. This would be easy if all p(r)<=1, but I don't know what to do in this case. Any question I can think of asking hits a roadblock since the pdf doesn't exist, but I guess I'm simply having trouble interpreting what that means if some p(r) are nonzero and finite.
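
To make this concrete, here is a quick numerical check of how much probability actually sits near r=0 (a rough Python/SciPy sketch; the helper name is just mine):

[code]
from scipy.integrate import quad
from scipy.special import k0

def prob_within(eps):
    """P(R <= eps): integrate p over the disc of radius eps.

    The 2*pi from the area element cancels the 1/(2*pi) in p(r),
    leaving the integral of r*K_0(r) from 0 to eps.
    """
    val, _ = quad(lambda r: r * k0(r) if r > 0 else 0.0, 0.0, eps)
    return val

for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, prob_within(eps))

# The printed probabilities shrink toward zero with eps (P(R <= 1) is only
# about 0.4), so hardly any of the mass sits near r = 0 -- which is exactly
# what I find confusing, given that p(0) is infinite.
[/code]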
 
  • #2
coolnessitself said:
This exact distribution isn't really my question, it's just one with nonzero probability around r=0 and infinite 'probability' at 0.
...
But if I sample from this distribution, will I always get r=0, since p(0) is infinitely more likely than any other r?

It would help if you clarified your goal. The scenarios that come to my mind are:

1. You want to write a computer program to do a Monte Carlo simulation and you want to draw samples from this distribution.

2. You are trying to do a mathematical proof and you want to do a step like divide something by p(r) when r is not 0.

3. You are wondering about a Platonic mathematical concept (such as people who like to discuss whether .9999... = 1).
 
  • #3
Hi Stephen,
Essentially I guess it's #1. I have a jump diffusion process

[tex]
x(t) - x(0) = \int\limits_0^t f(x(s))ds + \int\limits_0^t g(x(s))dw(s) + J(t)
[/tex]

where J(t) specifies both the Pr(jump in [t,t+Delta]) as well as a distribution of the resulting jumps. That latter distribution is my radially symmetric p(r) in the previous post.
If I were to simulate this, I'd have to draw samples from p(r). Would I always draw r=0? Seems like no: If I do inverse cdf sampling, I get a nice curve with a peak around r=0.7, but I don't understand why this is the case.

If I were to try and model how x(t) behaves, I might divide a 2d space into a grid and look at the evolution of p(x,t). In this case, the transition from x(t) to a radially symmetric distribution about x(t) (given by p(r)) would be of interest. But are any of these probabilities defined?
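
Concretely, the kind of simulation I have in mind looks like this (a rough Python/SciPy sketch; the drift f, diffusion g and jump intensity lam below are made-up placeholders, not my actual model):

[code]
import numpy as np
from scipy.special import k1
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def sample_jump_radius():
    """Draw a jump radius from the 2D density p(r) = K_0(r)/(2*pi).

    The radius itself has density r*K_0(r) (the factor r is the Jacobian of
    the area element), with cdf F(r) = 1 - r*K_1(r), using the identity
    d/dr[-r*K_1(r)] = r*K_0(r).  Invert F numerically.
    """
    u = rng.uniform()
    return brentq(lambda r: 1.0 - r * k1(r) - u, 1e-12, 50.0)

def simulate(x0, T, dt, f, g, lam):
    """Euler-Maruyama for the 2D jump diffusion, with Poisson-type jumps."""
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(int(T / dt)):
        dw = rng.normal(scale=np.sqrt(dt), size=2)
        x = x + f(x) * dt + g(x) * dw
        if rng.uniform() < lam * dt:        # approx. prob. of a jump in [t, t+dt]
            r = sample_jump_radius()        # essentially never exactly 0
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
        path.append(x.copy())
    return np.array(path)

# Toy coefficients, purely illustrative:
path = simulate(x0=[0.0, 0.0], T=10.0, dt=0.01,
                f=lambda x: -x, g=lambda x: 0.5, lam=1.0)
print(path.shape, path[-1])
[/code]

The draws from sample_jump_radius come out spread over (0, infinity) with a mode well away from 0, which matches the curve I see from inverse cdf sampling.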
 
  • #4
I don't know much about modified Bessel functions of the second kind, so as I read your original post I was thinking that you had the equivalent of a Dirac delta function. Reading more carefully, I think your question is whether a probability density that becomes infinite at some value behaves like a Dirac delta function. I think the answer to that is no. There are functions with vertical asymptotes whose "improper integrals" nevertheless converge. The Wikipedia article on improper integrals gives the example
[tex] \int_a^c \frac{e^x}{\sqrt{c-x}} dx. [/tex]
I think you can sample from the cdf of such a distribution.

If you do anything involving the mean, variance etc. of such a distribution you'd have to check that those integrals exist.
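
Just to check numerically that such an integral really is finite even though the integrand blows up at the endpoint, here is a small sketch (Python/SciPy; a = 0 and c = 1 are chosen arbitrarily):

[code]
import numpy as np
from scipy.integrate import quad

a, c = 0.0, 1.0
for eps in [1e-1, 1e-3, 1e-6, 1e-9]:
    val, _ = quad(lambda x: np.exp(x) / np.sqrt(c - x), a, c - eps)
    print(eps, val)

# The values settle toward a finite limit (about 4.06 for a = 0, c = 1)
# rather than growing without bound: the improper integral converges.
[/code]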
 
  • #5
Hi,
thanks for the link.

So the Bessel function simply appears in a bivariate Laplace distribution, which I'm dealing with. I guess from an intuitive standpoint, I'm confused about its pdf. If the pdf becomes infinite at some point, how does that relate to probability? If the pdf maps R to some probability, how is it possible to have an infinite probability around 0?
For a delta function, it makes sense (kinda) that you can have an infinite probability, since when defined as a distribution you're mapping both a set and R to this probability. I don't see a clear analogy to this case, so I feel like there must be a formal way of treating this that I'm glossing over.
 
  • #6
Imagine a random variable generated this way: I flip a fair coin. If it lands heads, I assign the random variable X the value 6. If it lands tails then I make a draw from a uniform distribution on [0 , 10] and assign X to be that value.

We could represent the density of X as a flat line segment at height y = .05 except that at the point X = 6, it would have a vertical jump to y = 1/2. However, this density would not integrate to 1 by the normal way of doing integrals - intuitively, in a Riemann sum, the point X = 6 only matters in a "rectangle" whose width approaches zero.

The way I have seen this handled in applied math courses is that the point X = 6 is given the special distinction of being a "point mass", and it is understood that in doing the integral one must add a 1/2 to the total because of the "point mass" probability at X = 6.

I'm sure the measure theory of pure mathematics has ways of justifying this.

For a probability density f(x), it is convenient to think of f(x) as "the probability of x", but the above example illustrates the distinction between a point mass, where that is literally true, and the non-point-mass case, where it is only a metaphor. In fact, the probability of drawing any particular value of x from a continuous pdf is zero. (I reconcile this with reality by saying that when I draw a random number on a computer, I am really drawing a random interval, due to the finite precision of the machine.) So as long as you are not treating the location of a vertical asymptote as any kind of point mass, you don't have any theoretical worries. You might have some numerical-methods worries.
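
A tiny simulation of the coin example makes the distinction concrete (Python sketch):

[code]
import numpy as np

rng = np.random.default_rng(0)

def draw():
    if rng.uniform() < 0.5:            # heads: the point mass at 6
        return 6.0
    return rng.uniform(0.0, 10.0)      # tails: the continuous uniform part

samples = np.array([draw() for _ in range(100_000)])
print("fraction exactly 6:", np.mean(samples == 6.0))   # about 1/2
print("fraction exactly 3:", np.mean(samples == 3.0))   # 0 in practice
[/code]

About half of the draws are exactly 6 (the point mass), while any other specific value is essentially never hit exactly - that is the difference between a point mass and a merely large (or even infinite) density value.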
 
  • #7
coolnessitself said:
Hi,
thanks for the link.

So the Bessel function simply appears in a bivariate Laplace distribution, which I'm dealing with. I guess from an intuitive standpoint, I'm confused about its pdf. If the pdf becomes infinite at some point, how does that relate to probability? If the pdf maps R to some probability, how is it possible to have an infinite probability around 0?
For a delta function, it makes sense (kinda) that you can have an infinite probability, since when defined as a distribution you're mapping both a set and R to this probability. I don't see a clear analogy to this case, so I feel like there must be a formal way of treating this that I'm glossing over.

The pdf can be infinite in places provided it's integrable. A simpler example would be f(x) = 1/(2 sqrt(x)) for 0<=x<=1, which corresponds to the cdf F(x) = sqrt(x). For this example numbers can be drawn from the distribution by cdf inversion: take U uniform on (0,1) and solve F(X) = U.

For your example two options would be to either find an analytic expression for the cdf and perform cdf inversion (numerically if necessary) as described above, or if the distribution has a single mode you could try more general methods such as the ziggurat algorithm (which is how normal random numbers are implemented in MATLAB).
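
For the simple example, the inversion takes only a couple of lines (Python sketch):

[code]
import numpy as np

rng = np.random.default_rng(0)

# f(x) = 1/(2*sqrt(x)) on [0, 1] has cdf F(x) = sqrt(x), so solving
# F(X) = U gives X = U**2.
u = rng.uniform(size=100_000)
x = u**2

# The samples pile up near 0, where the density is largest, but none of
# them is exactly 0; the infinite density value there is harmless.
print("min sample:", x.min())
print("fraction below 0.01:", np.mean(x < 0.01))   # should be about 0.1
[/code]

The same recipe works for your Bessel-type density once you have its cdf, analytically or by numerical integration, and invert it numerically (e.g. with a root finder).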
 

Related to What is the probability associated with a Dirac delta-like distribution?

1. What is a Dirac delta-like distribution?

A Dirac delta-like distribution, in the sense discussed in this thread, is a probability density that becomes infinite at a single point while remaining finite elsewhere. It resembles the Dirac delta function, the generalized function used to model point-like sources or events in a continuous system, but it is an ordinary (if unbounded) density rather than a generalized function.

2. How is a Dirac delta-like distribution different from a Dirac delta function?

While both can describe probability that is sharply concentrated near a point, the main difference is that a Dirac delta-like distribution is an ordinary density: it merely diverges at the point of interest, takes finite nonzero values elsewhere, and still assigns zero probability to any single point. The Dirac delta function, on the other hand, is not an ordinary function at all; it concentrates all of its probability mass exactly at the point of interest.

3. What are some applications of Dirac delta-like distributions in science?

Dirac delta-like distributions are commonly used in physics and engineering to model point sources, such as electric charges, gravitational masses, or acoustic sources. They are also used in signal processing to represent impulses or sharp changes in a signal.

4. Can a Dirac delta-like distribution be integrated?

Yes, provided the singularity is integrable. The densities discussed in this thread converge as improper integrals and integrate to 1 over the whole space. A true Dirac delta, by contrast, is not integrated in the traditional sense; it is defined only by how it acts when integrated against a well-behaved test function.

5. Are there any limitations or drawbacks to using Dirac delta-like distributions?

One limitation of using Dirac delta-like distributions is that reasoning which implicitly assumes a bounded density (for example, reading f(x) as "the probability of x") can give nonsensical results near the singular point. They can also be awkward to handle numerically and may require specialized techniques, such as the sampling methods discussed above.
