Information content of Dirac delta function

In summary, the Dirac delta function is a distribution (a generalized function), not an ordinary function, so its differential entropy is not directly defined. One can instead approximate the delta function by increasingly narrow, tall rectangles of unit area and consider the entropies of the approximations: a rectangle of width 1/n and height n has differential entropy -log(n), which diverges to -\infty as n grows. So the entropy of the approximating sequence tends to -\infty rather than 0.
  • #1
friend
I understand that the Dirac delta function can be taken as a distribution. And that one can calculate the Shannon entropy or information content of any distribution. So what is the information content of the Dirac delta function? I think it is probably identically zero, but I'd like to see the proof of it. I could not find anything specific on-line about this. So any help would be appreciated. Thanks.
 
  • #2
Those are two entirely different senses of "distribution". The Dirac delta is a distribution in the sense of generalized functions (a Schwartz distribution), whereas Shannon entropy is defined for probability distributions.
 
  • #3
I found this information on the Web from the book:
Ecosystem Ecology: A New Synthesis, edited by David G. Raffaelli and Christopher L.J. Frid, on page 46.

The author calculates the Shannon entropy of a Dirac delta function to be zero. Actually, he seems to be calculating it for the discrete Kronecker delta. I wonder how one would carry this over to the continuous case of the Dirac delta. Thanks.
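The discrete calculation referred to above is easy to check directly: a distribution that puts all of its probability mass on a single outcome (a Kronecker-delta-like distribution) has Shannon entropy zero, using the standard convention that 0·log 0 = 0. A minimal sketch in Python:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i * log2(p_i), with 0*log(0) taken as 0."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Kronecker-delta-like distribution: all probability mass on one outcome
delta = [0.0, 0.0, 1.0, 0.0]
print(shannon_entropy(delta))    # entropy is 0 bits: the outcome is certain

# For comparison, a uniform distribution over 4 outcomes gives 2 bits
uniform = [0.25] * 4
print(shannon_entropy(uniform))
```

The certain outcome carries no surprise, hence zero entropy; the question in the thread is whether this survives the passage to the continuous Dirac delta.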
 

Attachments

  • Entropy_of_delta.gif (68.6 KB)
  • #4
The Dirac delta is not a function, so it does not have a well-defined entropy. (If you define the cumulative distribution as P(X < x) rather than P(X <= x), the delta does correspond to a valid probability distribution - P(X < x) = 0 for x <= 0 and P(X < x) = 1 for x > 0 - but this distribution is not generated by a probability density function.) However, you could define its entropy as the limit of the entropies of successive approximations to the Dirac delta. These can be taken to be increasingly narrower and higher bell curves or rectangles of constant unit area. It's easy to show that the limit is zero - the limit is in effect x*log(x) as x goes to 0.
 
  • #5
Preno said:
... However, you could define its entropy as the limit of the entropies of successive approximations to the Dirac delta. These can be taken to be increasingly narrower and higher bell curves or rectangles of constant unit area. It's easy to show that the limit is zero - the limit is in effect x*log(x) as x goes to 0.

So if we take the limit of the delta function after we calculate its entropy, then it goes to zero, right? I'm wondering, if it's not too much trouble for you, could you show me the math of this situation? Thanks.
 
  • #6
Well, as I said, the Dirac delta distribution can be approximated, for example, by a sequence of rectangles R_n centred at zero, of width 1/n and height n.

[tex]H_n = - \int_{-\frac{1}{2n}}^{\frac{1}{2n}} R_n(x) \cdot \log (R_n(x)) \, \textrm{d}x = - \frac{1}{n} \cdot n \cdot \log(n) = -\log(n)[/tex]

So actually you get [tex]-\infty[/tex] rather than 0. (But that's okay, because differential entropy can be negative.)
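The divergence above is easy to check numerically. A minimal sketch, assuming the rectangle approximation R_n from the post (and, since bell curves were also mentioned, the standard closed form ½·log(2πeσ²) for the differential entropy of a Gaussian with standard deviation σ):

```python
import math

def rect_entropy(n):
    """Differential entropy of the rectangle of width 1/n and height n:
    -∫ R_n log(R_n) dx, where R_n = n on an interval of length 1/n."""
    return -(1.0 / n) * n * math.log(n)

def gauss_entropy(sigma):
    """Closed-form differential entropy of a Gaussian N(0, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# Both families of approximations to the delta have entropy -> -infinity
# as they narrow (n -> infinity, sigma -> 0), matching H_n = -log(n).
for n in (1, 10, 100, 1000):
    print(n, rect_entropy(n), gauss_entropy(1.0 / n))
```

This illustrates why the order of operations in the earlier posts matters: each approximating density has a finite (negative) differential entropy, and it is that sequence of entropies, not the delta itself, that has a limit.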
 

Related to Information content of Dirac delta function

1. What is the Dirac Delta Function and why is it important in information theory?

The Dirac Delta Function, also known as the unit impulse, is a mathematical object (a distribution, or generalized function) used to model a concentration of mass, probability, or energy at a single point. It is important in information theory because it lets us represent and analyze signals that are concentrated at a specific point in time or space.
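The defining property behind this description is the sifting property, ∫ δ(x) f(x) dx = f(0). A minimal numerical sketch, assuming the same rectangle approximation R_n (width 1/n, height n) used in the thread and a midpoint-rule quadrature:

```python
import math

def sift(f, n, samples=10_001):
    """Approximate ∫ R_n(x) f(x) dx by the midpoint rule, where R_n is the
    rectangle approximation of the Dirac delta: height n on [-1/(2n), 1/(2n)]."""
    a = -0.5 / n
    dx = (1.0 / n) / samples
    return sum(n * f(a + (i + 0.5) * dx) * dx for i in range(samples))

f = math.cos  # any smooth test function; f(0) = 1
for n in (1, 10, 100):
    print(n, sift(f, n))  # approaches f(0) = 1 as n grows
```

As the rectangle narrows, the integral picks out the value of f at the origin, which is exactly the "concentration at a single point" the answer describes.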

2. How does the Dirac Delta Function differ from other mathematical functions?

The Dirac Delta Function is unique in that it is zero everywhere except at t=0, where it is, informally, "infinite" in such a way that its total integral is 1. This makes it a distribution rather than a traditional function, and gives it properties that allow it to be used in a variety of applications in physics, engineering, and mathematics.

3. How is the information content of a Dirac Delta Function calculated?

The Dirac delta has no well-defined differential entropy of its own, since it is not a probability density function. As discussed in the thread, one can instead compute the differential entropy H = -∫ p(x) log p(x) dx for a sequence of densities approximating the delta; for rectangles of width 1/n and height n this gives H_n = -log(n), which diverges to -∞ as the approximation sharpens.

4. Can the Dirac Delta Function be used to represent real-world signals?

While the Dirac Delta Function is a mathematical abstraction, it can be used to approximate real-world signals that are highly concentrated at a single point. For example, a sharp impulse in an audio signal can be approximated by a Dirac Delta Function with a high amplitude at that point in time.

5. What are some practical applications of the Dirac Delta Function in information theory?

The Dirac Delta Function is used in a variety of applications in information theory, including signal processing, digital communications, and image processing. It is also used in the analysis of systems with impulse responses, such as filters and feedback loops, and in the formulation of differential equations.
