
Integrating the delta function

dwsmith

Well-known member
Feb 1, 2012
1,673
Integrating the delta function:
$$
\frac{4}{\pi^2}\int_0^{\pi}\int_0^{\pi}\delta(x - x_0,y - y_0)\sin nx\,\sin my \,dx\,dy
$$
Would the solution be $\frac{4}{\pi^2}\sin nx_0\sin my_0$?
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,702
Integrating the delta function:
$$
\frac{4}{\pi^2}\int_0^{\pi}\int_0^{\pi}\delta(x - x_0,y - y_0)\sin nx\,\sin my \,dx\,dy
$$
Would the solution be $\frac{4}{\pi^2}\sin nx_0\sin my_0$?
Yes, provided that $x_0$ and $y_0$ both lie in the interval $[0,\pi]$, otherwise the answer will be $0$.
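As a quick numerical sanity check of this sifting property (a minimal sketch: the mode numbers $n,m$, the point $(x_0,y_0)$ and the Gaussian width are arbitrary choices, and the narrow Gaussian merely stands in for the delta function):

```python
import numpy as np

# Sanity check of the sifting property: replace delta(x - x0, y - y0) by a narrow
# 2D Gaussian, integrate on a fine grid, and compare with the closed form
# (4/pi^2) sin(n x0) sin(m y0).  All numerical choices below are arbitrary.
n, m = 2, 3
x0, y0 = 1.0, 2.0      # a point inside (0, pi) x (0, pi)
eps = 0.02             # width of the approximating Gaussian

x = np.linspace(0.0, np.pi, 1501)
y = np.linspace(0.0, np.pi, 1501)
dx = x[1] - x[0]
dy = y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

delta_approx = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * eps ** 2)) / (2 * np.pi * eps ** 2)
integrand = delta_approx * np.sin(n * X) * np.sin(m * Y)

numeric = 4 / np.pi ** 2 * integrand.sum() * dx * dy          # crude Riemann sum
exact = 4 / np.pi ** 2 * np.sin(n * x0) * np.sin(m * y0)
print(numeric, exact)   # the two values should agree to roughly three decimals
```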
 

Ackbach

Indicium Physicus
Staff member
Jan 26, 2012
4,191
Yes, provided that $x_0$ and $y_0$ both lie in the interval $[0,\pi]$, otherwise the answer will be $0$.
Wouldn't they need to be in $(0,\pi)$? If $x_{0}=0$ and $y_{0}=0$, I should think you wouldn't get the full "function picking out" feature that the delta function gives you, right? That is, isn't it true that
$$\int_{0}^{1}\delta(x)\,dx=\frac{1}{2}?$$
Or am I being too loose with my notation?
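One way to see where that $\frac{1}{2}$ comes from numerically: model $\delta(x)$ as a zero-mean Gaussian of shrinking width and compute its mass inside $[0,1]$ (a minimal sketch; the widths chosen are arbitrary):

```python
import math

# Model delta(x) as a zero-mean Gaussian of standard deviation sigma.  Its mass in
# [0, 1] is erf(1 / (sigma * sqrt(2))) / 2, which tends to 1/2 as sigma -> 0,
# because only the right half of the spike lies inside the interval.
for sigma in (0.5, 0.1, 0.01, 0.001):
    mass = 0.5 * math.erf(1.0 / (sigma * math.sqrt(2.0)))
    print(f"sigma = {sigma:6.3f}   integral over [0, 1] = {mass:.6f}")
```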
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,702
Wouldn't they need to be in $(0,\pi)$? If $x_{0}=0$ and $y_{0}=0$, I should think you wouldn't get the full "function picking out" feature that the delta function gives you, right? That is, isn't it true that
$$\int_{0}^{1}\delta(x)\,dx=\frac{1}{2}?$$
Or am I being too loose with my notation?
Good point, and I think that the answer may depend on how you want to define the delta "function". If you are thinking of it as a point measure, then it is all concentrated at a single point, and there wouldn't be any question of assigning half of it to each side of that point. But if you are defining it in distributional terms as the limit of increasingly spiky normal distributions, then presumably it would split in half as you suggest. (In the OP's question, the delta function is two-dimensional, so I guess you might need to take half the answer along the sides of the square $[0,\pi]\times[0,\pi]$, and a quarter at the corners? But since the $\sin$ functions vanish along the edges, the answer is going to be zero there anyway!)

The Wikipedia discussion on the delta function is worth looking at.
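A small numerical illustration of the "half along an edge, quarter at a corner" heuristic, using symmetric 2D Gaussians of shrinking width as stand-ins for the delta (the particular points and the width below are arbitrary choices):

```python
import math

# Mass that a symmetric 2D Gaussian of width sigma centred at (x0, y0) puts inside
# the square [0, pi] x [0, pi].  The mass factorises into two 1D Gaussian masses.
def mass_in_square(x0, y0, sigma):
    def mass_1d(c):
        # P(0 <= X <= pi) for X ~ N(c, sigma^2), written with the error function
        cdf = lambda t: 0.5 * (1.0 + math.erf((t - c) / (sigma * math.sqrt(2.0))))
        return cdf(math.pi) - cdf(0.0)
    return mass_1d(x0) * mass_1d(y0)

sigma = 1e-3
print(mass_in_square(1.0, 2.0, sigma))   # interior point: ~1
print(mass_in_square(0.0, 2.0, sigma))   # on an edge:     ~1/2
print(mass_in_square(0.0, 0.0, sigma))   # at a corner:    ~1/4
```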
 

chisigma

Well-known member
Feb 13, 2012
1,704
Wouldn't they need to be in $(0,\pi)$? If $x_{0}=0$ and $y_{0}=0$, I should think you wouldn't get the full "function picking out" feature that the delta function gives you, right? That is, isn't it true that
$$\int_{0}^{1}\delta(x)\,dx=\frac{1}{2}?$$
Or am I being too loose with my notation?
If you define the $\delta(*)$ function so that...

$\displaystyle \int_{- \infty}^{t} \delta (\tau)\ d\tau = H (t)$ (1)

... where $H(*)$ is the Heaviside step function...

Heaviside Step Function -- from Wolfram MathWorld

... then your relation...

$\displaystyle \int_{0}^{1} \delta (t)\ dt = \frac{1}{2}$ (2)

... is correct.
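Spelling that step out, with the MathWorld convention $H(0)=\frac{1}{2}$ built into the link above:

$$\int_{0}^{1}\delta(t)\,dt = \int_{-\infty}^{1}\delta(t)\,dt - \int_{-\infty}^{0}\delta(t)\,dt = H(1) - H(0) = 1 - \frac{1}{2} = \frac{1}{2}.$$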


Kind regards

$\chi$ $\sigma$
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,702
If you define the $\delta(*)$ function so that...

$\displaystyle \int_{- \infty}^{t} \delta (\tau)\ d\tau = H (t)$ (1)

... where $H(*)$ is the Heaviside step function...

Heaviside Step Function -- from Wolfram MathWorld

... then your relation...

$\displaystyle \int_{0}^{1} \delta (t)\ dt = \frac{1}{2}$ (2)

... is correct.


Kind regards

$\chi$ $\sigma$
True, but note that Wolfram defines the Heaviside function to take the value 1/2 at 0. The Wikipedia article gives the definition $$H(x) = \begin{cases}1&(x\geqslant0),\\ 0&(x<0).\end{cases}$$ That would lead to a delta function with all its weight concentrated on "one side of 0", so to speak.

I think the moral here is that it is necessary to pay attention to the definition used by the book or article that you are working from.
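As an aside, NumPy's `heaviside` makes exactly this choice explicit: the value at $0$ is passed as a second argument, so the two conventions can be compared directly (a minimal sketch):

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0])

# The value of H at 0 is supplied explicitly as the second argument.
print(np.heaviside(x, 0.5))  # MathWorld-style convention H(0) = 1/2: prints 0., 0.5, 1.
print(np.heaviside(x, 1.0))  # Wikipedia-style convention H(0) = 1:   prints 0., 1.,  1.
```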
 

CaptainBlack

Well-known member
Jan 26, 2012
890
Good point, and I think that the answer may depend on how you want to define the delta "function". If you are thinking of it as a point measure, then it is all concentrated at a single point, and there wouldn't be any question of assigning half of it to each side of that point. But if you are defining it in distributional terms as the limit of increasingly spiky normal distributions, then presumably it would split in half as you suggest. (In the OP's question, the delta function is two-dimensional, so I guess you might need to take half the answer along the sides of the square $[0,\pi]\times[0,\pi]$, and a quarter at the corners? But since the $\sin$ functions vanish along the edges, the answer is going to be zero there anyway!)

The Wikipedia discussion on the delta function is worth looking at.
There is still a problem in defining the delta functional in terms of integrals involving a sequence of reasonably behaved (positive), increasingly "spiky" functions with integral 1, because we do not have to assume the functions in the sequence are symmetric. For instance, we could use the sequence:

\[f_n(x)=\begin{cases} \dfrac{1}{\sqrt{2\pi}\,n^{-1}}\,e^{-x^2/(2 n^{-2})} & \text{for } x\le 0, \\[1ex]
\dfrac{1}{\sqrt{2\pi}\,(n/2)^{-1}}\,e^{-x^2/(2 (n/2)^{-2})} & \text{for } x > 0. \end{cases} \]

That we have a problem with an integral of the form

\[\int_0^\infty \delta(x) f(x) \;dx\]

can also be seen from the fact that the function \(H(x)f(x)\), for most \(f(x)\) of interest, is not in our space of test functions for the definition of distributions on \(\mathbb{R}\).
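A quick numerical illustration of this sequence-dependence for a half-line integral (note: the shifted "one-sided" spike family below is a hypothetical example chosen for contrast, not the $f_n$ defined above):

```python
import math

# Compare int_0^infty g(x) f(x) dx for two unit-mass spike families as the width
# eps shrinks:
#   * a symmetric Gaussian centred at 0
#   * a Gaussian centred at +5*eps (a hypothetical one-sided family with essentially
#     all of its mass on x > 0; it is NOT the f_n defined above)
f = lambda x: math.cos(x)          # test function with f(0) = 1

def half_line_integral(center, eps, steps=200_000, upper=5.0):
    # midpoint-rule approximation of int_0^upper g(x) f(x) dx
    dx = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        g = math.exp(-(x - center) ** 2 / (2 * eps ** 2)) / (math.sqrt(2 * math.pi) * eps)
        total += g * f(x) * dx
    return total

for eps in (0.1, 0.01, 0.001):
    print(eps,
          half_line_integral(0.0, eps),       # symmetric spike: tends to f(0)/2 = 0.5
          half_line_integral(5 * eps, eps))   # shifted spike:   tends to f(0)   = 1.0
```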


CB
 

chisigma

Well-known member
Feb 13, 2012
1,704
True, but note that Wolfram defines the Heaviside function to take the value 1/2 at 0. The Wikipedia article gives the definition $$H(x) = \begin{cases}1&(x\geqslant0),\\ 0&(x<0).\end{cases}$$ That would lead to a delta function with all its weight concentrated on "one side of 0", so to speak.

I think the moral here is that it is necessary to pay attention to the definition used by the book or article that you are working from.
A concept I tried to explain in...

http://www.mathhelpboards.com/f13/never-ending-dispute-2060/#post9448

... is that there exist 'good' and 'bad' basic definitions, in the sense that a 'bad' definition leads, sooner or later, to contradictions and logical failures, while that doesn't happen with a 'good' definition. Now we have two different definitions of $H(0)$: according to 'Monster Wolfram' it is $H(0)=\frac{1}{2}$, according to Wikipedia it is $H(0)=1$... and we can't exclude that for someone else it is $H(0)=0$... Very well!... which definition is the 'good' one?... I think an answer can come from the following example...


Consider the time function represented here...

[Attached figure MHB20.PNG: a square wave equal to $1$ on $(0,1)$, $0$ on $(1,2)$, $1$ on $(2,3)$, and so on, and $0$ for $t<0$]

... which, in terms of the Heaviside step function, is written as...

$\displaystyle f(t) = \sum_{n=0}^{\infty} (-1)^{n}\ H(t-n)$ (1)

The Laplace transform of (1) is computed in the standard way: since $\displaystyle \mathcal{L} \{ H(t-n) \} = \frac{e^{-n s}}{s}$ and the geometric series gives $\displaystyle \sum_{n=0}^{\infty} (-1)^{n}\ e^{-n s} = \frac{1}{1 + e^{-s}}$ for $\text{Re}(s)>0$, we obtain...


$\displaystyle \mathcal{L} \{ f(t) \} = \frac{1}{s\ (1 + e^{- s})}$ (2)

Now we recover $f(t)$ by performing the inverse Laplace transform using the Bromwich integral...


$\displaystyle f(t) = \frac{1}{2\ \pi\ i}\ \int_{\gamma - i\ \infty}^{\gamma + i\ \infty} F(s)\ e^{s\ t}\ ds$ (3)

... where $\gamma$ is a constant that lies to the right of all the singularities of $F(s)$. In this case the singularities of $F(s)$ are $s=0$ and $s = (2 n + 1)\ \pi\ i$ with $n$ any integer, i.e. all on the imaginary axis, so that any real $\gamma >0$ is 'good'.

The residue of $F(s)\ e^{s t}$ at $s=0$ is...


$\displaystyle r_{0} = \lim_{ s \rightarrow 0} s\ F(s)\ e^{s t} = \frac{1}{2}$ (4)


... and the residue of $F(s)\ e^{s t}$ at $s= (2 n + 1)\ \pi\ i$ is...


$\displaystyle r_{n}= \lim_{ s \rightarrow (2 n + 1)\ \pi\ i} \{ s - (2 n + 1)\ \pi\ i \}\ F(s)\ e^{s t} = \frac{e^{(2 n + 1)\ \pi\ i\ t}}{(2 n + 1)\ \pi\ i} $ (5)

... so that the integral (3) gives...

$\displaystyle f(t) = \frac{1}{2} + \sum_{n= -\infty}^{+ \infty} r_{n} = \frac{1}{2} + \frac{2}{\pi} \sum_{n=0}^{\infty} \frac {\sin (2n + 1)\ \pi\ t}{2n + 1}$ (6)

Of course the result is no surprise, because (6) is the Fourier series of (1). A 'little surprise', however, is the fact that (6) converges to $\frac{1}{2}$ at $t = n$ for every non-negative integer $n$, and that means that, comparing (1) and (6), we must conclude that $H(0)= \frac{1}{2}$...
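A quick numerical look at the partial sums of (6), confirming the value $\frac{1}{2}$ at the integer jump points and the plateau values in between (the truncation length below is an arbitrary choice):

```python
import math

# Partial sums of the series (6):  f(t) ~ 1/2 + (2/pi) * sum sin((2n+1) pi t) / (2n+1)
def partial_sum(t, terms):
    s = 0.5
    for n in range(terms):
        s += (2.0 / math.pi) * math.sin((2 * n + 1) * math.pi * t) / (2 * n + 1)
    return s

for t in (0.0, 0.5, 1.0, 1.5):
    print(t, partial_sum(t, 100_000))
# at t = 0.5 and t = 1.5 the sums approach the plateau values 1 and 0;
# at the integer jump points t = 0 and t = 1 every partial sum equals exactly 1/2,
# since each sine term vanishes there
```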


It seems that the 'good definition' is supplied by 'Monster Wolfram'...

Kind regards


$\chi$ $\sigma$