The Dirac delta in the square root of the absolute value

In summary: the Dirac delta can be represented as a limit of ordinary functions, but its square is undefined, so there is no sensible direct way to multiply delta functions at the same point (unless they are of independent variables).
  • #1
nykon
Dear Forum Users,

I have a math question rather than a physics question. Does someone know if:

[itex]\mid \delta(x)\mid^2[/itex]

equals just [itex]\delta(x)[/itex], where [itex]\delta(x)[/itex] is the Dirac delta?

best regards,
nykon
 
  • #2
nykon said:
Does someone know if:

[itex]\mid \delta(x)\mid^2[/itex]

equals just [itex]\delta(x)[/itex], where [itex]\delta(x)[/itex] is the Dirac delta?
It's undefined. In general, you cannot sensibly multiply distributions at the same point.
 
  • #3
strangerep said:
It's undefined. In general, you cannot sensibly multiply distributions at the same point.

Minor clarification -- strangerep means distributions of the same variable. It's fine to multiply them if they are of independent variables, such as [itex]\delta(x)\delta(y)[/itex] or even [itex]\delta(x) \delta(x-y)[/itex] (assuming x and y are independent).
 
  • #4
strangerep said:
It's undefined. In general, you cannot sensibly multiply distributions at the same point.

That's true - although I have read somewhere that, while it is thought you can't give a meaning to such things in general, it hasn't actually been proven yet. I seem to recall it was in Theory of Distributions: A Non-technical Introduction by Richards and Youn, a copy of which I have.

Thanks
Bill
 
  • #5
bhobba said:
That's true - although I have read somewhere that, while it is thought you can't give a meaning to such things in general, it hasn't actually been proven yet. I seem to recall it was in Theory of Distributions: A Non-technical Introduction by Richards and Youn, a copy of which I have.
That book does not have "look inside" enabled on Amazon, so I can't browse it easily.

But... the way you've stated it sounds like a non-sequitur. In maths, one can only prove things based on the precise definitions and axioms one starts with.

Could you look up the precise quote and context, pls?
 
  • #6
Got the book and refreshed my memory about what is said. It's in chapter 7 and says, regarding multiplying any two distributions together, 'there is strong evidence such a general definition does not exist'. However they later express the view 'Finally we mention the thought, and hope, that non standard analysis (a theory that allows the use of infinitesimal quantities) may someday provide a much simpler treatment of these questions. However that must lie outside the scope of this book.'

In math you often think things are a lot more certain than statements like the above, where it is freely admitted that people, at this point in time, don't really know. This is not ideal, but it is the reality of the situation. In QFT you sometimes see expressions containing terms like the Dirac delta function squared, but as far as I know no one in math has figured out how to properly define it - I suppose that's the reason they say there is strong evidence it can't be done.

In QFT the way I view it is to think of the Dirac delta function as [itex]\delta(x) = \int_{-\infty}^{\infty} e^{2\pi i x t}\,dt[/itex]. However, in the spirit of a cutoff, I think of the integration as running not to infinity but only up to some very large number, so that for all practical purposes it behaves like a delta function but is a perfectly good function that you can square, etc.
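
To make that concrete (a quick check of my own, not from any particular text): with a hard cutoff [itex]\Lambda[/itex] the regularized delta is an ordinary function whose square is perfectly well defined, but the integral of that square grows with the cutoff,
[tex]
\delta_\Lambda(x) ~=~ \int_{-\Lambda}^{\Lambda} e^{2\pi i x t}\,dt ~=~ \frac{\sin(2\pi\Lambda x)}{\pi x},
\qquad
\int_{-\infty}^{\infty} \delta_\Lambda(x)^2\,dx ~=~ 2\Lambda ~\to~ \infty \quad\text{as}\quad \Lambda\to\infty,
[/tex]
which is exactly the divergence that the formal symbol [itex]\delta(x)^2[/itex] is hiding.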

Thanks
Bill
 
  • #7
nykon,
To clarify some points: the Dirac delta is not truly a function (hence asking what [itex]\delta(x)^2[/itex] means is not that simple).

If you think of real valued functions as elements of an abstract vector space then you can define an inner (dot) product on (some) of them by:
[tex] f\bullet g = \int_{-\infty}^{\infty}f(x)g(x)dx[/tex]
(or we can choose other limits of integration.)
Now when dealing with vector spaces you can define the dual space of linear mappings from vectors to numbers, called linear functionals. If the space is finite dimensional, every linear functional can be expressed using the dot product with respect to some fixed vector: [itex] \phi(\mathbf{v})= \mathbf{u}\bullet \mathbf{v}[/itex] for some [itex] \mathbf{u}[/itex].

This is a very convenient fact, but it doesn't hold once you consider infinite dimensional vector spaces like typical function spaces. As an example, evaluation of a function at 0 is itself a linear functional: you plug in a function f and get out a number f(0). You can't find another function [itex]g[/itex] such that [itex] f(0) = g\bullet f[/itex]. So we invent a symbolic representation of such a functional, call it [itex]\delta[/itex], which isn't a function but takes the place of a function in the notation. It's only defined when used with the inner product (inside that integral above) to mean the specific linear functional:
[tex] f \mapsto f(0)=\delta\bullet f = \int_{-\infty}^\infty \delta(x)f(x)dx[/tex]

This and similar notations (such as derivatives of the delta, or the shifted delta) are called distributions or sometimes generalized functions.

As to your question: firstly, we can express the delta distribution as the limit of actual functions, as you've likely seen in texts. As such it may be meaningful to say [itex]|\delta(x)|=\delta(x)[/itex], based on the properties of the approximating functions (they can be chosen non-negative). However, to give [itex]\delta(x)^2[/itex] meaning we would have to be able to evaluate:
[tex]\int_{-\infty}^\infty \delta(x)\delta(x)dx = \delta(0)[/tex]
Since the Dirac delta really isn't a function this value isn't meaningful and so neither is the notation of the square of the delta.
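
A quick numerical sketch of that point (my own, using Gaussian approximations to the delta; any standard 'nascent delta' family would do): the sifting integral settles down to f(0) as the width shrinks, while the integral of the square blows up.
[code]
import numpy as np

def nascent_delta(x, eps):
    # Gaussian approximation to the Dirac delta: unit area, width eps.
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

f = np.cos                            # any smooth test function; f(0) = 1
x = np.linspace(-5.0, 5.0, 2_000_001)
dx = x[1] - x[0]

for eps in (0.5, 0.1, 0.02):
    d = nascent_delta(x, eps)
    sifting = np.sum(d * f(x)) * dx   # -> f(0) = 1 as eps -> 0
    square = np.sum(d * d) * dx       # = 1/(eps*sqrt(2*pi)), diverges as eps -> 0
    print(f"eps={eps}: sifting integral {sifting:.6f}, integral of square {square:.3f}")
[/code]
As eps shrinks, the first number converges to f(0) = 1 while the second grows like [itex]1/(\epsilon\sqrt{2\pi})[/itex] without bound - there is no finite value for "[itex]\delta(0)[/itex]" to be.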

In summary: since the space of functions is infinite dimensional, there are more linear mappings from functions to numbers than can be expressed by integrating against other functions. To express some of these we extend the idea of functions to distributions, of which the Dirac delta is an example. In this context squaring distributions such as the Dirac delta is not meaningful. Note however that we CAN shift, scale, add and take derivatives of distributions, e.g.
[tex]\int_{\mathbb{R}} \delta'(x-a)f(x)dx = -f'(a)[/tex]
Regards,
James
 
  • #8
Nevertheless the square of the delta function (or something related) shows up regularly in physical problems.
Consider for example a unitary transformation [itex]U(x)=\exp(i\pi\theta(x))[/itex] and the Hamiltonian [itex]H=p^2/2m[/itex]. What do you get for [itex] U^+HU[/itex]?
 
  • #9
In any case, whenever the square of the Dirac delta distribution shows up, it is important to investigate why. Are your assumptions correct, and is everything well-defined? E.g. maybe you have used a step function where only continuous or C^1 functions are allowed, etc.
 
  • #10
DrDu said:
Nevertheless the square of the delta function (or something related) shows up regularly in physical problems.
Consider for example a unitary transformation [itex]U(x)=\exp(i\pi\theta(x))[/itex] and the Hamiltonian [itex]H=p^2/2m[/itex]. What do you get for [itex] U^+HU[/itex]?

That's true - but the rigorous theory of the delta function, distribution theory, to the best of my knowledge anyway, leaves it undefined. As I said, the way I handle it, in applied problems, is to think of it as an ordinary function that for all practical purposes behaves like the delta function but really isn't. It's sort of like treating dx etc. as actual quantities so you can do informal manipulations.

I don't know if it appears in the example above - but I have seen it appear in QFT problems, e.g. in some of the early chapters of Zee.

Thanks
Bill
 
  • #11
DrDu said:
Nevertheless the square of the delta function (or something related) shows up regularly in physical problems.
Consider for example a unitary transformation [itex]U(x)=\exp(i\pi\theta(x))[/itex] and the Hamiltonian [itex]H=p^2/2m[/itex]. What do you get for [itex] U^+HU[/itex]?

? I must be missing something...
[tex]
U^+ H U ~=~ H ~+~ U^+[H,U]
[/tex]
so if we take CCRs to be [itex][x,p]=i[/itex], which implies [itex][p,f(x)] = -i f'(x)[/itex] for a large class of analytic functions f, then we have (redefining [itex]\pi\theta \rightarrow \theta[/itex] to reduce the mess)...

[tex]
2m [H,U] ~=~ p [p,U] + [p,U] p
[/tex]
[tex]
[p,U] ~=~ \left[p, e^{i\theta(x)}\right]
~=~ e^{i\theta(x)} (-i) i\theta'(x)
~=~ U \theta'(x)
[/tex]
Hence
[tex]
U^* [H,U] ~=~ \frac{1}{2m} \Big(U^* p U \theta'(x) + \theta'(x) p \Big)
~=~ \frac{1}{2m} \Big(p \theta'(x) + U^*[p,U] + \theta'(x) p \Big)
[/tex]
[tex]
~~~~~~=~ \frac{1}{2m} \Big(p \theta'(x) + \theta'(x) + \theta'(x) p \Big)
[/tex]
So where is the Dirac delta squared? What have I overlooked?
 
  • #12
bhobba said:
Got the book and refreshed my memory about what is said.
Thanks.

It's in chapter 7 and says, regarding multiplying any two distributions together, 'there is strong evidence such a general definition does not exist'. However they later express the view 'Finally we mention the thought, and hope, that non standard analysis (a theory that allows the use of infinitesimal quantities) may someday provide a much simpler treatment of these questions. However that must lie outside the scope of this book.'
OK, so it's just speculative wishful thinking on their part. Nonstandard analysis has been around for a long time now, but so far no one (afaik) has applied it to QM in a way that shows it to be superior in practice to conventional techniques.

In math you often think things are a lot more certain than statements like the above, where it is freely admitted that people, at this point in time, don't really know.
But we can say for sure that the desired rigorous definition does not exist at this time.

This is not ideal, but it is the reality of the situation. In QFT you sometimes see expressions containing terms like the Dirac delta function squared
Can you give a specific example, pls?

In QFT the way I view it is to think of the Dirac delta function as [itex]\delta(x) = \int_{-\infty}^{\infty} e^{2\pi i x t}\,dt[/itex]. However, in the spirit of a cutoff, I think of the integration as running not to infinity but only up to some very large number, so that for all practical purposes it behaves like a delta function but is a perfectly good function that you can square, etc.
Generalized functions are typically limits of sequences of ordinary functions. But to have a good theory, the essential properties must survive in the limit. Pointwise multiplication does not, and this can (eg) compromise Lorentz invariance.
 
  • #13
bhobba said:
As I said, the way I handle it, in applied problems, is to think of it as an ordinary function that for all practical purposes behaves like the delta function but really isn't. It's sort of like treating dx etc. as actual quantities so you can do informal manipulations.
It's a lot more like claiming that you can divide by zero because you think of 0 as being a really small positive number.

It can work very well if what you're actually doing is similar to what you pretend is happening -- e.g. you can deduce [itex]\lim_{x \to 0} x/x = 1[/itex] by simply plugging in 0, and then dividing a "really small positive number" by itself. But most other times you'll be disastrously wrong.

Incidentally, dx is a "quantity" -- a differential form.


Non-standard analysis doesn't help you multiply distributions in exactly the same way it doesn't help you divide by zero. What it might do is help you avoid having to consider multiplying tempered distributions, in a similar fashion to how it helps you avoid having to consider division by zero. (i.e. because the calculation you're doing leaves you with a nonzero infinitesimal in the denominator)
 
  • #14
strangerep said:
Can you give a specific example, pls?

I have seen it in Zee's textbook on QFT, and it sometimes arises elsewhere in QFT, but I couldn't locate the passage easily. However, it is used with gay abandon in Chapter 7 of QFT Demystified (The Feynman Rules), where Dirac delta functions are multiplied by other delta functions in equations of the form [itex]\int (\text{something})\, \delta(p_1 + p_2 - q)\, \delta(q - p_3 + p_4)\, dq[/itex]. The claim is that this can be evaluated by inspection, applying the second delta function to the first, giving the something (with [itex]p_3 - p_4[/itex] substituted for [itex]q[/itex]) times [itex]\delta(p_1 + p_2 - p_3 + p_4)[/itex] - which you can do if you believe in the fiction that it is an ordinary function, which of course it isn't.
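
Written out, the manipulation being described is the sifting step
[tex]
\int (\text{something}(q))\, \delta(p_1 + p_2 - q)\, \delta(q - p_3 + p_4)\, dq
~=~ \text{something}(p_3 - p_4)\; \delta(p_1 + p_2 - p_3 + p_4),
[/tex]
i.e. the second delta is used to set [itex]q = p_3 - p_4[/itex] everywhere else in the integrand.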

strangerep said:
Generalized functions are typically limits of sequences of ordinary functions. But to have a good theory, the essential properties must survive in the limit. Pointwise multiplication does not, and this can (eg) compromise Lorentz invariance.

I know the rigorous definition of delta functions and other generalized functions. The formal manipulations you see with them are sometimes not justified - welcome to applied math.

Thanks
Bill
 
  • #15
Hurkyl said:
But most other times you'll be disastrously wrong.

Actually, most of the time it's perfectly OK. Yes, I know dx can be considered a differential form. I am thinking of the case where you treat dt as a small but actual number, as in, for example, deriving the Poisson distribution. You write down the equation for what happens in the small time interval and ignore higher powers of dt. It works just fine. Rigorous - no way - but you see it all the time in the applied literature. You can however make it rigorous by keeping the O(dt^2) terms explicit and taking a limit at the end. But often applied guys don't worry about it.
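
For what it's worth, here is a minimal sketch of that heuristic (my own toy numbers, nothing from a text): each small interval dt is treated as an independent trial with probability lambda*dt of an event, higher powers of dt ignored, and the resulting counts are compared with the Poisson distribution.
[code]
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)

lam, T, dt = 2.0, 1.0, 1e-3        # rate, total time, the "small but actual" dt
n_steps = int(T / dt)
n_runs = 5000

# In each step an event occurs with probability lam*dt (higher powers of dt ignored).
events = rng.random((n_runs, n_steps)) < lam * dt
totals = events.sum(axis=1)

# Compare the simulated distribution of counts with Poisson(lam*T).
for k in range(6):
    simulated = np.mean(totals == k)
    poisson = exp(-lam * T) * (lam * T) ** k / factorial(k)
    print(f"k={k}: simulated {simulated:.4f}   Poisson {poisson:.4f}")
[/code]
The agreement gets better as dt shrinks, which is the informal argument made rigorous by taking the limit at the end.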

Thanks
Bill
 
  • #16
strangerep said:
Hence
[tex]
U^* [H,U] ~=~ \frac{1}{2m} \Big(U^* p U \theta'(x) + \theta'(x) p \Big)
~=~ \frac{1}{2m} \Big(p \theta'(x) + U^*[p,U] + \theta'(x) p \Big)
[/tex]
I should have mentioned that theta is Heaviside's step function, whose derivative gives the delta function.
Besides this, the last line above is not correct, specifically you seem to have lost a theta' in the second term, which is precisely the delta-square term.

That doesn't seem to be relevant as long as x ranges from -infinity to +infinity. But when you replace x by an angle phi, this becomes the Aharonov-Bohm Hamiltonian in a special gauge. The strength of the delta function is then the phase shift of an electron upon encircling a magnetic flux line.

The above example also shows how the delta squared has to be interpreted/avoided.
The Hamiltonian is defined only on functions with a discontinuity (which are certainly not in any of the function spaces on which distributions are normally defined), so that their derivative will cancel the delta squared.
 
  • #17
bhobba said:
Actually, most of the time it's perfectly OK.
I was talking about the "pretend 0 is a tiny positive number" part.

Manipulating dx as an algebraic "quantity" is perfectly okay, because it is an algebraic quantity.

But often applied guys don't worry about it.
The applied guys won't accept it either. If they don't look into a proof, then they are going to go for empirical verification. (they might go for empirical validation anyways even when given a proof)

But honestly, the reason you don't see anything further done is that the aim isn't to derive the Poisson distribution; it is to help one gain confidence that the Poisson distribution is what it claims to be.




Incidentally, your example is not a dx. I believe you can arrange it to be a [itex]\Delta x[/itex], though. (e.g. a tangent vector, if we use the differential geometric semantics of infinitesimal)

And I would bet there is a perfectly rigorous argument that is a rather direct translation of "you expect twice as many hits in intervals twice as wide" -- i.e. a rigorous argument whose idea is more intuitive than the mumbled "and then ignore second-order effects" you suggest.
 
  • #18
DrDu said:
I should have mentioned that theta is Heaviside's step function, whose derivative gives the delta function.
OK.

Besides this, the last line above is not correct, specifically you seem to have lost a theta' in the second term, which is precisely the delta-square term.
Oops! So weird how one can be blind to things that are obvious when pointed out. (sigh)

That doesn't seem to be relevant as long as x ranges from -infinity to +infinity. But when you replace x by an angle phi, this becomes the Aharonov-Bohm Hamiltonian in a special gauge.
Hmm. I know A-B of course, but haven't seen it treated this way. So I should probably read up on that. Can you suggest a suitable reference?

The strength of the delta function is then the phase shift of an electron upon encircling a magnetic flux line.

The above example also shows how the delta squared has to be interpreted/avoided.
The Hamiltonian is defined only on functions with a discontinuity (which are certainly not in any of the function spaces on which distributions are normally defined), so that their derivative will cancel the delta squared.
OK. That's all making sense now. One must choose the space of functions and its dual quite carefully.

Cheers.
 
  • #19
Hurkyl said:
The applied guys won't accept it either. If they don't look into a proof, then they are going to go for empirical verification. (they might go for empirical validation anyways even when given a proof)

I must disagree with you there. I did a degree in applied math and, at the time, in my hubris, was very concerned with rigour - doing my epsilonics was my favourite subject and I continued it on to Hilbert spaces and the like. Take my word for it - when questions like that are raised, their eyes roll back; they tell you stories about books they know that, while totally rigorous, you would never read, about disrespect for applied guys who use the Dirac delta function with gay abandon and people like von Neumann who chided them for it, and other ditties. Read the introduction of von Neumann's Mathematical Foundations of Quantum Mechanics to see what I mean.

Here is another paper of interest in this discussion:
http://arxiv.org/pdf/quant-ph/9907069v2.pdf

The basic point of the applied guys is that it works - so there must be a reason for it, and the pure guys need to investigate it. The trouble is that issues such as squaring a delta function still remain.

These days, however, I am rather sanguine about the issue, resolving things like multiplying two delta functions together by taking them to be ordinary functions that for all practical purposes are delta functions.

Thanks
Bill
 
  • #20
In an idealized description of the AB effect, the magnetic flux [itex]\Phi[/itex] is concentrated in an infinitely small region around r=0 (using polar coordinates). The line integral is [itex]\oint A (r,\phi)r d\phi=\Phi[/itex], whence for r not equal to 0, [itex] A(r,\phi)=\Phi \delta (\phi)/2\pi r [/itex] is a valid choice for the vector potential. The Hamiltonian for a particle in a magnetic field is [itex]H=(p-eA)^2/2m[/itex]. Expressing this in polar coordinates and splitting off a square-root-of-r factor from the wavefunction should yield the Hamiltonian we were discussing for the phi dependence.
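
For reference (this is just the standard textbook flux condition, added for comparison with the singular choice above): the same constraint is satisfied by the familiar smooth gauge,
[tex]
\oint \mathbf{A}\cdot d\mathbf{l} ~=~ \int_0^{2\pi} A_\phi(r,\phi)\, r\, d\phi ~=~ \Phi,
\qquad
A_\phi^{\text{smooth}}(r,\phi) ~=~ \frac{\Phi}{2\pi r},
[/tex]
whereas the [itex]\delta(\phi)[/itex] form concentrates the vector potential entirely on the half-line [itex]\phi = 0[/itex], which is what feeds the step/delta structure into the [itex]\phi[/itex]-dependent Hamiltonian discussed above.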
 
