Rationale behind separation and integration

In summary, the conversation discusses the practice of "multiplying through" by the denominator of the dy/dx ratio when solving differential equations, and the mathematical basis for this technique. It is explained that the method is shorthand notation for exploiting the chain rule of differentiation and is valid as long as the appropriate limits behave. The conversation also mentions different models for differentials and how physicists often bend mathematical rules to make calculations easier. It is suggested to begin with finite-size elements and then take the limit in order to understand why the technique works. There is also a caution against switching the order of limiting processes.
  • #1
Gza
Ever since my first physics class, I've noticed many apparently sacred rules of calculus being broken before my innocent eyes. Since Calc I, I was told that (dy/dx) was not a ratio, so don't treat it like one. Yet when solving various simple differential equations, it's common to separate and integrate the little differentials by "multiplying through" by the denominator of the dy/dx ratio (yeah, I said ratio, so sue me :approve: ). I was wondering where the mathematical basis for this comes from.
 
  • #2
The process of "multiplying through by dx" is actually that of passing to the land of "differential forms". A differential form is, more or less, something that you can integrate.

For example,

[tex]
\frac{dy}{dx} dx = dy
[/tex]

seems strange and mysterious, but if we stick an integral sign out front, we get

[tex]
\int \frac{dy}{dx} dx = \int dy
[/tex]

which we recognize as simply the substitution rule for integrals.
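
This can be sanity-checked with sympy. A minimal sketch, using the arbitrary illustrative choice y = x³ + 1 (not from the thread): integrating dy/dx against dx recovers y up to a constant, which is exactly what "integrating dy" means.

```python
import sympy as sp

x = sp.symbols('x')
y = x**3 + 1  # an arbitrary concrete choice of y(x), for illustration only

# Left side: integrate (dy/dx) dx
lhs = sp.integrate(sp.diff(y, x), x)

# "Integrating dy" just gives back y, up to a constant of integration
print(sp.simplify(lhs - y))  # -> -1, a constant
```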
 
  • #3
They aren't magic tricks. There is a reason why they can be done, although I'm still not sure since I haven't done any differential geometry. Most books I've read tell you that you can do these kinds of manipulations, but they don't explain why since the explanation will have you wondering "why am I even bothering with this in the first place?" You'll have to take it as is for the most part (be very careful though).

e(ho0n3
 
  • #4
This technique is merely a shorthand notation for exploiting the chain rule of differentiation.
Consider the differential equation:
[tex]\frac{dy}{dx}=\frac{g(x)}{f(y(x))},\qquad \int f(y)\,dy=F(y)+C[/tex]

That is, F is an antiderivative of f.
Then we have:
[tex]\int_{x_{0}}^{x_{1}}f(y(x))\frac{dy}{dx}dx=\int_{x_{0}}^{x_{1}}g(x)dx[/tex]

Invoking the chain rule, we have:
[tex]\int_{x_{0}}^{x_{1}}f(y(x))\frac{dy}{dx}dx=\int_{x_{0}}^{x_{1}}\frac{d}{dx}F(y(x))dx=[/tex]
[tex]F(y(x_{1}))-F(y(x_{0}))=F(y_{1})-F(y_{0})[/tex]
by the fundamental theorem of calculus.
But this yields the same as the formal trick:
[tex]\int_{x_{0}}^{x_{1}}f\frac{dy}{dx}dx=\int_{y_{0}}^{y_{1}}fdy[/tex]
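
The derivation above can be checked against sympy's separable-equation solver. This is a sketch with the illustrative choices f(y) = y and g(x) = x (my own, not from the thread), so that f(y) dy = g(x) dx integrates to y²/2 = x²/2 + C.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = g(x)/f(y) with the illustrative choices f(y) = y, g(x) = x
ode = sp.Eq(y(x).diff(x), x / y(x))

# dsolve's separable routine carries out exactly the f(y) dy = g(x) dx step
solutions = sp.dsolve(ode, y(x))
for sol in solutions:
    print(sol)  # the two branches of y**2 = x**2 + C
```

Each returned branch can be verified with `sp.checkodesol`, which substitutes the solution back into the original equation.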
 
  • #5
Thank you arildno, now I no longer have to wash my hands after separating and integrating :smile:
 
  • #6
dy/dx: not to worry

A derivative is the limit of a ratio as the denominator goes to zero. All the operations that effectively treat a derivative as a ratio are fine, as long as the appropriate limits are OK, which they usually are. For all practical purposes, think of dx as 10 to the minus a zillion: a derivative is a ratio. If you review your deltas and epsilons, you'll see that you do not have to worry. R. Atkinson
 
  • #7
Berkeley famously castigated differentials as "the ghosts of departed quantities". In his time, there was no solid mathematical basis for them. Today we have a number of justifications such as those mentioned above that work in various limited circumstances, but none that completely covers the cavalier treatment you will see them subjected to in your physics classes.

Differential forms are the most common means of modeling differentials, but they have severe restrictions, for instance, the differential of arclength

[tex]ds = \sqrt{dx^2 + dy^2 + dz^2}[/tex]

is a nonsensical equation in differential forms - the squaring under differential forms does not behave like a simple multiplication, and the square root can only be interpreted as a notational convention for the equation[tex]ds^2 = dx^2 + dy^2 + dz^2[/tex]. In particular, you cannot define a differential form [tex]ds[/tex] by the equation above. But in physics, this sort of thing is done all the time.

There are other models for differentials than differential forms. I once heard about a definition developed by Solomon Leader, based on the Generalized Riemann Integral, that allowed such algebraic manipulations in a natural fashion. It was interesting, but I never heard any more about it.

The fact is, you will see many hard and fast mathematical rules violated freely by physicists, who get away with it, and leave mathematicians scrambling to figure out why. Another classic example is the Dirac delta function. This is a "function", [tex]\delta(x)[/tex], defined by the following properties:

[tex]\delta(x) = 0, x \neq 0[/tex]
[tex]\int_{-\infty}^\infty \delta(x)dx = 1[/tex]

Clearly no actual function has these properties. The first is sufficient to force the integral to equal zero, even if [tex]\delta(0)[/tex] is taken to be infinitely large. Yet the uses to which the delta function is put in physics work, and they are so useful that the function is also widely used in mathematics. But it required 50 years for mathematicians to come up with a fully rigorous justification for it.
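
As it happens, sympy's `DiracDelta` implements the distributional rules, so both defining properties can be checked symbolically. A sketch only: the rigorous object is a distribution, not a function, and sympy is applying the distributional calculus behind the scenes.

```python
import sympy as sp

x = sp.symbols('x')
d = sp.DiracDelta(x)

# The defining normalization: the total integral equals 1
print(sp.integrate(d, (x, -sp.oo, sp.oo)))  # -> 1

# The "sifting" property that makes it so useful: the integral
# picks out the integrand's value at 0 (here cos(0) = 1)
print(sp.integrate(sp.cos(x) * d, (x, -sp.oo, sp.oo)))  # -> 1
```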

Some other tricks you will see if you follow physics long enough: the derivation of finite values for quantities defined by divergent series, and an integral over an infinite-dimensional space which cannot possibly exist but nonetheless still provides useful answers. Last I heard, not even a weak mathematical justification has been found for these.
 
  • #8
If it bothers you, just begin with finite size elements at the very beginning, then take the limit once you finish all the mathematical manipulations.
 
  • #9
If it bothers you, just begin with finite size elements at the very beginning, then take the limit once you finish all the mathematical manipulations.

Doesn't doing that somehow "break" the definition of the derivative, since no limits are involved with that procedure? I mean this philosophically, I'm sure computationally it would work, but I'm trying to learn why it works at a deeper level.
 
  • #10
This procedure works precisely because no limits are involved. I am not sure what philosophy you are referring to. After all, you can choose to shrink the finite-size increments to infinitesimal size at any step you wish. What compels you to do so at the outset?
 
  • #11
JohnDubYa said:
This procedure works precisely because no limits are involved. I am not sure what philosophy you are referring to. After all, you can choose to shrink the finite-size increments to infinitesimal size at any step you wish. What compels you to do so at the outset?

You can't just go about switching the order in which you apply limiting processes. In general, it's just wrong.
That it works under fairly mild restrictions is beside the issue.
 
  • #12
I tell you what, why not provide a concrete example where this notion fails. If it's convincing, I'll concede.
 
  • #13
JohnDubYa said:
I tell you what, why not provide a concrete example where this notion fails. If it's convincing, I'll concede.

This depends on what you mean by changing the limiting process. Given a situation where you have something like:

[tex]\lim_{x\rightarrow a}[\text{stuff}][/tex]

if you mean that you can rearrange and manipulate things inside of "stuff" and then just apply the limit to the rearranged version of stuff, then you are right.

What you can't do is take something like the above limit and arbitrarily move stuff inside and outside of the limit. So for example you can't rearrange stuff, apply the limit to part of stuff, then rearrange the rest of stuff and apply the limit to that, except within certain restrictions.
 
  • #14
RE: "If you mean that you can rearrange and manipulate things inside of "stuff" and then just apply the limit to the rearranged version of stuff, then you are right."

That is exactly what I mean. And in physics that is essentially what we do when we multiply an equation through by [tex]dx[/tex].

Physicists can get sloppy with the notation, but the mathematics is almost always sound.
 
  • #15
One example in which switching the order of limiting processes can't be done.
Define a sequence of functions
[tex]f_{n}=\frac{1}{n}\sin(n^{2}x)[/tex]
Clearly, we have, for all x:
[tex]\lim_{n\to\infty}f_{n}(x)=0=f(x)[/tex]

Hence, the following expression yields the derivative of f(x):
[tex]\lim_{h\to{0}}\lim_{n\to\infty}\frac{f_{n}(x+h)-f_{n}(x)}{h}=0[/tex]

Try switching the h and n limiting processes; it doesn't work.
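
A small numerical illustration of why the switch fails (plain Python, evaluated at the arbitrary point x = 0): taking the h limit first turns the difference quotient into f_n'(0) = n·cos(0) = n, which grows without bound as n increases, whereas taking the n limit first gives 0.

```python
import math

def f(n, x):
    # f_n(x) = sin(n^2 x) / n, which tends (uniformly) to 0 as n -> infinity
    return math.sin(n**2 * x) / n

h = 1e-9  # "h -> 0 first": a tiny but fixed step, n still free to grow
for n in (10, 100, 1000):
    dq = (f(n, h) - f(n, 0.0)) / h
    print(n, dq)  # approximately f_n'(0) = n: the quotients blow up
```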
 
  • #16
JohnDubYa said:
RE: "If you mean that you can rearrange and manipulate things inside of "stuff" and then just apply the limit to the rearranged version of stuff, then you are right."

That is exactly what I mean. And in physics that is essentially what we do when we multiply an equation through by [tex]dx[/tex].

Physicists can get sloppy with the notation, but the mathematics is almost always sound.

Note that differentiation and integration are two limiting processes.
What you're saying is that a "dx" in the differentiation process can always be canceled by a "dx" in the integration process.
 
  • #17
Well, that IS the Fundamental Theorem of Calculus. But what I am really saying is that physicists manipulate [tex]dx[/tex] when in fact they are manipulating [tex]\Delta x[/tex]. The method is sound; only the notation is sloppy. They just hate taking explicit limits of every mathematical statement because it bogs down the discussion.
 
  • #18
I am perfectly aware that the rationale behind the technique is the fundamental theorem of calculus.
I would like to remind you that the original poster asked for an explanation of the separation technique.
To call anything other than an EXPLICIT reference to the FToC an EXPLANATION is simply a misnomer.
 
  • #19
Sorry, but I cannot parse your last sentence. No matter, because I think we are now off-topic.
 
  • #20
the usual mathematician's idea of a differential is that it represents a function whose values are linear functions. for example the differential dx represents the function whose value at any point a is the identity linear function. The differential f(x)dx has value at a equal to the linear function: f(a) times the identity function.

Then if we have two differentials, say f(x)dx and g(x)dx, we can divide them and get a function whose value at a is f(a)dx/g(a)dx = f(a)/g(a). This is a number so f(x)dx/g(x)dx is a function.

Hence the quotient of two differentials is a function, and a function times a differential is a differential.

since the quotient of two differentials is a function, each differential is a function times any other differential (except where the one on the bottom vanishes). In particular df is df/dx times dx or f'(x)dx. This is the justification for saying if dy/dx = g(x) then dy = g(x)dx, where we think of dy here as df.
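
This picture can be played with in sympy by treating dx as a formal symbol. A toy model only, with the arbitrary choices f = sin x and g = eˣ: in the quotient of two differentials the dx cancels, leaving an ordinary function, just as described above.

```python
import sympy as sp

x, dx = sp.symbols('x dx')

f = sp.sin(x)
g = sp.exp(x)

# model the differentials df = f'(x) dx and dg = g'(x) dx as plain expressions
df = sp.diff(f, x) * dx
dg = sp.diff(g, x) * dx

# the quotient of two differentials: dx cancels, leaving a function of x
print(sp.simplify(df / dg))  # -> exp(-x)*cos(x)
```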
 

