# Differential Calculus Tutorial


#### Ackbach

##### Indicium Physicus
Staff member
1. Prerequisites

Before you study calculus, it is important that you have a mastery of the concepts that come before it. I found calculus difficult to master (I basically had to take Differential and Integral Calculus three times in a row!), and I think many students also find it challenging - challenging, but doable. However, if you do not have the underlying concepts down, you will find it next to impossible. Here is a list of things you should be able to do well:

1. Avoid algebra mistakes and know valid algebra techniques. Algebra doesn't go away when you study calculus! If anything, calculus demands even more of your algebra. You'll be doing plenty of it in calculus, so make sure you're up on it.

2. Know your trigonometry. The vast majority of calculus courses assume you've already had trigonometry, and will differentiate trig functions, integrate trig functions, and in various ways manipulate trig functions. You should be very comfortable with them.

3. Know some basic geometry. You should know areas of basic figures like rectangles, circles, trapezoids, etc.

2. Overview

Calculus is about change: how does one quantity change compared with another quantity? Calculus studies change by solving two problems: finding tangent lines to curves, and finding areas under curves. One of the most powerful theorems in existence, the Fundamental Theorem of the Calculus, shows that these two problems are inverses of each other. The Fundamental Theorem of the Calculus is responsible for the modern... technological... age. It's nearly impossible to overestimate its importance. Without it, the computer on which I'm typing this up would not exist. Neither would air conditioning, cars, and many, many other technologies most of us now take for granted.

2.1 Overview of Tangent Lines to Curves

Many students are misled, I find, in their geometry courses when they're taught what a tangent line is. I hear a lot, "A tangent line to a curve is a line that touches at only one point." Indeed? Consider a line that is tangent to a curve at $x=-4/5$: is it not a tangent line simply because it also intersects the curve again around $x=1.6$? Or consider a function whose graph has a "corner" right at $x=0$. Which "tangent line" shall we say is really tangent there? There seems to be more than one candidate.

Evidently, we need to change our concept of what a tangent line is. One of the more important ideas relating to tangent lines is that tangency is a local phenomenon. This avoids the first issue I raised above, since the second intersection is not "right next to" the point where we would like to call it a tangent line. As for the second issue, let's take the first graph and zoom in a few times to see what's really happening. As you can see, the tangent line and the original function start to "merge" - it's almost as if they are starting to look like the same curve. If, on the other hand, we zoom in on the corner example,

we find that the picture doesn't change much. Certainly, there's no "merging" going on. So there's a fundamental difference between these two examples. We will explore that difference more when we get to derivatives.

2.2 Overview of Areas Under Curves

If you've paid attention to the prerequisites I mentioned earlier, you should know how to calculate the area of a rectangle, a trapezoid, and a circle. But how would you compute the area under a parabola? There's not a nice, neat, geometric formula for that (as yet!). However, solving the problem is important for many reasons. Here's an example: The function is $f(x)=-x^{2}+2$, and we'd like to know the area under this curve from $-\sqrt{2}$ to $+\sqrt{2}$. How to find it?

Well, if a problem is too hard, try to break it down into smaller pieces. And that's exactly what we do here. Try fitting rectangles to this curve, and compute the areas of those rectangles. So, you could try one rectangle with a height of $2$, and width $2\sqrt{2}$. You'd get an area of $4\sqrt{2}$. But you know that would overestimate the desired area. You could do better by trying two rectangles, and maybe use the value of the function in the middle of each subinterval for a rectangle height.
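The rectangle idea is easy to experiment with numerically. Here is a small Python sketch (my illustration, not part of the original tutorial) that computes endpoint and midpoint rectangle sums for $f(x)=-x^{2}+2$; the exact area, $8\sqrt{2}/3$, comes from integral calculus later on and is included only for comparison.

```python
import math

def riemann_sum(f, a, b, n, rule="midpoint"):
    """Approximate the area under f on [a, b] using n rectangles.

    rule selects where each rectangle's height is sampled:
    "left", "right", or "midpoint".
    """
    dx = (b - a) / n
    offsets = {"left": 0.0, "right": 1.0, "midpoint": 0.5}
    c = offsets[rule]
    return sum(f(a + (i + c) * dx) * dx for i in range(n))

f = lambda x: -x**2 + 2
a, b = -math.sqrt(2), math.sqrt(2)

one_rect = 2 * (b - a)             # one rectangle of height 2: area 4*sqrt(2)
two_mid = riemann_sum(f, a, b, 2)  # two midpoint rectangles, already closer
exact = 8 * math.sqrt(2) / 3       # for comparison only (from integration)
```

The single tall rectangle overestimates, just as described above, and the sums close in on the exact value as the number of rectangles grows.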

Here's a graphical illustration of using 10 rectangles with left-hand endpoints.

Here's a graphical illustration of using 10 rectangles with right-hand endpoints.

Finally, here's an example of using 20 rectangles with right-hand endpoints. You can see that the area is more accurately represented with more rectangles.

Underlying both procedures (finding tangent lines and finding areas under curves) is the concept of the limit. For the tangent line problem, we are essentially going to be taking the limit of slopes of secant lines. For the area problem, we're going to take the limit as the number of rectangles goes to infinity. It is to the concept of limit that we turn first.




3. Limits

What is a limit? A limit is, intuitively, all about what happens to a function near some value of its independent variable(s) but not at the value. In a limit, you never let the independent variable actually get to its value - only arbitrarily close.

$$\lim_{x\to a}f(x)=L\;\text{iff}\; \forall \epsilon>0,\; \exists \delta>0\; \text{such that if} \;0<|x-a|< \delta, \; \text{then} \;|f(x)-L|< \epsilon.$$

1. When you write $\lim_{x\to a}f(x)=L$, you say, "The limit of $f$ of $x$, as $x$ approaches $a$, is $L$."

2. This definition gives you no hint how to find $L$. It is produced out of thin air. We'll get to some methods of computing limits later.

3. In this context, you should think of both $\delta$ and $\epsilon$ as very small positive real numbers.

4. The expressions $|x-a|$ and $|f(x)-L|$ are both distances - the first is measuring the distance from $x$ to $a$ on the $x$ axis, and the second the distance from $f(x)$ to $L$ on the $y$ axis.

5. The symbol $\forall$ means "for all", and the symbol $\exists$ means "there exists". The order in which you use them matters. The symbol "iff" means "if and only if". It's not a typo!

6. You can paraphrase this definition as follows, "The limit of $f$ of $x$ as $x$ approaches $a$ is equal to $L$ if and only if whenever the distance from $x$ to $a$ is arbitrarily small (but not zero), the distance from $f(x)$ to $L$ is forced to be arbitrarily small."

3.1 The Box Game

Suppose you have two players, Harry (for horizontal) and Victor (for vertical). They're playing with the function $f(x)=x^{2}+1$ near an $x$ value of $2$. Of course, $f(2)=5$, but neither Harry nor Victor is the least bit concerned about that fact. Victor's $\epsilon$ sets the height of a box centered at $(2,5)$, and Harry's $\delta$ sets its width. First, Victor picks an $\epsilon=3.5$, and Harry picks a $\delta=1$. Victor exclaims, "Ha! I've won! The function goes out of the top of the box near $x=2.7$." Harry is pretty glum. But then, Harry changes the game by shrinking his $\delta$ down to $0.5$. Harry says, "Nope, I'm winning. The function only goes out the sides of the box." Victor has caught on to a strategy, however, and shrinks his $\epsilon$ down to $1.5$.

Victor: "It's out of the top and the bottom, you nitwit!" Ah, but Harry plays another card by shrinking his $\delta$ down to $0.25$.

Harry: "Beat that!"

...

Who wins? Well, here's how we say who wins. If Victor can find an $\epsilon$ so small that Harry has no $\delta$ small enough to make the function go out the sides of the box, then Victor wins (and the limit does not exist!). On the other hand, if Harry can find a formula, typically depending on $\epsilon$, so that no matter how small Victor's $\epsilon$ is, Harry can, without even thinking, throw a $\delta$ back at him so that the function comes out the sides of the box, then Harry wins (and the limit exists!).
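The game is easy to simulate. Below is a short Python sketch (my illustration, not from the original post) that checks numerically whether a given $\delta$ "wins" for a given $\epsilon$: it samples points with $0<|x-a|<\delta$ and sees whether $f$ ever escapes the $\epsilon$-band around $L$. The winning formula $\delta=\min(1,\epsilon/5)$ used at the end is the one derived for this very limit in the next section.

```python
def harry_wins(f, a, L, eps, delta, samples=10000):
    """True if every sampled x with 0 < |x - a| < delta keeps f(x)
    within eps of L, i.e. the curve exits the box only through its sides."""
    for i in range(1, samples):
        x = a - delta + 2 * delta * i / samples
        if x != a and abs(f(x) - L) >= eps:
            return False
    return True

f = lambda x: x**2 + 1

round_1 = harry_wins(f, 2, 5, eps=3.5, delta=1.0)   # out the top near x = 2.7
round_2 = harry_wins(f, 2, 5, eps=3.5, delta=0.5)   # only out the sides now
# Harry's general winning formula for this limit:
always = all(harry_wins(f, 2, 5, e, min(1.0, e / 5)) for e in (3.5, 1.0, 0.1, 0.01))
```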

What are some ways Victor could win? Well, you could have a discontinuity. Here's a jump discontinuity:

$$f(x)=\begin{cases}x^{2}+1, \quad& x<2\\ x^{2}+5, \quad& x\ge 2 \end{cases}.$$

If you plot this function, Mathematica draws a vertical line at the discontinuity, which is not quite accurate: there's nothing there. However, the picture does illustrate the fact that Victor has won this game, and the limit does not exist. It doesn't matter how narrow (horizontally) Harry tries to make the box; the function is going to come out the top and bottom of the box.

3.2 Limit Proving Examples

Ok, so how about an example where Harry wins? Let's try $f(x)=3x+7$. I like nice prime numbers for examples, because they don't combine very easily. Let's examine whether
$$\lim_{x\to 1}f(x)$$
exists or not. We'd like the limit to be the value of the function there, which is $10$. So our candidate limit is $L=10$. To prove these statements, you must always start out with the following: Let $\epsilon>0$. This is so common in advanced calculus/real analysis that my teacher designated an "epsilon person," who was always to provide the class with needed epsilons. Great. Now that Victor has provided us with an epsilon, we need to come up with a formula to generate deltas so that Harry doesn't have to think anymore. With linear functions, it turns out that you can just use the slope to get your formula for delta. We're going to let $\delta=\epsilon/3$. Then, if $0<|x-1|<\delta$, it follows that
$0<3|x-1|=|3x-3|=|3x+7-10|=|f(x)-L|<3\delta=\epsilon$, and we're done.
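You can sanity-check the formula $\delta=\epsilon/3$ numerically. This Python sketch (an illustration of mine, not part of the proof) grid-samples points with $0<|x-1|<\delta$ and confirms $|f(x)-10|<\epsilon$:

```python
def formula_works(eps, n=100):
    """Check delta = eps/3 for f(x) = 3x + 7, a = 1, L = 10."""
    delta = eps / 3
    for k in range(1, n):  # grid points strictly inside (1 - delta, 1 + delta)
        for x in (1 - delta * k / n, 1 + delta * k / n):
            if not abs((3 * x + 7) - 10) < eps:
                return False
    return True

ok = all(formula_works(eps) for eps in (10.0, 1.0, 0.1, 1e-4))
```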

Let's try a more complicated example. How about the original one: prove that
$$\lim_{x\to 2}(x^{2}+1)=5.$$
Now we're going to have to work backwards on this one. As usual, we first let $\epsilon>0$.

We're going to have to find a $\delta>0$ such that $0<|x-2|<\delta$ forces $|x^{2}+1-5|<\epsilon$. Now that second expression we can rewrite a bit: $|x^{2}-4|<\epsilon$, or $|(x-2)(x+2)|<\epsilon$. The trick here is to recognize that we have to "control the size of" both pieces, the $x-2$ and the $x+2$. The $x-2$ is easy, because we can control it with $\delta$. The $x+2$ is trickier.

The way to do it is to narrow your focus. You say to yourself, "Well, delta is a small number, vanishingly small. So we could assume that it's smaller than, say, $1$." Great, let's go with that. What's the biggest that $x+2$ could be when $|x-2|<1$, or $1<x<3$? Simple: the answer is $5$, because the function $x+2$ is increasing, so you just plug in the right-hand endpoint to find the maximum.

Aha. So now we compare this to the $|(x-2)(x+2)|<\epsilon$ expression, and we can say that $|(x-2)(x+2)|<5|x-2|$, so long as we force $\delta\le 1$. And since we need $5|x-2|<\epsilon$, or $|x-2|<\epsilon/5$, it seems that we need $\delta\le\epsilon/5$. So we simultaneously need $\delta\le\epsilon/5$ and $\delta\le 1$. The easiest way to do this is to let $\delta=\min(1,\epsilon/5)$. So that does it. We have our formula. All we need to do now is complete the proof (working in reverse from what we just did):

Let $\delta=\min(1,\epsilon/5)$. Assume $0<|x-2|<\delta$. Then
$$|x^{2}+1-5|=|x^{2}-4|=|(x-2)(x+2)|<5\delta\le\epsilon.$$
And you're done! A bit more work, sure, but doable, right?
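Again, a quick numerical sanity check never hurts. The Python sketch below (mine, for illustration) tests the formula $\delta=\min(1,\epsilon/5)$ at a grid of points with $0<|x-2|<\delta$:

```python
def formula_works(eps, n=100):
    """Check delta = min(1, eps/5) for f(x) = x^2 + 1, a = 2, L = 5."""
    delta = min(1.0, eps / 5)
    for k in range(1, n):  # grid points strictly inside (2 - delta, 2 + delta)
        for x in (2 - delta * k / n, 2 + delta * k / n):
            if not abs((x**2 + 1) - 5) < eps:
                return False
    return True

ok = all(formula_works(eps) for eps in (10.0, 1.0, 0.1, 1e-4))
```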

3.3 Limit Theorems

Proofs are nice and useful. Period. However, it is also true that you need to be able to compute limits. These proofs show you nothing about how to actually compute limits. So I'm going to give you a few limit theorems that are useful.

3.3.1 "Plug in" Theorem

By far the most important limit theorem is this one, rather loosely stated (we'll firm it up more when we get to continuity): if you can plug in, do. That is, if you can plug the limit value into the function, then that's the way to evaluate the limit. What to watch out for are the usual three culprits (in the current context of real numbers):

1. You can't divide by zero.
2. You can't take the square root of a negative number (better watch out for zero as well - see below).
3. You can't take the logarithm of zero or a negative number.

This works for continuous functions (more on that later).

Examples:
$$\lim_{x\to \frac{\pi}{2}}\sin(x)=1,\quad \lim_{x\to 1}\ln(x)=0,\quad \lim_{x\to 4}\frac{x^{2}-3x}{x^{4}+x^{2}+1}=\frac{4}{273}.$$

Here's a non-example or two:
$$\lim_{x\to 0}\sqrt{x}\;\text{d.n.e. (short for "does not exist")},\quad \lim_{x\to\frac{\pi}{2}}\tan(x)\;\text{d.n.e.}$$

The first one fails, because when you write $\lim_{x\to 0}$, you really mean the limit as $x$ approaches $0$ from either side. Since negative numbers are not in the domain of the square root function, this is nonsense. You could write, however, that
$$\lim_{x\to 0^{+}}\sqrt{x}=0.$$
This notation is read "The limit of the square root of $x$ as $x$ approaches $0$ from the right." Here, you don't care at all about what's happening to the left of zero. Another equivalent notation you might see is this:
$$\lim_{x\downarrow 0}\sqrt{x}=0.$$
For the second example, the tangent function blows up as you approach $\pi/2$ from the left, and "blows down" as you approach $\pi/2$ from the right. That's the worst!

3.3.2 Sum and Difference Theorems

$$\lim_{x\to a}[f(x) \pm g(x)]=\lim_{x\to a}f(x) \pm \lim_{x\to a}g(x),$$
provided all limits in sight exist. So you can distribute limits, provided everything exists. Warning: $\infty-\infty$ is undefined, so you do have to be careful with this one.

It's not too hard to prove this theorem. Try it yourself! This is one of those "what-do-you-know" proofs, where once you've written out everything you know and everything you've assumed, the solution is pretty straightforward. Not so straightforward are the

3.3.3 Product and Quotient Theorems

$$\lim_{x\to a}[f(x)\cdot g(x)]=\left[\lim_{x\to a}f(x)\right]\cdot\left[\lim_{x\to a}g(x)\right].$$
Also,
$$\lim_{x\to a}\frac{f(x)}{g(x)}=\frac{\lim_{x\to a}f(x)}{\lim_{x\to a}g(x)},$$
provided $\lim_{x\to a}g(x)\not=0$, and of course, all limits in sight must exist. Note that if some of the limits on the right-hand sides d.n.e., that does not necessarily imply that the limit on the left-hand side d.n.e.: it has to be decided on a case-by-case basis. One extremely important example is a derivative-type limit, which will always look like $0/0$. That's technically undefined; however, the limit may actually exist, and when it does, it can be rather important.

3.3.4 Two Important Trig Limits

Two limits of importance are the following:
$$\lim_{x\to 0}\frac{\sin(x)}{x}=1,$$
and
$$\lim_{x\to 0}\frac{1-\cos(x)}{x}=0.$$

We will need these to compute the derivatives of $\sin(x)$ and $\cos(x)$.
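Neither limit yields to the plug-in rule (both look like $0/0$), but you can watch them settle down numerically. A small Python sketch of mine:

```python
import math

# Evaluate both difference quotients for shrinking h and watch them converge.
hs = [10.0**(-k) for k in range(1, 7)]
sin_over_h = [math.sin(h) / h for h in hs]           # should approach 1
one_minus_cos = [(1 - math.cos(h)) / h for h in hs]  # should approach 0
```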



3.4 Limit Computation Examples

Now that we have a few limit theorems we can use, we should see how we use them. You've already seen a few examples, but we'll do a few more.

3.4.1 Example: Constant Function Limit

$$\lim_{x\to 6}3.$$
This one is so easy it's hard. You're taking the limit of a constant function. So it's... not... going... to... change. The limit in this case has to be 3, right? Unless you're dealing with large values of 3...

3.4.2 Example: Plug-In Function

$$\lim_{x\to 4}x.$$
Here we get to use our favorite limit rule: plug-in. The answer is $4$, because we can plug in with no repercussions.

3.4.3 Example: Trig Function Cancellation

$$\lim_{x\to \pi/2}\left[\cos(x)\tan(x)\right]=\lim_{x\to \pi/2}\left[\cos(x)\frac{\sin(x)}{\cos(x)}\right] =\lim_{x\to \pi/2}\sin(x)=1.$$
Again, the cancellation works because we're in the limit, and $x\not=\pi/2$.

3.4.4 Example: Square Root Rationalization

$$\lim_{x\to 0}\frac{\sqrt{1+x}-\sqrt{1}}{x}.$$
Here we have to use a dirty trick you might remember from your algebra: rationalize. This limit originally looks like $0/0$. That is undefined; however, the limit might still be defined, since we do not get all the way to $x=0$. Rationalizing yields:
$$\lim_{x\to 0}\left[\frac{\sqrt{1+x}-\sqrt{1}}{x}\cdot\frac{\sqrt{1+x}+\sqrt{1}}{\sqrt{1+x}+\sqrt{1}}\right] =\lim_{x\to 0}\frac{1+x-1}{x(\sqrt{1+x}+\sqrt{1})}=\lim_{x\to 0}\frac{x}{x(\sqrt{1+x}+\sqrt{1})}$$
$$=\lim_{x\to 0}\frac{1}{\sqrt{1+x}+\sqrt{1}}=\frac{1}{2}.$$
You see how we were able to use the "plug-in" theorem at the end. This is a common theme for derivative-type limit computations. They will always initially look like $0/0$, but then, if all goes well, something will cancel (which you can do since you're in the limit!), and with what's left you can plug in.
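Numerically, you can watch this limit approach $1/2$ even though plugging in $x=0$ fails. Here's a Python sketch of mine:

```python
def q(x):
    """The original difference quotient, undefined at x = 0 itself."""
    return ((1 + x)**0.5 - 1) / x

# Approach 0 from both sides; the values close in on 1/2.
vals = [q(x) for x in (0.1, -0.1, 1e-3, -1e-3, 1e-6, -1e-6)]
```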

3.4.5 Example: Derivative-type Limit

Here's another derivative-type limit:
$$\lim_{h\to 0}\frac{(x+h)^{2}-x^{2}}{h}=\lim_{h\to 0}\frac{x^{2}+2xh+h^{2}-x^{2}}{h}=\lim_{h\to 0}\frac{2xh+h^{2}}{h} =\lim_{h\to 0}\frac{h(2x+h)}{h}=\lim_{h\to 0}(2x+h)=2x.$$

3.4.6 Example: Trig Derivative-type Limit

Here's an even trickier one:

$$\lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h}=\lim_{h\to 0}\frac{\sin(x)\cos(h)+\sin(h)\cos(x)-\sin(x)}{h}.$$
Where to go from here? Well, remembering the two trig limits I offered in the previous post, let's group the terms so we can try to take advantage of those two limits:
$$=\lim_{h\to 0}\frac{\sin(x)(\cos(h)-1)}{h}+\lim_{h\to 0}\frac{\sin(h)\cos(x)}{h}.$$
Now you can use the product theorems and the limits of constants (remember that in these limits, $h$ is going to zero, and $x$ isn't changing at all!) to get this:
$$=\sin(x)\lim_{h\to 0}\frac{\cos(h)-1}{h}+\cos(x)\lim_{h\to 0}\frac{\sin(h)}{h}.$$
The first limit is just the negative of the $\cos$ limit in the previous post, so that's zero. The second limit is $1$. So we're left with
$$\lim_{h\to 0}\frac{\sin(x+h)-\sin(x)}{h}=\cos(x).$$
This is an extremely important limit. It's essentially saying that the derivative of the sine function is the cosine function. Similarly, you can prove that
$$\lim_{h\to 0}\frac{\cos(x+h)-\cos(x)}{h}=-\sin(x).$$
Note the minus sign there.

3.4.7 Example: Exponential Function

For our last example, we will attempt to compute one of the most important limits in all of math:
$$\lim_{h\to 0}\frac{e^{x+h}-e^{x}}{h}=\lim_{h\to 0}\frac{e^{x}(e^{h}-1)}{h}=e^{x}\lim_{h\to 0}\frac{e^{h}-1}{h}.$$
What is
$$\lim_{h\to 0}\frac{e^{h}-1}{h}?$$
Well, if we graph the function $f(h)=(e^{h}-1)/h$ on the interval $(-1,1)$, the graph suggests that the limit is $1$, and indeed, it is. Why it is $1$ will have to wait, however. We can't apply the plug-in rule, because the limit looks like $0/0$.

Notice that this implies the original limit is $e^{x}$. And now, once we prove that $\lim_{h\to 0}\frac{e^{h}-1}{h}=1$, we will have shown that the derivative of $e^{x}$ is $e^{x}$, a fact that has enormous implications for differential equations. Indeed, the fact that the exponential function is its own derivative makes it one of the most important functions in all of mathematics.
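Here too, a numerical look is reassuring. The Python sketch below (mine) evaluates $(e^{h}-1)/h$ for shrinking $h$; `math.expm1` computes $e^{h}-1$ without the cancellation error that plagues the naive form for very tiny $h$.

```python
import math

# The difference quotient (e^h - 1)/h for shrinking h:
naive = [(math.exp(h) - 1) / h for h in (0.1, 0.01, 1e-4, 1e-6)]

# expm1 keeps full precision even for extremely small h:
stable = [math.expm1(h) / h for h in (1e-8, 1e-10, 1e-12)]
```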

So there are some limit computation examples. Our next post will be on continuity.



4. Continuity

What is continuity? Continuity, in the calculus context, is about whether a function has "gaps" in its graph. An intuitive way of thinking about it is this: can I draw the graph of this function without lifting my pen from the page? (And no cheating with separate sheets of paper!)

4.1 "Calculus" Definition of Continuity

The "calculus definition" of continuity is the following:

A function $f(x)$ is continuous at $a$ if and only if $$\lim_{x\to a}f(x)=f(a).$$

This is the "plug-in" rule for evaluating limits that I mentioned before. This equation says three things, none of them trivial:

1. $\lim_{x\to a}f(x)$ exists. We've seen multiple ways that limits can fail to exist.
2. $f(a)$ is defined. You have to make sure that $a$ is in the domain of $f$, and has a well-defined value.
3. The limit and the function value are equal. You could easily have that fail on you, even if 1. and 2. above hold. Think about a function with one value picked up and moved; something like this:
$$f(x)=\begin{cases}x^{2},\quad&x\not=3\\ 1,\quad&x=3\end{cases}.$$
Now examine the continuity equation. You have $\lim_{x\to 3}f(x)=9$, but $f(3)=1$. The limit and the function value are not the same, so the function isn't continuous at $x=3$.
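A quick Python sketch of mine makes the point concrete: values of $f$ near $3$ crowd around $9$, while the value at $3$ itself sits at $1$.

```python
def f(x):
    """x^2 with the single point at x = 3 'picked up and moved' to 1."""
    return 1 if x == 3 else x**2

near_three = [f(3 + h) for h in (0.1, -0.1, 1e-3, -1e-3, 1e-6, -1e-6)]
at_three = f(3)
```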

4.2 "Rigorous" Definition of Continuity

For completeness, I'd like to point out the "rigorous" delta-epsilon definition, which goes like this:
$$f(x)\text{ is continuous at }a\text{ if and only if } \forall \epsilon>0,\; \exists \delta>0 \text{ such that if } |x-a|<\delta,\text{ then } |f(x)-f(a)|<\epsilon.$$
This should be ringing all sorts of bells in your head. It looks an awful lot like the limit definition, and it is. But there are two important differences. I will list them, but see if you can spot them first.

The differences are these:
1. There is no condition that $0<|x-a|$; namely, $x$ is allowed to equal $a$.
2. Instead of the $L$ produced out of thin air (or maybe not produced is more like it!), we actually have a candidate for the "limit", which is $f(a)$.

4.3 Comparison of the Two Definitions

The "calculus" definition is entirely equivalent to the "rigorous" definition, a fact which can be proven, though I won't do so here. So it's a bit of a misnomer to call the delta-epsilon definition the "rigorous" definition, when the "calculus" definition is no less rigorous than the delta-epsilon definition! For practical purposes, I think you'll find the "calculus" definition much easier to apply in most cases. However, if the calculus definition appears to give you no traction in proving a particular function is continuous, you always have the delta-epsilon definition to try instead.

4.4 Applications of Continuity

So, what is the use of discussing continuity? The main reason is that most physical quantities, like position, velocity, energy, and so on, are required to be continuous. This has extremely important implications for solving the differential equations governing the behavior of physical systems. If the position has to be continuous across two different regions, then what you do is solve the differential equation separately in those two regions, and piece together the solutions at the boundary using your arbitrary constants. That is, you force the functions to be continuous at the boundary between the two regions.



5. Derivatives

What is a derivative? The derivative $f'(x)$ of a function $f(x)$, intuitively, is another function such that the derivative's value at $x$ is equal to the slope of the tangent line to $f$ at $x$. We've already seen in earlier posts that tangent lines don't have to exist. We've also seen examples where there are too many possibilities for a "line that touches only at one point." Evidently, we need to redefine tangent line.

5.1 Tangent Line to a Function at a Point

The tangent line to a function $f(x)$ at $x=a$ is the straight line with slope
$$m= \lim_{h\to 0} \frac{f(a+h)-f(a)}{h}$$
and going through the point $(a,f(a))$, provided the limit $m$ exists.
Alternatively, you could write
$$m=\lim_{x\to a} \frac{f(x)-f(a)}{x-a}.$$
The translation between these two ways of writing the slope is that $h=x-a$.
This second way of writing the slope gives us the geometrical intuition that the slope of a tangent line is the limit of slopes of secant lines. If I left off the limit, I'd have the slope of a secant line that intersects $f$ at $(x,f(x))$ and $(a,f(a))$. So when I take the limit, I'm letting one of those two points approach the other along the path described by $f$. However, as it turns out, the first formula for the slope is usually easier to compute - we'll be using it considerably more than the other formula.

5.1.1 Tangent Line Example

Find the equation of the tangent line to the function $x^{2}$ at $x=3$.

I only use one equation for a line: $y=mx+b$. All the formulas for lines are equivalent, so I try to minimize the stock of formulas I need to memorize, and derive anything else needed on the spot. To find the equation of a line, there are two methods. One is to know two points on the line. The other is to know one point as well as the slope. The latter is what we're going to do here. Now for the slope, we use the formula
$$m=\lim_{h\to 0} \frac{(3+h)^{2}-3^{2}}{h}= \lim_{h\to 0} \frac{3^{2}+6h+h^{2}-3^{2}}{h} = \lim_{h\to 0} \frac{6h+h^{2}}{h} = \lim_{h\to 0}\frac{h(6+h)}{h} = \lim_{h\to 0}(6+h)=6.$$
There's our slope. So the equation of our tangent line so far is $y=6x+b.$ To get the point on the line and thus $b$, we note that the tangent line does indeed need to have the same coordinates as the function does at the point in question, so the point $(3,3^{2})=(3,9)$ is on the tangent line. Plugging this into our formula yields $9=6\cdot 3+b$, which implies $b=9-18$, or $b=-9$. Hence, the equation of the tangent line is $y=6x-9$. Does this make sense? Here's a plot of the function and the tangent line that we just worked out. It seems to work very nicely.

5.1.2 Second Tangent Line Example

Let's try another one: find the tangent line to $f(x)=\sin(x)$ at the point $x=\pi/4$.

For slope, we take the limit
$$\lim_{h\to 0}\frac{\sin(\pi/4+h)-\sin(\pi/4)}{h}=\lim_{h\to 0}\frac{\sin(\pi/4)\cos(h)+\sin(h)\cos(\pi/4)-\sin(\pi/4)}{h}$$
$$=\cos(\pi/4)\underbrace{\lim_{h\to 0}\frac{\sin(h)}{h}}_{\text{This is }1}+\sin(\pi/4) \underbrace{\lim_{h\to 0}\frac{\cos(h)-1}{h}}_{\text{This is }0}=\cos(\pi/4)=\frac{\sqrt{2}}{2}.$$
There's our slope. Thus, the equation for the tangent currently looks like
$$y=\frac{\sqrt{2}}{2}\,x+b.$$
Now for the point on the line: we have $(\pi/4,\sin(\pi/4))=(\pi/4,\sqrt{2}/2)$. This implies that
$$\frac{\sqrt{2}}{2}=\frac{\sqrt{2}}{2}\cdot\frac{\pi}{4}+b \implies b=\frac{\sqrt{2}}{2}\left(1-\frac{\pi}{4}\right)=\frac{\sqrt{2}(4-\pi)}{8}.$$
It follows that the equation for the tangent line is
$$y=\frac{\sqrt{2}}{2}\,x+\frac{\sqrt{2}(4-\pi)}{8}.$$

Here's a plot of the original function and the tangent line. Note again that this tangent line does touch the original function in more than one place. However, it's that "hugging" the original function that really determines whether or not it's a tangent line at a particular intersection.

5.1.3 Tangent Line Procedure

Finding the equation of a tangent line is the following process:

1. Find the slope of the function at the desired point using the limit above. (Note: we're going to simplify this later on with some theorems.)

2. Use the point $(a,f(a))$ as a point on the line, in order to find the intercept.

3. Write out your line equation.

That does it, then. We've essentially solved one of the two main problems of calculus: finding tangent lines to functions.
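The three-step procedure translates directly into code. Here's a Python sketch of mine that approximates step 1 with a small-$h$ difference quotient (the tutorial computes that limit exactly, of course) and then carries out steps 2 and 3:

```python
import math

def tangent_line(f, a, h=1e-6):
    """Return (m, b) for the tangent y = m*x + b to f at x = a.
    Step 1: slope via a symmetric difference quotient (approximate).
    Step 2: use the point (a, f(a)).  Step 3: solve for the intercept."""
    m = (f(a + h) - f(a - h)) / (2 * h)
    b = f(a) - m * a
    return m, b

m1, b1 = tangent_line(lambda x: x**2, 3)      # first worked example: y = 6x - 9
m2, b2 = tangent_line(math.sin, math.pi / 4)  # second example: slope sqrt(2)/2
```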

5.2 Derivative as a Function

However, that does not mean we're done talking about derivatives. We've got loads more to say about those. For one thing, calculating these limits is getting rather tedious, I'm sure you'd agree. Wouldn't it be nice to be able to compute these slopes a lot more easily? One aspect of this is that we'd like to be able to generate a lot of slopes by calculating just one limit. This we call the derivative. It's a function whose value at any point is the slope of the original function.

The way to do this is to leave the point arbitrary at which we're computing the slope. How do we do that? With this definition:

Given a function $f(x)$, the derivative of $f(x)$, denoted $f'(x)$ or $\frac{df}{dx}$ is given by
$$f'(x)=\frac{df}{dx}=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=\lim_{z\to x}\frac{f(z)-f(x)}{z-x}.$$
Here again, $h=z-x$. The second expression there, with $z\to x$, is again the expression giving better geometric insight: we're still just doing limits of slopes of secant lines.

What's especially important to remember here is that the result of this limit is a function. We don't get a number this time, because there are still $x$'s floating around. This function generates all the slopes we could ever want relating to tangent lines to $f(x)$.

5.2.1 A Derivative Example

Compute the derivative of $f(x)=x^{2}$.
$$f'(x)=\lim_{h\to 0}\frac{(x+h)^{2}-x^{2}}{h}=\lim_{h\to 0}\frac{x^{2}+2xh+h^{2}-x^{2}}{h}=\lim_{h\to 0}\frac{2xh+h^{2}}{h}=\lim_{h\to 0}\frac{h(2x+h)}{h}=\lim_{h\to 0}(2x+h)=2x.$$

5.2.2 Another Derivative Example

How about a more ambitious project? Compute the derivative of $f(x)=x^{n}$. Well, we still go back to our definition:
$$f'(x)=\lim_{h\to 0}\frac{(x+h)^{n}-x^{n}}{h}.$$
What do we do now? We can expand the $(x+h)^{n}$ using the binomial theorem thus:
$$(x+h)^{n}=\sum_{j=0}^{n}{n\choose j}x^{j}h^{n-j}.$$
Here the ${n\choose j}$ notation means
$${n\choose j}\equiv \frac{n!}{j!(n-j)!},$$
where $n!=n(n-1)(n-2)\dots(2)(1)$ is the factorial notation. The easiest way of thinking about this is that the coefficients are picked off of a row of Pascal's Triangle. So, plugging this result into what we have above, we get
$$\lim_{h\to 0}\frac{(x+h)^{n}-x^{n}}{h}=\lim_{h\to 0}\frac{\sum_{j=0}^{n}{n\choose j}x^{j}h^{n-j}-x^{n}}{h} =\lim_{h\to 0}\frac{\sum_{j=0}^{n-1}{n\choose j}x^{j}h^{n-j}}{h}.$$
Every term in the numerator here has an $h$ in it. So we factor that out:
$$=\lim_{h\to 0}\frac{h\sum_{j=0}^{n-1}{n\choose j}x^{j}h^{n-j-1}}{h}=\lim_{h\to 0}\left(\sum_{j=0}^{n-1}{n\choose j}x^{j}h^{n-j-1}\right).$$
Now every term except the $j=n-1$ term has an $h$ in it. So the limit becomes
$$={n\choose n-1}x^{n-1}=\frac{n!}{1!(n-1)!}\,x^{n-1}=nx^{n-1}.$$

There's an easier way to compute this derivative, using the alternate definition
$$f'(x)=\lim_{z\to x}\frac{f(z)-f(x)}{z-x}.$$
I'll leave that one to you. Verify that you get the same result I just did.

Another important thing to note is that the formula $(x^{n})'=nx^{n-1}$ works for all $n\in\mathbb{R}$. So $(x^{\pi})'=\pi x^{\pi-1}$.
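You can spot-check the power rule, including non-integer exponents, against a numerical derivative. A Python sketch of mine:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Symmetric difference quotient as a stand-in for the limit."""
    return (f(x + h) - f(x - h)) / (2 * h)

power_rule = lambda n, x: n * x**(n - 1)

check_5 = numeric_derivative(lambda x: x**5, 2.0)         # compare to 5 * 2^4 = 80
check_pi = numeric_derivative(lambda x: x**math.pi, 2.0)  # compare to pi * 2^(pi-1)
```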

5.3 Derivative Theorems

But we're still hungry for more! We want to be able to compute the derivatives of just about any function we see. So, here are some derivative theorems that will make your life easier.

5.3.1 Addition and Subtraction Theorem for Derivatives

This is what you'd expect:
$$\frac{d}{dx}(f(x)\pm g(x))=\frac{df(x)}{dx}\pm\frac{dg(x)}{dx}.$$
The derivative of a sum or difference is the sum or difference of the derivatives.

So, the limit of a sum or difference is the sum or difference of the limits. The derivative of a sum or difference is the sum or difference of the derivatives. The limit of a multiplication or division is the multiplication or division of the limits, with some provisos. So, the derivative of a multiplication or division is the multiplication or division of the derivatives, right?

Wrong. If you thought that, you'd be in good company, though. Leibniz, one of the two co-discoverers of calculus, thought the same way, until he started working with that assumption. It didn't pan out. Here's the

5.3.2 Multiplication Theorem for Derivatives

We have that
$$\frac{d}{dx}(f(x)\,\cdot g(x))=f'(x)\cdot g(x)+f(x)\cdot g'(x).$$
That is, to compute the derivative of a product, you write out two copies of the product, and take a different derivative each time. Can you think what the derivative of a triple product is? That is, what's the derivative of $f(x)\,g(x)\,h(x)$? Once you think you've got it, check it out against the answer below:
$$\frac{d}{dx}(f(x)\,g(x)\,h(x))=f'(x)\,g(x)\,h(x)+f(x)\,g'(x)\,h(x)+f(x)\,g(x)\,h'(x).$$
For fun (or punishment), try the derivative of a product of four functions!

As a Corollary to the Multiplication Theorem, we have the important Constant Multiple Rule:
$$\frac{d}{dx}(c\,f(x))=c\,\frac{d}{dx}\,f(x).$$
You can pull constants out of derivatives. See if you can prove this corollary from the Multiplication Rule and what the derivative of a constant function must be.
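Here's a numeric illustration (a Python sketch of mine) that the product rule gives the right answer while the naive guess $f'\cdot g'$ does not, using $f=\sin$ and $g=\exp$:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Symmetric difference quotient as a stand-in for the limit."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.3
measured = numeric_derivative(lambda t: math.sin(t) * math.exp(t), x)
product_rule = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
naive_guess = math.cos(x) * math.exp(x)   # f'(x) * g'(x): Leibniz's first (wrong) guess
```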



5.3.3 Division Theorem for Derivatives

Some people are down on this theorem, because it's technically unnecessary. And it's true. You can get by without it. However, I find it useful because if I'm taking the derivative of a quotient by using the product rule, I'm usually going to have to do more algebra in the simplification because I'm going to have to find a common denominator. However, the quotient rule saves me that step. So your alternatives are, memorize one thing less but do more error-prone algebra, or remember one (fairly easy) rule and do less algebra. I opt for the latter. So here it is, along with a neat mnemonic device for remembering it:
$$\frac{d}{dx}\,\frac{f(x)}{g(x)}=\frac{g(x)f'(x)-f(x)g'(x)}{g^{2}(x)},$$
or, in words: "lo dee-high minus high dee-low, over the square of what's below."

5.4 Derivative Examples

5.4.1 Polynomial Example

Compute the derivative of $f(x)=5x^{4}-3x^{2}+7x-2$. Well, we can use the addition/subtraction theorems and differentiate term-by-term:
$$\frac{d}{dx}(5x^{4}-3x^{2}+7x-2)= \frac{d}{dx}(5x^{4})+ \frac{d}{dx}(-3x^{2})+ \frac{d}{dx}(7x)+ \frac{d}{dx}(-2).$$
Now we can use the constant multiple rule:
$$=5\frac{d}{dx}\,x^{4}-3 \frac{d}{dx}\,x^{2}+7 \frac{d}{dx}\,x-2 \underbrace{\frac{d}{dx}\,1}_{=0.\;\text{Why?}} =5(4x^{3})-3(2x)+7=20x^{3}-6x+7.$$

Shortcut to differentiating polynomials: Just multiply the coefficients and the exponents term-wise, and reduce the exponents by one. You can just write down the answer:
$$(5x^{4}-3x^{2}+7x-2)'=(5\cdot 4)x^{3}-(3\cdot 2)x+7=20x^{3}-6x+7.$$
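That shortcut is mechanical enough to write as code. Here's a hypothetical little helper of my own, storing a polynomial as a list of coefficients from lowest power to highest:

```python
def poly_deriv(coeffs):
    # coeffs[k] is the coefficient of x^k; multiply each coefficient
    # by its exponent, then drop the constant term to shift exponents down.
    return [k * c for k, c in enumerate(coeffs)][1:]

# f(x) = 5x^4 - 3x^2 + 7x - 2, stored lowest power first:
f = [-2, 7, -3, 0, 5]
print(poly_deriv(f))  # [7, -6, 0, 20], i.e. 20x^3 - 6x + 7
```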

5.4.2 Multiplication Example

Differentiate $f(x)=x^{3}\sin(x)+2x^{2}e^{x}$.

First, we recognize that we can differentiate the "big sum" term-by-term. After that, we need to use the product rule twice:
$$f'(x)=(x^{3}\sin(x))'+2(x^{2}e^{x})'.$$
You'll see that I've already used the constant multiple rule on the second expression. The product rule gives us
$$=(x^{3})'\sin(x)+x^{3}(\sin(x))'+2[(x^{2})'e^{x}+x^{2}(e^{x})'].$$
Now we just compute basic derivatives we already know:
$$=3x^{2}\sin(x)+x^{3}\cos(x)+2[2xe^{x}+x^{2}e^{x}] =3x^{2}\sin(x)+x^{3}\cos(x)+2xe^{x}(x+2).$$

5.4.3 Division Example

Differentiate $f(x)=(x^{3}-x)/e^{x}$.

Using the quotient rule, we get
$$f'(x)=\frac{e^{x}(x^{3}-x)'-(x^{3}-x)(e^{x})'}{e^{2x}}=\frac{e^{x}(3x^{2}-1)-(x^{3}-x)e^{x}}{e^{2x}}= \frac{e^{x}(-x^{3}+3x^{2}+x-1)}{e^{2x}}=\frac{-x^{3}+3x^{2}+x-1}{e^{x}}.$$

5.5 Chain Rule

There is one more important operation you can do with functions: compose them. A function, if you remember, tells you to "do something" to the input. If I have $f(x)=x^{2}$, then $f$ is telling me to square the input. But after I've squared the input, there's nothing stopping me from doing something else to the result by slapping on another function. Suppose I want to take the sin of the result: I'd write a new function $g=\sin\circ f$, or $g(x)=\sin(f(x))=\sin(x^{2})$. This is "composition of functions". How do you differentiate it? Well, as it turns out, composition of functions leads to multiplication of derivatives. Here's the rule:

$$\frac{d}{dx}(f(g(x)))=\frac{d}{dg}(f(g(x)))\cdot \frac{d}{dx}(g(x)).$$
A short-hand way of writing it is that
$$\frac{df}{dx}=\frac{df}{dg}\,\frac{dg}{dx}.$$
You can remember the rule by thinking that it looks like the $dx$'s cancel. They do not cancel. It's just a mnemonic device.

What this means is that you take the derivative of the outer function with respect to the inner function and evaluate the result at the inner function. You then multiply that by the derivative of the inner function with respect to its independent variable and evaluate at that independent variable.

5.5.1 Chain Rule Example 1

Let's take one we already mentioned: differentiate $\sin(x^{2})$. The outer function is $\sin$, and the inner function is $x^{2}$. Taking the derivative of the outer function with respect to the inner function means I need to take the derivative of the $\sin$ function with respect to its argument. That's $\cos$. I evaluate that at the inner function. So far, then, I have $\cos(x^{2})$. Then I multiply that by the derivative of the inner function, which is $2x$. The result is $2x\cos(x^{2})$. So
$$(\sin(x^{2}))'=2x\cos(x^{2}).$$

5.5.2 Chain Rule Example 2

Let's try a triply-composed function, and differentiate $f(x)=\sin\big(e^{x^{2}}\big).$ You go from the outside in:
$$f'(x)= \underbrace{ \cos \big(e^{x^{2}} \big)}_{ \text{outer}} \cdot \underbrace{e^{x^{2}}}_{ \text{middle}} \cdot \underbrace{(2x)}_{ \text{inner}}=2xe^{x^{2}} \cos \big(e^{x^{2}} \big).$$

Important note: how do I know when to stop multiplying by derivatives? For the example we just did, why wouldn't I multiply by the derivative of $2x$? The answer is that you stop multiplying when you hit the last composition of functions. Another way to think about it is that you stop when you get to a "basic" derivative. In this last example, when we got to where we needed to differentiate $x^{2}$, which was the "innermost function", we took that derivative and stopped, because it was a basic derivative.
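As with the product rule, a chain rule computation is easy to spot-check numerically. A minimal sketch of my own, comparing a central-difference approximation against the answer from Example 2:

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: math.sin(math.exp(x ** 2))
# The chain rule answer from the example above:
fprime = lambda x: 2 * x * math.exp(x ** 2) * math.cos(math.exp(x ** 2))

x = 0.7
print(deriv(f, x), fprime(x))  # the two values agree closely
```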

5.5.3 Quotient Example Revisited

Let's do a previous example (5.4.3) another way:
$$\frac{d}{dx}\,\frac{x^{3}-x}{e^{x}}=\frac{d}{dx}\left(e^{-x}(x^{3}-x)\right)= \left(e^{-x}\right)'\left(x^{3}-x\right)+\left(e^{-x}\right)\left(x^{3}-x\right)'= e^{-x}\left(x^{3}-x\right)\frac{d}{dx}(-x)+e^{-x}(3x^{2}-1)$$
$$=e^{-x}(x-x^{3}+3x^{2}-1),$$
which is the same answer we got above. A good thing, too!

5.6 Implicit Differentiation

It's almost a sacred rule that once you've done the chain rule, you do implicit differentiation. We will follow that here.

5.6.1 Implicit Function

What is an implicit function? It's a function defined by an equation where you haven't "solved for" the function. For example, in the equation $y^{2}+x^{2}=1$, the function $y=y(x)$ is implicit. Naturally, we could solve explicitly for $y$, though we wouldn't end up with a function on account of having non-unique $y$ values for a particular $x$ value. But what if you have an equation that's just impossible? How about
$\tan(y)-\ln(y+x)+x^{2}=y$? I hope none of you try to solve that one for $y$ - I don't think it's possible. For that matter, here's an equation for which it has been proven (the Abel-Ruffini theorem) that no general solution in radicals exists: $Ay^{5}+By^{4}+Cy^{3}+Dy^{2}+Ey=x$, for general coefficients. So implicitly defined functions do pop up from time to time. Sometimes the only solution to a differential equation that you can get is implicitly defined.

5.6.2 Differentiating Implicitly Defined Functions

It is useful to be able to differentiate implicitly defined functions, and we can do so with the aid of the chain rule: when you differentiate with respect to $x$, and you see a function of $y$, multiply by $y'$.

5.6.3 Implicit Differentiation Example 1

Find $y'$ if $y^{2}+x^{2}=1$. Remember that we're thinking of $y=y(x)$ as a function of $x$. So we have
\begin{align*}
\frac{d}{dx} ( y^{2}+x^{2}&=1 ) \\
\implies \frac{d}{dx}(y^{2}+x^{2})&=\frac{d}{dx}\,1\\
\implies \frac{d}{dx}\,y^{2}+\frac{d}{dx}\,x^{2}&=0\\
\implies 2yy'+2x&=0\\
\implies yy'&=-x\\
\implies y'&=-\frac{x}{y},
\end{align*}
assuming $y\not=0$.
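On the upper branch of the circle we can solve explicitly, $y=\sqrt{1-x^{2}}$, so we can spot-check the implicit answer $y'=-x/y$ numerically (the helper names here are my own):

```python
import math

y = lambda x: math.sqrt(1 - x ** 2)   # upper semicircle branch

def deriv(f, x, h=1e-6):
    # Central-difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.6
# At x = 0.6, y = 0.8, so the implicit formula predicts -x/y = -0.75.
print(deriv(y, x), -x / y(x))  # both ≈ -0.75
```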

5.6.4 Implicit Differentiation Example 2

Let's try the rather complicated example I gave earlier: $\tan(y)-\ln(y+x)+x^{2}=y.$ Before we begin, we will need to compute the derivative of $\cos(x)$. We have
$$\frac{d}{dx}\,\cos(x)=\lim_{h\to 0}\frac{\cos(x+h )-\cos(x)}{h}=\lim_{h\to 0}\frac{\cos(x)\cos( h )-\sin(x)\sin( h )-\cos(x)}{h}$$
$$=\cos(x)\lim_{h\to 0}\frac{\cos( h )-1}{h}-\sin(x)\lim_{h\to 0}\frac{\sin( h )}{h}=\cos(x)\cdot 0-\sin(x)\cdot 1$$
$$=-\sin(x),$$
using the two standard limits $\lim_{h\to 0}\frac{\cos(h)-1}{h}=0$ and $\lim_{h\to 0}\frac{\sin(h)}{h}=1$.
Combining this with $(\sin(x))'=\cos(x)$, you can show that the differentiation chain for $\sin(x)$ goes like this: $\sin\to\cos\to-\sin\to-\cos\to\sin$, and so on. This will be something you will eventually have ingrained into your fibers!

So we have
\begin{align*}
\frac{d}{dx}[\tan(y)-\ln(y+x)+x^{2}&=y]\\
\implies \frac{d}{dx}\,\tan(y)-\frac{d}{dx}\,\ln(y+x)+\frac{d}{dx}\,x^{2}&=\frac{d}{dx}\,y\\
\implies \frac{d}{dx}\,\frac{\sin(y)}{\cos(y)}-\frac{1}{y+x}\frac{d}{dx}(y+x)+2x&=y'\\
\implies \frac{\cos(y)(\sin(y))'-\sin(y)(\cos(y))'}{\cos^{2}(y)}-\frac{y'+1}{y+x}+2x&=y'\\
\implies \frac{\cos^{2}(y)\,y'+\sin^{2}(y)\,y'}{\cos^{2}(y)}-\frac{y'}{y+x}-\frac{1}{y+x}+2x&=y'.
\end{align*}
At this point, the calculus is all done, and all you need do is solve for $y'$, which you can do with standard algebraic techniques. Don't forget common denominators! Check your work against a CAS if you like.
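For the record, here is my own completion of that algebra (check it against the steps above!). Collecting the $y'$ terms, and using $\sec^{2}(y)-1=\tan^{2}(y)$:
\begin{align*}
\sec^{2}(y)\,y'-\frac{y'}{y+x}-\frac{1}{y+x}+2x&=y'\\
\implies y'\left(\sec^{2}(y)-\frac{1}{y+x}-1\right)&=\frac{1}{y+x}-2x\\
\implies y'&=\frac{\dfrac{1}{y+x}-2x}{\tan^{2}(y)-\dfrac{1}{y+x}},
\end{align*}
valid wherever the denominator is nonzero.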

5.7 Some Other Derivatives

5.7.1 Logarithm Function

This one's weird, no doubt about it. You can prove, though, that
$$\frac{d}{dx}\,\ln(x)=\frac{1}{x}.$$

5.7.2 Other Trigonometric Functions

We already sort of did the tangent function. However, if you remember how all your six basic trig functions are defined in terms of $\sin$ and $\cos$, and you know how to differentiate those, it's a matter of applying the quotient rule:
\begin{align*}
\frac{d}{dx}\,\tan(x)&=\frac{d}{dx}\,\frac{\sin(x)}{\cos(x)}\\
&=\frac{\cos(x)\,\cos(x)-\sin(x)(-\sin(x))}{\cos^{2}(x)}\\
&=\frac{\cos^{2}(x)+\sin^{2}(x)}{\cos^{2}(x)}\\
&=\sec^{2}(x).
\end{align*}
See if you can do $(\csc(x))',\;(\sec(x))',\;(\cot(x))'$. Just use the same tricks I showed you.
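If you'd like to check your answers, here's a numerical spot-check of the standard results (a sketch of my own; the helper names are arbitrary):

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation to f'(x).
    return (f(x + h) - (f(x - h))) / (2 * h)

sec = lambda x: 1 / math.cos(x)
csc = lambda x: 1 / math.sin(x)
cot = lambda x: math.cos(x) / math.sin(x)

x = 0.9
err_sec = abs(deriv(sec, x) - sec(x) * math.tan(x))   # (sec x)' = sec x tan x
err_csc = abs(deriv(csc, x) + csc(x) * cot(x))        # (csc x)' = -csc x cot x
err_cot = abs(deriv(cot, x) + csc(x) ** 2)            # (cot x)' = -csc^2 x
print(err_sec, err_csc, err_cot)  # all tiny
```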

Derivatives III Draft

5.7.3 Hyperbolic Trigonometric Functions

You may well not have seen these before, but they come up regularly because, although they are essentially just exponential functions, there are some nice identities that can allow you to simplify expressions sometimes. First, the definitions:
\begin{align*}
\sinh(x)&\equiv\frac{e^{x}-e^{-x}}{2},\text{ pronounced like ``sinch''}\\
\cosh(x)&\equiv\frac{e^{x}+e^{-x}}{2},\text{ pronounced like ``kosh''}\\
\tanh(x)&\equiv\frac{\sinh(x)}{\cosh(x)},\text{ pronounced like ``tanch''}\\
\coth(x)&\equiv\frac{\cosh(x)}{\sinh(x)},\text{ pronounced like ``koth''}\\
\text{csch}(x)&\equiv\frac{1}{\sinh(x)},\text{ pronounced like ``coseech''}\\
\text{sech}(x)&\equiv\frac{1}{\cosh(x)},\text{ pronounced like ``seech''}.
\end{align*}

Now a couple of useful identities (try proving them on your own using the definitions!):

\begin{align*}
\sinh(x+y)&=\cosh(x)\sinh(y)+\cosh(y)\sinh(x)\\
\cosh(x+y)&=\sinh(x)\sinh(y)+\cosh(x)\cosh(y).
\end{align*}

Now we do the derivatives using the definitions and the chain rule. Let's try the derivative of $\sinh(x)$:
\begin{align*}
\frac{d}{dx}\,\sinh(x)&=\frac{d}{dx}\,\frac{e^{x}-e^{-x}}{2}\\
&=\frac{1}{2}\left(\frac{d}{dx}(e^{x})-\frac{d}{dx}(e^{-x})\right)\\
&=\frac{1}{2}(e^{x}-e^{-x}(-1))\\
&=\frac{1}{2}(e^{x}+e^{-x})\\
&=\cosh(x).
\end{align*}
Similarly, you can prove that $(\cosh(x))'=\sinh(x)$. So the hyperbolic derivative chain is shorter than the usual trig function chain. See if you can find the derivatives of the rest of the hyperbolic trig functions. I'll do, oh, $\coth(x)$ for you:
\begin{align*}
\frac{d}{dx}\,\coth(x)&=\frac{d}{dx}\,\frac{\cosh(x)}{\sinh(x)}\\
&=\frac{\sinh^{2}(x)-\cosh^{2}(x)}{\sinh^{2}(x)}\\
&=\frac{(1/4)[(e^{x}-e^{-x})^{2}-(e^{x}+e^{-x})^{2}]}{\sinh^{2}(x)}\\
&=\frac{e^{2x}-2+e^{-2x}-(e^{2x}+2+e^{-2x})}{4\,\sinh^{2}(x)}\\
&=-\text{csch}^{2}(x).
\end{align*}
In the process, if you were alert, you might have noticed that I proved that
$$\sinh^{2}(x)-\cosh^{2}(x)=-1,\quad\text{or}\quad \cosh^{2}(x)-\sinh^{2}(x)=1.$$
This is the hyperbolic analogue of the Pythagorean identity $\cos^{2}(x)+\sin^{2}(x)=1$.
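Python's `math` module has `sinh` and `cosh` built in, so both facts are easy to spot-check numerically (the `deriv` helper is my own):

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation to f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.2
err = abs(deriv(math.sinh, x) - math.cosh(x))      # (sinh x)' = cosh x
identity = math.cosh(x) ** 2 - math.sinh(x) ** 2   # should be 1
print(err, identity)
```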

Applications of Derivatives I Draft

6. Applications of Derivatives

There are many applications of derivatives; by no means am I going to be able to show you all of them. I shall start with graphing and optimization, related rates, position-velocity-acceleration-jerk, and maybe do a real-world example or two (taken from my actual work).

6.1 Graphing

How do you know, when you're graphing a function, how a function really behaves in a certain region? You can try zooming in, like I did in the first post, but might there not be a nagging question in your mind: how can I be certain this is how it behaves? No amount of zooming in can give you that certainty. No amount of technology can give you that certainty (unless you apply the principles I'm about to show you by using technology!). Instead, it is the application of calculus that can prove, beyond the shadow of a doubt, that that function really is smooth there, or has a particular asymptote, or has a maximum here or there. What are some features of functions that calculus can pick out? We will work from the more obvious features like asymptotes and regions of increase or decrease, to more subtle features like concavity and points of inflection. At the end, I will give you a procedure for graphing a function, which will include a sub-procedure for determining local and global extrema.

6.1.1 Asymptotes

There are a number of different kinds of asymptotes, all coming back to the kinds of straight lines. You can have horizontal asymptotes (function approaching a straight horizontal line), vertical asymptotes (function approaching a straight vertical line), and slant asymptotes (function approaching a straight line that is neither vertical nor horizontal).

All of these are of practical interest. For example, I recently needed to find the predicted final temperature of a component in a system, because it had a habit of giving me thermal shutdowns when it got too hot. Using Newton's Law of Cooling, which gives you a fairly straight-forward differential equation to solve, I was able to find the predicted final temperature values, given current trends. So then I could determine whether, under differing circumstances, the component would end up too hot and shut down on me or not. The final temperature, as predicted by Newton's Law of Cooling, is a horizontal asymptote of the temperature as a function of time.

6.1.1.1 Horizontal Asymptotes

For a normal, single-valued function, there are at most two unique horizontal asymptotes (why?). You find them by computing the following two limits, if they exist:

$$\lim_{x\to-\infty}f(x)\quad\text{and}\quad\lim_{x\to+\infty}f(x).$$

What do these limits mean? Well, here's the equivalent of the delta-epsilon definition, for the limit at positive infinity:

$$\lim_{x\to\infty}f(x)=L\quad\text{iff}\quad \forall\epsilon>0,\; \exists M>0\;\text{such that if}\;x>M,\;\text{then}\;|f(x)-L|<\epsilon.$$
In this case, you think of $M$ as a large, positive number. Can you think up the definition for the limit at negative infinity?

Can you think of a box game for these limits?

Example: $f(x)=1/x$. Here we have
$$\lim_{x\to\pm\infty}\frac{1}{x}=0,$$
so there is one horizontal asymptote for this function: $y=0$. This example, by the way, you can regard as a little tool for helping you to compute limits at infinity (which is what these limits are). If you have a bounded value divided by something that's getting large without bound, the limit will be zero. And that makes sense, right? You're dividing by bigger and bigger numbers.

Example: $f(x)=(x^{2}+2)/(x(x^{2}-4))$. The rule-of-thumb is this: when computing horizontal asymptotes of rational functions (ratio of two polynomials), you want the two polynomials to be fully expanded. When computing vertical asymptotes of rational functions, you want the two polynomials to be fully factored. Here, we're trying to compute limits at infinity for horizontal asymptotes. So, we multiply everything out and expand thus: $f(x)=(x^{2}+2)/(x^{3}-4x)$. The next step for horizontal asymptotes of rational functions is to divide top and bottom by the highest power of $x$ in the denominator. That's $x^{3}$. So we get this:
$$f(x)=\frac{\frac{1}{x}+\frac{2}{x^{3}}}{1-\frac{4}{x^{2}}}.$$
Now we remember our limit theorems: division, multiplication, addition, and subtraction rules. The bottom line is that all the terms that look like a number over a power of $x$ go to zero. The denominator does not go to zero, though, because there's that $1$ there preventing it. So the overall thing is going to zero as $x\to\infty$. As it turns out, it doesn't matter in this case which infinity you approach, the fraction as a whole is going to zero.

Example: $f(x)=(x^{3}-5)/(x^{3}+2)$. This is already multiplied-out. We do the same trick as before to compute
$$\lim_{x\to\pm\infty}\frac{1-\frac{5}{x^{3}}}{1+\frac{2}{x^{3}}}=1.$$
So this function has a horizontal asymptote at $y=1$.

Example: $f(x)=\tan^{-1}(x)$. The arctangent function is a very well-behaved function. It has two different horizontal asymptotes:
$$\lim_{x\to\infty}\tan^{-1}(x)=\frac{\pi}{2},\quad \text{and} \quad \lim_{x\to-\infty}\tan^{-1}(x)=-\frac{\pi}{2}.$$
The horizontal asymptotes are $y=\pm\frac{\pi}{2}$.
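You can watch these limits happen numerically. A quick sketch of my own, plugging ever-larger inputs into two of the examples above:

```python
import math

f = lambda x: (x ** 3 - 5) / (x ** 3 + 2)
for x in (10.0, 1e3, 1e6):
    print(x, f(x))       # values creep toward the horizontal asymptote y = 1

print(math.atan(1e9))    # creeps toward pi/2 ≈ 1.5707963...
```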

6.1.1.2 Vertical Asymptotes

Vertical asymptotes are places where a function "blows up". Or down. We can say that $f(x)$ has a vertical asymptote at $x=a$ if
$$\lim_{x\to a^{\pm}}f(x)=\pm\infty.$$
The direction in which we approach $a$ is not correlated with the sign of infinity in this case. So there are four possibilities here, any one of which constitutes a vertical asymptote.
Let me give you one of these definitions, and see if you can work out the other three. We say that
$$\lim_{x\to a^{+}}f(x)=-\infty \quad \text{iff} \quad \forall M<0,\;\exists \delta>0\;\text{such that if}\;0<x-a<\delta,\;\text{then}\; f(x)<M.$$
Here we think of $M$ being a large (in magnitude) negative number. Notice that in this definition, we do not have magnitude signs around the $x-a$. That reflects the direction in which we are approaching $a$ in this definition.

Can you think of a box game for these limits?

Example: $f(x)=1/(x-2)$. We can tell by looking at this function that $a=2$ is going to be a problem-spot. And indeed it is a vertical asymptote, because in the limit, we are dividing a finite, fixed number by something arbitrarily small. The result gets arbitrarily big.

Example:
$$f(x)=\begin{cases}1/x,\quad& x>0\\ -1,\quad& x\le 0\end{cases}.$$
Here's an example where the function blows up as you approach $a=0$ from the right, but is nicely behaved as you approach $a=0$ from the left. It's even "continuous from the left"! By that, I just mean that
$$\lim_{x\to 0^{-}}f(x)=f(0).$$
This function, though, does have a vertical asymptote at $a=0$, because
$$\lim_{x\to 0^{+}}f(x)=\infty.$$

Example: $f(x)=\tan(x)$. This function has loads of vertical asymptotes - infinitely many, in fact. They occur everywhere $\cos(x)=0$, because, as you recall, $\tan(x)=\sin(x)/\cos(x)$. Now $\cos(x)=0$ at odd-integer multiples of $\pi/2$, that is, at $(2k-1)\pi/2$, for $k\in\mathbb{Z}$. The set $\mathbb{Z}$ is the integers $\dots,-3,-2,-1,0,1,2,3,\dots$, in case you weren't aware. What's also interesting about these vertical asymptotes is that the $\tan$ function "blows down" as you approach an asymptote from the right, and it "blows up" as you approach from the left.

Example: $f(x)=(x^{2}-3x+2)/(x^{2}+x-6)$. Remember our rule of thumb: for horizontal asymptotes, you multiply the polynomials out. (By the way, if the degrees of the two polynomials in a rational function are equal, then the horizontal asymptote is just the ratio of the leading coefficients; so in this case, you can see it has a horizontal asymptote at $y=1$.) For vertical asymptotes, you factor. Factoring this rational function yields
$$f(x)=\frac{(x-1)(x-2)}{(x+3)(x-2)}.$$
It might be tempting to think that you have two vertical asymptotes, one at $x=-3$, and one at $x=2$. But you'd be wrong. You only have one vertical asymptote. This function has a removable discontinuity (just one point "missing" from its graph) at $x=2$, because the same factor, to the same power, lives in the numerator.

So suppose you have factored a rational function completely, and you have some identical factors in the numerator and denominator. What happens there? It depends on the powers, and their relative size. There are three possibilities, corresponding to the law of trichotomy. Suppose we are examining
$$f(x)=\frac{(x-a)^{s}}{(x-a)^{t}}.$$

Case 1: The power in the numerator is greater than the power in the denominator. That is, $s>t$. Then
$$\lim_{x\to a}f(x)=0.$$

Case 2: The power in the numerator is equal to the power in the denominator. That is, $s=t$. Then there is a removable discontinuity at $x=a$.

Case 3: The power in the numerator is less than the power in the denominator. That is, $s<t$. Then the function blows up (or down) there. It has a vertical asymptote there.

I should point out that in none of these cases is the number $a$ in the domain of $f$.
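A quick numerical look (my own sketch) at the earlier example shows the difference between the removable discontinuity at $x=2$ and the genuine vertical asymptote at $x=-3$:

```python
f = lambda x: ((x - 1) * (x - 2)) / ((x + 3) * (x - 2))

# Near the removable discontinuity at x = 2, values settle toward
# the limit (2 - 1)/(2 + 3) = 0.2:
for x in (1.99, 1.999, 2.001):
    print(x, f(x))

# Near the genuine vertical asymptote at x = -3, values blow up:
for x in (-2.99, -2.999):
    print(x, f(x))
```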

6.1.1.3 Slant Asymptotes

These we define to be straight lines, of nonzero finite slope, such that a function approaches them as $x\to\pm\infty$. That is, $y=mx+b$, with $m\not=0$, is a slant asymptote of $f(x)$ if and only if
$$\lim_{x\to\pm\infty}(f(x)-(mx+b))=0.$$

The typical example is a rational function where the degree of the numerator is one greater than the degree of the denominator.

Example: $f(x)=(x^{2}-1)/(x+5)$. The rule of thumb for slant asymptotes is to do polynomial long division. I get
$$f(x)=x-5+\frac{24}{x+5}.$$
Then you can see that as $x\to\pm\infty$, the fraction gets small, and what's left approaches the straight line $y=x-5$.
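Numerically, the leftover fraction $24/(x+5)$ is exactly the gap between the function and the line (a quick sketch of my own):

```python
f = lambda x: (x ** 2 - 1) / (x + 5)
slant = lambda x: x - 5   # the slant asymptote found by long division

for x in (10.0, 100.0, 1000.0):
    print(x, f(x) - slant(x))   # this gap is 24/(x + 5), shrinking toward 0
```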

Other examples are similar, so I won't bore you.

Applications of Derivatives II Draft

6.1.2 Regions of Increase and Decrease

Next, after finding all asymptotes, we'd like to know regions where the function is increasing or decreasing. The first derivative (the one we've been computing thus far) tells us that. In particular, the sign of the derivative at any point tells us whether the function is increasing or decreasing. Again, we have a trichotomy:

1. If the first derivative is positive at a point, the function is increasing at that point.
2. If the first derivative is zero at a point, the tangent line is horizontal there: instantaneously, the function is neither increasing nor decreasing.
3. If the first derivative is negative at a point, the function is decreasing at that point.

Imagine a car driving along the function from left to right. When it's going uphill, you have a positive derivative. When it's going downhill, the derivative is negative. When it's going flat, the derivative is zero.

Example: The best example for illustrating this is the sine function, whose derivative is the cosine function. Picture the sine curve plotted in black and the cosine curve in red on the same axes. When the black curve (the sine) is increasing, the red curve (the cosine, its derivative) is positive. When the black curve is decreasing, the red curve is negative. And when the black curve is momentarily flat (at its local maxima and minima, in fact), the red curve is zero.

Example: Now let's do an unknown function. How about something a little more challenging? Let's try this function:
$$f(x)=\frac{x^{2}-3x+7}{-9x+11}.$$
The first thing you must do in these problems is to take the derivative. Use your derivative theorems like the quotient rule, sum/difference rules, and constant multiple rules. I get the following:
$$f'(x)=\frac{(-9x+11)(2x-3)-(x^{2}-3x+7)(-9)}{(-9x+11)^{2}}=\frac{-18x^{2}+27x+22x-33+9x^{2}-27x+63}{(-9x+11)^{2}}= \frac{-9x^{2}+22x+30}{(-9x+11)^{2}}.$$
Next, you want to set this derivative equal to zero. The numerator is the only important part when doing so, unless the denominator is also zero at the same place; we know the denominator of the derivative is zero only at $x=11/9$, so we'll tuck that away for future reference. Therefore, we are doing
$$-9x^{2}+22x+30=0\quad\implies\quad 9x^{2}-22x-30=0\quad\implies\quad x=\frac{22\pm\sqrt{22^{2}-4(9)(-30)}}{18}$$
$$=\frac{22\pm\sqrt{1564}}{18}=\frac{22\pm 2\sqrt{391}}{18}=\frac{11\pm\sqrt{391}}{9}.$$

Now the sign of the derivative can change anywhere the derivative is zero or where there's a vertical asymptote (for the latter, consider the function $1/x^{2}$ at the origin). So, we must divide up the real line into four regions:
$$\left(-\infty,\frac{11-\sqrt{391}}{9}\right),\;\left(\frac{11-\sqrt{391}}{9},\frac{11}{9}\right),\;\left(\frac{11}{9},\frac{11+\sqrt{391}}{9}\right), \; \text{and} \; \left(\frac{11+\sqrt{391}}{9},\infty\right).$$
How did I know how to do that? Well, the three points
$$\frac{11}{9}, \; \text{and} \;\frac{11\pm\sqrt{391}}{9}$$
are the three points where the sign of the derivative can change. Evaluating them as decimals, I arranged them in increasing order along the number line and split the real line accordingly. Within each of the four regions, the derivative is not allowed to change sign. So we can just pick a number in each region, evaluate the sign of the derivative there, and we'll know whether the original function is increasing or decreasing on that whole region. I get that the function is decreasing in the first and last regions, and increasing in the two middle regions.
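The sign-checking step is easy to automate. A sketch of my own, using the derivative computed above and one sample point per region:

```python
import math

# The derivative computed above for f(x) = (x^2 - 3x + 7)/(-9x + 11):
fprime = lambda x: (-9 * x ** 2 + 22 * x + 30) / (-9 * x + 11) ** 2

a = (11 - math.sqrt(391)) / 9   # ≈ -0.975
b = 11 / 9                      # ≈  1.222, the vertical asymptote
c = (11 + math.sqrt(391)) / 9   # ≈  3.420

# One sample point per region; True means f is increasing there:
for x in (a - 1, (a + b) / 2, (b + c) / 2, c + 1):
    print(x, fprime(x) > 0)   # False, True, True, False
```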

6.1.3 Extrema

A local extremum (a local maximum or minimum) is a point around which there is some $x$-interval (it might be really small!) on which $f(x)$ attains its largest (or smallest) value at that point. The rigorous definition goes like this:

The function $f(x)$ has a local maximum (minimum) at $x=a$ if and only if there exists a $\delta>0$ such that if $|x-a|<\delta$, then $f(x)\le(\ge)f(a)$.

Note carefully in this definition the less than or equals. So if you consider a horizontal line on some interval, every point in that interval is both a local min and a local max. Weird, huh?

Definition: A critical point of a function $f(x)$ is a point where the function's derivative $f'(x)$ is either zero, or does not exist.

Critical points are, well, critical. They can tell you lots of things, like vertical asymptotes (although you probably didn't need the derivative to tell you that), but more importantly, they can help you find local extrema (local maxima and minima). A theorem of Fermat says that all local extrema of a function must occur at critical points of that function. So this enables you quickly (hopefully) to find all the local extrema of a function: just take the derivative and set it equal to zero, as well as find places where it doesn't exist.

First Derivative Test: You can use the first derivative to determine whether a critical point is a local extremum or not as follows:

1. Suppose the point $c\in(a,b)$ is a critical point of the function $f$, which is defined on the entire interval $(a,b)$. So $f'(c)=0$.
2. Suppose we can construct an interval $(c-\delta,c+\delta)$ about $c$ such that $f'(x)$ is zero, in this interval, only at $c$.
3. Then you've got yourself a local extremum if the product $f'(c-\delta)\cdot f'(c+\delta)<0$. That is, it's a local extremum if the derivative changes sign at $c$.
4. You get a local maximum if the derivative changes from positive to negative (as you go right), and a local minimum if the derivative changes from negative to positive. If the derivative does not change sign in the interval, then $c$ is not a local extremum.

These are sufficient but not necessary conditions. Can you come up with necessary conditions?

Definition: Consider a function $f(x)$ on a closed interval $[a,b]$. The global maximum, if it exists, is a number $M$ such that there exists a $c\in[a,b]$ such that $f(c)=M$ and $f(x)\le M$ for all $x\in [a,b]$.

As usual, see if you can work out the definition for global minimum.

Maximization and minimization (collectively, optimization) are two extremely important problems in real life, and I'm about to show you one method of optimization appropriate for certain kinds of situations. This is by no means a method that will work on every problem. But it's very elegant when it does work.

Here's the method for optimizing a function $f(x)$ on a closed interval $[a,b]$:

1. Find the derivative $f'(x)$ of $f(x)$.
2. Find all critical points of $f(x)$ by setting $f'(x)=0$, and also finding where $f'(x)$ does not exist.
3. Evaluate the function $f(x)$ at all critical points, as well as the boundaries of the interval, $a$ and $b$.
4. Whichever is the greatest of the values found in step 3 is the global max on the interval $[a,b]$, and whichever is the least of the values found in step 3 is the global min on the interval $[a,b]$.

And you're done! Let's do some examples.

Example: Maximize $\sin(x)$ on $[-\pi/2,3\pi/2]$. We take the derivative, which is $\cos(x)$. Setting it equal to zero, we find that $x$ must be odd-integer multiples of $\pi/2$. That is, $x=(2k-1)\pi/2$, for $k\in\mathbb{Z}$. The only three such values in the interval in question are $-\pi/2,\pi/2,$ and $3\pi/2$. The boundaries are already included in this list, so let's just evaluate $\sin(x)$ at these three values. We get $\sin(-\pi/2)=-1$, $\sin(\pi/2)=+1$, and $\sin(3\pi/2)=-1$. Hence, the global max is $1$, occurring at $\pi/2$. You can see that the global min is not unique, as it occurs both at $-\pi/2$ and $3\pi/2$. That's ok.
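The four-step recipe above is easy to express in code. Here's a minimal sketch of my own (the function and helper names are arbitrary), applied to this very example; note that it expects you to supply the critical points from step 2 yourself:

```python
import math

def optimize_on_interval(f, critical_points, a, b):
    # Steps 3-4: evaluate f at the endpoints and at every critical
    # point inside [a, b], then take the largest and smallest values.
    candidates = [a, b] + [c for c in critical_points if a <= c <= b]
    values = {x: f(x) for x in candidates}
    return max(values.values()), min(values.values())

# sin(x) on [-pi/2, 3pi/2]; its derivative cos(x) vanishes at pi/2 inside.
hi, lo = optimize_on_interval(math.sin, [math.pi / 2], -math.pi / 2, 3 * math.pi / 2)
print(hi, lo)  # global max 1.0 (at pi/2), global min -1.0
```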

While we're concerned about uniqueness, it's a bit more important to think about existence: when can we guarantee that a function will have a global max and min? It's not guaranteed that any ol' function will have them. Try $f(x)=1/x$ on the interval $[-1,1]$. There is no global max or min. Or try something simpler: $f(x)=x$ on the open interval $(0,1)$. That also does not have a global max or min (or any local extrema, either!).

Extreme Value Theorem: A continuous function $f(x)$ on a closed interval $[a,b]$ will always achieve its global extrema.

Proving this theorem is beyond the scope of this tutorial. But in analysis or topology, you can prove it.

So you can see where my two "counterexamples" failed. $1/x$ is not continuous on all of $[-1,1]$ (it isn't even defined at $0$), and $(0,1)$ is not closed. But if you go with $f(x)=x$ on $[0,1]$, you see that you do get the global max at $1$, and the global min at $0$.

Example: Let's try a more complicated function: $f(x)=\frac{e^{-x}}{x^{2}+1}$ on the interval $[-2,1]$. You can convince yourself that this function is continuous, because the denominator is never zero, and the numerator is a nice exponential. We take the derivative:
$$f'(x)=\frac{(x^{2}+1)(-e^{-x})-(e^{-x})(2x)}{(x^{2}+1)^{2}}=-\frac{e^{-x}(x^{2}+2x+1)}{(x^{2}+1)^{2}}=-\frac{e^{-x}(x+1)^{2}}{(x^{2}+1)^{2}}.$$
This derivative exists everywhere, for the same reasons that the original function did. The only difference is we've added a polynomial in the numerator and squared the denominator. So where is this derivative zero? I think you can see that it's only zero at $x=-1$. So we've got three points to evaluate: $-2, -1, 1$. I get
\begin{align*}
f(-2)&=\frac{e^{2}}{5} \approx 1.47781\\
f(-1)&=\frac{e}{2} \approx 1.35914\\
f(1)&=\frac{1}{2e} \approx 0.18394.
\end{align*}
Hence, $f(-2)$ is the global max, and $f(1)$ is the global min. If you look at the graph of the function, you will see why the critical point at $-1$ failed to give us a global extremum: to the left, the function is higher, and to the right, it's lower. We'll talk more about these kinds of points in the next post.

Applications of Derivatives III Draft

6.1.4 Second Derivatives and Concavity

Taking one derivative by no means excludes you from taking another. For example, if my function is $f(x)=x^{3}$, and I take one derivative, I get $f'(x)=3x^{2}$. But this is a function, albeit a different function, just like $f(x)$. I can take its derivative. What do I get? I get $f''(x)=6x$. Note there are now two primes in the function label. That means second derivative. Why stop there? I can do $f'''(x)=6$, and $f^{(4)}(x)=0$. Any further derivatives are just going to be zero. The notation for the $n$th derivative is, as you may have guessed, $f^{(n)}(x)$.

These higher order derivatives (second on up) can tell us something about the shape of the original function $f(x)$. The second and third derivatives, in particular, are useful, although they reveal more subtle features of graphs than does the first derivative.

The title of this section mentions concavity. What is concavity? It is the tendency of the graph of a function to bend upward or downward. A function, at any point, can be concave up (also called convex), concave down (also called just concave), or neither.

Example: $f(x)=x^{2}$. This function is concave up everywhere. I won't bore you with the graph of this function, as you should be able to graph it in your sleep.

Example: $f(x)=-x^{2}$. This function is concave down everywhere.

Example: $f(x)=x^{3}$. For negative values of $x$, the function is concave down; for positive values of $x$, the function is concave up... What happens at $x=0$? I'll leave that one for later, but for now, just note that $f''(0)=0$ there.

The second derivative can tell us concavity, if it exists. It goes like this: if $f''(c)>0$, then the function is concave up at $c$. If $f''(c)<0$, then $f(x)$ is concave down at $c$. If $f''(c)=0$, then the function could be concave up, concave down, or it could have a point of inflection. A point of inflection occurs where the concavity changes sign. Here's a mnemonic device for remembering the second derivative: picture three faces. The first face has the negative second derivative, and the mouth is concave down (a frown). The second face has the positive second derivative, and the mouth is concave up (a smile). The third face has the zero second derivative: one eyebrow is concave down, the other is concave up, and the mouth has a point of inflection. A confused individual!

If you apply this to the question of a local extremum, you can say that if the first derivative $f'(x)$ of a function $f(x)$ at a point $c$ is zero $(f'(c)=0)$, and $f''(c)<0$, then you have a local maximum. Or if $f'(c)=0$ and $f''(c)>0$, then you have a local minimum. If the second derivative is zero at a critical point, then the Second Derivative Test fails (it doesn't tell you anything), and you must go back to the First Derivative Test. However, when the Second Derivative Test works, it's often easier and quicker to take another derivative and evaluate it at the critical points than to check the sign of the first derivative on intervals around them.
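The Second Derivative Test is easy to check numerically. Here's a small Python sketch (an illustration I've added, using the toy example $f(x)=x^{3}-3x$, whose critical points are $\pm 1$) that approximates $f'$ and $f''$ with central differences:

```python
def d1(f, x, h=1e-5):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Central-difference approximation to f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: x**3 - 3*x    # f'(x) = 3x^2 - 3, so critical points at x = -1 and x = 1

for c in (-1.0, 1.0):
    kind = "local max" if d2(f, c) < 0 else "local min"
    print(f"x = {c}: f' is approximately {d1(f, c):.6f}, {kind}")
```

At $x=-1$ the second derivative comes out negative (local max), and at $x=1$ positive (local min), matching the test as stated.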

6.1.4 Third Derivative Test

This one's not in most calculus books, though its generalization is on Wikipedia (see the Higher-order Derivative Test). The Third Derivative Test can be useful for finding points of inflection. Recall that a point of inflection is where the second derivative changes sign. The Third Derivative Test goes like this: suppose the second derivative is zero at $c$, and the third derivative is nonzero at $c$. Then you have a point of inflection at $c$. Again, taking more derivatives can be quite straightforward, so you might find it easier to do that than to evaluate the second derivative in enough places to ensure you know where the points of inflection are.

6.1.5 Full Analysis of a Function, Including its Plot

We now know enough to analyze a function fully and plot it up, knowing we have examined all the interesting features of the function and have not missed anything important. Here is the full procedure, given a function $f(x)$.

1. Find the domain.

2. Find all asymptotes. Find horizontal asymptotes by taking the limits as $x\to\pm\infty$. Find vertical asymptotes by finding all points $c$ such that $\lim_{x\to c^{\pm}}f(x)=\pm\infty$. Find slant asymptotes by finding $m$ and $b$ such that $\lim_{x\to\pm\infty}\bigl(f(x)-(mx+b)\bigr)=0$.

3. Take the first derivative and set it equal to zero. Use this information, as well as the domain of the derivative, to find all critical points (places where the first derivative is zero or does not exist). Divide up the domain of the function, according to the critical points, into non-overlapping intervals.

4. Find regions of increase or decrease by evaluating the function at one point in each non-overlapping interval found in the previous step.

5. Take the second derivative and evaluate it, where possible, at all critical points, to determine local extrema. Also find all zeros of the second derivative to find possible points of inflection. Further subdivide the non-overlapping intervals found in step 3 according to zeros of the second derivative. Evaluate the second derivative in each region to determine regions of concavity.

6. Take the third derivative, if possible, and evaluate at all zeros of the second derivative, to determine which zeros of the second derivative correspond to local extrema, and which zeros correspond to points of inflection.

7. Finally, evaluate the function at all points of interest, plot these up, and draw the function in-between to "connect the dots". If you're tasked with maximizing or minimizing a continuous function on a closed interval, then evaluate the function at the critical points and at the endpoints in order to find the max and min. Doing so, incidentally, should enable you to find the range of the original function.

I'll do an example or two in the next post.

Applications of Derivatives IV Draft

6.1.5 Continued

So let's do an example or two illustrating the function analysis procedure.

Example: analyze the function
$$f(x)=\frac{x^{3}-4x}{x^{2}+3}$$
on the interval $[-10,10]$, including finding the global extrema.

1. The domain is all real numbers, normally, but we've restricted it to $[-10,10]$. So that's our domain.

2. There are no vertical asymptotes, because the function is continuous (in particular, the denominator is never zero). There are no horizontal asymptotes, because the degree of the numerator is strictly greater than the degree of the denominator. However, there are slant asymptotes. If we perform polynomial division, we get that
$$f(x)=x-\frac{7x}{x^{2}+3}.$$
Hence, the fraction part goes to zero as $x\to\infty$, and we're left with $x$. Therefore, the slant asymptote for both infinities is $y=x$.

3. The first derivative is
$$f'(x)=\frac{(x^{2}+3)(3x^{2}-4)-(x^{3}-4x)(2x)}{(x^{2}+3)^{2}}=\frac{3x^{4}+5x^{2}-12-2x^{4}+8x^{2}}{(x^{2}+3)^{2}}=\frac{x^{4}+13x^{2}-12}{(x^{2}+3)^{2}}.$$
Setting this equal to zero is tantamount to setting $x^{4}+13x^{2}-12=0$, as the denominator is never zero. To solve this, we let $z=x^{2}$ and use the quadratic formula: $z^{2}+13z-12=0$ implies that
$$z=\frac{-13\pm\sqrt{169-4(-12)}}{2}=\frac{-13\pm\sqrt{217}}{2}.$$
Now, $217=7\times 31$, so we can't simplify the square root any more. Note that $\sqrt{217}\approx 14.73$, so we have one positive $z$ root, and one negative $z$ root. The negative $z$ root yields complex numbers for $x$, so we discard those, and focus on
$$z=\frac{\sqrt{217}-13}{2} \quad \implies \quad x=\pm\sqrt{\frac{\sqrt{217}-13}{2}}.$$
I agree it's not pretty, but in the real world, answers are hardly ever pretty. Be thankful you have an explicit answer like this at all! So, dividing up our domain according to these numbers, we have the following partition of $[-10,10]:$
$$\left[-10,-\sqrt{\frac{\sqrt{217}-13}{2}}\:\right),\quad \left(-\sqrt{\frac{\sqrt{217}-13}{2}},\sqrt{\frac{\sqrt{217}-13}{2}}\:\right),\quad \left(\sqrt{\frac{\sqrt{217}-13}{2}},10 \right].$$

4. We pick the three values $-5,0,5$ as the points in each non-overlapping region. I get that $f'(-5)=67/56 >0$, so the function is increasing there. Then $f'(0)=-4/3<0$, so the function is decreasing there. Finally, $f'(5)=67/56>0$, so we're increasing again.

5. The second derivative is
$$f''(x)=\frac{((x^{2}+3)^{2})(4x^{3}+26x)-2(x^{4}+13x^{2}-12)(x^{2}+3)(2x)}{((x^{2}+3)^{2})^{2}} =\frac{(x^{2}+3)(4x^{3}+26x)-4x(x^{4}+13x^{2}-12)}{(x^{2}+3)^{3}}$$
$$=\frac{4x^{5}+26x^{3}+12x^{3}+78x-4x^{5}-52x^{3}+48x}{(x^{2}+3)^{3}}=\frac{-14x^{3}+126x}{(x^{2}+3)^{3}}=-\frac{14x(x^{2}-9)}{(x^{2}+3)^{3}}.$$
The zeros of this function occur at $x=0,\pm 3$. Evaluating the second derivative at the critical points yields that the smaller critical point is a local maximum (the second derivative is negative there), and the larger critical point is a local minimum (the second derivative is positive there). The function is concave up from $-10$ to $-3$, concave down from $-3$ to $0$, concave up from $0$ to $3$, and concave down from $3$ to $10$. Evaluate the second derivative yourself in each of these regions to confirm these statements.

6. You can verify that the third derivative is
$$f'''(x)=\frac{42(x^{4}-18x^{2}+9)}{(x^{2}+3)^{4}}.$$
This is nonzero at $0$ and $\pm 3$. Hence, all three zeros of the second derivative are inflection points.

7. Plotting up the function, you get this picture. One feature of this function of which I was not aware in advance is the pair of inflection points at $\pm 3$. However, they are necessary in order for the graph to approach the slant asymptote properly. Now, we were tasked with finding the global extrema. We have the critical points and the endpoints. If you evaluate the original function at all those points, you'll find that the global min occurs at $-10$, and the global max at $10$.
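If you'd like to double-check the numbers above, here is a short Python sketch (added for illustration) that evaluates $f$ and the derived $f'$ at the critical points and endpoints:

```python
import math

f = lambda x: (x**3 - 4*x) / (x**2 + 3)
fprime = lambda x: (x**4 + 13*x**2 - 12) / (x**2 + 3)**2   # derivative found in step 3

c = math.sqrt((math.sqrt(217) - 13) / 2)   # the positive critical point, about 0.9303
print("f'(c):", fprime(c))                  # numerically zero
for x in (-10.0, -c, c, 10.0):
    print(x, f(x))                          # the endpoint values dominate
```

You should see $f'(c)\approx 0$ and $f(\pm 10)\approx \pm 9.32$, confirming that the endpoints give the global extrema.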

Summary: we followed the procedure in the previous post, and we have successfully found all local and global extrema, points of inflection, regions of increase or decrease, and regions of concavity of the function. We found one slant asymptote. There is nothing further of interest about this function that we don't already know (for the purposes of the vast majority of applications).

Example: analyze the function
$$f(x)=\frac{e^{x}}{x^{2}-4}$$
on the entire real line.

1. The domain is $\mathbb{R}\setminus\{-2,2\}$. That is, the domain is all real numbers that are not $\pm 2$.

2. There are three asymptotes. There is a horizontal asymptote $y=0$ as $x\to-\infty$, because the numerator is getting small and the denominator is getting big; the function goes to zero very quickly as $x\to-\infty$. In addition, we have two vertical asymptotes at $x=\pm 2$, which are fairly straightforward to find (just factor the denominator and set it equal to zero; since the numerator is never zero, all such points are vertical asymptotes).

3. The first derivative is
$$f'(x)=\frac{(x^{2}-4)(e^{x})-(e^{x})(2x)}{(x^{2}-4)^{2}}=\frac{e^{x}(x^{2}-2x-4)}{(x^{2}-4)^{2}}.$$
Setting it equal to zero, we find that
$$x=\frac{2\pm\sqrt{4-4(-4)}}{2}=\frac{2\pm 2\sqrt{5}}{2}=1\pm\sqrt{5}.$$
So, the points that divide up the real line are the vertical asymptotes at $\pm 2$ (strictly speaking not critical points, since they are not in the domain, but the function can still change behavior there) and the critical points $1\pm\sqrt{5}$. If we divide the real line up according to all these points, we find that we have
$$(-\infty,-2),\;(-2,1-\sqrt{5}),\;(1-\sqrt{5},2),\;(2,1+\sqrt{5}),\;(1+\sqrt{5},\infty).$$

4. We find, by plugging in numbers, that the function $f$ is increasing on $(-\infty,-2),\;(-2,1-\sqrt{5}),$ and $(1+\sqrt{5},\infty)$. It's decreasing on the other two intervals: $(1-\sqrt{5},2)$ and $(2,1+\sqrt{5})$.

5. We find that the second derivative is
$$f''(x)=\frac{e^{x}(x^{4}-4x^{3}-2x^{2}+16x+24)}{(x^{2}-4)^{3}}.$$
Great. How do we deal with finding the roots of a quartic? There is technically a formula for the roots - it's quite complicated. Let's see if we can be a little clever with this one, though. I claim there are no real roots of this quartic (meaning $g(x)=x^{4}-4x^{3}-2x^{2}+16x+24$). Proving this is tricky.

I propose dividing up the real line into three regions: $(-\infty,0],\;(0,3],\;(3,\infty)$. I will outline the proof for you, and you can fill in the gaps. Setting $g(x)=0$ is tantamount to setting $x^{4}+16x+24=4x^{3}+2x^{2}$. Why did I choose that way of splitting it up? Because now everything has positive signs. Call the LHS "the quartic" and the RHS "the cubic". For the first region, $(-\infty,0]$, minimize the quartic. It turns out that the quartic is always greater than $4$, which is always greater than the cubic on that interval, hence there are no intersections. For the region $(3,\infty)$, introduce a shift function, $z=x-3$, or $x=z+3$. If you plug this substitution into both the quartic and the cubic, you will find that all the coefficients of the quartic are greater than all the corresponding coefficients of the cubic, and hence the quartic is always greater than the cubic for $z>0$, which corresponds to $x>3$.

The hardest region is $(0,3]$. I will show you two methods for showing the result in this region. One, you do a tangent line to the quartic at $x=0$. The quartic is concave up for all $x>0$, so it lies above its tangent line. You can show that the cubic is less than the tangent line up to $x=2$. Then piece together another straight line from where the tangent line leaves off at $x=2$, and make the second line go through the point $(3,140)$. You can show that the quartic is greater than both these lines on the desired interval, and that the two lines as pieced together are greater than the cubic on the desired interval.

Another method for the interval $(0,3]$ is to make the LHS look like $(x-1)^{4}$, and separate out the terms that do not show up in the actual quartic. That is, you have the following:
$$(x-1)^{4}=x^{4}-4x^{3}+6x^{2}-4x+1.$$
If you compare this to "the quartic", you find that you need
$$(x-1)^{4}=\underbrace{(x^{4}+16x+24)}_{\text{the quartic}}-\underbrace{(4x^{3}-6x^{2}+20x+23)}_{\text{helper function}}=x^{4}-4x^{3}+6x^{2}-4x+1.$$
So, we want to show that
$$x^{4}+16x+24>\underbrace{4x^{3}-6x^{2}+20x+23}_{\text{helper function}}>4x^{3}+2x^{2}$$
on $(0,3]$. The left-hand inequality holds because $(x-1)^{4}\ge 0$ (with equality only at $x=1$). For the right-hand inequality, you get the comparison of a quadratic to zero, which is straightforward in the region $(0,3]$; since that inequality is strict, the quartic is strictly greater than the cubic on the whole interval.

So there are no roots of the second derivative. This makes the rest of step 5 and all of 6 unnecessary.
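As a sanity check (not a proof!), you can sample the quartic densely in Python and observe that it never comes anywhere near zero; outside the sampled window, the $x^{4}$ term dominates the other terms:

```python
def g(x):
    """The quartic from the numerator of the second derivative."""
    return x**4 - 4*x**3 - 2*x**2 + 16*x + 24

# 20,001 evenly spaced samples on [-10, 10]
m = min(g(-10 + 20 * i / 20000) for i in range(20001))
print("smallest sampled value:", m)   # comfortably positive
```

The sampled minimum comes out around $10.8$ (near $x\approx -1.1$), consistent with the claim that $g$ has no real roots.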

7. Here's the plot. You can see the local max and min that we found, along with the asymptotes, regions of increase and decrease, and regions of upward and downward concavity.

Related Rates Draft I

6.2 Related Rates

Some students struggle with related rates, but they're not actually that bad. The idea of related rates is that one quantity is changing, and you want to find out how fast another, related quantity is changing at the same time. The general procedure is as follows:

1. Find the relevant equation that relates the two quantities of interest.

2. Differentiate the relevant equation with respect to time, treating both quantities of interest as functions of time, and using the chain rule and implicit differentiation as necessary.

3. Solve for the target variable; if necessary, plug in values as needed to get numbers.

Great. We've got a procedure. Let's see it in action.

No calculus book or text is complete without this problem, but I'm going to go a bit further with it than is typical. It'll need a little physics, but bear with me.

A 5 m ladder is leaning against a vertical wall. The foot of the ladder is being pulled away from the wall horizontally at 1 m/s. When the foot of the ladder is 3 m away from the wall, how fast is the tip of the ladder (the end that's resting on the wall) moving?

1. We need a formula for relating the position of the foot of the ladder, which we'll call $x$, to the position of the other tip of the ladder that's resting on the wall, which we'll call $y$. The keywords "vertically" and "horizontally" tell us that we have a right triangle, and when we have a right triangle, the Pythagorean Theorem applies. In fact, it is true that $x^{2}+y^{2}=5^{2}=25$. So that's it, then: $x^{2}+y^{2}=25$.

2. Before differentiating, we write each variable as a function of time, so that we actually have $x^{2}(t)+y^{2}(t)=25$. We use the chain rule here and implicit differentiation to discover that
$$2x(t)\,\dot{x}(t)+2y(t)\,\dot{y}(t)=0,\quad\text{or}\quad x(t)\,\dot{x}(t)+y(t)\,\dot{y}(t)=0.$$
Now the question is asking how fast the tip of the ladder resting against the wall is moving. That is $\dot{y}$. Solving for that quantity (and dropping the $t$-dependence from our notation), we have that
$$\dot{y}=-\frac{x\dot{x}}{y}.$$

3. All we need to do now is plug in values to get that
$$\dot{y}=-\frac{(3)(1)}{(4)}=-\frac{3}{4}\,\text{m/s}.$$
The negative sign tells us the tip is moving down the wall, as you'd expect.
One question you might have: how did I get the $4$ in the denominator? Well, don't forget that the Pythagorean Theorem still holds, and $y^{2}=25-x^{2}$. If you plug in $x=3$, you get that $y=4$ (you can throw out the negative solution $y=-4$ on physical grounds).

One further note: what happens to this ladder when the tip of the ladder approaches the ground? That is, what happens to $\dot{y}$ when $y\to 0$?
Well, we have that $\dot{x}=1$, as always. It's also true that $x\to 5$, the length of the ladder. Thus,
$$\lim_{y\to 0}\dot{y}=-\lim_{y\to 0}\frac{x\dot{x}}{y}=-\infty.$$
Real ladders don't achieve an infinite negative velocity when they slide down a wall like this, do they? Evidently, something in our model is incorrect. It turns out that what is incorrect is the assumption that the tip of the ladder resting on the wall stays on the wall the whole way down. It has to separate from the wall. The big question is, at what position does the ladder have to separate from the wall? Kudos to anyone who PM's me the answer to this question. I might even add your solution to this post!
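Here's a quick Python sketch (illustrative only) of the formula $\dot{y}=-x\dot{x}/y$, which also shows $\dot{y}$ blowing up as the foot of the ladder approaches the full ladder length:

```python
import math

L, xdot = 5.0, 1.0              # ladder length (m), speed of the foot (m/s)

def ydot(x):
    """Rate of the wall end, from x*xdot + y*ydot = 0 with y = sqrt(L^2 - x^2)."""
    y = math.sqrt(L**2 - x**2)
    return -x * xdot / y

print(ydot(3.0))                # -0.75 m/s: the tip slides down at 3/4 m/s
print(ydot(4.999))              # already around -50 m/s, blowing up as y -> 0
```

The model's prediction of unbounded speed near $y=0$ is exactly the unphysical behavior discussed above.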

6.2.2 Balloon Problem

A spherical balloon's volume is increasing constantly (due to being filled with helium) at a rate of 10 cc/min (that's cubic centimeters per minute). When the balloon has a volume of 10 cc's, how fast is its radius changing?

1. The relevant formula here is that
$$V=\frac{4\pi}{3}\,r^{3}.$$
Here $V$ is volume in cc's, and $r$ is the radius in centimeters. Proving this formula, incidentally, is something we will do in the Integral Calculus Tutorial!

2. Differentiating yields
$$\dot{V}=4\pi r^{2}\dot{r}.$$

3. Solving for the target variable yields
$$\dot{r}=\frac{\dot{V}}{4\pi r^{2}},$$
but we can eliminate the $r$ on the RHS by plugging in our original volume formula. Why would we want to do that? Because we are not given an $r$-value in the original question. True, we can figure it out by solving the volume equation for $r$, but it's a little more elegant to do that once algebraically, and get your final target variable in terms of given quantities. That way, you don't have to have a multi-stage computation. In addition, if the problem asks for several different values, you can simply plug in. Finally, it's a little easier to check if you've done your work correctly. Bottom line: don't plug in values until the end, preferably in one step!

We have that
$$\frac{3V}{4\pi}=r^{3}\quad\implies\quad r=\left(\frac{3V}{4\pi}\right)^{1/3}.$$
Plugging this into our formula for $\dot{r}$ yields
$$\dot{r}=\frac{\dot{V}}{4\pi \left(\frac{3V}{4\pi}\right)^{2/3}}=\frac{\dot{V}}{(4\pi)^{1/3}(3V)^{2/3}}.$$
Now we get to plug in our values, to obtain
$$\dot{r}=\frac{(10)}{(4\pi)^{1/3}(30)^{2/3}}\approx 0.4455\;\text{cm/min}.$$
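A one-line Python check of this computation (added for illustration):

```python
import math

def rdot(Vdot, V):
    """dr/dt for a sphere after eliminating r: Vdot / ((4*pi)^(1/3) * (3V)^(2/3))."""
    return Vdot / ((4 * math.pi)**(1 / 3) * (3 * V)**(2 / 3))

print(rdot(10, 10))   # approximately 0.4455 cm/min
```

Because we solved for $\dot{r}$ symbolically first, evaluating it at other volumes is just another function call.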

So there's your related rates procedure in action.

6.3 Position, Velocity, and Acceleration

This application is historically one of the most important, as a good deal of calculus was invented to solve these problems.
Kinematics is merely the description of motion, not an attempt to explain why motion happens the way it does. The latter is called dynamics. We'll touch on dynamics a bit in this tutorial, but we'll do more with it in the Integral Calculus Tutorial, since we don't yet have all the tools necessary to solve dynamics problems completely. The entire goal of classical mechanics is to find the position of a particle given the forces on it, an initial position, and an initial velocity. Once you know the position of a particle as a function of time, you can find any other quantity of interest (momentum, velocity, energy, etc.).

6.3.1 One-Dimensional Motion

Let's start with one-dimensional motion: motion on a line. We'll call the particle's position $x$, relative to some origin where $x=0$. If the particle is moving relative to that origin, then it has a velocity, which we'll call $v$. What is the relationship between position and velocity? Well, velocity is the rate of change of position, and is hence its derivative with respect to time. That is, $v=\dot{x}$. Note that this is a definition, not a theorem or something you prove. We are defining velocity this way. We also define the acceleration $a$ to be the rate of change of velocity. That is, $a=\dot{v}$. If you plug in what $v$ is, you find that acceleration is then the second derivative of position, so that $a=\ddot{x}$. Finally, we define the jerk $j$ to be the rate of change of acceleration, so that $j=\dot{a}=\ddot{v}=\dddot{x}$.

The speed of a particle we define to be the magnitude of the velocity. So the speed has no direction information in it, although the velocity does. So we have $s=|v|=|\dot{x}|$. The velocity can tell us which direction a particle is heading as well as its speed, but the speed alone just tells us how fast it's moving.

There are other quantities of interest. One is the kinetic energy, defined by $T:=\frac{1}{2}\,ms^{2}$. The $:=$ there means "defined to be". Another way of saying that would be $T\equiv\frac{1}{2}\,ms^{2}$. Here the $m$ is the mass of the particle having speed $s$ and kinetic energy $T$.

If the one-dimensional motion is parallel to a uniform gravitational field, as is approximately the case near the surface of the earth, then you can define the potential energy to be $U:=mgy$. Note that the choice of origin for the $y$-axis changes the value of the potential energy; but since only differences in potential energy are physically meaningful, the choice of origin doesn't matter.

Let's do some examples.

6.3.1.1 Velocity Example

Suppose the position of a particle is $x=4t^{2}-3t+2$. What is the particle's velocity as a function of time? Well, we differentiate to obtain
$$v=\dot{x}=8t-3.$$

6.3.1.2 Jerk Example

Suppose a particle's velocity is $v=e^{-t}\cos(t)$. What is the particle's jerk? We must differentiate twice to obtain the jerk. We get
\begin{align*}
a&=\dot{v}=-e^{-t}\cos(t)-e^{-t}\sin(t)=-e^{-t}(\cos(t)+\sin(t))\\
j&=\dot{a}=\ddot{v}=e^{-t}(\cos(t)+\sin(t))-e^{-t}(-\sin(t)+\cos(t))=2e^{-t}\sin(t).
\end{align*}
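You can sanity-check this double differentiation numerically. Here's a small Python sketch (my own addition) that approximates $\ddot{v}$ with a central difference and compares it with the closed form derived above:

```python
import math

v = lambda t: math.exp(-t) * math.cos(t)          # given velocity
jerk = lambda t: 2 * math.exp(-t) * math.sin(t)   # the closed form derived above

def second_deriv(f, t, h=1e-4):
    """Central-difference approximation to f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

for t in (0.5, 1.0, 2.0):
    print(second_deriv(v, t), jerk(t))            # the pairs agree closely
```

Agreement at several sample times is good evidence that no sign was dropped in the product-rule bookkeeping.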

6.3.1.3 Kinetic Energy Example

Suppose a particle's position is $y=\sin(t)$, and its mass is $m$. What is its kinetic energy? We differentiate once to obtain
$$v=\dot{y}=\cos(t).$$
The speed is the magnitude of the velocity, so we get that $s=|\cos(t)|$. Finally, we formulate the kinetic energy as
$$T=\frac{1}{2}\,m|\cos(t)|^{2}.$$
However, since the squaring eliminates the need for the absolute value, we can just say that
$$T=\frac{1}{2}\,m\,\cos^{2}(t).$$

Applications of Derivatives Continued, Draft

6.3.2 Two-Dimensional Motion

In higher dimensions than one, you must use vectors to describe motion. What is a vector? Think of it as an arrow: it has a magnitude (or length), and a direction in which it's pointing. The stereotypical (and historically important) example of two-dimensional motion is projectile motion, such as a cannonball. Such motion has a horizontal component and a vertical component. So it makes sense that you'd need two independent numbers to record both pieces of information. The idea of a vector is that it keeps track of those kinds of independent variables in a convenient form. Synonyms for "vector" include list, ordered list, one-dimensional matrix, and perhaps a few others. You can write them in different ways. One notation is $(x,y)$, but I dislike that notation since it looks like interval notation. Typically, the context would make it clear whether you mean a vector or an interval. But why not do better? Another common notation is $\langle x,y\rangle$. This is better, but it still has a problem with being identical to the inner product notation in linear algebra and quantum mechanics. The best, most unambiguous notation for vectors of which I am aware is the following:
$$\mathbf{r}=\begin{bmatrix}x\\y\end{bmatrix}.$$
I will attempt to use this notation the most. Now I've just defined a vector as having a magnitude and a direction. What are the magnitude and direction of $\mathbf{r}$? Well, you just use the Pythagorean Theorem for the magnitude. That is, the magnitude of what I just wrote down is $|\mathbf{r}|=\sqrt{x^{2}+y^{2}}=\sqrt{\mathbf{r} \cdot \mathbf{r}}$. The direction you could think of as the angle this vector makes with the positive $x$-axis. And that we can get by writing $\text{atan2}(y,x)$, where the $\text{atan2}$ function is defined as follows:
$$\text{atan2}(y,x)=\begin{cases} \tan^{-1}(y/x)\qquad &x>0\\ \tan^{-1}(y/x)+\pi\qquad &y\ge 0,\; x<0\\ \tan^{-1}(y/x)-\pi\qquad &y<0,\; x<0\\ +\frac{\pi}{2}\qquad &y>0,\; x=0\\ -\frac{\pi}{2}\qquad &y<0,\; x=0\\ \text{undefined}\qquad &y=x=0. \end{cases}$$
Basically, the $\text{atan2}$ function is a more sophisticated version of the $\tan^{-1}$ function in that it gives you quadrant information in all four quadrants. That is, the range of the $\text{atan2}$ function is $(-\pi,\pi]$, instead of the normal $\tan^{-1}$ function, whose range is only $(-\pi/2,\pi/2)$.
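Most programming languages ship this function. Python's `math.atan2` behaves as in the definition above, except that it returns a value at the origin instead of leaving it undefined:

```python
import math

print(math.atan2(1, 1))     # pi/4: first quadrant
print(math.atan2(1, -1))    # 3*pi/4 (plain atan(y/x) would wrongly give -pi/4)
print(math.atan2(-1, -1))   # -3*pi/4: third quadrant
print(math.atan2(1, 0))     # pi/2: pointing straight up
```

The second line is the whole point: $\tan^{-1}(y/x)$ alone cannot distinguish the second quadrant from the fourth, but $\text{atan2}$ can.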

The velocity, acceleration, jerk, speed, and kinetic and potential energies are all defined in an analogous fashion. The tricky part, though, is that differentiation is defined component-wise. That is, if you have the position vector
$$\mathbf{r}(t)= \begin{bmatrix}x(t)\\y(t) \end{bmatrix},$$
then
$$\mathbf{v}(t)=\dot{ \mathbf{r}}(t)= \begin{bmatrix} \dot{x}(t)\\ \dot{y}(t) \end{bmatrix}.$$
Also, the speed is defined as a dot product: $s=|\mathbf{v}|=\sqrt{\mathbf{v}\cdot\mathbf{v}}= \sqrt{\dot{x}^{2}+\dot{y}^{2}}$. The kinetic energy is defined as before. If the gravitational field is acting only in the $y$-direction (as is typical), then the potential energy definition does not change: it is independent of $x$.

Some examples:

6.3.2.1 Circular Motion Example

Given a particle's position vector as
$$\mathbf{r}(t)=\begin{bmatrix} \cos(t)\\ \sin(t) \end{bmatrix},$$
what are the particle's acceleration, kinetic energy, and potential energy, given that the particle's mass is $m$?

We differentiate twice to get
$$\mathbf{v}(t)=\dot{\mathbf{r}}(t)= \begin{bmatrix} -\sin(t)\\ \cos(t) \end{bmatrix}, \quad \text{and}$$
$$\mathbf{a}(t)=\dot{\mathbf{v}}(t)= \begin{bmatrix} -\cos(t)\\ -\sin(t) \end{bmatrix}.$$
The kinetic energy is
$$T=\frac{1}{2}\,m\,s^{2}=\frac{1}{2}\,m\,(\mathbf{v} \cdot \mathbf{v})=\frac{1}{2}\,m\,(\sin^{2}(t)+\cos^{2}(t))=\frac{m}{2}.$$
The potential energy is just
$$U=mgy=mg \sin(t).$$

6.3.2.2 Spiraling Inwards Example

Do the same as the first example, but with position vector
$$\mathbf{r}(t)=e^{-t} \begin{bmatrix} \cos(t)\\ \sin(t) \end{bmatrix}.$$

Differentiating yields
$$\mathbf{v}(t)=\dot{\mathbf{r}}(t)= \begin{bmatrix} -e^{-t} \cos(t) - e^{-t} \sin(t)\\ -e^{-t} \sin(t)+e^{-t} \cos(t) \end{bmatrix} = e^{-t} \begin{bmatrix} -\cos(t) - \sin(t)\\ -\sin(t)+ \cos(t) \end{bmatrix}, \quad \text{and}$$
$$\mathbf{a}(t)= \dot{ \mathbf{v} }(t)= \begin{bmatrix} e^{-t}( \cos(t)+ \sin(t))+e^{-t}( \sin(t)- \cos(t))\\ e^{-t}( \sin(t) - \cos(t)) + e^{-t} ( -\cos(t) - \sin(t))\end{bmatrix}=2\,e^{-t} \begin{bmatrix} \sin(t) \\ -\cos(t) \end{bmatrix}.$$
The kinetic energy is
$$T=\frac{m}{2}\,e^{-2t} \left[ (-\cos(t)-\sin(t))^{2}+(-\sin(t)+\cos(t))^{2} \right]$$
$$= \frac{m}{2}\,e^{-2t} \left[ \cos^{2}(t)+2 \sin(t) \cos(t) + \sin^{2}(t) + \sin^{2}(t) - 2 \sin(t) \cos(t) + \cos^{2}(t) \right] = m e^{-2t}.$$
The potential energy is
$$U=m g e^{-t} \sin(t).$$
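Component-wise differentiation is easy to verify numerically. Here's a Python sketch (an illustration I've added, with an arbitrary test mass) that recomputes the kinetic energy of the spiral from finite-difference velocity components and compares it with $T=me^{-2t}$:

```python
import math

m = 2.0                                       # arbitrary test mass

def r(t):
    """Position vector of the inward spiral as an (x, y) tuple."""
    return (math.exp(-t) * math.cos(t), math.exp(-t) * math.sin(t))

def speed_squared(t, h=1e-6):
    """|v|^2 via central-difference velocity components."""
    (xa, ya), (xb, yb) = r(t - h), r(t + h)
    vx, vy = (xb - xa) / (2 * h), (yb - ya) / (2 * h)
    return vx**2 + vy**2

for t in (0.0, 0.5, 1.0):
    print(0.5 * m * speed_squared(t), m * math.exp(-2 * t))   # the two columns agree
```

The agreement confirms both the component-wise derivative and the trig simplification above.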

6.3.2.3 Projectile Motion Example

A projectile of mass $m$ has a position given by
$$\mathbf{r}(t)= \begin{bmatrix} v_{0x}t\\ v_{0y}t - (g/2)t^{2} \end{bmatrix},$$
where $v_{0x}$ is the initial $x$ component of the velocity, and $v_{0y}$ is the initial $y$ component of the velocity. $g$ is the acceleration due to gravity, assumed to be constant. Find the vector sum of the forces on the particle.

Newton's Second Law states that the vector sum of the forces on the particle (vector sums are just component-wise sums) is equal to the mass of the particle times its acceleration. We know the mass. The acceleration we can find by differentiating to obtain:
$$\mathbf{v}(t)= \begin{bmatrix} v_{0x}\\ v_{0y} - gt \end{bmatrix},\; \text{and}$$
$$\mathbf{a}(t) = \begin{bmatrix} 0\\ - g \end{bmatrix}.$$
Hence, since $\sum \mathbf{F} = m\,\mathbf{a}$, where the $\sum$ indicates "vector sum", we have that
$$\sum \mathbf{F}=m \begin{bmatrix} 0\\ - g \end{bmatrix}.$$
That is, the only force on the particle is gravity acting in the negative $y$ direction, as expected.

6.4 Miscellaneous Real-World Examples

In the course of my work, I have come across one or two real-world examples of applications of differentiation. Here's one of them.

6.4.1 The Mathematics of a Leak Check

Suppose you have a gas, like hydrogen, contained in some system with a fixed volume, and you want to know if the system leaks. You can measure the temperature and pressure of the gas. How would you do it?

Well, the ideal gas law says that $PV=nRT$, where $P$ is the pressure, $V$ is the volume, $n$ is the number of moles of gas, $R$ is the ideal gas constant, and $T$ is the temperature. The volume is constant, so if there is a leak, $n$ would be changing. However, suppose you do this leak check overnight? The temperature would change, and thus the pressure would almost certainly change. How could you be sure that the number of moles was changing? That is, how could you be sure that it was $n$ that was changing? First step: rearrange the ideal gas law equation:
$$\frac{P}{T}=\frac{nR}{V}.$$
Now the only thing on the RHS that might be changing is $n$. If there is no leak, the RHS should be constant. So, you measure the quantity
$$\frac{d(P/T)}{dt}=\frac{R}{V}\,\frac{dn}{dt}$$
to see if you have a leak.
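In practice you would estimate that derivative from logged data. Here's a hedged Python sketch (the helper function and the overnight sample numbers are entirely made up for illustration) that fits a least-squares slope to $P/T$ versus time:

```python
def leak_slope(times, pressures, temps):
    """Least-squares slope of P/T versus time; near zero means no detectable leak."""
    ratios = [p / t for p, t in zip(pressures, temps)]
    n = len(times)
    tbar = sum(times) / n
    rbar = sum(ratios) / n
    num = sum((t - tbar) * (r - rbar) for t, r in zip(times, ratios))
    den = sum((t - tbar) ** 2 for t in times)
    return num / den

# Made-up overnight log: temperature falls and pressure falls with it,
# but P/T stays constant -- no leak, despite the raw pressure drop.
hours = [0.0, 2.0, 4.0, 6.0, 8.0]
temps = [300.0, 295.0, 290.0, 285.0, 280.0]        # K (absolute!)
pressures = [100.0 * T / 300.0 for T in temps]     # psia (absolute!)
print(leak_slope(hours, pressures, temps))         # essentially 0: no leak detected
```

Notice that the raw pressure dropped overnight, yet the slope of $P/T$ is zero: exactly the point of rearranging the ideal gas law first.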

Ah, but does it matter what units you measure in? Suppose you measured pressure in psig instead of psia? Or suppose you measured temperature in degrees Celsius instead of Kelvins? Would it matter? The answer is yes. The ideal gas law is only valid for absolute scales. Why is that? Suppose I measured pressure in psig $(P_{g})$ instead of psia $(P_{a})$. Well, we know that $P_{a}=P_{g}+P_{0}$, where $P_{0}$ is the atmospheric pressure. I claim that
$$\frac{d(P_{g}/T)}{dt}\not=\frac{R}{V}\,\frac{dn}{dt}.$$
Proof:
$$\frac{d(P_{g}/T)}{dt}=\frac{d}{dt} \left[ \frac{P_{a}-P_{0}}{T} \right]=\frac{T( \dot{P}_{a}-\dot{P}_{0} )-(P_{a}-P_{0}) \dot{T}}{T^{2}}.$$
We will assume the barometric pressure doesn't change much relative to the system pressure, so that $\dot{P}_{0}=0$. Then we get
$$\frac{d(P_{g}/T)}{dt} \;= \frac{T \dot{P}_{a}- \dot{T} P_{a}+ \dot{T}P_{0}}{T^{2}}$$
$$= \frac{T \dot{P}_{a}- \dot{T} P_{a}}{T^{2}}\;+ \frac{\dot{T}P_{0}}{T^{2}}$$
$$=\frac{d}{dt} \left[ \frac{P_{a}}{T} \right]\;+ \frac{\dot{T}P_{0}}{T^{2}}$$
$$\not= \frac{R}{V}\,\frac{dn}{dt}.$$

Kudos to the reader if you can show that using degrees Celsius doesn't work, either. So the moral of the story is that you must use absolute pressure and absolute temperature, just like the equation is meant to be used!

Final Applications Draft.

6.4.2 Temperature Dependence of a Wheatstone Bridge

A Wheatstone bridge is a resistor configuration often used in pressure transducers, force transducers, and other applications. It looks like a diamond, with voltage applied at two opposite corners; you measure the voltage between the two other corners. Here is a circuit diagram for the Wheatstone bridge.

Now the transfer function is defined as the ratio of the output voltage to the input voltage. For a Wheatstone bridge, using mesh current analysis, you can show that the transfer function is
$$H=\frac{V_{0}}{V_{1}}=\frac{R_{3}}{R_{2}+R_{3}}-\frac{R_{4}}{R_{1}+R_{4}}.$$
Now each of these resistor values depends on temperature in a linear (actually, affine) fashion thus:
\begin{align*}
R_{1}(T)&=\epsilon_{1}(T-T_{0})+R_{01}\\
R_{2}(T)&=\epsilon_{2}(T-T_{0})+R_{02}\\
R_{3}(T)&=\epsilon_{3}(T-T_{0})+R_{03}\\
R_{4}(T)&=\epsilon_{4}(T-T_{0})+R_{04}.
\end{align*}
The question is, how does the transfer function depend on temperature? You can show that the transfer function as a function of temperature is
$$H(T)=\frac{R_{03}+(T-T_{0})\epsilon_{3}}{R_{02}+R_{03}+(T-T_{0})(\epsilon_{2}+\epsilon_{3})}-\frac{R_{04}+(T-T_{0})\epsilon_{4}}{R_{01}+R_{04}+(T-T_{0})(\epsilon_{1}+\epsilon_{4})}.$$
Finally, you just take the derivative of this expression with respect to $T$ to obtain the rate of change of the transfer function (which governs how the pressure transducer or force transducer behaves) with respect to temperature:
$$\frac{dH}{dT}=\frac{R_{02}\epsilon_{3}-R_{03}\epsilon_{2}}{(R_{02}+R_{03}+(T-T_{0})(\epsilon_{2}+\epsilon_{3}))^{2}}-\frac{R_{04}\epsilon_{1}-R_{01}\epsilon_{4}}{(R_{01}+R_{04}+(T-T_{0})(\epsilon_{1}+\epsilon_{4}))^{2}}.$$

Knowing this equation allows you to find out how sensitive the measurement is relative to temperature changes. Does a small temperature change cause a big change in the measurement value?

This marks the end of the differential calculus tutorial. I hope it's been useful to you, even as it has been fun to write!
