
The Strange Behaviour of Numbers Close to Unity

Perplexed

New member
Feb 7, 2014
6
I have been looking at material properties such as thermal expansion of metals, which usually involves very small coefficients. The general equation of thermal expansion is usually
\(\displaystyle L_\theta = L_0 ( 1 + \alpha \theta)\)
where L_0 is the original length, theta is the temperature change, and L_theta the resulting length. The coefficient alpha is usually pretty small, about 11E-6 per degree for steel, so one ends up with a lot of numbers like 1.000011.
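Plugging in the numbers from the post gives a feel for the scale involved (a minimal Python sketch; the 1 m rod and 1-degree temperature change are illustrative assumptions, only the steel coefficient comes from the post):

```python
alpha = 11e-6  # thermal expansion coefficient of steel, per degree (from the post)
L0 = 1.0       # assumed original length, metres
theta = 1.0    # assumed temperature change, degrees

# L_theta = L0 * (1 + alpha * theta)
L = L0 * (1 + alpha * theta)
print(L)  # a number very close to 1, like 1.000011
```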

This is where I seem to have entered a strange world, where
\(\displaystyle \sqrt{(1 + x)} \rightarrow 1 + x/2\)
\(\displaystyle \dfrac{1}{ \sqrt{(1 - x)}} \rightarrow 1 + x/2\)
\(\displaystyle (1 - x)^3 \rightarrow 1-3x\)
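These three approximations can be checked numerically; a quick Python sketch (using the steel-sized value x = 11e-6 from the first post) shows each pair agreeing to roughly ten decimal places:

```python
import math

x = 11e-6  # a small value, like the thermal expansion coefficient of steel

# sqrt(1 + x) versus 1 + x/2
print(math.sqrt(1 + x), 1 + x / 2)

# 1 / sqrt(1 - x) versus 1 + x/2
print(1 / math.sqrt(1 - x), 1 + x / 2)

# (1 - x)^3 versus 1 - 3x
print((1 - x) ** 3, 1 - 3 * x)
```

The mismatch in each case is on the order of x squared, which is why the approximations look so uncanny for coefficients this small.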

Is there a name for this area of maths, and somewhere I can look up more about it?

Thanks for any help.

Perplexed
 

Evgeny.Makarov

Well-known member
MHB Math Scholar
Jan 30, 2012
2,492
Taylor series is one topic where (infinite) polynomials are used to approximate functions. For example,
\[
(1+x)^{1/2}=1+\frac{x}{2}+R_1(x)
\]
where $R_1(x)$ is called the remainder and is infinitely small compared to $x$ when $x$ is small. More precisely,
\[
(1+x)^{\alpha }=1+\alpha x+{\frac {\alpha (\alpha -1)}{2!}}x^{2}+\cdots+
\frac{\alpha\cdot\ldots\cdot(\alpha-n+1)}{n!}x^n+R_n(x)
\]
where $R_n(x)$ is infinitely small compared to $x^n$ when $x$ tends to $0$.
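The partial sums of this binomial series can be computed directly and compared against the exact value (a small sketch; the helper `binomial_series` is hypothetical, written just for illustration):

```python
def binomial_series(alpha, x, n):
    """Partial sum 1 + alpha*x + ... up to and including the x^n term."""
    total, term = 1.0, 1.0
    for k in range(1, n + 1):
        # each new term multiplies in (alpha - k + 1)/k * x
        term *= (alpha - k + 1) / k * x
        total += term
    return total

x = 0.001
# (1 + x)^(1/2): the one-term approximation 1 + x/2 ...
print(binomial_series(0.5, x, 1))
# ... versus the exact value; they differ only at the x^2 level
print((1 + x) ** 0.5)
```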
 

HallsofIvy

Well-known member
MHB Math Helper
Jan 29, 2012
1,151
"Linear approximation". Any function, f, having a derivative at x= a, can be approximated by the "tangent line" [tex]y= f'(a)(x- a)+ f(a)[/tex]. The error will be proportional to [tex](x- a)^2[/tex] and f''(a).

For example, if [tex]f(x)= \sqrt{1+ x}= (1+ x)^{1/2}[/tex] then [tex]f'(x)= (1/2)(1+ x)^{-1/2}[/tex] so that with x= 0, [tex]f(0)= \sqrt{1+ 0}= 1[/tex] and [tex]f'(0)= (1/2)/\sqrt{1+ 0}= 1/2[/tex]. So y= f(x) is approximated, around x= 0, by [tex]y= (1/2)x+ 1[/tex] or [tex]1+ x/2[/tex].

If [tex]f(x)= \frac{1}{\sqrt{1+ x}}= (1+ x)^{-1/2}[/tex] then [tex]f'(x)= -(1/2)(1+ x)^{-3/2}[/tex] so that [tex]f(0)= \frac{1}{\sqrt{1+ 0}}= 1[/tex] and then [tex]f'(0)= -(1/2)(1+ 0)^{-3/2}= -1/2[/tex]. So y= f(x) is approximated, around x= 0, by [tex]y= -(1/2)x+1[/tex] or [tex]1- x/2[/tex]. Notice the negative sign: what you have is NOT correct.

If [tex]f(x)= (1- x)^3[/tex] then [tex]f'(x)= 3(1- x)^2(-1)= -3(1- x)^2[/tex]. [tex]f(0)= (1- 0)^3= 1[/tex] and [tex]f'(0)= -3(1- 0)^2= -3[/tex]. So y= f(x) is approximated by -3x+ 1 or 1- 3x.

You could also do the last one by actually multiplying it out: [tex](1- x)^3= 1- 3x+ 3x^2- x^3[/tex]. If x is small enough (i.e. close enough to 0) that higher values of x can be ignored in the approximation, y= 1- 3x.

Again, these are all first order or linear approximations to the functions, not exact values.

(You can get the Taylor polynomials and series that Evgeny.Makarov refers to by extending these same ideas to higher powers.)
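The tangent-line recipe above, f(a) + f'(a)(x - a), can be sketched in a few lines of Python (the values of f(0) and f'(0) are supplied by hand, exactly as worked out in the post):

```python
def linear_approx(f_a, fprime_a, a, x):
    """Tangent-line approximation f(a) + f'(a)*(x - a)."""
    return f_a + fprime_a * (x - a)

x = 0.01
# f(x) = 1/sqrt(1+x): f(0) = 1, f'(0) = -1/2
print(linear_approx(1, -0.5, 0, x), (1 + x) ** -0.5)
# f(x) = (1-x)^3: f(0) = 1, f'(0) = -3
print(linear_approx(1, -3, 0, x), (1 - x) ** 3)
```

In both cases the error is proportional to x squared, as stated above.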
 

Perplexed

New member
Feb 7, 2014
6
If [tex]f(x)= \frac{1}{\sqrt{1+ x}}= (1+ x)^{-1/2}[/tex] then [tex]f'(x)= -(1/2)(1+ x)^{-3/2}[/tex]

so that [tex]f(0)= \frac{1}{\sqrt{1+ 0}}= 1[/tex] and then [tex]f'(0)= -(1/2)(1+ 0)^{-3/2}= -1/2[/tex].

So y= f(x) is approximated, around x= 0, by [tex]y= -(1/2)x+1[/tex] or [tex]1- x/2[/tex]. Notice the negative sign- what you have is NOT correct.
Thank you for your reply, it is very helpful.

Just to clear things up so that someone else looking at this doesn't get confused, in my second approximation I had [tex]f(x)= \frac{1}{\sqrt{1 - x}}[/tex] rather than the [tex]f(x)= \frac{1}{\sqrt{1+ x}}[/tex] that you started with: notice the "-" rather than "+" in the square root. It was the simple change of sign in arriving at the reciprocal that first intrigued me on this one, and your explanation makes the reason why this works clear.

Less Perplexed now
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
Allow me to make another observation regarding this:

Scientific measurements are often given in "significant figures", the reasoning being, we can only take measurements up to a certain degree of accuracy.

So, suppose our input data can only give 6 decimal places.

If we expect we can model a function (and for many functions this is true) by:

$f(x) = a_0 + a_1x + a_2x^2 +\cdots$

and that the coefficients $a_k$ either stay "about the same size" or, even better, decrease, then if we measure $x$ to 6 decimal places, the "correction term" for $x^2$ is around 12 decimal places; in other words, much, much smaller than our standards of accuracy allow.
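To make that concrete (a small sketch; the value of x is an assumed measurement with about 6 significant decimal places):

```python
x = 1.1e-6
linear = 1 + x            # keep only the first-order term
quadratic_term = x ** 2   # about 1.2e-12, far below 6-decimal accuracy

# rounded to 6 decimal places, the x^2 correction is invisible
print(round(linear, 6) == round(linear + quadratic_term, 6))  # True
```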

For certain classes of "well-behaved" functions, there are means to estimate (or "bound") the size of the error, which in turn lets us know "how many terms to go out".

For small enough $x$, this kind of reasoning lets us use the approximation:

$\sin(x) \approx x$

often used in simplifying wave equations that govern oscillators, and if more accuracy is needed, the approximation:

$\sin(x) \approx x - \dfrac{x^3}{6}$ is pretty darn good.
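A quick numerical comparison of both small-angle approximations (Python sketch; x = 0.1 radians is chosen here purely as an illustrative small angle):

```python
import math

x = 0.1  # radians, an illustrative small angle
print(math.sin(x))       # exact value
print(x)                 # first-order approximation, off by ~x^3/6
print(x - x ** 3 / 6)    # third-order approximation, off by ~x^5/120
```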