A never-ending dispute...

chisigma

Well-known member
Dear friends of MHB, on a well-known mathematical site…

$0^{0}$

… a new chapter of the never-ending saga ‘$0^{0}$’ has been written. Of course in the past I have also been involved in discussions about this specific problem, and almost always these discussions have ended with an exchange of insults. I’m sure that MHB is no exception, and that’s why I ask you to let me write this and the next post freely, where my personal opinion about the correct formulation of the problem is described. Afterwards I will be glad to answer your objections… if any…

Let’s start with the ‘standard definition’ of exponentiation that is reported in almost all the Holy Books…

Standard definition: exponentiation is a mathematical operation, written as $b^{n}$, involving two numbers, the base b and the exponent (or index or power) n. When n is a positive integer, exponentiation corresponds to repeated multiplication. In other words, a product of n factors, each of which is equal to b (the product itself can also be called a power)...

$ b^{n}= \underbrace {b \cdot b \cdot\ ... \cdot\ b}_{\text{n times}}$
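
To make the repeated-multiplication reading concrete, here is a minimal sketch of the standard definition [my own illustration, not part of the original discussion]; note that it covers only positive integer exponents, so it simply says nothing about $b^{0}$...

```python
def power(b, n):
    """Standard definition: b^n as a product of n factors, n a positive integer."""
    if n < 1:
        raise ValueError("the standard definition covers only positive integer exponents")
    result = b
    for _ in range(n - 1):
        result *= b
    return result

print(power(2, 5))   # 32
# power(2, 0) raises ValueError: the definition is silent about n = 0
```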

Very well!... The next step of course is to derive the rules for multiplication, division and exponentiation of powers with the same base b, and in doing that one 'discovers' that...

$\displaystyle \frac{b^{n}}{b^{n}}= b^{n-n}=b^{0}=1$ (1)

... so that one can 'extend' the exponent's domain to n=0. Another property that is easily 'discovered' is that $0^{n}=0$ for all n>0, and at this point a devil makes an entrance: '... why not add a further 'extension' and find what $0^{0}$ is?'... and immediately we are in a panic, because if we try to write (1) with b=0 we find $\frac{0}{0}$, which is an 'indeterminate form'... and, adding panic to panic, we think of the well-known differentiation rule...

$\displaystyle \frac{d}{d x} x^{n} = n\ x^{n-1}$ (2)

... which for n=1 and x=0 gives the result $0^{0}$, although we know that $\frac{d}{d x} x = 1$ for all x... so is $0^{0}=1$?... or does (2) hold for all x except x=0?... a sort of 'chain reaction' with unpredictable consequences has been 'ignited' (Wasntme)...

Now let me try to modify just an insignificant bit of the original definition of exponentiation as follows and observe the results…

Modified definition: exponentiation is a mathematical operation, written as $b^{n}$, involving two numbers, the base b and the exponent (or index or power) n. When n is a non-negative integer, exponentiation corresponds to repeated multiplication, in other words, a product of n+1 factors, the first equal to 1 and the others equal to b (the product itself can also be called a power)...

$ b^{n}= 1 \cdot \underbrace {b \cdot b \cdot\ ... \cdot\ b}_{\text{n times}}$

What has changed?... nothing, except the fact that for all real b [including b=0...] we have by definition $b^{0}=1$... and the whole phobia disappears (Wasntme)...
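
A minimal sketch of the modified definition [again my own illustration]: the product now starts from the extra factor 1, so the case n=0 [and in particular b=0, n=0] needs no special treatment...

```python
def power_modified(b, n):
    """Modified definition: b^n as a product of n+1 factors, the first equal to 1."""
    if n < 0:
        raise ValueError("n must be a non-negative integer")
    result = 1               # the extra first factor
    for _ in range(n):       # then n factors equal to b
        result *= b
    return result

print(power_modified(2, 5))   # 32
print(power_modified(0, 3))   # 0
print(power_modified(0, 0))   # 1: the product reduces to the single factor 1
```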

On the basis of the rules of exponentiation [in the standard or modified version...] it is possible to define the exponential function as...

$\displaystyle e^{x}= \lim_{n \rightarrow \infty} (1+\frac{x}{n})^{n}$ (3)

... and also its inverse, the natural logarithm, that is the function $\ln x$ for which $e^{\ln x}= x$ for all x>0. The exponential function is then 'generalized' by defining...

$\displaystyle a^{x}= e^{x\ \ln a}$ (4)

... as well as its inverse function $\log_{a} x$. On the basis of (4) the derivative of the generalized exponential function is...

$\displaystyle \frac{d}{d x} a^{x} = a^{x} \ln a$ (5)

For the moment we suppose that a is a real number greater than 0…
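
Purely as a numerical illustration of (3), (4) and (5) [a sketch added here, assuming nothing beyond those formulas], one can check with a few lines that the limit in (3) approaches $e^{x}$ and that (4) and (5) agree with the usual power rules for a>0...

```python
import math

x = 1.5

# (3): e^x as the limit of (1 + x/n)^n
for n in (10, 1_000, 100_000):
    print(n, (1 + x / n) ** n)             # approaches math.exp(1.5) ≈ 4.4817

# (4): the generalized exponential a^x = e^{x ln a}, with a > 0
a = 3.0
print(math.exp(x * math.log(a)), a ** x)   # both ≈ 5.196

# (5): numerical derivative of a^x at x = 1.5 versus a^x ln a
h = 1e-6
print((a ** (x + h) - a ** (x - h)) / (2 * h), a ** x * math.log(a))
```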

All these definitions are a very useful preliminary to answering the ‘critical’ question: what is the value of $0^{0}$?... It is evident that the question itself is ambiguous, because it is not specified whether we mean exponentiation or the generalized exponential, so that we have two possibilities…

a) exponentiation: given a real b and a non-negative integer n, for b=0 and n=0 we have, by definition, $b^{n}=0^{0}=1$...

b) generalized exponential: given two real non-negative numbers x and y, is it possible to evaluate $f(x,y)= x^{y}$ for x=y=0?...

A possible answer to question b) will be examined in a subsequent post…

Kind regards

$\chi$ $\sigma$
 

chisigma

Well-known member
Now we dedicate great attention to the function...

$\displaystyle x^{y}= e^{y\ \ln x}= e^{f(x,y)}$ (1)

... which of course is defined exactly where f(x,y) is defined. A first important detail is the fact that the limit $\displaystyle \lim_{(x,y) \rightarrow (0,0)} f(x,y)$ doesn't exist. That is easily demonstrated with the variable substitution $x= \rho\ \cos \theta\ ,\ y = \rho\ \sin \theta$, which leads to the identity...

$\displaystyle f(\rho, \theta)= \rho\ \sin \theta\ (\ln \rho + \ln \cos \theta)$ (2)

Here, when $\rho$ tends to 0, an appropriate choice of $\theta$ [the term $\ln \cos \theta$ is negatively unbounded...] can produce any limit you want. That means that the function f(x,y) isn't continuous at (0,0), no matter what its value at (0,0) is. Now you may ask me: '... but is there a way to compute f(0,0)?...'. Well!... in my opinion the answer is yes. The way to arrive at that result is to use the Taylor expansion of a two-variable function, according to the following theorem...
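
Before the Taylor argument, a quick numerical experiment [my own sketch, not part of the original reasoning] makes the non-existence of the limit tangible: approaching (0,0) along the path $y= c/\ln \frac{1}{x}$ keeps $f(x,y)=y\ \ln x$ equal to $-c$, so every choice of c produces a different 'limit'...

```python
import math

def f(x, y):
    return y * math.log(x)     # f(x, y) = y ln x, the exponent in x^y = e^{f(x, y)}

# approach (0, 0) along y = c / ln(1/x): both x and y tend to 0, but f stays at -c
for c in (0.5, 1.0, 2.0):
    for x in (1e-3, 1e-6, 1e-9):
        y = c / math.log(1.0 / x)
        print(c, x, y, f(x, y))   # f ≈ -c, hence x^y ≈ e^{-c}: a different value for every c
```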

Let $f(x,y)$ be a two-variable function with the partial derivatives of the first n+1 orders continuous in a neighbourhood of a point $(x_{0},y_{0})$. In this case there exists a region of the (x,y) plane where...

$\displaystyle f(x_{0}+h,y_{0}+k) = f(x_{0},y_{0}) + [h\ \frac{\partial }{\partial x} f (x_{0},y_{0})+ k\ \frac{\partial }{\partial y} f (x_{0},y_{0})] + \frac{1}{2!}\ [h^{2}\ \frac{\partial^{2} }{\partial x^{2}} f (x_{0},y_{0}) + 2\ h\ k\ \frac{\partial^{2} }{\partial x\ \partial y} f (x_{0},y_{0}) + k^{2}\ \frac{\partial^{2} }{\partial y^{2}} f (x_{0},y_{0})] +…$

$\displaystyle …+ \frac{1}{n!}\ \sum_{j=0}^{n} \binom{n}{j}\ h^{n-j}\ k^{j}\ \frac{\partial^{n} }{\partial x^{n-j}\ \partial y^{j}} f (x_{0}, y_{0}) + R_{n}$ (3)

If for a pair of real numbers h and k we have $\displaystyle \lim_{n \rightarrow \infty} R_{n}=0$, then the series (3) converges to $f(x_{0}+h, y_{0}+k)$. Now we set $x_{0}=y_{0}=1$ and try to construct the expansion (3) for $f(x,y)= y\ \ln x$, computing the partial derivatives at (1,1). At first it seems a hard task but... never say never again!...

For m=0 we have...

$\displaystyle f(x,y)=y\ \ln x = 0\ \text{in}\ x=y=1$

For m=1 we have...

$\displaystyle \frac{\partial f}{\partial x}= \frac{y}{x} =1\ \text{in}\ x=y=1$

$\displaystyle \frac{\partial f}{\partial y}= \ln x =0\ \text{in}\ x=y=1$

For m=2 we have...

$\displaystyle \frac{\partial^{2} f}{\partial x^{2}}= - \frac{ y}{x^{2}} =-1\ \text{in}\ x=y=1$


$\displaystyle \frac{\partial^{2} f}{\partial x \partial y}= \frac{1}{x} = 1\ \text{in}\ x=y=1$

$\displaystyle \frac{\partial^{2} f}{\partial y^{2}}= 0\ \text{everywhere}$

For m=3 we have...

$\displaystyle \frac{\partial^{3} f}{\partial x^{3}}= \frac{2 y}{x^{3}} =2\ \text{in}\ x=y=1$

$\displaystyle \frac{\partial^{3} f}{\partial x^{2} \partial y}= - \frac{1}{x^{2}} =-1\ \text{in}\ x=y=1$

$\displaystyle \frac{\partial^{3} f}{\partial x \partial y^{2}}= 0\ \text{everywhere}$

$\displaystyle \frac{\partial^{3} f}{\partial y^{3}}= 0\ \text{everywhere}$

And [finally...] for m=n we have...

$\displaystyle \frac{\partial^{n} f}{\partial x^{n}}= (-1)^{n-1}\ (n-1)!\ \frac{y}{x^{n}}= (-1)^{n-1}\ (n-1)!\ \text{in}\ x=y=1$

$\displaystyle \frac{\partial^{n} f}{\partial x^{n-1} \partial y}= (-1)^{n-2}\ (n-2)!\ \frac{1}{x^{n-1}}= (-1)^{n-2}\ (n-2)!\ \text{in}\ x=y=1$

$\displaystyle \frac{\partial^{n} f}{\partial x^{n-j} \partial y^{j}}= 0\ \text{everywhere for }\ j>1$
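
The table of partial derivatives above can be double-checked symbolically; the following sketch uses SymPy [a tool I am bringing in, not mentioned in the original post] to confirm the values at (1,1) and the general formula in x...

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = y * sp.log(x)

# third-order partials, evaluated at (1, 1)
print(sp.diff(f, x, 3).subs({x: 1, y: 1}))         # 2
print(sp.diff(f, x, 2, y, 1).subs({x: 1, y: 1}))   # -1
print(sp.diff(f, x, 1, y, 2))                      # 0 everywhere

# general n-th derivative in x: (-1)^(n-1) (n-1)! y / x^n
n = 6
print(sp.simplify(sp.diff(f, x, n) - (-1) ** (n - 1) * sp.factorial(n - 1) * y / x ** n))  # 0
```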

Very well!... now we are able to write...

$\displaystyle f(1+h,1+k)= (1+k)\ \ln (1+h) = h - \frac{1}{2} (h^{2}-2\ h\ k) + \frac{1}{3!} (2\ h^{3} -3\ h^{2}\ k) -…+ \frac{(-1)^{n-1}}{n!} \{(n-1)!\ h^{n}-n\ (n-2)!\ h^{n-1}\ k\}+…$ (4)

... and that allows us, setting h=k=-1, to arrive at the desired goal...

$\displaystyle f(0,0)= -1 + 1-\frac{1}{2} + \frac{1}{2} - \frac{1}{3} +… +\frac{1}{n} - \frac{1}{n+1}+… =0$ (5)
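
A quick look at the partial sums in (5) [my own numerical check] confirms the telescoping: after the terms up to index N the partial sum equals $-\frac{1}{N}$, which indeed tends to 0...

```python
# partial sums of the series in (5): -1 + (1 - 1/2) + (1/2 - 1/3) + ...
for N in (10, 100, 10_000):
    s = -1.0
    for n in range(2, N + 1):
        s += 1.0 / (n - 1) - 1.0 / n
    print(N, s)    # equals -1/N, so the series converges to 0 = f(0, 0) as claimed in (5)
```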

The result we have now achieved allows us, in my opinion, to establish that the 'only possible value' of $x^{y}$ at (0,0) is $e^{f(0,0)}=e^{0}=1$. Now I will be glad to receive any feedback from you...

Kind regards

$\chi$ $\sigma$
 

chisigma

Well-known member
This thread wouldn't be complete without a discussion about the [false...] argument used to 'demonstrate' that no real number corresponds to $0^{0}$...


'Because $\displaystyle \lim_{x \rightarrow 0} x^{0}=1$ and $\displaystyle \lim_{x \rightarrow 0} 0^{x}=0$, the limits are different, so that no single value can be assigned to $0^{0}$...'
As explained in my previous posts, the function $x^{y}$ isn't continuous at (0,0), so the limits along different trajectories in the x-y plane supply no information about the value of the function at (0,0). Anyway, in order to reinforce the result of the previous posts, the behavior of the function $f(x)=0^{x}$ for non-negative values of x is interesting. Let's recall the formula for the function $\ln (a^{x})$ we obtained in post #2...


$\displaystyle \ln (a^{x})= (1+k)\ \ln (1+h) = h - \frac{1}{2} (h^{2}-2\ h\ k) + \frac{1}{3!} (2\ h^{3} -3\ h^{2}\ k) -…+ \frac{(-1)^{n-1}}{n!} \{(n-1)!\ h^{n}-n\ (n-2)!\ h^{n-1}\ k\}+…$ (1)

... where h=a-1 and k=x-1. At this point a simple consideration is essential: (1) is an infinite series that supplies the logarithm of a function f(a,x), so we have the following cases...

a) if the series converges to a real $\alpha$, then $f(a,x)= e^{\alpha}$...

b) if the series diverges and its partial sums tend to minus infinity, then $f(a,x)=0$...

c) in all cases different from a) and b), f(a,x) is undefined...
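
For an ordinary point falling in case a) the series (1) is easy to test numerically; the sketch below [mine, not in the original post] sums the general term for a=1/2 and x=2, i.e. h=-1/2 and k=1, and compares the result with $x\ \ln a$...

```python
import math

a, x = 0.5, 2.0
h, k = a - 1.0, x - 1.0

# partial sum of (1): the first-order term h, then the general term for n >= 2
s = h
for n in range(2, 200):
    s += (-1) ** (n - 1) * (h ** n / n - k * h ** (n - 1) / (n - 1))

print(s, x * math.log(a))   # both ≈ -1.3863: the series reproduces ln(a^x) = x ln a, case a)
```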

Very well!... now the only thing we have to do is set $a=0 \implies h=-1$ in (1), and we obtain the 'magic result'...

$\displaystyle \ln (0^{x}) = - 1 - \frac{1}{2} (1+2 k) - \frac{1}{3!} (2! + 3\ k) -...- \frac{1}{n!}\ \{(n-1)! + n\ (n-2)!\ k\} -... = - (1+k) - \frac{1}{2}\ (1+k) - ...- \frac{1}{n}\ (1+k) -... =$

$\displaystyle = -x\ \sum_{n=1}^{\infty} \frac{1}{n}$ (2)

... and the [very simple to understand...] result is that for x>0 we are in case b), for x=0 in case a), and for x<0 in case c), so that...

$\displaystyle 0^{x}=\begin{cases}0\ \text{if}\ x>0\\ 1\ \text{if}\ x=0\\ \text{undefined if}\ x<0 \end{cases}$ (3)

An illustrative example of the obtained result is represented in the figure...

[Attachment: MHB18.PNG — plot of $a^{x}$ for a=0.1, a=0.01, a=0.001]

... where the function $a^{x}$ is plotted for a=0.1, a=0.01 and a=0.001. As a tends to 0, the approach of the function to (3) is fully evident...
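
For readers who cannot see the attachment, the behaviour it shows can be reproduced with a short script [a sketch assuming Matplotlib is available; it regenerates curves like those in MHB18.PNG rather than the original file]...

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(0.0, 1.0, 400)
for a in (0.1, 0.01, 0.001):
    plt.plot(xs, a ** xs, label=f"a = {a}")   # a^x = e^{x ln a}, plotted for a > 0

plt.xlabel("x")
plt.ylabel("a^x")
plt.legend()
plt.show()   # as a -> 0 the curves squeeze toward the step in (3): value 1 at x = 0, then 0
```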

Kind regards

$\chi$ $\sigma$
 



CaptainBlack

Well-known member
All very well, but still nonsense. The symbol \(0^0\) of itself has no meaning; we give it meaning to suit the context. In practice this means we assign it to the limit of \(x^y\) along a trajectory in the \((x,y)\) plane relevant to the problem at hand. That there are different contexts, and hence different meanings that we wish \(0^0\) to convey, is what we mean when we say that the symbol \(0^0\) is undefined. It is undefined when we consider it independent of context.

CB