
Euler's formula.

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
Deduce a proof of Euler's formula $e^{i \theta} = \cos(\theta) + i\sin(\theta)$ without using the angle addition formula for sine and cosine.

Balarka
.
 

chisigma

Well-known member
Feb 13, 2012
1,704
mathbalarka said:
Deduce a proof of Euler's formula $e^{i \theta} = \cos(\theta) + i\sin(\theta)$ without using the angle addition formula for sine and cosine.

Balarka
.
Let us suppose we have a point moving along the unit circle with unit angular speed (i.e. frequency $\frac{1}{2\ \pi}$ revolutions per unit time), so that, denoting the point by a complex variable z, we have...

$\displaystyle z(t) = \cos t + i\ \sin t\ (1)$

Differentiating z(t) with respect to t, you have...

$\displaystyle z^{\ '} (t) = - \sin t + i\ \cos t = i\ z(t)\ (2)$

Now if we consider the ODE...

$\displaystyle z^{\ '} = i\ z,\ z(0)=1\ (3)$

... its solution is...

$\displaystyle z(t) = e^{i\ t}\ (4)$
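Steps (2) and (4) can be sanity-checked numerically; here is a minimal Python sketch (the sample point $t=0.7$, the step size, and the tolerances are arbitrary choices, not part of the argument):

```python
import math, cmath

def z(t):
    # the curve (1): z(t) = cos t + i sin t
    return complex(math.cos(t), math.sin(t))

t, h = 0.7, 1e-6
# central-difference approximation of z'(t)
dz = (z(t + h) - z(t - h)) / (2 * h)

# (2): z'(t) should equal i z(t)
assert abs(dz - 1j * z(t)) < 1e-8

# (4): the ODE solution e^{i t} should match z(t)
assert abs(cmath.exp(1j * t) - z(t)) < 1e-12
```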


Kind regards

$\chi$ $\sigma$
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
First things first, thanks for participating, chisigma.

Your start was good, but the solution isn't complete. Can you verify step (2) and show that you are not using the angle-sum identity while computing $z'(t)$?
 

chisigma

Well-known member
Feb 13, 2012
1,704
mathbalarka said:
First things first, thanks for participating, chisigma.

Your start was good, but the solution isn't complete. Can you verify step (2) and show that you are not using the angle-sum identity while computing $z'(t)$?
The formal derivative is...

$\displaystyle \frac{d}{dt}\ z(t)= \frac{d}{dt}\ (\cos t + i\ \sin t) = - \sin t + i\ \cos t = i\ z(t)\ (1)$

Kind regards

$\chi$ $\sigma$
 

ZaidAlyafey

Well-known member
MHB Math Helper
Jan 17, 2013
1,667
\(\displaystyle e^{z} = \sum_{n\geq 0}\frac{z^n}{n!}\)

\begin{align}
e^{i \theta } &= \sum_{n\geq 0}\frac{(i\theta)^n}{n!}\\
& = \sum_{n\geq 0}\frac{(i\theta)^{2n}}{(2n)!}+\sum_{n\geq 0}\frac{(i\theta)^{2n+1}}{(2n+1)!}\\
& = \sum_{n\geq 0}\frac{(-1)^n \theta ^{2n}}{(2n)!}+i\sum_{n\geq 0}\frac{(-1)^n \theta ^{2n+1}}{(2n+1)!}\\
& = \cos \theta +i\sin \theta
\end{align}
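The partial sums of this series can be checked numerically against $\cos\theta + i\sin\theta$; a small Python sketch (the number of terms and the sample angle are arbitrary):

```python
import math

def exp_i(theta, terms=30):
    # partial sum of sum_{n>=0} (i*theta)^n / n!
    s, term = 0j, complex(1, 0)
    for n in range(terms):
        s += term
        term *= 1j * theta / (n + 1)
    return s

theta = 1.3
approx = exp_i(theta)
exact = complex(math.cos(theta), math.sin(theta))
assert abs(approx - exact) < 1e-12
```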
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
Yes, but note that you are differentiating sine and cosine when you differentiate $z(t)$; can you do that without the angle-sum identity?

- - - Updated - - -

Thanks for participating Zaid.

I'd ask you the same as chisigma: can you justify that your steps don't rely on the angle-sum identity in the background?
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,725
In a rigorous approach to analysis, Zaid Alyafey's answer is the only acceptable way. The functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ must all be defined by their power series, and all their other properties (including the angle sum identity) derived from that. In that context, Euler's relation becomes totally trivial.
 

ZaidAlyafey

Well-known member
MHB Math Helper
Jan 17, 2013
1,667
mathbalarka said:
I'd ask you the same as chisigma: can you justify that your steps don't rely on the angle-sum identity in the background?
That is a difficult task, unless you are claiming that I am using it in a specific step.
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
@ Opalg : First, Zaid's answer is not acceptable unless he clarifies it, since my question asks for a derivation without the angle-sum identity. Second, the angle-sum identity is usually not derived from Euler's formula, as that makes things ambiguous.

This is really a very deep question of calculus, and I will ask everyone to think a little further than a usual analyst would. The problem is quite hard to see, but the answer lies in the very basics.

- - - Updated - - -

ZaidAlyafey said:
That is a difficult task unless you are claiming that I am using it in a specific step .
I am claiming it, yes. If you point the error out, though, you can bypass it.
 

ZaidAlyafey

Well-known member
MHB Math Helper
Jan 17, 2013
1,667
[1] Either you are claiming that the Taylor expansion depends on trigonometric identities.
That is impossible, because we can define it for arbitrary (infinitely differentiable) functions.

[2] Or that the power expansion \(\displaystyle e^z= \sum_{n\geq 0}\frac{z^n}{n!}\) is not valid unless we use the trig identities.
The power expansion of \(\displaystyle e^z\) exists and is unique because of the analyticity of the function: the Cauchy-Riemann equations are satisfied (write $z = x+iy$) and all partial derivatives exist and are continuous.

[3] Or you are claiming that \(\displaystyle \lim_{z \to 0}\frac{e^z-1}{z}=1\) is not valid unless we use trig identities.


Claim [3] might work, though; I will think about it later.
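Claim [3] at least holds numerically: a quick Python check (the directions and step size are arbitrary choices) that $(e^z-1)/z \to 1$ as $z \to 0$ along several complex directions:

```python
import cmath

# check (e^z - 1)/z -> 1 as z -> 0 along several complex directions
for direction in (1, 1j, 1 + 1j, -1j):
    z = 1e-6 * direction / abs(direction)
    ratio = (cmath.exp(z) - 1) / z
    assert abs(ratio - 1) < 1e-5
```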
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
No, none of the points you refer to has anything to do with the angle-sum identity. [3] can be proved by applying L'Hôpital's rule, and doesn't involve the angle-sum identity. You are missing something vital.
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,725
mathbalarka said:
angle-sum identity is usually not derived from Euler's formula, as it makes things ambiguous.
I was not saying that the angle-sum identity is derived from Euler's formula. It is proved by multiplying the defining power series, using the theorem that the Cauchy product of two absolutely convergent series converges absolutely to the product of their sums. From that, one can deduce that $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$. The angle-sum identities then follow by taking the real and imaginary parts.
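Both steps of this argument can be verified numerically with truncated power series; a Python sketch (the truncation length and sample angles are arbitrary):

```python
import math

def exp_series(z, terms=40):
    # truncated power series for e^z
    s, term = 0j, complex(1, 0)
    for n in range(terms):
        s += term
        term *= z / (n + 1)
    return s

theta, phi = 0.8, 0.5
lhs = exp_series(1j * (theta + phi))
rhs = exp_series(1j * theta) * exp_series(1j * phi)
# e^{i(theta+phi)} = e^{i theta} e^{i phi}
assert abs(lhs - rhs) < 1e-12

# angle-sum identities from the real and imaginary parts
assert abs(lhs.real - (math.cos(theta) * math.cos(phi) - math.sin(theta) * math.sin(phi))) < 1e-12
assert abs(lhs.imag - (math.sin(theta) * math.cos(phi) + math.cos(theta) * math.sin(phi))) < 1e-12
```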
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
Opalg said:
I was not saying that the angle-sum identity is derived from Euler's formula. It is proved by multiplying the defining power series, using the theorem that the Cauchy product of two absolutely convergent series converges absolutely to the product of their sums. From that, one can deduce that $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$. The angle-sum identities then follow by taking the real and imaginary parts.
That depends on how you derive Euler's formula. If you can find a way to derive it without the angle-sum formula, then yes, the above can be applied. Otherwise ambiguity occurs, as I have indicated before.
 

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,725
Opalg said:
I was not saying that the angle-sum identity is derived from Euler's formula. It is proved by multiplying the defining power series, using the theorem that the Cauchy product of two absolutely convergent series converges absolutely to the product of their sums. From that, one can deduce that $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$. The angle-sum identities then follow by taking the real and imaginary parts.
mathbalarka said:
That depends on how you derive Euler's formula. If you can find a way to derive it without the angle-sum formula, then yes, the above can be applied. Otherwise ambiguity occurs, as I have indicated before.
I don't understand what you mean by this ambiguity. If you define the functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ by their power series, then the real part of the power series for $e^{i\theta}$ is the power series for $\cos\theta$, and the imaginary part is the series for $\sin\theta$. So Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ is practically a tautology.
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
Opalg said:
If you define the functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ by their power series ...
I am not defining them by their power series. They are defined, as usual, by the properties of right-angled triangles throughout this context. The power series can still be derived from that, though.


Opalg said:
I don't understand what you mean by this ambiguity.
That is the whole point of this problem. Most of the problem would be solved if I pointed out to you the tautology that occurs.
 
Last edited:

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,725
mathbalarka said:
I am not defining them by their power series. They are defined, as usual, by the properties of right-angled triangles throughout this context.
Okay, if you want to define trig functions in terms of triangles, then you must first define what you mean by an angle. And if you want to define angles (as radians), you have to know what is meant by arc length. But arc length, on a curve $y=f(x)$, is defined in terms of an integral \(\displaystyle \int\sqrt{1+f'(x)^2}dx\). If you try to do that for the unit circle, you will find yourself having to integrate $\dfrac1{\sqrt{1-x^2}}$, which leads you to $\arcsin x$. This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.
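The circularity is easy to see numerically: integrating $\frac{1}{\sqrt{1-x^2}}$ does reproduce $\arcsin$. A Python sketch using a midpoint rule (the helper name, sample point, and step count are my own arbitrary choices):

```python
import math

def arc_length_integral(x, n=100000):
    # midpoint rule for int_0^x 1/sqrt(1 - t^2) dt,
    # i.e. the arc length of the unit circle from (1, 0) up to height x
    h = x / n
    return sum(h / math.sqrt(1 - ((k + 0.5) * h) ** 2) for k in range(n))

x = 0.6
assert abs(arc_length_integral(x) - math.asin(x)) < 1e-6
```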
 

chisigma

Well-known member
Feb 13, 2012
1,704
mathbalarka said:
Yes, but note that you are differentiating sine and cosine when you differentiate $z(t)$; can you do that without the angle-sum identity?
All right!... I'll try to complete my answer, considering again the complex function...

$\displaystyle z(t) = \cos t + i\ \sin t\ (1)$

... the derivative of which is...

$\displaystyle z^{\ '} (t) = \frac{d}{dt}\ \cos t + i\ \frac{d}{dt}\ \sin t\ (2)$

Now suppose we don't know $\frac{d}{d t} \cos t$ and $\frac{d}{d t} \sin t$, and let's try to arrive at them. A fundamental property of complex functions establishes that...

$\displaystyle \frac {d}{d t} \ln z(t) = \frac{z^{\ '}(t)}{z(t)}$

... and in our case...

$\displaystyle \frac{d}{d t} \ln z(t)= i\ (3)$

Inserting (1) and (2) into (3), we arrive directly at the equations...

$\displaystyle \frac{d}{d t} \cos t\ \cos t + \frac{d}{d t} \sin t\ \sin t =0$

$\displaystyle \frac{d}{d t} \sin t\ \cos t - \frac{d}{d t} \cos t\ \sin t = 1\ (4)$

... and the solutions of (4) are...

$\displaystyle \frac{d}{d t} \sin t = \cos t$

$\displaystyle \frac{d} {d t} \cos t = - \sin t\ (5)$
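The linear system (4) can be confirmed by solving it with Cramer's rule at a sample point. A Python sketch (the unknowns `c1`, `s1` stand for the derivatives of cos and sin, and $t=0.9$ is an arbitrary choice):

```python
import math

# treat c1 = (cos)'(t) and s1 = (sin)'(t) as unknowns in the linear
# system (4):  c1*cos t + s1*sin t = 0,   s1*cos t - c1*sin t = 1
t = 0.9
c, s = math.cos(t), math.sin(t)
# Cramer's rule; the determinant is cos^2 t + sin^2 t = 1
det = c * c + s * s
c1 = (0 * c - 1 * s) / det     # = -sin t
s1 = (1 * c + 0 * s) / det     # =  cos t
assert abs(c1 + math.sin(t)) < 1e-12
assert abs(s1 - math.cos(t)) < 1e-12
```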

Kind regards

$\chi$ $\sigma$
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
Opalg said:
This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.
Yes, I know of this tautology. In fact, geometry is filled with tautologies -- you are given a point. To define a line, you draw another point, but where does the second one come from? Why, now you have an infinite number of points, so you define them. Then where do you put 'em? Why, in a space -- and here comes the concept of a space in which all the points are kept, i.e., \(\displaystyle \mathbb{R}^3\), and you have lost everything you were keeping in order. That's why modern mathematics uses better definitions than the geometric ones. I know of this, but the scope of this challenge is not going into that mess. :D
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
@chisigma : How do you derive (3)?
 

chisigma

Well-known member
Feb 13, 2012
1,704
mathbalarka said:
@chisigma : How do you derive (3)?
For any complex function we have $\displaystyle \ln z = \ln |z| + i\ \text{arg}\ (z)$, so that in our case...

$\displaystyle \ln z = \frac{1}{2}\ \ln (\sin^{2} t + \cos^{2} t) + i\ \tan^{-1} \frac{\sin t}{\cos t} = i\ t\ (1)$

... so that...

$\displaystyle \frac{d}{d t} \ln z = \frac{z^{\ '}(t)}{z(t)} = i\ (2)$
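Formula (1) can be checked numerically on the principal branch of the logarithm, i.e. for $t \in (-\pi, \pi]$. A Python sketch (the sample values are arbitrary):

```python
import cmath, math

# check ln z(t) = i t on the principal branch, i.e. for t in (-pi, pi]
for t in (0.3, 1.2, -2.0):
    z = complex(math.cos(t), math.sin(t))
    w = cmath.log(z)           # = ln|z| + i arg z, with |z| = 1
    assert abs(w - 1j * t) < 1e-12
```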

Kind regards

$\chi$ $\sigma$
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
$(1)$ is true if and only if $\sin(t) > 0$.
 

chisigma

Well-known member
Feb 13, 2012
1,704
mathbalarka said:
$(1)$ is true if and only if $\sin(t) > 0$.
Of course, we have...

$\displaystyle i\ \tan^{-1} \frac{\sin t}{\cos t} = i\ \tan^{- 1} \tan t = i\ (t + k\ \pi)\ (1)$

... but the derivative of the constant term is 0, so that in any case...

$\displaystyle \frac{d}{d t} \ln z = \frac{z^{\ '} (t)}{z(t)} = i\ (2)$

Kind regards

$\chi$ $\sigma$
 

mathbalarka

Well-known member
MHB Math Helper
Mar 22, 2013
573
Well done! Your proof is correct; I like this one. In fact, mine is rather cumbersome compared to yours.

Thanks very much for participating,
Balarka
.
 

Deveno

Well-known member
MHB Math Scholar
Feb 15, 2012
1,967
Opalg said:
Okay, if you want to define trig functions in terms of triangles, then you must first define what you mean by an angle. And if you want to define angles (as radians), you have to know what is meant by arc length. But arc length, on a curve $y=f(x)$, is defined in terms of an integral \(\displaystyle \int\sqrt{1+f'(x)^2}dx\). If you try to do that for the unit circle, you will find yourself having to integrate $\dfrac1{\sqrt{1-x^2}}$, which leads you to $\arcsin x$. This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.
I almost agree with this.

It is actually possible to define the cosine function by first defining:

$\displaystyle A: [-1,1] \to \Bbb R,\ A(x) = \frac{x\sqrt{1-x^2}}{2} + \int_x^1 \sqrt{1 - t^2}\ dt$

and then defining cosine on the interval:

$\displaystyle [0, 2\int_{-1}^1 \sqrt{1 - u^2}\ du]$

as the unique function of $x$ such that:

$(A \circ \cos)(x) = \dfrac{x}{2}$

and then finally, defining:

$\displaystyle \pi = 2\int_{-1}^1 \sqrt{1 - u^2}\ du$

and on $[0,\pi]$ defining:

$\sin(x) = \sqrt{1 - (\cos x)^2}$.

At this point we extend the domain of these two functions by defining, for $x \in (\pi,2\pi]$:

$\cos(x) = \cos(2\pi - x)$
$\sin(x) = -\sin(2\pi - x)$,

and finally extend by periodicity.
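As a sanity check, this construction can be carried out numerically. A Python sketch (the midpoint quadrature, step counts, and bisection depth are my own arbitrary choices, not part of the construction):

```python
import math

def quad(f, a, b, n=5000):
    # plain midpoint rule; accurate enough for this sanity check
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def A(x):
    # Deveno's function A(x) = x*sqrt(1-x^2)/2 + integral_x^1 sqrt(1-t^2) dt
    return x * math.sqrt(1 - x * x) / 2 + quad(lambda t: math.sqrt(1 - t * t), x, 1)

# pi defined as 2 * integral_{-1}^{1} sqrt(1-u^2) du
pi_est = 2 * quad(lambda u: math.sqrt(1 - u * u), -1, 1, n=200000)

def my_cos(x):
    # cos x on [0, pi]: the unique y with A(y) = x/2 (A is strictly decreasing)
    lo, hi = -1.0, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if A(mid) > x / 2:
            lo = mid      # A decreasing: the root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(pi_est - math.pi) < 1e-4
assert abs(my_cos(1.0) - math.cos(1.0)) < 1e-4
```

(Note that $A'(x) = -\tfrac{1}{2\sqrt{1-x^2}}$, which is why $A$ is strictly decreasing and the bisection is well-defined.)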

Now, granted, this construction bears little resemblance to the usual geometric definitions of the trigonometric functions, and why we would do such a thing is a bit unmotivated, but it IS rigorous, and it DOESN'T use power series.

One advantage of this method is that the angle-sum identities are no longer required to derive the derivatives of these two functions; it follows (more or less straightforwardly) that:

$\cos'(x) = -\sin x$
$\sin'(x) = \cos x$

by applying the inverse function theorem to $B = 2A$ (first on the interval $[0,\pi]$ and then using the "extended" definitions to compute the derivatives on all of $\Bbb R$...care has to be taken to apply the correct one-sided limits at the "stitch points", of course).

A clear disadvantage of this is that it is NOT clear how to extend these definitions analytically to $\Bbb C$, but now the Taylor series in Zaid's posts can be computed without fear of reference to the angle-sum formulae, and THOSE series can easily be shown to be convergent on $\Bbb C$.

Just sayin'
 
Last edited:

Opalg

MHB Oldtimer
Staff member
Feb 7, 2012
2,725
Deveno said:
It is actually possible to define the cosine function by first defining:

$\displaystyle A: [-1,1] \to \Bbb R,\ A(x) = \frac{x\sqrt{1-x^2}}{2} + \int_x^1 \sqrt{1 - t^2}\ dt$

[...]

Just sayin'
Very ingenious! But I can't help thinking that power series are neater and more useful.

The only awkward thing about the power series approach is that (as in your method) the definition of $\pi$ is a bit messy. You have to show that $\cos x$ (as defined by its power series) is a decreasing function on the interval [0,2], with $\cos 0 = 1>0$ and $\cos2<0$. It follows from the intermediate value theorem that there is a unique number $x_0$ between 0 and 2 such that $\cos x_0 = 0$. The number $\pi$ is then defined to be $2x_0.$
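This definition of $\pi$ is easy to realize numerically: bisect the power series for $\cos x$ on $[0,2]$. A Python sketch (the truncation length and bisection depth are arbitrary):

```python
import math

def cos_series(x, terms=20):
    # cos x defined by its power series: sum (-1)^n x^(2n) / (2n)!
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))
    return s

# cos is positive at 0 and negative at 2; bisect on [0, 2] for its zero x0
lo, hi = 0.0, 2.0
assert cos_series(lo) > 0 and cos_series(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if cos_series(mid) > 0:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2

# pi is then defined as 2 * x0
assert abs(2 * x0 - math.pi) < 1e-12
```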