- mathbalarka (Thread starter) · #1 · started Mar 22, 2013

Balarka

.

- chisigma · #2

Let's consider the complex function...

$\displaystyle z(t) = \cos t + i\ \sin t\ (1)$

If you differentiate z(t) with respect to t, you have...

$\displaystyle z^{\ '} (t) = - \sin t + i\ \cos t = i\ z(t)\ (2)$

Now if we consider the ODE...

$\displaystyle z^{\ '} = i\ z,\ z(0)=1\ (3)$

... its solution is...

$\displaystyle z(t) = e^{i\ t}\ (4)$

Kind regards

$\chi$ $\sigma$
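As a quick numerical sanity check (not part of chisigma's post; the helper `z` is mine), the sketch below verifies by finite differences that $z(t) = e^{it}$ satisfies the ODE (3) with $z(0)=1$ and agrees with $\cos t + i\sin t$:

```python
import cmath
import math

# Numerical sanity check (not from the thread): verify that z(t) = e^{i t}
# satisfies z'(t) = i z(t) with z(0) = 1, and that it agrees with
# cos t + i sin t, i.e. Euler's formula (4) above.
def z(t):
    return cmath.exp(1j * t)

t = 0.7
h = 1e-6
deriv = (z(t + h) - z(t - h)) / (2 * h)   # central finite difference for z'(t)

assert abs(deriv - 1j * z(t)) < 1e-8                          # ODE (3): z' = i z
assert abs(z(0) - 1) < 1e-15                                  # initial condition
assert abs(z(t) - complex(math.cos(t), math.sin(t))) < 1e-12  # matches (1)
```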

- mathbalarka (Thread starter) · #3

Your start was good, but the solution isn't complete. Can you verify step (2) and show that you are not using the angle-sum identity while computing $z'(t)$?

- chisigma · #4

mathbalarka said:
> Can you verify step (2) and show that you are not using the angle-sum identity while computing $z'(t)$?

The formal derivative is...

$\displaystyle \frac{d}{dt}\ z(t)= \frac{d}{dt}\ (\cos t + i\ \sin t) = - \sin t + i\ \cos t = i\ z(t)\ (1)$

Kind regards

$\chi$ $\sigma$

- ZaidAlyafey · #5

\begin{align}

e^{i \theta } &= \sum_{n\geq 0}\frac{(i\theta)^n}{n!}\\

& = \sum_{n\geq 0}\frac{(-1)^n \theta ^{2n}}{(2n)!}+i\sum_{n\geq 0}\frac{(-1)^n \theta ^{2n+1}}{(2n+1)!}\\

& = \cos \theta +i\sin \theta

\end{align}
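The series manipulation above can be confirmed numerically (a sketch, not from the thread; `exp_series` is my own helper that sums the truncated exponential series):

```python
import cmath
import math

# Sketch: numerically confirm that the truncated power series for e^{i theta}
# splits into the cosine series (real part) and the sine series (imaginary part).
def exp_series(z, terms=30):
    total, term = 0, 1
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next term: z^(n+1) / (n+1)!
    return total

theta = 1.234
s = exp_series(1j * theta)
assert abs(s - cmath.exp(1j * theta)) < 1e-12
assert abs(s.real - math.cos(theta)) < 1e-12
assert abs(s.imag - math.sin(theta)) < 1e-12
```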

- mathbalarka (Thread starter) · #6

Thanks for participating Zaid.

I'd ask you the same as chisigma: can you justify that your steps don't rely on the angle-sum identity in the background?

- Opalg (Moderator) · #7

- ZaidAlyafey · #8

mathbalarka said:
> Can you justify that your steps don't rely on the angle-sum identity in the background?

That is a difficult task unless you are claiming that I am using it in a specific step.

- mathbalarka (Thread starter) · #9

This is really a very deep question of calculus, and I will ask everyone to think a little further than a usual analyst would. The problem is quite hard to see, but the answer lies in the very basics.

ZaidAlyafey said:
> That is a difficult task unless you are claiming that I am using it in a specific step.

I am claiming it, yes. If you point the error out, though, you can bypass it.

- ZaidAlyafey · #10

That is impossible because we can define it for arbitrary differentiable functions.

[2] The power expansion \(\displaystyle e^z= \sum_{n\geq 0}\frac{z^n}{n!}\) is not valid unless we use the trig-identities.

The power expansion of \(\displaystyle e^z\) exists and is unique because of the analyticity of the function: the Cauchy-Riemann equations are satisfied (write $z = x + iy$), and all partial derivatives exist and are continuous.

[3] You are claiming that \(\displaystyle \lim_{z \to 0}\frac{e^z-1}{z}=1\) is not valid unless we use trig-identities.

Claim [3] might work, though; I will think about it later.
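The limit in claim [3] can at least be checked numerically (a sketch, not from the thread): evaluate $(e^z-1)/z$ as $z \to 0$ along several directions in the complex plane.

```python
import cmath

# Sketch: check numerically that (e^z - 1)/z -> 1 as z -> 0 along several
# directions in the complex plane (the limit in claim [3]).
for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):
    for r in (1e-2, 1e-4, 1e-6):
        z = r * direction
        ratio = (cmath.exp(z) - 1) / z
        # |ratio - 1| is roughly |z|/2, so it shrinks with r
        assert abs(ratio - 1) < r
```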

- mathbalarka (Thread starter) · #11

The angle-sum identity is usually not derived from Euler's formula, as it makes things ambiguous.

- Opalg (Moderator) · #12

mathbalarka said:
> The angle-sum identity is usually not derived from Euler's formula, as it makes things ambiguous.

I was not saying that the angle-sum identity is derived from Euler's formula. It is proved by multiplying the defining power series, using the theorem that the Cauchy product of two absolutely convergent series converges absolutely to the product of their sums. From that, one can deduce that $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$. The angle-sum identities then follow by taking the real and imaginary parts.
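Opalg's Cauchy-product argument can be illustrated numerically (a sketch, not a proof, and not from the thread; `coeffs` is my own helper): multiply the truncated series of $e^{ia}$ and $e^{ib}$ as a Cauchy product and compare with $e^{i(a+b)}$.

```python
import cmath
import math

# Sketch: Cauchy product of the exponential series for e^{i a} and e^{i b},
# compared against e^{i(a+b)}; real/imaginary parts give the angle-sum identities.
N = 40

def coeffs(z):
    # first N coefficients z^n / n! of the exponential series
    out, term = [], 1
    for n in range(N):
        out.append(term)
        term = term * z / (n + 1)
    return out

a, b = 0.6, 1.1
ca, cb = coeffs(1j * a), coeffs(1j * b)
# Cauchy product: c_n = sum_{k <= n} a_k * b_{n-k}
product = sum(sum(ca[k] * cb[n - k] for k in range(n + 1)) for n in range(N))

assert abs(product - cmath.exp(1j * (a + b))) < 1e-12
assert abs(product.real - math.cos(a + b)) < 1e-12   # cos(a+b) from the product
assert abs(product.imag - math.sin(a + b)) < 1e-12   # sin(a+b) from the product
```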

- mathbalarka (Thread starter) · #13

Opalg said:
> It is proved by multiplying the defining power series, using the theorem that the Cauchy product of two absolutely convergent series converges absolutely to the product of their sums. From that, one can deduce that $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$.

That depends on how you derive Euler's formula. If you find a way to derive it without the angle-sum formula, then yes, the above can be applied. Otherwise ambiguity occurs, as I have indicated before.

- Opalg (Moderator) · #14

mathbalarka said:
> That depends on how you derive Euler's formula. If you find a way to derive it without the angle-sum formula, then yes, the above can be applied. Otherwise ambiguity occurs, as I have indicated before.

I don't understand what you mean by this ambiguity. If you define the functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ by their power series, then the real part of the power series for $e^{i\theta}$ is the power series for $\cos\theta$, and the imaginary part is the series for $\sin\theta$. So Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ is practically a tautology.

- mathbalarka (Thread starter) · #15

Opalg said:
> If you define the functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ by their power series ...

I am not defining them by their power series. They are defined, as usual, by the properties of a right-angled triangle throughout this context. The power series can still be derived from that, though.

Opalg said:
> I don't understand what you mean by this ambiguity.

That is the whole point of this problem. Most of the problem would be solved if I pointed out to you the tautology that occurs.


- Opalg (Moderator) · #16

mathbalarka said:
> I am not defining them by their power series. They are defined, as usual, by the properties of a right-angled triangle throughout this context.

Okay, if you want to define trig functions in terms of triangles, then you must first define what you mean by an angle. And if you want to define angles (as radians), you have to know what is meant by arc length. But arc length, on a curve $y=f(x)$, is defined in terms of an integral \(\displaystyle \int\sqrt{1+f'(x)^2}dx\). If you try to do that for the unit circle, you will find yourself having to integrate $\dfrac1{\sqrt{1-x^2}}$, which leads you to $\arcsin x$. This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.

- chisigma · #17

mathbalarka said:
> Yes, but note that you are deriving sine and cosine while deriving $z(t)$; can you do that without the angle-sum identity?

All right!... I'll try to complete my answer, considering again the complex function...

$\displaystyle z(t) = \cos t + i\ \sin t\ (1)$

... the derivative of which is...

$\displaystyle z^{\ '} (t) = \frac{d}{dt}\ \cos t + i\ \frac{d}{dt}\ \sin t\ (2)$

Now suppose we don't know $\frac{d}{d t} \cos t$ and $\frac{d}{d t} \sin t$ and try to arrive at them. A fundamental property of complex functions establishes that...

$\displaystyle \frac {d}{d t} \ln z(t) = \frac{z^{\ '}(t)}{z(t)}$

... and in our case is...

$\displaystyle \frac{d}{d t} \ln z(t)= i\ (3)$

Inserting (1) and (2) in (3) we arrive directly to the equations...

$\displaystyle \frac{d}{d t} \cos t\ \cos t + \frac{d}{d t} \sin t\ \sin t = 0$

$\displaystyle \frac{d}{d t} \sin t\ \cos t - \frac{d}{d t} \cos t\ \sin t = 1\ (4)$

... and the solutions of (4) are...

$\displaystyle \frac{d}{d t} \sin t = \cos t$

$\displaystyle \frac{d} {d t} \cos t = - \sin t\ (5)$

Kind regards

$\chi$ $\sigma$
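The linear system (4) above can be sanity-checked numerically (a sketch, not part of chisigma's post): solving it by Cramer's rule at a sample point, with determinant $\cos^2 t + \sin^2 t = 1$, recovers the claimed derivatives.

```python
import math

# Sketch: solve the 2x2 linear system (4) for the unknown derivatives
# c' = (cos t)' and s' = (sin t)' at a sample point:
#   c' * cos t + s' * sin t = 0
#   s' * cos t - c' * sin t = 1
t = 0.9
c, s = math.cos(t), math.sin(t)
# Cramer's rule; the determinant is cos^2 t + sin^2 t = 1
c_prime = (0 * c - 1 * s) / (c * c + s * s)   # should equal -sin t
s_prime = (1 * c - 0 * (-s)) / (c * c + s * s)   # should equal cos t

assert abs(c_prime + s) < 1e-12   # (cos t)' = -sin t, as in (5)
assert abs(s_prime - c) < 1e-12   # (sin t)' =  cos t, as in (5)
```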

- mathbalarka (Thread starter) · #18

Opalg said:
> This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.

Yes, I know of this tautology. In fact, geometry is filled with tautologies -- you are given a point. To define a line, you draw another point, but where does the second one come from? Why, you define it, for you now have an infinite number of points. Then where do you put 'em? Why, in a space -- and here comes the concept of a space in which all the points are kept, i.e., \(\displaystyle \mathbb{R}^3\), and you have lost everything you were keeping in order. That's why modern mathematics uses better definitions than the geometric ones. I know of this, but the scope of this challenge does not go into that mess.

- mathbalarka (Thread starter) · #19

@chisigma : How do you derive (3)?

- chisigma · #20

mathbalarka said:
> @chisigma : How do you derive (3)?

For any complex function we have $\displaystyle \ln z = \ln |z| + i\ \text{arg}\ (z)$, so that in our case...

$\displaystyle \ln z = \frac{1}{2}\ \ln (\sin^{2} t + \cos^{2} t) + i\ \tan^{-1} \frac{\sin t}{\cos t} = i\ t\ (1)$

... so that is...

$\displaystyle \frac{d}{d t} \ln z = \frac{z^{\ '}(t)}{z(t)} = i\ (2)$

Kind regards

$\chi$ $\sigma$

- mathbalarka (Thread starter) · #21

$(1)$ is true if and only if $\sin(t) > 0$.

- chisigma · #22

mathbalarka said:
> $(1)$ is true if and only if $\sin(t) > 0$.

Of course, we have...

$\displaystyle i\ \tan^{-1} \frac{\sin t}{\cos t} = i\ \tan^{- 1} \tan t = i\ (t + k\ \pi)\ (1)$

... but in any case the derivative of the constant term is 0, so that in any case is ...

$\displaystyle \frac{d}{d t} \ln z = \frac{z^{\ '} (t)}{z(t)} = i\ (2)$

Kind regards

$\chi$ $\sigma$

- mathbalarka (Thread starter) · #23

Thanks very much for participating,

Balarka

.

- #24

Opalg said:
> Okay, if you want to define trig functions in terms of triangles, then you must first define what you mean by an angle. And if you want to define angles (as radians), you have to know what is meant by arc length. But arc length, on a curve $y=f(x)$, is defined in terms of an integral \(\displaystyle \int\sqrt{1+f'(x)^2}dx\). If you try to do that for the unit circle, you will find yourself having to integrate $\dfrac1{\sqrt{1-x^2}}$, which leads you to $\arcsin x$. This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.

I almost agree with this.

It is actually possible to define the cosine function by first defining:

$\displaystyle A: [-1,1] \to \Bbb R,\ A(x) = \frac{x\sqrt{1-x^2}}{2} + \int_x^1 \sqrt{1 - t^2}\ dt$

and then defining cosine on the interval:

$\displaystyle [0, 2\int_{-1}^1 \sqrt{1 - u^2}\ du]$

as the unique (function of) $x$ such that:

$(A \circ \cos)(x) = \dfrac{x}{2}$

and then finally, defining:

$\displaystyle \pi = 2\int_{-1}^1 \sqrt{1 - u^2}\ du$

and on $[0,\pi]$ defining:

$\sin(x) = \sqrt{1 - (\cos x)^2}$.

At this point we extend the domain of these two functions by defining, for $x \in (\pi,2\pi]$:

$\cos(x) = \cos(2\pi - x)$

$\sin(x) = -\sin(2\pi - x)$,

and finally extend by periodicity.

Now, granted, this construction bears little resemblance to the usual geometric definitions of the trigonometric functions, and why we would do such a thing is a bit unmotivated, but it IS rigorous, and it DOESN'T use power series.

One advantage of this method is that the angle-sum identities are no longer required to derive the derivatives of these two functions; it follows (more or less straightforwardly) that:

$\cos'(x) = -\sin x$

$\sin'(x) = \cos x$

by applying the inverse function theorem to $B = 2A$ (first on the interval $[0,\pi]$ and then using the "extended" definitions to compute the derivatives on all of $\Bbb R$...care has to be taken to apply the correct one-sided limits at the "stitch points", of course).

A clear disadvantage of this, is that it is NOT clear how to extend these definitions analytically to $\Bbb C$, but now the Taylor series in Zaid's posts can be computed without fear of reference to the angle-sum formulae, and THOSE series can easily be shown to be convergent on $\Bbb C$.

Just sayin'
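The integral-based construction above can be sanity-checked numerically (a sketch, not from the thread; it assumes `A` may be computed by simple quadrature and the equation $(A \circ \cos)(x) = x/2$ solved by bisection):

```python
import math

# Sketch: implement the integral-based definition of cosine above and compare
# with math.cos. Here A(x) = x*sqrt(1-x^2)/2 + integral_x^1 sqrt(1-t^2) dt,
# and cos(x), for x in [0, pi], is the unique c in [-1, 1] with A(c) = x/2.
def A(x, n=4000):
    # midpoint rule for the integral from x to 1
    h = (1 - x) / n
    integral = h * sum(math.sqrt(1 - (x + (k + 0.5) * h) ** 2) for k in range(n))
    return x * math.sqrt(1 - x * x) / 2 + integral

def cos_via_A(x):
    # A is decreasing on [-1, 1], with A(1) = 0 and A(-1) = pi/2;
    # bisect to solve A(c) = x/2
    lo, hi = -1.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if A(mid) > x / 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for x in (0.3, 1.0, 2.5):
    assert abs(cos_via_A(x) - math.cos(x)) < 1e-4
```

Note that $A(\cos\theta)$ is the area of the circular sector of angle $\theta$, which is why inverting $A$ recovers the cosine.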


- Opalg (Moderator) · #25

Very ingenious! But I can't help thinking that power series are neater and more useful.

The only awkward thing about the power series approach is that (as in your method) the definition of $\pi$ is a bit messy. You have to show that $\cos x$ (as defined by its power series) is a decreasing function on the interval [0,2], with $\cos 0 = 1>0$ and $\cos2<0$. It follows from the intermediate value theorem that there is a unique number $x_0$ between 0 and 2 such that $\cos x_0 = 0$. The number $\pi$ is then defined to be $2x_0.$
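This definition of $\pi$ can be illustrated numerically (a sketch, not from the thread): compute cosine purely from its power series, bisect for the sign change $x_0$ on $[0,2]$, and check that $2x_0 \approx \pi$.

```python
import math

# cosine defined purely by its power series (no trig functions used)
def cos_series(x, terms=25):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))  # next series term
    return total

assert cos_series(0) > 0 and cos_series(2) < 0  # sign change on [0, 2]

# bisect for the unique root x0 of the series cosine in (0, 2)
lo, hi = 0.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if cos_series(mid) > 0:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2

assert abs(2 * x0 - math.pi) < 1e-12  # pi defined as 2 * x0
```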