From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Sin(a+b) = sinAcosB + cosAsinB
Date: 17 Mar 2000 17:39:23 -0500
Newsgroups: sci.math
In article <38D2A21C.D845436C@cwcom.net>,
Peter Percival wrote:
:"Zdislav V. Kovarik" wrote:
[...]
:> In my Calculus book, this was part of the definition of trig
:> functions; then, with a limit statement about sin(x)/x going to 1 as x
:> goes to 0, you could develop the rest of calculus and trigonometry.
:
:Dear Professor Kovarik,
:I'm being dim. Are you saying that "sin(A + B) = sinA.cosB + cosA.sinB"
:is part of the definition of sin and cos (one, both, more...) in a
:book? Please elucidate before I lose my faith in you as one of the
:better expositors of sci.math!
:Sincerely
:Peter Percival
:
[...]
Hard to believe, I agree. But the book is
"Introduction to Differential Calculus" (in Czech: Uvod do poctu
diferencialniho) by Vojtech Jarnik (Prague 1955). It is the first of four
books, two on differential calculus, two on integral calculus, to keep
students busy for four semesters.
His philosophy concerning trigonometric functions was: take the functional
equations, as they were partially, sort of, proved in high school
trigonometry (that is, only for acute angles whose sums were also acute
angles, and with the help of pictures), together with a simple limit, and
use them as the foundation for a rigorous treatment of these functions,
defined for all real values of the argument, with no further reference to
pictures, angles, etc.
The first stage was (I wish I had the first book, but it ended up in the
hands of someone who "forgot" to return it), so from memory:
Theorem. There is at most one pair of functions C and S, defined on the
whole real line, with the following properties:
C(a+b) = C(a)*C(b) - S(a)*S(b) for all real a, b;
S(a+b) = S(a)*C(b) + C(a)*S(b) for all real a, b;
lim[as x -> 0] S(x)/x = 1 .
Then he proves everything we cherish about these functions, such as
continuity everywhere, derivatives, existence of minimal period (whose
half he calls pi), further identities, etc. - and calls C(x)=cos(x) and
S(x)=sin(x), to conform with tradition.
Finally, he proves the existence of S and C by deriving the familiar power
series and checking them out.
In the third book, he goes further: first, he uses the power series for
exp(i*x) to extend sin and cos consistently to all complex arguments, and
then, as a delicacy, he solves a "stand-alone" functional equation for
cosine and related functions (following good old Cauchy):
Theorem (249): Suppose f(x) is defined for all real x, with the following
properties:
(I) For all real x, y,
2 * f(x) * f(y) = f(x+y) + f(x-y)
(II) f is continuous at least at one point.
Then f is one of the following functions (c is a positive number):
0, 1, cos(c*x), (exp(c*x) + exp(-c*x))/2
Remark: To narrow it down to cosine, one just needs to require that f is
not identically 0 and that
(III) There exists a number p such that |f(p)| < 1.
He also shows that if we drop (II) (continuity), and if we accept the
Axiom of Choice, we can get an everywhere discontinuous function
satisfying (I) alone.
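As a quick numerical check (a rough Python sketch, added for illustration,
not from the book), each function in the list really does solve (I), here
for a sample c:

  from math import cos, cosh

  c = 1.7
  candidates = [lambda x: 0.0,
                lambda x: 1.0,
                lambda x: cos(c*x),
                lambda x: cosh(c*x)]   # cosh(c*x) = (exp(c*x)+exp(-c*x))/2

  x, y = 0.4, -1.1
  for f in candidates:
      print(abs(2*f(x)*f(y) - (f(x+y) + f(x-y))))   # all ~0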
More about functional equations can be found in excellent books by Marek
Kuczma (Functional Equations) and others.
As an exercise, try to solve the functional equation (g differentiable)
g(x) + g(y) = g((x+y)/(1+x*y)), for all x, y from (-1, 1)
and find out how it can be used in the Special Theory of Relativity.
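(A numerical hint, added as a rough Python sketch, which gives away part of
the answer: try g(x) = c*atanh(x). In special relativity, atanh(v) is the
"rapidity" of a velocity v, in units where the speed of light is 1;
velocities combine by the rule (x+y)/(1+x*y), while rapidities simply add.)

  from math import atanh

  def g(x, c=2.5):
      return c * atanh(x)

  x, y = 0.3, -0.6
  print(abs(g(x) + g(y) - g((x + y)/(1 + x*y))))   # ~0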
Hope it helps, ZVK(Slavek).
==============================================================================
From: Ronald Bruck
Subject: Re: Proving sin^2 + cos^2=1 for complex numbers ... was: Easy Trig Identity
Date: Sun, 30 Jul 2000 06:28:34 -0700
Newsgroups: sci.math
In article <39834EB1.855449CA@bibimus.edu>, Aristotle Petroleum
wrote:
:dcaywood@my-deja.com wrote:
:
:> > > > sin^2 x + cos^2 x = 1, but how do you show that:
:> > > >
:> > > > sin^2 z + cos^2 z = 1, where z is a complex number?
:> > >
:> > > Each side is an analytic function, and they agree on a set
:> > > with a limit point (namely the real line) so
:> > > they agree everywhere.
:> >
:> > How does one check the agreement on a set with a limit
:> > point, without already having proved the identity using the
:> > power series coefficients? (The argument from the previous
:> > posting that you presumably wanted to circumvent.)
:> >
:> The original poster was in possession of the power series expansions
:> for sin and cos from which it can be seen that sin and cos are entire
:> functions. From well known theorems it follows that sin^2 and cos^2
:> are entire as well and this we can know without having proved the
:> identity.
:
:That doesn't answer the question.
:
:Knowing the functions are entire doesn't help unless you know
:the identity for real numbers. How do you show sin^2 + cos^2 = 1
:for real arguments without performing a calculation with the
:series coefficients?
:
:
:> So your argument is disarmed immediately.
:
:You haven't understood the argument. Here it is explicitly:
:
:1. You can have for free (as an easy consequence of definitions) either
:the series expansions of sine and cosine, or the Pythagorean identity
:that they satisfy. Not both.
:
:2. Getting from the series to the real identity apparently requires a
:combinatorial calculation with coefficients. The same calculation
:also immediately proves the complex- or operator-valued version
:of the formula and so obviates the discussion of analytic continuation.
:
:3. Getting from the Pythagorean identity to the series is a difficult
:project, much harder than the stated problem. It's not even obvious that
:sine and cosine are defined on the whole real line in this approach.
:
:If there is some other method not subject to problems #2 or #3 that
:would be *very* interesting.
I'll repeat the argument, which I've posted a couple of times now, for
defining sin and cos without any power series and without any appeal to
geometry; just analysis.
The key is to define a function
A(x) = \int_0^x 1/sqrt(1-t^2) dt (-1 < x < 1).
This is, "of course", the arcsin function; we'll DEFINE the sine to be
the inverse of A. The Fundamental Theorem of Calculus guarantees that
A'(x) = 1/sqrt(1-x^2) for -1 < x < 1,
thus A is a strictly increasing function on (-1,1). A is clearly an odd
function of x, thus A maps (-1,1) onto an open interval (-a,a) (where we
"know" that a = pi/2, except of course we have no idea what pi is yet).
In principle, a could be infinity.
Define S(x) = A^{-1}(x). (This exists, since A is strictly increasing,
and is differentiable by the Inverse Function Theorem--which is easy to
prove on the real line.) We have S : (-a,a) --> (-1,1).
Next show that S(x)^2 + S'(x)^2 = 1: from the identity A(S(x)) = x, on
differentiating we get
A'(S(x))S'(x) = 1,
i.e.
S'(x)
-------------- = 1,
sqrt(1-S(x)^2)
i.e. we get the more precise identity S'(x) = sqrt(1-S(x)^2).
Differentiating both sides of S(x)^2 + S'(x)^2 = 1, we get
2 S(x) S'(x) + 2 S'(x) S''(x) = 0.
Now S'(x) is always nonzero (because -1 < S(x) < 1), hence we can divide
by S'(x) and deduce that
(*) S''(x) = -S(x).
We also have
(**) S(0) = 0, S'(0) = 1,
(which follow from A(0) = 0 and S'(0) = sqrt(1-S(0)^2)).
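Here is a rough numerical illustration of the construction so far (a Python
sketch, added for illustration, not part of the argument): approximate A by
Simpson's rule, invert it by bisection to get S, and check both that the
result agrees with the usual arcsine/sine and that S^2 + S'^2 = 1, with S'
taken as a finite difference.

  from math import sqrt, sin, asin

  def A(x, n=2000):
      # composite Simpson's rule for the integral of 1/sqrt(1 - t^2) over [0, x]
      f = lambda t: 1.0 / sqrt(1.0 - t*t)
      h = x / n
      s = (f(0.0) + f(x)
           + 4*sum(f((2*k - 1)*h) for k in range(1, n//2 + 1))
           + 2*sum(f(2*k*h) for k in range(1, n//2)))
      return s * h / 3.0

  def S(y):
      # invert A by bisection; searching (-0.99, 0.99) covers |y| up to ~1.43
      lo, hi = -0.99, 0.99
      for _ in range(60):
          mid = (lo + hi) / 2.0
          if A(mid) < y:
              lo = mid
          else:
              hi = mid
      return (lo + hi) / 2.0

  print(A(0.5), asin(0.5))                  # agree closely
  print(S(0.5), sin(0.5))                   # agree closely
  h = 1e-5
  Sp = (S(0.5 + h) - S(0.5 - h)) / (2*h)    # finite-difference S'(0.5)
  print(S(0.5)**2 + Sp**2)                  # ~1.0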
Now (*) and (**) present a candidate for the sine function, except
defined only on an interval (-a,a), not on R. To get the function
defined on the whole real line, we need a
LEMMA. On a given interval I containing 0, and for given alpha and
beta, there is AT MOST one function u : I --> R satisfying
u'' = -u,
u(0) = alpha, u'(0) = beta.
Of course the lemma is a standard result in differential equations, but
it has an elementary proof using elementary calculus: if we had two
such solutions, their difference (call it v) would satisfy
v'' = -v, v(0) = 0, v'(0) = 0.
Differentiating the expression v^2 + v'^2 we get 0 (because v'' = -v),
hence v^2 + v'^2 is a constant; using the initial conditions, the
constant is 0; since v is real, therefore v \equiv 0.
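To see that conserved quantity in action, here is a rough Python sketch
(added for illustration): along a numerical solution of v'' = -v, the
quantity v^2 + v'^2 stays constant, which is why a solution starting from
v(0) = v'(0) = 0 can never leave 0.

  def rk4_step(v, p, h):
      # one classical Runge-Kutta step for the system v' = p, p' = -v
      def f(v, p): return (p, -v)
      k1 = f(v, p)
      k2 = f(v + h/2*k1[0], p + h/2*k1[1])
      k3 = f(v + h/2*k2[0], p + h/2*k2[1])
      k4 = f(v + h*k3[0], p + h*k3[1])
      return (v + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
              p + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

  v, p, h = 0.3, 0.4, 0.001          # v(0) = 0.3, v'(0) = 0.4
  for _ in range(10000):             # integrate out to t = 10
      v, p = rk4_step(v, p, h)
  print(v*v + p*p)                   # stays ~0.25 = 0.3^2 + 0.4^2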
OK, we'll use this to prove the double-angle formula: if v is a
function which satisfies
v'' = -v, v(0) = 0, v'(0) = 1 on an interval (-b,b),
then
w(x) := 2 v(x/2) v'(x/2)
also satisfies
w'' = -w, w(0) = 0, w'(0) = 1 on the interval (-2b,2b)
(hence, by the Lemma, must agree with v on (-b,b); i.e. is an EXTENSION
of v).
This is easy:
w'(x) = 2 v'(x/2) * 1/2 * v'(x/2) + 2 v(x/2) v''(x/2) * 1/2
= v'(x/2)^2 - v(x/2)^2,
hence
w''(x) = 2 v'(x/2) v''(x/2) * 1/2 - 2 v(x/2) v'(x/2) * 1/2
= -v'(x/2) v(x/2) - v(x/2) v'(x/2)
= -2 v(x/2) v'(x/2)
= -w(x),
and the boundary conditions are easy: w(0) = 2 v(0) v'(0) = 0 and
w'(0) = v'(0)^2 - v(0)^2 = 1.
So: given a solution of (*), (**) on a given interval (-b,b), we can
extend it to the interval (-2b,2b). Repeating this, we get a function,
which I shall NOW call "sin", which satisfies
sin'' = -sin, sin(0) = 0, sin'(0) = 1, ON THE WHOLE REAL LINE.
We define cos = sin'.
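Here is a rough numerical illustration of one doubling step (a Python
sketch, added for illustration, not needed for the proof): compute the
solution of (*), (**) only out to x = 1.25, well inside an interval (-b,b)
with b = 1.5, and then use w(x) = 2 v(x/2) v'(x/2) to reach x = 2.5, which
lies outside (-b,b).

  import math

  def f(v, p):
      # the first-order system v' = p, p' = -v
      return (p, -v)

  def solve_to(t, h=0.001):
      # RK4 from v(0) = 0, v'(0) = 1 out to x = t (t a multiple of h)
      v, p = 0.0, 1.0
      for _ in range(int(round(t / h))):
          k1 = f(v, p)
          k2 = f(v + h/2*k1[0], p + h/2*k1[1])
          k3 = f(v + h/2*k2[0], p + h/2*k2[1])
          k4 = f(v + h*k3[0], p + h*k3[1])
          v += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
          p += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
      return v, p

  v_half, vprime_half = solve_to(1.25)    # v and v' at x/2 = 1.25 < b
  w = 2 * v_half * vprime_half            # the extension, evaluated at x = 2.5
  print(w, math.sin(2.5))                 # both ~0.5985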
It's now easy to prove that sin^2 + cos^2 = 1 (when you differentiate
the LHS, you get 0, hence it's constant; then use the boundary values).
It's easy to prove that
sin(x+y) = sin(x) cos(y) + cos(x) sin(y)
(fix y, show that both sides satisfy u'' = -u with the same boundary
conditions, then use the Lemma), and that
cos(x+y) = cos(x) cos(y) - sin(x) sin(y)
(fix y and differentiate the previous expression wrt x). The hard part
is to define \pi.
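But first, a small symbolic check of that Lemma-based argument for the sine
addition formula (a sketch, assuming the sympy library is available): with y
held fixed, both sides satisfy u'' = -u in x and have the same value and
derivative at x = 0.

  import sympy as sp

  x, y = sp.symbols('x y', real=True)
  lhs = sp.sin(x + y)
  rhs = sp.sin(x)*sp.cos(y) + sp.cos(x)*sp.sin(y)

  for u in (lhs, rhs):
      print(sp.simplify(sp.diff(u, x, 2) + u))        # 0, i.e. u'' = -u
      print(u.subs(x, 0), sp.diff(u, x).subs(x, 0))   # sin(y), cos(y) both times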
The best way to define \pi is to take \pi/2 to be the SMALLEST POSITIVE
ROOT OF cos. You have a problem here; namely, why does cos HAVE a positive
root at all? One argues by contradiction: suppose it DOESN'T have a root.
Then cos(x) > 0 for x > 0 (since cos(0) = 1, this follows from the
Intermediate Value Theorem). And thus sin is increasing on [0,\infty).
Thus sin(x) > 0 on (0,+\infty). But for x > 1,
cos(1) - cos(x) = (x-1)*sin(z) for some 1 < z < x;
and since sin is increasing on (0,+\infty), it follows that
cos(1) - cos(x) > (x-1) * sin(1),
i.e.
cos(x) < cos(1) - (x-1) * sin(1).
But since sin(1) > 0, this forces cos(x) < -1 for sufficiently large x,
and this is impossible, since sin^2 + cos^2 = 1 ==> |cos x| <= 1.
Since the zeros of cos form a closed set, and since cos(0) = 1 (with cos
continuous) keeps them a positive distance away from 0, it follows that
there is a smallest positive root of cos. This number we call \pi/2, so
that \pi is twice the smallest positive root of cos.
I leave it to you to verify that cos(pi/2) = 0, sin(pi/2) = 1, that sin
has period 2pi, etc. etc.
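As one last numerical sketch (added for illustration, with math.cos and
math.sin standing in for the functions constructed above): walk out from 0
to the first sign change of cos, bisect to locate its smallest positive
root, and check that twice that root behaves like the familiar pi.

  from math import cos, sin

  x, step = 0.0, 0.01
  while cos(x + step) > 0:        # find the first sign change of cos
      x += step
  lo, hi = x, x + step
  for _ in range(60):             # bisect down to the root
      mid = (lo + hi) / 2.0
      if cos(mid) > 0:
          lo = mid
      else:
          hi = mid
  half_pi = (lo + hi) / 2.0
  print(half_pi, 2*half_pi)                 # ~1.570796, ~3.141593
  print(cos(half_pi), sin(half_pi))         # ~0, ~1
  print(sin(0.7 + 4*half_pi) - sin(0.7))    # ~0, so the period is 2*pi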
It **can** all be done without appeal to geometry or power series. Is
it worth it? As a prelude to elliptic functions, perhaps.
--Ron Bruck
--
Due to University fiscal constraints, .sigs may not exceed one
line.