**Concluding Note** Prior comments (by Jason Fordham especially; also Alex; also William E. Emba) in aggregate amount to the above algebraic dynamics framework. And more can be said. The matrix

J = [[0, -1], [1, 0]]

evidently satisfies J^2 = -I; thus an *almost-complex structure* is present. Writing the dynamical equation in Hamiltonian form x' = J grad H for a suitable tangent vector x', symplectic form w(u, v) = u^T J v, and Hamiltonian function H(x, y) = (x^2 + y^2)/2, and moreover defining ds^2 = dx^2 + dy^2 as a Riemannian curve-length, then exposes the complete Kählerian triple of metric, symplectic, and complex structures.
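The algebraic facts above can be spot-checked numerically. In the sketch below the matrix name J, the helper functions, and the choice H(x, y) = (x^2 + y^2)/2 are my own renderings of the framework, not notation from the thread:

```python
# Sketch: verify J^2 = -I for J = [[0, -1], [1, 0]], and that the circle
# flow (x, y)' = (-y, x) is J applied to grad H with H(x, y) = (x^2 + y^2)/2.

def matmul2(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

J = [[0, -1], [1, 0]]
J2 = matmul2(J, J)
print(J2)  # [[-1, 0], [0, -1]], i.e. -I: an almost-complex structure

def grad_H(x, y):
    # H(x, y) = (x^2 + y^2)/2, so grad H = (x, y)
    return (x, y)

def flow(x, y):
    # J . grad H = (-y, x), the uniform-rotation vector field
    gx, gy = grad_H(x, y)
    return (J[0][0] * gx + J[0][1] * gy, J[1][0] * gx + J[1][1] * gy)

print(flow(0.6, 0.8))  # (-0.8, 0.6)
```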

The starting problem “give a power series for trigonometric functions” then can be appreciated (by beginning students) as a natural meeting-ground for classical analysis, classical algebra, and their various modern syntheses that include algebraic geometry and algebraic dynamics. That’s why it’s such an illuminating topic for students to study!

]]>**Note** The numerical integration can be structured as a demonstration (both formal and numerical) of the delightful identity

exp(i t) = cos t + i sin t.

Show this by establishing that the (unit) radius and (unit) velocity of the integral curve both are unchanging; this can be done both formally and empirically (the latter by numerical integration) without reference to the power series.

Then as the second step, verify by explicit substitution that the power series solves this equation. QED.
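A minimal sketch of that two-step check (the RK4 integrator, step count, and tolerances are my own arbitrary choices): integrate (x, y)' = (-y, x) from (1, 0), confirm that radius and speed stay at 1, and compare the endpoint with partial sums of the power series.

```python
# Integrate (x, y)' = (-y, x) from (1, 0) with classical RK4, then compare
# the endpoint at t = 1 with partial sums of the candidate power series.
import math

def rk4_circle(t_final, n_steps=10_000):
    f = lambda x, y: (-y, x)             # the vector field
    h = t_final / n_steps
    x, y = 1.0, 0.0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = rk4_circle(1.0)
# unit radius is conserved; since the velocity is (-y, x), unit speed follows
assert abs(x * x + y * y - 1) < 1e-9

# partial sums of the candidate power series, evaluated at t = 1
cos1 = sum((-1) ** k / math.factorial(2 * k) for k in range(10))
sin1 = sum((-1) ** k / math.factorial(2 * k + 1) for k in range(10))
print(abs(x - cos1) < 1e-8, abs(y - sin1) < 1e-8)  # -> True True
```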

]]>One way, articulated in great detail by Norman Wildberger, would be to avoid the notion of angle altogether, replacing it with a potentially simpler concept – spread (cf: http://web.maths.unsw.edu.au/~norman/ and in particular http://web.maths.unsw.edu.au/~norman/papers/WrongTrig.pdf and his YouTube channel). Then elementary problems of triangle geometry (as opposed to uniform circular motion) can be treated without appeal to transcendental notions such as the sin & cos functions.

This approach is so unconventional that it likely invites the reader to immediate skepticism – that would be a mistake. Wildberger carefully and clearly elucidates his approach, and he extends it to hyperbolic and spherical geometry, obtaining some remarkable and truly novel results (cf: Universal Hyperbolic Geometry II, KoG 14 (2010)).

]]>Whoops!! That last bit was full of typos and thinkos and I apologise to anyone who tried to read it. If you are for some reason still reading, here is how I should have defined the notion of a sinoid function:

Call a function f: R -> R a sinoid function if and only if the following conditions are met:

1. For any circle *sector* S such that its area A(S) (which I’ll define below) is at most pi/4, f(2*A(S)) is equal to the length (which I’ll also define below) of a line segment perpendicular to one radius delimiting S going through the end of the other radius delimiting S.

2a. f(pi-x) = f(x) for all x

2b. f(-x) = -f(x) for all x

2c. f(x + 2pi) = f(x - 2pi) = f(x) for all x

Then sin: R -> R is the unique continuous sinoid function. (Again, uniqueness is guaranteed by Gowers’ angle bisection argument).

Here is a definition of A:

Let X be the set of finite regions (or shapes) definable in our Euclidean space (whatever it is). Then A: X -> R is the function such that

1. A(x) = 1 if x is a square with one of the radii of our chosen unit circle as a side

2. A(x) = A(y) if x and y are congruent

3. A(x) = A(y) + A(z) if no point is both inside (i.e. within the boundary of) y and inside z, and the points inside x are all and only those which are either inside y or inside z or on their shared boundary.

That should give you the right function, (by Gowers’ squares argument). Now for length:

Let Y be the set of line segments and circular arcs definable in our space (strictly speaking we only need line segments for our definition of sin). Then l: Y -> R is the function such that

1. l(x) = 1 if x is one of the radii of our chosen unit circle

2. l(x) = l(y) if x and y are congruent

3. l(x) = l(y) + l(z) if y and z share at most an endpoint and the points on x are all and only those which are either on y or on z

4. l(x) < l(y) < l(w) + l(z) if y is an arc, x, w, z are three line segments forming a triangle such that x connects y's endpoints, and w and z are tangents to y touching y at its endpoints.
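As a rough numerical sanity check of condition 1 (the grid-counting stand-in for A below is my own approximation, not part of the definition above): for a unit-circle sector of angle t in the first quadrant, counting little squares gives A(S) ≈ t/2, so f(2*A(S)) should indeed come out as sin t.

```python
# Grid-counting stand-in for the area function A: count h-by-h squares
# whose centers lie in the sector of angle t, then check 2*A(S) ~ t.
import math

def sector_area(t, n=800):
    h = 1.0 / n
    count = 0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            if x * x + y * y <= 1 and math.atan2(y, x) <= t:
                count += 1
    return count * h * h

t = 0.7                 # sector area t/2 < pi/4, as condition 1 requires
A = sector_area(t)
print(2 * A, t)         # close to each other
print(abs(math.sin(2 * A) - math.sin(t)) < 1e-2)
```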

I was actually thinking that the argument did not require a definition of area as a function from shapes in the plane to real numbers at all. I thought you could simply make do with the primitive mereological axioms you find in Euclid (“Area of the part ≤ area of the whole”, “Congruent shapes have equal areas”, and stuff like that). Because, like you say, you can approximate the circle to arbitrary accuracy using little rational squares, these axioms in effect already give you the Dedekind cut for pi as the area of the unit circle.

That way you wouldn’t have to assume the plane has the complete structure of R^2 at all. We know from Galois theory that you can have perfectly good Ruler and Compass spaces that are much smaller than that.

But on further reflection I guess I’d retract that: sin is a function from real numbers to real numbers, so in order for our geometrical definition of sin to even work, we do need it to be the case that the plane has something like the structure of R^2.

But if you wanted to keep the geometric bits of the argument pristinely Euclidean, an alternative would be to take the geometric definition of sin to be a partial definition of sin, as follows:

* Call a function f: R -> R a sinoid function if and only if, for any segment S of the unit circle definable in Euclidean geometry, the following conditions are met (here A(S) is S’s area, defined by the sort of Dedekind construction hinted at above):

1. f(A(S)) = the length of a line segment perpendicular to one radius delimiting S going through the end of the other radius delimiting S

2. f(x+pi) = -f(x) for all x, and f(x+2pi) = f(x) for all x

(That’s not very elegant, but it does the job)

* Let sin: R -> R be the continuous sinoid function

Your argument about angle bisection guarantees that this last step returns a well defined function.

]]>I was about to object that your definition of angle requires you to define the area of curvilinear shapes, and therefore requires integration. In general, I think that once angle has been defined, everything else is reasonably easy. But perhaps my instant reaction was wrong: we can define area by chopping up into small squares and taking limits, and we don’t really need to calculate areas, so integration may not be necessary.

It also occurs to me that there is a way of defining angle that requires a limiting argument, but not integration. We can bisect angles, and that makes it easy to define all multiples of 2pi by a dyadic rational, and then taking limits gives us the rest. I think it should be reasonably straightforward to use that definition and plug it in.

]]>Sorry I meant to link to this diagram for the angle sum formulae:

http://en.wikipedia.org/wiki/File:AngleAdditionDiagram.svg

Now you can show that in the first quadrant sin x ≤ x ≤ tan x by comparing the areas in this diagram (using the fact that the radius of the larger circle is 1/cos x): http://www.proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Geometric_Proof.

Thus we get 1 ≥ sin x / x ≥ cos x. By squeezing, letting x go to 0, we have sin’ 0 = 1.
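That squeeze is easy to eyeball numerically (a quick spot check of my own, not part of the argument):

```python
# The squeeze 1 >= sin(x)/x >= cos(x) on (0, pi/2) pins down sin'(0) = 1.
import math

for x in (0.5, 0.1, 0.01, 0.001):
    ratio = math.sin(x) / x
    assert math.cos(x) <= ratio <= 1.0   # both bounds hold
    print(x, ratio)                      # ratio approaches 1 as x -> 0
```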

Now by Pythagoras, sin^2 x + cos^2 x = 1. Take the derivative of both sides to get sin x sin’ x = -cos’ x cos x. Put x = 0 to get cos’ 0 = 0.

Now show that sin(a+b) = sin a cos b + sin b cos a using this diagram:

http://www.proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Geometric_Proof

Then we get (sin(x + h) - sin x)/h = sin x (cos h - 1)/h + cos x (sin h)/h

Thus sin’ x = sin x * cos’ 0 + cos x * sin’ 0 = cos x. Similarly cos’ x = -sin x.

Thus we get all the values for d^n/dx^n sin 0, and by the argument from your post on Taylor’s theorem, we get the power series…

]]>*Thanks very much — I’ve removed it.*

Dieudonné, in his Linear Algebra and Geometry, goes to an extreme degree to show that “angle” is taken too much for granted, and that it is surprisingly difficult to do rigorously.

]]>My main point is that somewhere one must define angles and that work is needed to do that. I used arc length, which required me to say what lengths of curves are. But the actual formula I wrote down does look very like saying “Let’s see how long it takes to get here if we go round the unit circle at unit speed.”

I suppose that the work in your case comes in making sense of the notion of “Where you get to after time t if you go round the circle at constant speed 1.” That seems to boil down to a need to prove that the differential equations x'(t) = -y(t), y'(t) = x(t), with the initial conditions x(0) = 1, y(0) = 0, have a unique solution. I agree that it’s obvious geometrically, but I think that work has to be done to convert that geometric intuition into a proof.

]]>Haha. Hahaha. “it appears to be circular.” What a perfect–dare I say it–straight line.

I believe Alex is using the convention that since we all know what’s what, (cos(t),sin(t)) is simply short-hand for “you know, that point on the unit circle that does the trick” combined with the convention that (f(t),g(t)) is implicitly defining f and g, combined with the convention of leaving everything to the reader, in this case, defining “that point” in a non-(hahaha)-circular way, and giving the traditional names to its x-and-y projections.

My favorite approach is to define sine of an angle for an arbitrary triangle by circumscribing the triangle, and then sine(angle) is “opposite/circumdiameter”. It’s well-defined (by elementary geometry) and there are very nifty proofs of addition formulas in this context (because A+B=180-C).

From addition formulas you derive sin(nx) in terms of sin(x) and cos(x) (just the imaginary part of de Moivre’s theorem) and from this you derive the Taylor series by expressing sin(x) in terms of sin(x/n) and cos(x/n) for very large n. The former is approximately x/n, the latter is approximately 1.

This is almost formally identical to deriving the Taylor series of e^x from the binomial expansion of (1 + x/n)^n for arbitrarily large values of n.
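Both limits are easy to watch numerically (my own illustration; the choice n = 10^6 is arbitrary): the exact de Moivre power recovers sin x, and the crude replacements cos(x/n) ≈ 1, sin(x/n) ≈ x/n give (1 + ix/n)^n, whose imaginary part also tends to sin x.

```python
# Two routes to sin x via a large n.
import math

x, n = 1.0, 10 ** 6
exact = complex(math.cos(x / n), math.sin(x / n)) ** n   # de Moivre: cos x + i sin x
approx = (1 + 1j * x / n) ** n                           # cos(x/n) ~ 1, sin(x/n) ~ x/n

print(abs(exact.imag - math.sin(x)))    # tiny (roundoff only)
print(abs(approx.imag - math.sin(x)))   # small, shrinking as n grows
```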

]]>What I was thinking was this:

Say you have a particle traveling CCW around the unit circle at a speed of 1, and that it begins at the point (1,0).

We can define the position vector as ( C(t), S(t) ).

So by definition C(0) = 1, S(0) = 0, and C(t)^2 + S(t)^2 = 1 (and, since the speed is 1, C’(t)^2 + S’(t)^2 = 1).

I think (although you’re probably right that there are some gaps in my logic) that from this definition, and some relevant facts about circles (that a tangent is orthogonal to a radius, mainly), you can determine that the derivative must be ( -S(t), C(t) ). Since all the derivatives will repeat after 4 iterations, you can compute the power series from just these assumptions.

I’m sure this is an approach that isn’t appropriate for an analysis course, but I would love to see how I can make it a bit more precise.

]]>Can you be more precise about how you are defining sine and cosine? I don’t understand what you’ve written, especially as it appears to be circular.

]]>Define sine and cosine to be the point on the unit circle

The derivative of this will be:

A. orthogonal to the vector from the origin to the point on the circle.

B. Unit length

C. Oriented CCW around the circle.

The derivatives are easily derived from this, and the power series follow soon after.
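The “derivatives repeat after 4 iterations” step can be sketched as follows (my own code; differentiation acts as the rotation (a, b) -> (-b, a), iterated at t = 0 where (C, S) = (1, 0)):

```python
# Read Taylor coefficients of C and S off the 4-cycle of derivatives at 0.
import math

v = (1.0, 0.0)                       # (C(0), S(0))
coeffs_C, coeffs_S = [], []
for n in range(12):
    coeffs_C.append(v[0] / math.factorial(n))
    coeffs_S.append(v[1] / math.factorial(n))
    v = (-v[1], v[0])                # one more derivative; repeats every 4 steps

t = 0.9
C_approx = sum(c * t ** k for k, c in enumerate(coeffs_C))
S_approx = sum(c * t ** k for k, c in enumerate(coeffs_S))
print(abs(C_approx - math.cos(t)), abs(S_approx - math.sin(t)))  # both tiny
```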

]]>In particular, the inverse sine and cosine functions care very much about radians. The idea of radians as a ratio helps clarify why this should be so, but you’ll need to draw circles to get this across. And so we are back to the trouble of relating the series to the geometric constructions.

]]>There has to be a step like this because the simple triangle formulas don’t care about radians vs degrees, but the power-series definition does.

]]>Using the power-series definitions, we proved several facts about trigonometric functions, such as the addition formulae, their derivatives, and the fact that they are periodic.

I could go on a tirade here about how this is completely backwards, and how all of these properties need to be proven before then deriving the derivatives of sin/cos, and then and only then should these derivatives be used to find the Taylor series for sin and cosine, but then I noticed that you are teaching an analysis course; so carry on I suppose.

(However, I note that by giving the series by default, you effectively avoid one of the thorny issues in early analysis — namely that lim_{x -> 0} sin(x)/x = 1. *

In effect, this difficulty cannot be avoided and the principle of conservation of mental effort will cause a rebound eventually; in this case, the difficulty of relating the series definition to a geometrical function.

* Analytically, this is done by starting from the bound sin(theta) < theta < tan(theta) and using the squeezing theorem. I prefer a geometric argument myself, but this may be the way around the difficulty for your course.

)

I would go back to the idea of cos and sin in a circle. The suggestion above about using the complex circle is also a good one.

Also, if all you want is the first few terms of the Taylor series for tan(x), the best thing to do is use polynomial long division on the series for sin(x) and cos(x).
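That long division can be sketched as follows (my own code; the truncation order N = 8 and the variable names are arbitrary choices, working with exact rationals modulo x^8):

```python
# Divide the sin series by the cos series, coefficients indexed by power of x.
from fractions import Fraction
from math import factorial

N = 8
sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 else Fraction(0)
         for k in range(N)]
cos_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 0 else Fraction(0)
         for k in range(N)]

# long division: find q with q * cos = sin (mod x^N)
tan_c = [Fraction(0)] * N
rem = sin_c[:]
for n in range(N):
    q = rem[n] / cos_c[0]
    tan_c[n] = q
    for j in range(N - n):
        rem[n + j] -= q * cos_c[j]

# tan x = x + x^3/3 + 2x^5/15 + 17x^7/315 + ...
print([tan_c[k] for k in (1, 3, 5, 7)])
```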

]]>I defined it as tan x = sin x / cos x, which is *using* power series even if it isn’t expressing tan x as a power series in x.

Here is a blog post by Wen Jia Liu that explains how to work out the power series of tan x.

]]>