I hope that most of you have either asked yourselves this question explicitly, or at least felt a vague sense of unease about how the definitions I gave in lectures, namely

$$\cos x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!}$$

and

$$\sin x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!},$$

relate to things like the opposite, adjacent and hypotenuse. Using the power-series definitions, we proved several facts about trigonometric functions, such as the addition formulae, their derivatives, and the fact that they are periodic. But we didn't quite get to the stage of proving that if $x^2+y^2=1$ and $\theta$ is the angle that the line from $(0,0)$ to $(x,y)$ makes with the line from $(0,0)$ to $(1,0)$, then $\cos\theta = x$ and $\sin\theta = y$. So how does one establish that? How does one even *define* the angle? In this post, I will give one possible answer to these questions.
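If you want to see what the power-series definitions give in practice, here is a minimal numerical sketch (the helper names `cos_series` and `sin_series` are mine, and the library functions serve only as an independent check):

```python
import math

def cos_series(x, terms=25):
    # partial sum of cos x = sum_{n>=0} (-1)^n x^(2n) / (2n)!
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(terms))

def sin_series(x, terms=25):
    # partial sum of sin x = sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

# the truncated series agree with the library functions to high accuracy
for x in (0.1, 1.0, 2.5):
    assert abs(cos_series(x) - math.cos(x)) < 1e-12
    assert abs(sin_series(x) - math.sin(x)) < 1e-12
```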

### A couple of possible approaches that I won’t attempt to use

A cheating and not wholly satisfactory method would be to define the angle $\theta$ to be $\cos^{-1}x$. Then it would be trivial that $\cos\theta = x$ and we could use facts we know to prove that $\sin\theta = y$. (Or could we? Wouldn't we just get that it was $\pm y$? The fact that many angles have the same $\cos$ and $\sin$ creates annoying difficulties for this approach, though ones that could in principle be circumvented.) But if we did this, how could we be confident that the notion of angle we had just defined coincided with what we think angle should be? The problem would not have been fully solved.

Another approach might be to define trigonometric functions geometrically, prove that they have the basic properties that we established using the power series definitions, and prove that these properties characterize the trigonometric functions (meaning that any two functions $s$ and $c$ that have the properties must be $\sin$ and $\cos$). However, this still requires us to make sense of the notion of angle somehow, and we might also feel slightly worried about whether the geometric arguments we used to justify the addition formulae and the like were truly rigorous. (I'm not saying it can't be done satisfactorily — just that I don't immediately see a good way of doing it, and I have a different approach to present.)

### Defining angle

How are radians defined? You take a line L starting at the origin, and it hits the unit circle at some point P. Then the angle that L makes with the horizontal (or rather, the horizontal heading out to the right) is defined to be the length of the circular arc that goes anticlockwise round the unit circle from $(1,0)$ to P. (This defines a number between 0 and $2\pi$, but we can worry about numbers outside this range later.)

### Calculating the length of a circular arc

There is nothing wrong with this definition, except that it requires us to make rigorous sense of the length of a circular arc. How are we to do this?

For simplicity, let's assume that our point P is $(x,y)$ and that both $x$ and $y$ are positive. So P is in the top right quadrant of the unit circle. How can we define and then calculate the length of the arc from $(1,0)$ to $(x,y)$, or equivalently from $(1,0)$ to P?

One non-rigorous but informative way of thinking about this is that for each $t$ between $x$ and 1, we should take an interval $[t, t+\mathrm{d}t]$, work out the length of the bit of the circle vertically above this interval, and sum up all those lengths. The bit of the circle in question is a straight line (since $\mathrm{d}t$ is infinitesimally small) and by similar triangles its length is $\mathrm{d}t/\sqrt{1-t^2}$.

How did I write that down? Well, the big triangle I was thinking of was one with vertices $(0,0)$, $(t,0)$ and the point on the circle directly above $(t,0)$, which is $(t,\sqrt{1-t^2})$, by Pythagoras's theorem. The little triangle has one side of length $\mathrm{d}t$, which corresponds to the side in the big triangle of length $\sqrt{1-t^2}$. So the hypotenuse of the little triangle is $\mathrm{d}t/\sqrt{1-t^2}$, as I claimed.
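The similar-triangles estimate is easy to sanity-check numerically: the chord of the circle above a short interval $[t, t+\mathrm{d}t]$ should be close to $\mathrm{d}t/\sqrt{1-t^2}$. A quick sketch (the helper name `chord` is mine):

```python
import math

def chord(t, dt):
    # distance between the points on the unit circle above t and t + dt
    p = (t, math.sqrt(1 - t * t))
    q = (t + dt, math.sqrt(1 - (t + dt) ** 2))
    return math.hypot(q[0] - p[0], q[1] - p[1])

t, dt = 0.5, 1e-6
predicted = dt / math.sqrt(1 - t * t)   # the similar-triangles estimate
assert abs(chord(t, dt) / predicted - 1) < 1e-4
```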

Adding all these little lengths up, we get $\int_x^1 \frac{\mathrm{d}t}{\sqrt{1-t^2}}$, so it remains to evaluate this integral.

This is of course a very standard integral, usually solved by substituting $\cos\theta$ or $\sin\theta$ for $t$. If you do that, you find that the length works out as $\cos^{-1}x$, which is just what we hoped. However, we haven't discussed integration by substitution in this course, so let us see it in a more elementary way (not that proving an appropriate form of the integration-by-substitution rule is especially hard).

Using the rules for differentiating inverses, we find that

$$(\cos^{-1})'(t) = \frac{1}{\cos'(\cos^{-1}t)} = \frac{-1}{\sin(\cos^{-1}t)},$$

and since $\sin(\cos^{-1}t) = \sqrt{1-t^2}$, this gives us $(\cos^{-1})'(t) = -1/\sqrt{1-t^2}$. So the integrand has $-\cos^{-1}$ as an antiderivative, and therefore, by the fundamental theorem of calculus,

$$\int_x^1 \frac{\mathrm{d}t}{\sqrt{1-t^2}} = -\cos^{-1}(1) - \bigl(-\cos^{-1}(x)\bigr) = \cos^{-1}x.$$

So the angle $\theta$ between the horizontal and the line joining the origin to $(x,y)$ is (by definition) the length of the arc from $(1,0)$ to $(x,y)$, which we have calculated to be $\cos^{-1}x$. Therefore, $\cos\theta = x$, and since $\sin\theta$ and $y$ are both the non-negative square root of $1-x^2$, also $\sin\theta = y$.
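As a numerical check of this evaluation, one can approximate the integral directly. Since the integrand blows up at $t=1$, the sketch below stops at $1-\delta$ and compares against the value the antiderivative predicts for that truncated range (the helper name `arc_integral` is mine):

```python
import math

def arc_integral(x, delta=1e-3, n=200_000):
    # midpoint rule for the integral of 1/sqrt(1 - t^2) from x to 1 - delta
    a, b = x, 1 - delta
    h = (b - a) / n
    return h * sum(1 / math.sqrt(1 - (a + (i + 0.5) * h) ** 2) for i in range(n))

x, delta = 0.3, 1e-3
expected = math.acos(x) - math.acos(1 - delta)  # value given by the antiderivative
assert abs(arc_integral(x, delta) - expected) < 1e-6
```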

### How close was that to being rigorous?

The process I just went through, of saying “Let’s add up a whole lot of infinitesimal lengths; that says we should write down the following integral; calculating the integral gives us L, so the length is L,” is a process that one often goes through when calculating similar quantities. Why are we so confident that it is OK?

I sometimes realize with mathematical questions like this that I have been a mathematician for many years and never bothered to worry about them. It's just sort of obvious that if a function is reasonably nice, then writing something down that's approximately true with $\delta x$, turning the $\delta x$ into $\mathrm{d}x$, and writing an integral sign in front gives you a correct expression for the quantity in question. But let's try to think a bit about how we might define length rigorously.

#### Curves

First, we should say what a curve is. There are various definitions, according to how much niceness one wants to assume, but let me take a basic definition: a curve is a continuous function $f$ from an interval $[a,b]$ to $\mathbb{R}^2$. (I haven't defined continuous functions to $\mathbb{R}^2$, but it simply means that if $f(x) = (g(x), h(x))$, then $g$ and $h$ are both continuous functions from $[a,b]$ to $\mathbb{R}$.)

This is an example of a curious habit of mathematicians of defining objects as things that they clearly aren't. Surely a curve is not a function — it's a special sort of subset of the plane. In fact, shouldn't a curve be defined as the *image* of a continuous function from $[a,b]$ to $\mathbb{R}^2$? It's true that that corresponds more closely to what we are thinking of when we use the word "curve", but the definition I've just given turns out to be more convenient, though it's important to add that two curves (as I've defined them) $f_1:[a,b]\to\mathbb{R}^2$ and $f_2:[c,d]\to\mathbb{R}^2$ are *equivalent* if there is a strictly increasing continuous bijection $\phi:[a,b]\to[c,d]$ such that $f_1(x) = f_2(\phi(x))$ for every $x$. In this situation, we think of $f_1$ and $f_2$ as different ways of representing the same curve.

Incidentally, if you want a reason not to identify curves with their images, then one quite good reason is the existence of objects called *space-filling curves*. These are continuous functions from intervals of reals to $\mathbb{R}^2$ whose images fill up entire two-dimensional sets. Here's a picture of one, lifted from Wikipedia.

It shows the first few iterations of a process that gives you a sequence of functions that converge to a continuous limit that fills up an entire square.

#### Lengths of curves

Going back to lengths, let's think about how one might define them. The one thing we know how to define is the length of a line segment. (Strictly speaking, I'm not allowed to say that, since a line segment isn't a function, but let's understand it as a particularly simple function from an interval to a line segment in the plane.) Given that, a reasonable definition of length would seem to be to approximate a given curve by a whole lot of little line segments. That leads to the following idea for at least approximating the length of a curve $f:[a,b]\to\mathbb{R}^2$. We take a dissection $a = x_0 < x_1 < \dots < x_n = b$ and add up all the little distances $d(f(x_{i-1}), f(x_i))$. Here I am defining the distance between two points in $\mathbb{R}^2$ in the normal way by Pythagoras's theorem. This gives us the expression

$$\sum_{i=1}^n d(f(x_{i-1}), f(x_i))$$

for the approximate length given by the dissection. We then hope that as the differences $x_i - x_{i-1}$ get smaller and smaller, these estimates will tend to a limit. It isn't hard to see that if you refine a dissection, then the estimate increases (you are replacing the length of a line segment that joins two points by the length of a path that consists of line segments and joins the same two points).

Actually, that hope is not always fulfilled: sometimes the estimates tend to infinity. Indeed, for space-filling curves, or fractal-like curves such as the Koch snowflake, the estimates *do* tend to infinity. In this case, we say that they have infinite length. But if the estimates tend to a limit as the maximum of the differences tends to zero, we call that limit the length of the curve. A curve that has a finite length defined this way is called *rectifiable*.
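Here is a small sketch of the definition at work on the quarter-circle parametrized by its $x$-coordinate: the polygonal estimates increase under refinement and approach $\pi/2$ (all names are mine):

```python
import math

def quarter(t):
    # the quarter of the unit circle in the first quadrant, parametrized by x
    return (t, math.sqrt(1 - t * t))

def polygonal_length(f, a, b, n):
    # sum of d(f(x_{i-1}), f(x_i)) over the uniform dissection into n pieces
    pts = [f(a + (b - a) * i / n) for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

est = [polygonal_length(quarter, 0.0, 1.0, n) for n in (10, 100, 1000, 10000)]
assert all(e1 <= e2 for e1, e2 in zip(est, est[1:]))  # refinement increases the estimate
assert abs(est[-1] - math.pi / 2) < 1e-3              # and the estimates approach pi/2
```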

Suppose now that we have a curve given by $f(x) = (g(x), h(x))$ and that the two functions $g$ and $h$ are continuously differentiable. Then both $g'$ and $h'$ are bounded on $[a,b]$, so let's suppose that $M$ is an upper bound for $|g'|$ and $|h'|$. Then by the mean value theorem,

$$d(f(x_{i-1}), f(x_i)) = \bigl((g(x_i)-g(x_{i-1}))^2 + (h(x_i)-h(x_{i-1}))^2\bigr)^{1/2} \le \sqrt{2}\,M(x_i - x_{i-1}).$$

Therefore, the estimate is at most $\sqrt{2}\,M(b-a)$ for every dissection, which implies that the curve is rectifiable. (Remark: I didn't really use the continuity of the derivatives there — just their boundedness.)

We can say slightly more than this, however. The differentiability of $g$ tells us that $g(x_i) - g(x_{i-1}) = (x_i - x_{i-1})g'(c_i)$ for some $c_i \in (x_{i-1}, x_i)$. And similarly for $h$ with some $d_i \in (x_{i-1}, x_i)$. Therefore, the estimate for the length can be written

$$\sum_{i=1}^n (x_i - x_{i-1})\bigl(g'(c_i)^2 + h'(d_i)^2\bigr)^{1/2}.$$

This looks very similar to the kind of thing we write down when doing Riemann integration, so let's see whether we can find a precise connection. We are concerned with the function $k(x) = (g'(x)^2 + h'(x)^2)^{1/2}$. If we now *do* use the continuity of $g'$ and $h'$, then $k$ is continuous too, so it can be integrated. Now since $c_i$ and $d_i$ belong to the interval $[x_{i-1}, x_i]$, the sums $\sum_i (x_i - x_{i-1})k(c_i)$ and $\sum_i (x_i - x_{i-1})k(d_i)$ both lie between the lower and upper sums given by the dissection. That implies the same for

$$\sum_{i=1}^n (x_i - x_{i-1})\bigl(g'(c_i)^2 + h'(d_i)^2\bigr)^{1/2}.$$

Since $k$ is integrable, the limit of $\sum_{i=1}^n (x_i - x_{i-1})(g'(c_i)^2 + h'(d_i)^2)^{1/2}$ as the largest $x_i - x_{i-1}$ (which is often called the *mesh* of the dissection) tends to zero is $\int_a^b k(x)\,\mathrm{d}x$.

We have shown that the length of the curve is given by the formula

$$\int_a^b \bigl(g'(x)^2 + h'(x)^2\bigr)^{1/2}\,\mathrm{d}x.$$

Now, finally, let's see whether we can justify our calculation of the length of the arc of the unit circle between $(1,0)$ and $(x,y)$. It would be nice to parametrize the circle as $(\cos\theta, \sin\theta)$, but we can't do that, since we are defining $\theta$ using length, so we would end up with a circular definition (in more than one sense). [Actually, we *can* do something very close to this. See the final section of the post for details.] So let's parametrize it as follows. We'll define $f$ on the interval $[x, 1]$ and we'll send $t$ to $(t, \sqrt{1-t^2})$. Then $g'(t) = 1$ and $h'(t) = -t/\sqrt{1-t^2}$, so

$$\bigl(g'(t)^2 + h'(t)^2\bigr)^{1/2} = \Bigl(1 + \frac{t^2}{1-t^2}\Bigr)^{1/2} = \frac{1}{\sqrt{1-t^2}}.$$

So the length is $\int_x^1 \frac{\mathrm{d}t}{\sqrt{1-t^2}}$, which is exactly the expression we wrote down earlier.

Let me make two quick remarks about that. First, you might argue that although I have shown that the final *expression* is indeed correct, I haven't shown that the informal *argument* is (essentially) correct. But I more or less have, since what I have effectively done is calculate the lengths of the hypotenuses of the little triangles in a slightly different way. Before, I used the fact that one side was $\mathrm{d}t$ and used similar triangles. Here I've used the fact that one side is $\mathrm{d}t$ and another side is $|h'(t)|\,\mathrm{d}t$ and used Pythagoras.

A slightly more serious objection is that for this calculation I used a general result that depended on the assumption that both $g$ and $h$ are continuously differentiable, but didn't check that the appropriate conditions held, which they don't. The problem is that $h(t) = \sqrt{1-t^2}$, so $h'(t) = -t/\sqrt{1-t^2}$, which tends to infinity in absolute value as $t \to 1$ and is undefined at $t = 1$.

However, it is easy to get round this problem. What we do is integrate from $x$ to $1-\delta$, in which case the argument is valid, and then let $\delta$ tend to zero. The integral between $x$ and $1-\delta$ is $\cos^{-1}x - \cos^{-1}(1-\delta)$, and that tends to $\cos^{-1}x$.
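The reason this works is that the neglected tail of the integral vanishes even though the integrand is unbounded: $\int_{1-\delta}^1 \mathrm{d}t/\sqrt{1-t^2} = \cos^{-1}(1-\delta) \approx \sqrt{2\delta}$. A quick numerical sketch:

```python
import math

# the neglected tail int_{1-delta}^1 dt/sqrt(1-t^2) equals arccos(1-delta),
# which is about sqrt(2*delta) and so tends to zero with delta
for delta in (1e-2, 1e-4, 1e-6, 1e-8):
    tail = math.acos(1 - delta)
    assert tail < 1.5 * math.sqrt(2 * delta)
assert math.acos(1 - 1e-8) < 1e-3
```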

One final remark is that this length calculation explains why the usual substitution of $\cos\theta$ for $t$ in an integral of the form $\int \frac{\mathrm{d}t}{\sqrt{1-t^2}}$ is not a piece of unmotivated magic. It is just a way of switching from one parametrization of a circular arc (using the $x$-coordinate) to another (using the angle, or equivalently the distance along the circular arc) that one expects to be simpler.

### An easier argument

Thanks to a comment of Jason Fordham below, I now realize that we can after all parametrize the circle as $(\cos\theta, \sin\theta)$. However, this is not the $\theta$ I'm trying to calculate, so let's call it $s$. I'm just taking $s$ to be an ordinary real number, and I'm defining $\cos s$ and $\sin s$ using the power-series definition. Then the arc of the unit circle that goes from $(1,0)$ to $(x,y)$ can be defined as the curve $f$ defined on the interval $[0, \cos^{-1}x]$ by the formula $f(s) = (\cos s, \sin s)$. The general formula for the length of a curve then gives us

$$\int_0^{\cos^{-1}x} \bigl((-\sin s)^2 + (\cos s)^2\bigr)^{1/2}\,\mathrm{d}s = \int_0^{\cos^{-1}x} 1\,\mathrm{d}s = \cos^{-1}x.$$

So the length of the arc satisfies $\theta = \cos^{-1}x$, and therefore $\cos\theta = x$.
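Because the parametrization $s \mapsto (\cos s, \sin s)$ has speed identically 1, the polygonal estimates from the definition of length should converge to the parameter range itself. A quick sketch of that (all names are mine):

```python
import math

x = 0.3
theta = math.acos(x)            # the claimed length of the arc
n = 100_000
pts = [(math.cos(theta * i / n), math.sin(theta * i / n)) for i in range(n + 1)]
length = sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
assert abs(length - theta) < 1e-8   # unit speed: length = parameter range
```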

March 2, 2014 at 12:31 pm |

Reblogged this on Math Online Tom Circle.

March 2, 2014 at 2:34 pm |

This was an interesting read, relating sin and cos to geometrical definitions was something I noticed that didn’t get covered in lectures – so I’m glad to see it here. A couple of things I’m not sure about in the final section ‘lengths of curves’: Is M meant to be the upper bound for the moduli of g'(t) and h'(t) (rather than g and h as you’ve written)? Why is it the same c_i for g and h?

Many thanks. I've made some adjustments to deal with these slips.

March 2, 2014 at 4:00 pm |

Reblogged this on Singapore Maths Tuition.

March 2, 2014 at 4:09 pm |

Tim, isn't all the geometry you could ever want available quickly from f(theta) = exp(i theta)? Establish that it's a circle from dynamics: exp(0) is 1, the first derivative is i f(theta), at right angles to f(theta), and the second derivative is -f(theta). Separate real and imaginary parts, and we are done, as my 1A lecturer used to say.

March 2, 2014 at 6:14 pm

That sounds like a good suggestion. The fact that it's a circle follows from the fact that $|\exp(i\theta)| = 1$, which I proved in the course. The main point that needs to be established is that the angle between the vectors from 0 to 1 and from 0 to $\exp(i\theta)$ is $\theta$, in the normal geometrical sense. If we define angles as lengths of circular arcs, then some kind of definition of length is again going to be necessary. We could I suppose define the length to be the integral of the modulus of the derivative of $t \mapsto \exp(it)$, which is constant at 1, and therefore integrates to $\theta$. But if one wants to justify that definition of length, then some kind of approximation by piecewise linear paths is (I think) necessary.

However, I agree that this approach looks simpler, since it avoids the integral of $(1-t^2)^{-1/2}$. Having said that, I'm quite pleased to have had an excuse to discuss briefly why that integral is a natural one and why the usual substitution into it is also natural.

March 2, 2014 at 7:01 pm

Thinking about this further, I realize that we don't need to involve the complex numbers. I was just wrong when I said that we can't parametrize the circle as $(\cos\theta, \sin\theta)$. We can, provided we use the power-series definition. If we then calculate the length using that parametrization, the calculation is simpler. I'm about to add to the post to make this point.

March 2, 2014 at 6:34 pm |

Do you need the integration in the exp(i theta) version?

If you take a point P on the circle and form a right-angled triangle with the origin 0 and a point on the x-axis, then from the fact that the hypotenuse is a radius of length 1, you have the coordinates of P as (cos theta, sin theta) for the geometric definition of cos and sin, and as (cos t, sin t) for the power-series definition of cos and sin, so that the geometric and power-series functions agree when the argument is the geometric angle.

March 2, 2014 at 6:59 pm

I'm trying to understand this argument, but I'm not sure I do. I agree that the coordinates of P are what you say they are in geometric terms. I also agree that they are (cos t, sin t) for some t with the power-series definition. But what I can't rule out trivially is that the geometric functions and the power-series functions don't coincide, so t and the geometric angle are not the same.

March 3, 2014 at 3:34 pm |

Under “Calculating the length of a circular arc” there seems to be a small typo: “for each t between x and y”. Shouldn’t that be between x and 1, or even between 0 and 1?

Thanks — I've corrected it now.

March 4, 2014 at 1:29 pm |

So sir, how does one define TAN X using power series given that TAN(0)=0?

March 4, 2014 at 5:19 pm

I defined it as $\frac{\sin x}{\cos x}$, which is using power series even if it isn't expressing $\tan x$ as a power series in $x$.

Here is a blog post by Wen Jia Liu that explains how to work out the power series of $\tan x$.

March 4, 2014 at 9:54 pm |

I could go on a tirade here about how this is completely backwards, and how all of these properties need to be proven before then deriving the derivatives of sin/cos, and then and only then should these derivatives be used to find the Taylor series for sin and cosine, but then I noticed that you are teaching an analysis course; so carry on I suppose.

(However, I note that by giving the series by default, you effectively avoid one of the thorny issues in early analysis — namely that lim_{x -> 0} sin(x)/x = 1. *

In effect, this difficulty cannot be avoided and the principle of conservation of mental effort will cause a rebound eventually; in this case, the difficulty of relating the series definition to a geometrical function.

* Analytically, this is done by starting from the bound sin(theta) < theta < tan(theta) and using the squeezing theorem. I prefer a geometric argument myself, but this may be the way around the difficulty for your course.

)

I would go back to the idea of cos and sin in a circle. The suggestion above about using the complex circle is also a good one.

Also, if all you want is the first few terms of the Taylor series for tan(x), the best thing to do is use polynomial long division on the series for sin(x) and cos(x).
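The long-division suggestion can be carried out mechanically. Here is a minimal sketch using exact rational arithmetic; it finds the series t with t * cos = sin modulo x^8 (all names are mine):

```python
from fractions import Fraction
from math import factorial

N = 8  # work modulo x^N
sin_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 else Fraction(0)
         for k in range(N)]
cos_c = [Fraction((-1) ** (k // 2), factorial(k)) if k % 2 == 0 else Fraction(0)
         for k in range(N)]

# long division: find t with t * cos = sin (mod x^N); cos_c[0] == 1, so
# each coefficient is determined directly from the earlier ones
tan_c = [Fraction(0)] * N
for k in range(N):
    tan_c[k] = sin_c[k] - sum(tan_c[j] * cos_c[k - j] for j in range(k))

assert tan_c[1] == 1
assert tan_c[3] == Fraction(1, 3)
assert tan_c[5] == Fraction(2, 15)   # tan x = x + x^3/3 + 2x^5/15 + ...
```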

March 4, 2014 at 10:00 pm |

I see the problem. All I've shown is that there is some monotone function phi so that the power series give cos(phi(theta)) and sin(phi(theta)). You still need something like the angle addition formulas to show phi is the identity.

There has to be a step like this because the simple triangle formulas don’t care about radians vs degrees, but the power-series definition does.

March 5, 2014 at 2:48 pm

In particular, the inverse sine and cosine functions care very much about radians. The idea of radians as a ratio helps clarify why this should be so, but you'll need to draw circles to get this across. And so we are back to the trouble of relating the series to the geometric constructions.

March 6, 2014 at 5:10 pm |

I am not an analyst, but I know you can do this backwards pretty easily. That is, start with the geometric definition of sine and cosine and derive the power series using the standard power series formula. The only thing you need to show is what the derivatives of sine and cosine are. (I think you actually wrote a post about this some time ago.)

Define sine and cosine so that (cos t, sin t) is the point on the unit circle reached by traveling a distance t anticlockwise from (1,0).

The derivative of this will be:

A. orthogonal to the vector from the origin to the point on the circle.

B. Unit length

C. Oriented CCW around the circle.

The derivatives are easily derived from this, and the power series follow soon after.

March 6, 2014 at 6:57 pm

Can you be more precise about how you are defining sine and cosine? I don’t understand what you’ve written, especially as it appears to be circular.

March 6, 2014 at 8:38 pm

What I was thinking was this:

Say you have a particle traveling CCW around the unit circle at a speed of 1, and that it begins at the point (1,0).

We can define the position vector as ( C(t), S(t) ).

So by definition C(0) = 1, S(0) = 0, and C(t)^2 + S(t)^2 = 1.

I think (although you're probably right that there are some gaps in my logic) that from this definition, and some relevant facts about circles (that a tangent is orthogonal to a radius, mainly), you can determine that the derivative must be ( -S(t), C(t) ). Since all the derivatives will repeat after 4 iterations, you can compute the power series from just these assumptions.

I’m sure this is an approach that isn’t appropriate for an analysis course, but I would love to see how I can make it a bit more precise.

March 6, 2014 at 9:00 pm

Haha. Hahaha. “it appears to be circular.” What a perfect–dare I say it–straight line.

I believe Alex is using the convention that since we all know what’s what, (cos(t),sin(t)) is simply short-hand for “you know, that point on the unit circle that does the trick” combined with the convention that (f(t),g(t)) is implicitly defining f and g, combined with the convention of leaving everything to the reader, in this case, defining “that point” in a non-(hahaha)-circular way, and giving the traditional names to its x-and-y projections.

My favorite approach is to define sine of an angle for an arbitrary triangle by circumscribing the triangle, and then sine(angle) is “opposite/circumdiameter”. It’s well-defined (by elementary geometry) and there are very nifty proofs of addition formulas in this context (because A+B=180-C).

From addition formulas you derive sin(nx) in terms of sin(x) and cos(x) (just the imaginary part of de Moivre’s theorem) and from this you derive the Taylor series by expressing sin(x) in terms of sin(x/n) and cos(x/n) for very large n. The former is approximately x/n, the latter is approximately 1.

This is almost formally identical to deriving the Taylor series of e^x from the binomial expansion of (1+x/n)^n for arbitrarily large values of n.
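That analogy can be seen numerically: the imaginary part of (1 + ix/n)^n approaches sin x as n grows, just as (1 + x/n)^n approaches e^x. A quick sketch:

```python
import math

x = 1.2
errors = []
for n in (10, 1000, 100_000):
    z = (1 + 1j * x / n) ** n          # analogue of (1 + x/n)^n for e^x
    errors.append(abs(z.imag - math.sin(x)))
assert errors[0] > errors[-1]          # the approximation improves with n
assert errors[-1] < 1e-4
```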

March 6, 2014 at 11:44 pm

My main point is that somewhere one must define angles and that work is needed to do that. I used arc length, which required me to say what lengths of curves are. But the actual formula I wrote down does look very like saying “Let’s see how long it takes to get here if we go round the unit circle at unit speed.”

I suppose that the work in your case comes in making sense of the notion of "Where you get to after time t if you go round the circle at constant speed 1." That seems to boil down to a need to prove that the differential equations C' = -S, S' = C, with the initial conditions C(0) = 1, S(0) = 0, have a unique solution. I agree that it's obvious geometrically, but I think that work has to be done to convert that geometric intuition into a proof.
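For what it's worth, the geometric intuition is easy to test numerically: integrating C' = -S, S' = C from C(0) = 1, S(0) = 0 does land on the power-series values. A sketch using the classical Runge-Kutta scheme (all names are mine; this illustrates, but of course does not prove, uniqueness):

```python
import math

def rk4_circle(t_end, n):
    # integrate C' = -S, S' = C with C(0) = 1, S(0) = 0 by classical RK4
    def derivs(c, s):
        return (-s, c)
    h = t_end / n
    c, s = 1.0, 0.0
    for _ in range(n):
        k1 = derivs(c, s)
        k2 = derivs(c + h / 2 * k1[0], s + h / 2 * k1[1])
        k3 = derivs(c + h / 2 * k2[0], s + h / 2 * k2[1])
        k4 = derivs(c + h * k3[0], s + h * k3[1])
        c += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        s += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return c, s

c, s = rk4_circle(2.0, 10_000)
assert abs(c - math.cos(2.0)) < 1e-9 and abs(s - math.sin(2.0)) < 1e-9
```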

March 7, 2014 at 1:07 pm

Dieudonne, in his Linear Algebra and Geometry, goes to an extreme degree to show that “angle” is taken too much for granted, and that it is surprisingly difficult to do rigorously.

March 7, 2014 at 2:56 am |

Lengths of curves are an excellent example to demonstrate the need for caution when converting sums to integrals in plausible ways. See http://math.stackexchange.com/questions/12906/is-value-of-pi-4 for a limiting procedure which looks as plausible as this one, but is very invalid.

March 9, 2014 at 3:55 pm |

I believe you have an extra factor of $(x_i – x_{i-1})$ in “Since $k$ is integrable, the limit of $\sum_{i=1}^n(x_i-x_{i-1})(x_i-x_{i-1})(g'(c_i)^2+h'(d_i)^2)^{1/2}$ …”

Thanks very much — I've removed it.

March 15, 2014 at 10:21 pm |

I think a geometrical proof can be done perfectly rigorously, and I think it’s much nicer to do it without using integration. Define the angle between two radii of the circle as double the area of the segment delimited by the two radii (or the outside segment for the outside angle). Define sin x, cos x and tan x as usual.

Now you can show that in the first quadrant sin x ≤ x ≤ tan x by comparing the areas in this diagram (using the fact that the radius of the larger circle is 1/cos x): http://www.proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Geometric_Proof.

Thus we get 1 ≥ sin x / x ≥ cos x. By squeezing, letting x go to 0, we have sin’ 0 = 1.

Now by Pythagoras, sin^2 x + cos^2 x = 1. Take the derivative of both sides to get sin x sin’ x = – cos’ x cos x. Put x = 0 to get cos’ 0 = 0.

Now show that sin(a+b) = sin a cos b + sin b cos a using this diagram:

http://www.proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Geometric_Proof

Then we get (sin(x + h) – sin x)/h = sin x (cos h – 1)/h + cos x (sin h)/h

Thus sin’ x = sin x * cos’ 0 + cos x * sin’ 0 = cos x. Similarly cos’ x = – sin x.

Thus we get all the values for d^n/dx^n sin 0, and by the argument from your post on Taylor’s theorem, we get the power series…

March 15, 2014 at 10:25 pm

Sorry I meant to link to this diagram for the angle sum formulae:

http://en.wikipedia.org/wiki/File:AngleAdditionDiagram.svg

March 15, 2014 at 11:59 pm

I was about to object that your definition of angle requires you to define the area of curvilinear shapes, and therefore requires integration. In general, I think that once angle has been defined, everything else is reasonably easy. But perhaps my instant reaction was wrong: we can define area by chopping up into small squares and taking limits, and we don’t really need to calculate areas, so integration may not be necessary.

It also occurs to me that there is a way of defining angle that requires a limiting argument, but not integration. We can bisect angles, and that makes it easy to define all multiples of $2\pi$ by a dyadic rational, and then taking limits gives us the rest. I think it should be reasonably straightforward to use that definition and plug it in.
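A quick numerical sketch of the bisection idea, using the half-angle formula cos(theta/2) = sqrt((1 + cos theta)/2) (valid on [0, pi]) to work down from the straight angle; the library cosine is used only as a check:

```python
import math

c = -1.0            # cos(pi): start from the straight angle
theta = math.pi
for _ in range(20):
    c = math.sqrt((1 + c) / 2)   # half-angle formula, valid on [0, pi]
    theta /= 2
    assert abs(c - math.cos(theta)) < 1e-9
```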

March 16, 2014 at 1:06 am

I was actually thinking that the argument did not require a definition of area as a function from shapes in the plane to real numbers at all. I thought you could simply make do with the primitive mereological axioms you find in Euclid (“Area of the part ≤ area of the whole”, “Congruent shapes have equal areas”, and stuff like that). Because, like you say, you can approximate the circle to arbitrary accuracy using little rational squares, these axioms in effect already give you the Dedekind cut for pi as the area of the unit circle.

That way you wouldn’t have to assume the plane has the complete structure of R^2 at all. We know from Galois theory that you can have perfectly good Ruler and Compass spaces that are much smaller than that.

But on further reflection I guess I’d retract that: sin is a function from real numbers to real numbers, so in order for our geometrical definition of sin to even work, we do need it to be the case that the plane has something like the structure of R^2.

But if you wanted to keep the geometric bits of the argument pristinely Euclidean, an alternative would be to take the geometric definition of sin to be a partial definition of sin, as follows:

* Call a function f: R –> R a sinoid function if and only if, for any segment S of the unit circle definable in Euclidean geometry, the following conditions are met, (here A(S) is S’s area, defined by the sort of Dedekind construction hinted at above):

1. f(A(S)) = the length of a line segment perpendicular to one radius delimiting S going through the end of the other radius delimiting S

2. f(x+pi) = – sin x for all x, and sin(x+2pi) = sin x for all x

(That’s not very elegant, but it does the job)

* Let sin x: R –> R be the continuous sinoid function

Your argument about angle bisection guarantees that this last step returns a well defined function.

March 16, 2014 at 1:16 pm

Whoops!! That last bit was full of typos and thinkos and I apologise to anyone who tried to read it. If you are for some reason still reading, here is how I should have defined the notion of a sinoid function:

Call a function f: R –> R a sinoid function if and only if the following conditions are met:

1. For any circle *sector* S such that its area A(S) (which I'll define below) is at most pi/4, f(2*A(S)) is equal to the length (which I'll also define below) of a line segment perpendicular to one radius delimiting S going through the end of the other radius delimiting S.

2a. f(pi-x) = f(x) for all x

2b. f(-x) = – f(x) for all x

2c. f(x + 2pi) = f(x – 2pi) = f(x) for all x

Then sin: R –> R is the unique continuous sinoid function. (Again, uniqueness is guaranteed by Gowers’ angle bisection argument).

Here is a definition of A:

Let X be the set of finite regions (or shapes) definable in our Euclidean space (whatever it is). Then A:X –> R is the function such that

1. A(x) = 1 if x is a square with one of the radii of our chosen unit circle as a side

2. A(x) = A(y) if x and y are congruent

3. A(x) = A(y) + A(z) if no point is both inside (i.e. within the boundary of) y and inside z, and the points inside x are all and only those which are either inside y or inside z or on their shared boundary.

That should give you the right function, (by Gowers’ squares argument). Now for length:

Let Y be the set of line segments and circular arcs definable in our space (strictly speaking we only need line segments for our definition of sin). Then l:Y –> R is the function such that

1. l(x) = 1 if x is one of the radii of our chosen unit circle

2. l(x) = l(y) if x and y are congruent

3. l(x) = l(y) + l(z) if y and z share at most an endpoint and the points on x are all and only those which are either on y or on z

4. l(x) < l(y) < l(w) + l(z) if y is an arc, x, w, z are three line segments forming a triangle such that x connects y's endpoints, and w and z are tangents to y touching y at its endpoints.

March 20, 2014 at 6:23 pm |

All this discussion goes to show that defining the concept of "angle" in a suitable way is problematic. High school students, who are given a non-rigorous definition based mostly on intuition, would be amazed at the technicalities described here. So how does one define sin, cos and tan in an elementary but rigorous way, in the context of triangle geometry, without the notion of arc-length?

One way, articulated in great detail by Norman Wildberger, would be to avoid the notion of angle altogether, replacing it with a potentially simpler concept – spread (cf: http://web.maths.unsw.edu.au/~norman/ and in particular http://web.maths.unsw.edu.au/~norman/papers/WrongTrig.pdf and his YouTube channel). Then elementary problems of triangle geometry (as opposed to uniform circular motion) can be treated without appeal to transcendental notions such as the sin & cos functions.

This approach is so unconventional that it likely invites the reader to immediate skepticism – that would be a mistake. Wildberger carefully and clearly elucidates his approach, and he extends it to Hyperbolic and Spherical geometry obtaining some remarkable and truly novel results (cf: Universal Hyperbolic Geometry II, KoG 14 (2010) )

April 11, 2014 at 8:58 pm |

[…] Weblog is definitely more on the technical side overall, with a few exceptions. However, you will find that Gowers likes to write posts for lots of different […]

April 12, 2014 at 7:48 pm |

Hoping that the LaTeX is OK, a dynamics-themed demonstration might proceed in two steps. First, show that the integral curve of the following equation, for initial conditions $x(0) = 1$ and $y(0) = 0$, is a unit circle traversed at unit velocity:

$$\frac{\mathrm{d}}{\mathrm{d}t}\begin{pmatrix}x\\y\end{pmatrix} = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}.$$

Show this by establishing that the (unit) radius and (unit) velocity of the integral curve both are unchanging; this can be done both formally and empirically (the latter by numerical integration) without reference to the power series.

Then as the second step, verify by explicit substitution that the power series solves this equation. QED.

April 12, 2014 at 8:05 pm

Note: The numerical integration can be structured as a demonstration (both formal and numerical) of a delightful identity.

April 13, 2014 at 6:07 pm

Concluding note: Prior comments (by Jason Fordham especially; also Alex; also William E. Emba) in aggregate amount to the above algebraic dynamics framework. And more can be said. The matrix

$$J = \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}$$

evidently satisfies $J^2 = -I$; thus an *almost-complex structure* is present. Writing the dynamical equation in Hamiltonian form for a suitable tangent vector, symplectic form, and Hamiltonian function, and moreover defining arc length as a Riemannian curve-length, then exposes the complete Kählerian triple of metric, symplectic, and complex structures.

The starting problem "give a power series for trigonometric functions" then can be appreciated (by beginning students) as a natural meeting-ground for classical analysis, classical algebra, and their various modern syntheses that include algebraic geometry and algebraic dynamics. That's why it's such an illuminating topic for students to study!

September 24, 2014 at 9:32 pm |

It's nice to apply the arccos formula to show that the arc length from (1,0) to (cos(t),sin(t)) is indeed just t, but there is also a cute version using the sector area. This can be calculated in two ways, which I will write somewhat elliptically as A = sc/2 + int(s(-dc)) and A = int(c ds) - sc/2, and adding them together gives 2A = int(s^2 + c^2) dt = t.
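A quick numerical sketch of this identity, using the library sin and cos and the midpoint rule (all names are mine):

```python
import math

t, n = 1.0, 100_000
h = t / n
mids = [(i + 0.5) * h for i in range(n)]
int_s2 = h * sum(math.sin(u) ** 2 for u in mids)  # int_0^t s^2 dt, i.e. int(s(-dc))
int_c2 = h * sum(math.cos(u) ** 2 for u in mids)  # int_0^t c^2 dt, i.e. int(c ds)
sc = math.sin(t) * math.cos(t)
A1 = sc / 2 + int_s2   # A = sc/2 + int(s(-dc))
A2 = int_c2 - sc / 2   # A = int(c ds) - sc/2
assert abs(A1 - t / 2) < 1e-9    # each expression gives the sector area t/2
assert abs(A2 - t / 2) < 1e-9
assert abs(A1 + A2 - t) < 1e-9   # 2A = int(s^2 + c^2) dt = t
```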

June 26, 2015 at 2:48 pm |

I can find the cos series without the help of differentiation and integration, using a geometric construction, and can find many more trigonometric formulae with the same construction. It could be an invention.

April 1, 2022 at 4:50 am |

[…] do you prove that the points (cos(𝑥), sin(𝑥)) for 𝑥∈[0,2𝜋] form a circle?” see https://gowers.wordpress.com/2014/03/02/how-do-the-power-series-definitions-of-sin-and-cos-relate-to…. wrt earlier questions see the prologue, the exponential function, in Rudin’s book Real and […]