First: try to get the left-hand inequality, but rewrite it as

$na_n < a_0 + a_1 + \cdots + a_n$,

i.e.

$(n-1)a_n < a_0 + a_1 + \cdots + a_{n-1}$.

Rewrite this as

$(a_n - a_1) + (a_n - a_2) + \cdots + (a_n - a_{n-1}) < a_0$.

(Originally I made an indexing error here, and it held me up a little later on, but the analysis continued as in the correct version.)

Now we can guess the maximum that $n$ can be such that this is true (here I was thinking in terms of differences of terms, but didn’t notice that they formed a sequence of natural numbers just yet). That is, what is the largest number of terms on the LHS with this true? The LHS is smallest when the gaps between the successive $a_i$ are all 1, in which case it equals $1 + 2 + \cdots + (n-1) = n(n-1)/2$, so we can solve $n(n-1)/2 < a_0$ for $n$. This gives a finite bound on the possible solution once we know $a_0$ (this fact turned out not to be useful, but encouraging to know the search space was finite). At this point, might as well write this inequality as

$g(n) < a_0$,

where $g(n) = \sum_{i=1}^{n-1}(a_n - a_i)$ (except first time round, this expression was slightly different, due to the mistake above).

Now let’s try the second inequality. Again, change it to

$a_0 + a_1 + \cdots + a_n \le na_{n+1}$.

Playing the same trick with the differences gets us the second inequality in the form

$(a_{n+1} - a_1) + (a_{n+1} - a_2) + \cdots + (a_{n+1} - a_n) \ge a_0$.
Now I thought to perhaps find some extremum for $n$ in this case as well, but it was quickly apparent that nothing would come of it. Then I fixed the first rewritten inequality to what is above, and put it together to get

$\sum_{i=1}^{n-1}(a_n - a_i) < a_0 \le \sum_{i=1}^{n}(a_{n+1} - a_i),$

and it was clear I could reformulate the question as follows: given any sequence of natural numbers as the differences $a_{i+1} - a_i$ and any $a_0$ (chosen independently!), there must be a unique $n$ such that the inequalities hold. The previous inequality could be rewritten as $a_0 \in (g(n), g(n+1)]$, where $g(n) = \sum_{i=1}^{n-1}(a_n - a_i)$, whence, since $g$ is an increasing unbounded function (and, now that I think of it, $g(1) = 0$ is the only sensible definition given the original statement of the problem), given any natural number $a_0$ it must necessarily fall into precisely one such interval $(g(n), g(n+1)]$.
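The interval reformulation is easy to sanity-check by machine. A minimal sketch, assuming the problem is the usual one (strictly increasing positive integers $a_0 < a_1 < \cdots$, find the unique $n \ge 1$ with $a_n < (a_0+\cdots+a_n)/n \le a_{n+1}$); the helper `g` and the random trial sequences are mine:

```python
import random
from fractions import Fraction

def g(a, n):
    # g(n) = (a_n - a_1) + ... + (a_n - a_{n-1}); the empty sum gives g(1) = 0
    return sum(a[n] - a[i] for i in range(1, n))

random.seed(0)
for _ in range(200):
    # a random strictly increasing sequence of positive integers
    a = [random.randint(1, 5)]
    for _ in range(30):
        a.append(a[-1] + random.randint(1, 5))
    # n satisfying the original double inequality
    orig = [n for n in range(1, len(a) - 1)
            if a[n] < Fraction(sum(a[:n + 1]), n) <= a[n + 1]]
    # n with a_0 falling in the half-open interval (g(n), g(n+1)]
    interval = [n for n in range(1, len(a) - 1)
                if g(a, n) < a[0] <= g(a, n + 1)]
    assert orig == interval and len(orig) == 1
```

In every trial the two formulations pick out the same $n$, and exactly one of them.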

As an aside, who makes up all these problems anyway? I imagine that it is not an easy task. I recently found old problems of this sort, though probably not quite as hard, in a box in my attic that I had worked on at a math program for high school students at Berkeley about 45 years ago. I gave copies of the problems to UW Math because they deal with talented kids and I knew that creating new problems is not easy.


$\frac{a_0 + a_1 + \cdots + a_n}{n}$ is almost an average. I want to work with averages, so I’ll replace it with

$\frac{n+1}{n}\,A_n$.

(Here $A_n$ denotes the arithmetic mean of $a_0, a_1, \ldots, a_n$.)

Actually I compressed it a little further and wrote

$\left(1 + \tfrac{1}{n}\right)A_n$.

Then I wrote two columns, one containing the $a_n$’s and one containing the expressions. With a little inspection, it was clear what was going on: at the start, the first column is less than the second column. But the sequence is increasing, so eventually the first column must grow bigger than the second column. And once it does, it will stay that way. The crossing-over point corresponds to the unique integer in the problem.
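The two-column inspection is easy to reproduce. A quick sketch (the sample sequence is mine; any strictly increasing positive integers show the same single crossover):

```python
from fractions import Fraction

# hypothetical sample sequence of strictly increasing positive integers
a = [3, 5, 6, 10, 11, 14, 20, 21]

# column 1: a_n; column 2: (a_0 + ... + a_n)/n
for n in range(1, len(a) - 1):
    expr = Fraction(sum(a[:n + 1]), n)
    marker = "<" if a[n] < expr else ">="
    print(f"n={n}:  {a[n]:>3}  {marker}  {float(expr):.2f}")
# the last n with "<" in the middle column is the unique n of the problem
```

For this sequence the first column is smaller for $n = 1, 2$ and larger from $n = 3$ on, so the crossing-over point is $n = 2$, and indeed $a_2 = 6 < 7 \le a_3 = 10$.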

1) Is it a statement about averages?

2) Ohh, I see. Maybe it will be useful to write the left inequality as $na_n < a_0 + a_1 + \cdots + a_n$?

3) Is the last $n$ to satisfy the left inequality the unique one to satisfy the right?

4) Could computer experimentation help here (and in other Olympiad problems)?

5) Can we automate a proof? More generally, can we create a good computerized IMO participant?

6) Is it part of a more general interesting question? If yes, what? If it is not interesting, are the problems I study more interesting? Why?

7) Did I know this question? (Or a very similar one.)

8) Am I still interested in these kinds of questions? Should I be? I was quite interested when I was 16 years old, but should I be committed to such questions now (along with other interests from that time)? Is keeping interested a way to keep one’s youth?
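On question 4: experimentation can at least build confidence. A throwaway sketch, assuming the statement is that for strictly increasing positive integers $a_0 < a_1 < \cdots$ there is a unique $n \ge 1$ with $a_n < (a_0+\cdots+a_n)/n \le a_{n+1}$ (cross-multiplying by $n$ keeps everything in integers):

```python
import random

def solutions(a):
    """All n >= 1 with a_n < (a_0 + ... + a_n)/n <= a_{n+1}."""
    return [n for n in range(1, len(a) - 1)
            if n * a[n] < sum(a[:n + 1]) <= n * a[n + 1]]

random.seed(1)
for _ in range(1000):
    a = [random.randint(1, 20)]
    while len(a) < 40:
        a.append(a[-1] + random.randint(1, 10))
    assert len(solutions(a)) == 1   # n exists and is unique in every trial
```

A thousand random sequences never produce zero or two solutions, which is exactly the kind of evidence that makes one willing to hunt for a proof of uniqueness.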


$a_1 < a_0 + a_1 \le a_2$

or else

$a_2 < \frac{a_0 + a_1 + a_2}{2} \le a_3$

or else

$a_3 < \frac{a_0 + a_1 + a_2 + a_3}{3} \le a_4$

or else

$\ldots$
It immediately jumped out at me that if the RHS of the first thing was false, then that automatically made the LHS of the next thing true, and similarly between the next pair in a way that would obviously extend inductively.
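Spelled out, with statement $n$ being $a_n < \frac{a_0 + \cdots + a_n}{n} \le a_{n+1}$ (my labelling): if the right-hand part of statement $n$ fails, then $na_{n+1} < a_0 + \cdots + a_n$, and adding $a_{n+1}$ to both sides gives

$(n+1)a_{n+1} < a_0 + a_1 + \cdots + a_n + a_{n+1},$

which is exactly the left-hand part of statement $n+1$. This is the implication that extends inductively.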

So I tried a couple of trivial rearrangements, such as writing down the first differences of the sequence of not-quite-averages, and didn’t go far before I thought to multiply up all the statements to get rid of the denominators:

$a_1 < a_0 + a_1 \le a_2$

or else

$2a_2 < a_0 + a_1 + a_2 \le 2a_3$

or else

$3a_3 < a_0 + a_1 + a_2 + a_3 \le 3a_4$

or else

$\ldots$
and then subtracted $a_1$ from all three sides of the first statement, $a_1 + a_2$ from all three sides of the second, $a_1 + a_2 + a_3$ from the third and so on to get

$0 < a_0 \le a_2 - a_1$

or else

$a_2 - a_1 < a_0 \le 2a_3 - a_1 - a_2$

or else

$2a_3 - a_1 - a_2 < a_0 \le 3a_4 - a_1 - a_2 - a_3$

or else

$\ldots$
and now it’s clear that what we’re really saying is that $a_0$ must lie between exactly one pair of consecutive terms of this secondary sequence, so if we can show the sequence is increasing then we’re done. And it obviously is: if you set $b_n = na_n - (a_1 + \cdots + a_n)$ (for $n \ge 1$, so we include $b_1 = 0$), then $b_{n+1} - b_n = n(a_{n+1} - a_n) > 0$ by increasingness of $(a_n)$. So clearly there is a unique $n$ such that $b_n < a_0 \le b_{n+1}$.
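The “obviously increasing” step reduces to a telescoping identity for the secondary sequence, which, writing $b_n = na_n - (a_1 + \cdots + a_n)$ (my notation), a few random trials confirm:

```python
import random

def b(a, n):
    # secondary sequence: b_n = n*a_n - (a_1 + ... + a_n); note b_1 = 0
    return n * a[n] - sum(a[1:n + 1])

random.seed(2)
for _ in range(100):
    a = [random.randint(1, 9)]
    for _ in range(20):
        a.append(a[-1] + random.randint(1, 9))
    assert b(a, 1) == 0
    for n in range(1, len(a) - 1):
        # the increments are n*(a_{n+1} - a_n) > 0, so (b_n) strictly increases
        assert b(a, n + 1) - b(a, n) == n * (a[n + 1] - a[n]) > 0
```

The identity itself is one line of algebra: $b_{n+1} - b_n = (n+1)a_{n+1} - a_{n+1} - na_n = n(a_{n+1} - a_n)$.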

So this is saying that the mean of the first n terms is between the nth term and the (n+1)th term. Well that can’t be right, obviously the mean is smaller than the biggest term, hmm, must have missed something…

…

…Oh ok, there are actually n+1 terms in the sum so it’s not the mean, just something a bit like the mean… in fact I suppose it sort of tends towards the mean

Decided to call the mean-like term y_i. Did some vague thinking about what the y_is were for different sequences – 1,2,3…; 101,102,103… etc. Started thinking about the y_is in terms of whether they’re “too small” (y_i <= x_i), “too big” (y_i > x_(i+1)) or ”just right” (between x_i and x_(i+1) as required). Imagined the sort of standard thing that might be possible where you start with an (in)equality and replace terms with things bigger/smaller to get a new, more useful inequality. Thought about things like how a large x in the sequence makes the next y be large.

Now with pencil and paper, tried some of this, and in among some confusing of myself, got that if y_i is too small, y_(i+1) is too small, and that if y_i is just right, y_(i+1) is too small (actually these were identical arguments but I did it twice). So there can’t be more than one just-right y, and once it gets too small it stays there. But y doesn’t start too small (y_1 = x_0 + x_1 > x_1), so I’d better check it doesn’t stay too big forever.

Some more faffing and thinking: it seems to make sense that the ys can’t stay too big, since that would mean the xs are all bounded by the ys, which are bearing down on the mean, so eventually the whole thing will get trapped. But the same sort of thing I was doing above wasn’t telling me anything (in retrospect, for very obvious reasons). After a while I got my head around what needed doing and started doing the right things. It seemed that for y_2 to be too big, x_3 needed to be less than y_1. Similarly x_4 then needed to be less than y_1. Okay, this seemed to be it. The xs increase, so they can’t all be less than y_1. I formalised this into an induction argument, showing that the ys can’t stay too big forever. Done!

No wait, not done. The ys could go straight from too big to too small, right? Hmm, I could do with proving that if y_i is too big, y_(i+1) can’t be too small. Thankfully this worked as an easy argument along the same lines as the too-small-implies-too-small one. Done!
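The too-big / just-right / too-small trichotomy can be checked mechanically. A sketch in the comment’s notation, with y_i = (x_0 + … + x_i)/i (the classifier and the sample sequence are mine):

```python
from fractions import Fraction

def classify(x, i):
    # y_i = (x_0 + ... + x_i)/i
    y = Fraction(sum(x[:i + 1]), i)
    if y <= x[i]:
        return "too small"
    if y > x[i + 1]:
        return "too big"
    return "just right"   # x_i < y_i <= x_{i+1}: the unique n of the problem

x = list(range(10, 30))   # 10, 11, 12, ...
labels = [classify(x, i) for i in range(1, len(x) - 1)]
# the labels run: some "too big"s, exactly one "just right", then "too small"s
assert labels.count("just right") == 1
k = labels.index("just right")
assert all(label == "too big" for label in labels[:k])
assert all(label == "too small" for label in labels[k + 1:])
```

For this sequence the ys are too big for i = 1, 2, 3, just right at i = 4, and too small thereafter, matching the once-it-crosses-it-stays argument.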

The journey started at Oxford at the end of the 1960s (after I had switched from Natural Sciences, and was thus already a mongrel mathematician). My extremely romantic and neo-Platonic approach emphasised the aesthetic over the technical, i.e. concept over the ability to actually do anything, the one exception being Commutative Algebra, where the mechanics did seem to possess an inner harmony and simplicity. Remember, this was the 60s! Lectures (when one had nothing else to do) were to be experienced rather than followed, and the psychological tricks referred to above were much in evidence at tutorials (as supervisions were called then). Nobody cared too much and I somehow managed a poorish 2nd. Although this could hardly be called a period of study, I was somehow nonetheless imbued with a love of the subject.

In subsequent years, perhaps not surprisingly, I came to regret this naively superficial approach (I was occasionally heard to remark that “education was wasted on the young”). So some 20 years later I embarked on an MSc at the OU (whilst working full-time in industry). This took 6 years, one module at a time, each of which was based on a pretty good textbook and OU notes. However, it was the TMAs that were a revelation. With study-time embedded in so much elapsed time, there was considerable scope for mulling over a problem, whilst doing the washing-up say, until some key aspect was revealed. (More in keeping with Grothendieck’s description of the action of the “Rising Sea” than Poincaré stepping onto a bus.) The crowning pleasure was then the leisurely crafting of the technical solution to be as clear and elegant as possible. How different from the pressures of the Tripos!

Whilst the OU experience used bits of mathematics as a vehicle for intellectual satisfaction, it did not provide that coherent experience of the modern subject needed to undo my youthful profligacy. So in 2009 (aged just 59) I took a sabbatical from work and enrolled for Part III of the Tripos (then designated CASM but later commuted to yet another degree, the MASt; the fact that my son was at King’s at the time gave an irresistible impetus to this venture). This was then a third type of study experience, focusing almost exclusively on lectures. With memory and speed of thought being laughably poor by now, keeping up was out of the question. The idea was to take comprehensive notes whilst hanging on to a sense of the direction of travel. In addition, the wonderful library at the CMS provided for as much background browsing of classical texts as you could wish. It goes without saying that I was in the fantastically privileged position of this year having absolutely no relevance to my career! (I really did feel sorry for some of the young students, several of whom already wore a look of years of stress and singularity of focus.)

One discipline that I imposed upon myself was that I would try and somehow pass the exam at the end. Once again that frisson of trying to store away retrievably a few bits of technique, underpinned by a modest essay meant that I scraped through, very near the bottom of the list. The ceremony at Senate House with the doffing of caps and fluttering of lists was a definite high point as I waited with almost as much excitement as those hoping for distinction and a road to PhDs.

Needless to say, I found the material in the various (pure) courses that I took impressively difficult; it presupposed a lot of technical knowledge that I didn’t have. No matter. My lecture notes have fulfilled their purpose and five years into retirement I am still working on them, back-filling where necessary (often via Wikipedia and the produce of a carefully constructed Amazon wish list over the years) and sometimes enjoying an “ah ha!” moment after a few days’ contemplation.

So I offer this personal endorsement of the joys of mathematical study, certainly in my case for the less able, as a counterpoint to the hothouse of full-on competition. Yer makes yer choice and then takes yer pleasure where you can find it.

I must have made some mistake because I got that sometimes two $n$s are possible (see last example).

Let $d_n = (a_0 + a_1 + \cdots + a_n) - na_{n+1}$ be the vertical distance between the frog and the lower bank. Note that $d_0 = a_0 > 0$. Assume the frog never crosses the river, i.e. $d_n > 0$ for all $n$. This is equivalent to $a_0 + \cdots + a_n > na_{n+1}$, or $a_0 > \sum_{i=1}^{n}(a_{n+1} - a_i)$ (*). But this is impossible since $a_0$ is finite and the right-hand side is increasing without bound. Let $n$ be the time one step before the frog first lands on the lower side of the river. Mathematically, we have $d_{n-1} > 0$ and $d_n \le 0$. We immediately obtain $a_n < \frac{a_0 + \cdots + a_n}{n} \le a_{n+1}$, proving the existence of an $n$ making the required inequality hold. To prove the uniqueness, observe from (*) that if $d_n \le 0$ for some $n$, then $d_m \le 0$ for all $m \ge n$. That is, the frog will never go back.
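The frog picture translates directly into code. A sketch, assuming the distance is $d_n = (a_0 + \cdots + a_n) - na_{n+1}$, so each hop lowers the frog by $n(a_{n+1} - a_n) \ge n$ (the trial sequences are mine):

```python
import random

def d(a, n):
    # vertical distance between the frog and the lower bank after n steps
    return sum(a[:n + 1]) - n * a[n + 1]

random.seed(3)
for _ in range(200):
    a = [random.randint(1, 15)]
    for _ in range(25):
        a.append(a[-1] + random.randint(1, 6))
    assert d(a, 0) == a[0] > 0                  # the frog starts above the bank
    crossings = [n for n in range(1, len(a) - 2)
                 if d(a, n - 1) > 0 >= d(a, n)]
    assert len(crossings) == 1                  # it crosses exactly once
    n = crossings[0]
    # and the crossing time is exactly the n of the required inequality
    assert n * a[n] < sum(a[:n + 1]) <= n * a[n + 1]
```

Since the hop sizes grow at least linearly, the frog always crosses, and since $d$ is strictly decreasing it can never climb back, which is the uniqueness.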

I think the total elapsed time was something like 20 minutes or so, but it felt like a lot of it was spent floundering. (Though actually something like that much floundering is usually needed to get one's brain engaged with a problem.)

I had the same initial mental image as Qiaochu Yuan, though I didn't end up with a proof that really matched it. I had the feeling at the end that what I'd done was probably equivalent to something neater — rewriting it in terms of the differences would, I think, translate it into Qiaochu's proof, and I now feel bad for failing to apply the "increasing is awkward; work with differences and positivity" heuristic he did.

I like Qiaochu's proof a lot, but Tim's trick of combining the first two terms to repair the denominator is perhaps even nicer — to me, it seems to express the essence of why the key point (in my terminology, that P(n) ends up false) has to be so.
