I do, however, have some idea of the source of the trick. It came in the section near the beginning labelled “Question”, where I took , and then found that everything else was . That suggested that was more important than either or individually.

We are at the last point before we cross the river, but we do not know that, do we?

Thank you

Also, you were forced to record your thoughts in a linear transcript, whereas surely you were having multiple ideas interconnecting in various ways. This inadequacy of a linear medium is probably what initially led you not to record the idea that ended up playing a key role, as you yourself mention in the second red comment. This is not to say that everything else was a waste of time. There were many constructive insights before you chose to revisit the idea that on the surface appears to be only a slight notational change.

This dichotomy between linearity (of how we learn and communicate mathematics) and interconnection (of how we deeply understand areas of mathematics) plays a key role in the difficulty of teaching mathematics.

It should be:

**with** **if** and if **for**

I watched this great and inspiring talk a few weeks ago, so it would be a pleasure to share my experience with the problem. Unfortunately my approach is not too different from the ones already mentioned. But since I already wrote it down, I’ll post it anyway :-).

**Here’s how I approached the problem:**

**Insight 1:**

I started by focussing on the first inequality and trying to build the proof up from there. If you decompose the right side of the inequality into $\frac{a_0}{n} + \frac{a_1 + a_2 + \cdots + a_n}{n}$, you get $\frac{a_0}{n}$ and the arithmetic mean of $a_1, \dots, a_n$. I noticed that the inequality holds for $n = 1$ but has to break eventually, as the arithmetic mean falls ever further behind $a_n$ (the terms are strictly increasing integers) and you have $\frac{a_0}{n} \to 0$ as $n$ gets large.

This insight led me to propose an auxiliary **Lemma 1:**

**for** $n \ge 1$: there is an $N \ge 1$ **with** $a_n < \frac{a_0 + a_1 + \cdots + a_n}{n}$ **if** $n \le N$ and $a_n \ge \frac{a_0 + a_1 + \cdots + a_n}{n}$ if $n > N$

It’s relatively straightforward to see that the proof of Lemma 1 can be divided into 3 parts:

**(1) The first inequality holds for n = 1**

obvious, since $a_1 < a_0 + a_1$

**(2) The first inequality breaks for some n**

This can be seen by noting that $a_i \le a_n - (n - i)$, since the terms are strictly increasing integers. For the arithmetic mean, you have:

$\frac{a_1 + a_2 + \cdots + a_n}{n} \le a_n - \frac{n-1}{2}$

so the right side of the first inequality is at most $\frac{a_0}{n} + a_n - \frac{n-1}{2}$, which drops below $a_n$ as $n \to \infty$. As the inequality has integers on both sides ($n a_n$ and $a_0 + a_1 + \cdots + a_n$), it follows that for some $n$:

$n a_n \ge a_0 + a_1 + \cdots + a_n$

**(3) If the first inequality fails for some n, it fails for n + 1 as well**

A simple transformation of the failed inequality $n a_n \ge a_0 + a_1 + \cdots + a_n$ is sufficient:

$(n+1)\,a_{n+1} = n\,a_{n+1} + a_{n+1} > n\,a_n + a_{n+1}$

$\ge (a_0 + a_1 + \cdots + a_n) + a_{n+1} = a_0 + a_1 + \cdots + a_{n+1}$

**qed**
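Lemma 1’s three parts are easy to sanity-check numerically. A minimal sketch (the helper name and the sample sequence are mine, not the commenter’s):

```python
# Check Lemma 1 numerically: the left inequality  a_n < (a_0 + ... + a_n)/n
# holds up to some threshold N and fails for every n beyond it.
def left_holds(a, n):
    """True iff n * a[n] < a[0] + ... + a[n] (the first inequality)."""
    return n * a[n] < sum(a[: n + 1])

# an arbitrary strictly increasing sequence of positive integers
a = [3, 5, 6, 10, 11, 14, 20, 21, 25, 30]
flags = [left_holds(a, n) for n in range(1, len(a))]
print(flags)
# (1) it holds at n = 1; (2) it fails eventually; (3) once false, always false:
assert flags[0] and not all(flags)
assert flags == sorted(flags, reverse=True)  # True...True False...False
```

The final assertion encodes part (3): the flags form a block of `True`s followed by a block of `False`s, i.e. the inequality never recovers once it breaks.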

**Insight 2:**

It’s now clear that the left inequality holds for all n below a certain bound and fails otherwise. It would make sense to assume that the right inequality holds for all n above a certain bound and fails otherwise. As the solution that satisfies both inequalities has to be unique, it also makes sense to assume that both bounds are given by the same $N$ from Lemma 1. These assumptions lead to the following **case analysis:**

**Case 1)** $n > N$:

The left inequality fails.

**Case 2)** $n < N$:

We know that $n + 1 \le N$, therefore the left inequality holds for $n+1$, and we can conclude, as in step **(3) in Lemma 1**, that $n\,a_{n+1} < a_0 + a_1 + \cdots + a_n$, i.e. $a_{n+1} < \frac{a_0 + a_1 + \cdots + a_n}{n}$, which violates the right inequality.

**Case 3)** $n = N$:

The left inequality is guaranteed to be true, as $n = N$; for the right inequality, we can use that the left inequality fails for $N + 1$:

$(N+1)\,a_{N+1} \ge a_0 + a_1 + \cdots + a_{N+1}$

$\Longleftrightarrow N\,a_{N+1} \ge a_0 + a_1 + \cdots + a_N \Longleftrightarrow a_{N+1} \ge \frac{a_0 + a_1 + \cdots + a_N}{N}$

0. The middle term looks sort of like an average. Try multiplying both sides by n. Unclear what to do with that…

1. Hmm. We have two inequalities. Let’s figure out whether we can get at least one of them to hold for some n and then worry about getting both. [= start by proving a weaker statement]

2. A silly guess – maybe LHS simply implies RHS? A quick computation shows that it doesn’t… but as a by-product of this computation, I got: if RHS holds for n, then LHS doesn’t hold for n+1! This sounds like progress.

3. Carrying this line of thought further, I immediately obtain that LHS and RHS are both “monotone” (hold up to some point and then don’t or vice versa).

4. By Observation 2, the only scenario we have to worry about is when neither LHS nor RHS holds. But then we immediately arrive at a contradiction with the assumption that a_n is increasing!
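The chain of observations above can be sanity-checked by simulation. A minimal sketch, where the random sequences and helper names are my own choices:

```python
import random

def lhs(a, n):
    """a_n < (a_0 + ... + a_n) / n, with the denominator cleared."""
    return n * a[n] < sum(a[: n + 1])

def rhs(a, n):
    """(a_0 + ... + a_n) / n <= a_{n+1}, with the denominator cleared."""
    return sum(a[: n + 1]) <= n * a[n + 1]

random.seed(0)
for _ in range(200):
    # random strictly increasing positive integers a_0 < a_1 < ...
    a = [random.randint(1, 5)]
    for _ in range(30):
        a.append(a[-1] + random.randint(1, 5))
    for n in range(1, 29):
        # Observation 2: if RHS holds at n, LHS fails at n + 1
        if rhs(a, n):
            assert not lhs(a, n + 1)
    # the conclusion: exactly one n satisfies both inequalities
    sols = [n for n in range(1, 29) if lhs(a, n) and rhs(a, n)]
    assert len(sols) == 1
print("all 200 random sequences check out")
```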

Something like: ah – the problem has to do with sums of arrays and some kind of average – seen that before… I have some techniques I could use.

It would be interesting to know if that intuition is 100% reliable (well I suppose Turing forbids 100% – 90% then).

I have a counterexample of failing intuition: the problem is, how many license plates numbered 000000–999999 can be used if each plate must differ from every other in at least 2 places (rather than 1)? Should be easy, right? But I got stuck. I guess I need a flash of insight like in the dominoes-on-a-chessboard problem.

PS I am not a mathematician, but an engineer who reads your blog with great interest.

First: try to get the left-hand inequality, but rewrite it as

$n\,a_n < a_0 + a_1 + \cdots + a_n$

i.e.

$(n-1)\,a_n < a_0 + a_1 + \cdots + a_{n-1}$

Rewrite this as

$(a_n - a_1) + (a_n - a_2) + \cdots + (a_n - a_{n-1}) < a_0$

(Originally I made an indexing error here, and it held me up a little later on, but the analysis continued as in the correct version.)

Now we can guess the maximum that $n$ can be such that this is true (here I was thinking in terms of differences of terms, but didn’t notice that they formed a sequence of natural numbers just yet). That is, what is the largest number of terms on the LHS with this true? Namely when the gaps between the successive $a_i$s are all 1, so we can solve $\frac{n(n-1)}{2} < a_0$ for $n$. This gives a finite bound on the possible solution once we know $a_0$ (this fact turned out not to be useful, but it was encouraging to know the search space was finite). At this point, might as well write this inequality as

$d_1 + 2 d_2 + \cdots + (n-1)\,d_{n-1} < a_0$

where $d_j = a_{j+1} - a_j$ (except first time round, this expression was slightly different, due to the mistake above: the factor was off by one).

Now let’s try the second inequality. Again, change it to

$a_0 + a_1 + \cdots + a_n \le n\,a_{n+1}$.

Playing the same trick with the differences gets us the second inequality in the form

$a_0 \le d_1 + 2 d_2 + \cdots + n\,d_n$

Now I thought to perhaps find some extremum for $n$ in this case as well, but it was quickly apparent that nothing would come of it. Then I fixed the first rewritten inequality to what is above, and put it together to get

$d_1 + 2 d_2 + \cdots + (n-1)\,d_{n-1} < a_0 \le d_1 + 2 d_2 + \cdots + n\,d_n$

and it was clear I could reformulate the question as follows: given any sequence $(d_j)$ of natural numbers and any $a_0$ (chosen independently!), there must be a unique $n$ such that the inequalities hold. The previous inequality could be rewritten as $f(n-1) < a_0 \le f(n)$, where $f(n) = d_1 + 2 d_2 + \cdots + n\,d_n$, whence, since $f$ is an increasing unbounded function (and, now that I think of it, $f(0) = 0$ is the only sensible definition given the original statement of the problem), given any natural number $a_0$ it must necessarily fall into precisely one such interval $(f(n-1), f(n)]$.
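This interval reformulation is easy to check numerically. In the sketch below, the helper names (`f`, `unique_n`) and the sample sequence are mine; the gaps of a strictly increasing integer sequence are weighted by their index and the starting term must fall into exactly one of the resulting intervals:

```python
def f(d, n):
    """f(n) = 1*d_1 + 2*d_2 + ... + n*d_n, with f(0) = 0."""
    return sum(j * d[j - 1] for j in range(1, n + 1))

def unique_n(a0, d):
    """Return the unique n with f(n-1) < a0 <= f(n)."""
    n = 1
    while not (f(d, n - 1) < a0 <= f(d, n)):
        n += 1
    return n

a = [7, 9, 10, 13, 17, 18, 22, 23]                   # strictly increasing integers
d = [a[j + 1] - a[j] for j in range(1, len(a) - 1)]  # gaps d_1, d_2, ...
n = unique_n(a[0], d)
# cross-check against the original pair of inequalities
assert n * a[n] < sum(a[: n + 1]) <= n * a[n + 1]
print(n)
```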

As an aside, who makes up all these problems anyway? I imagine that it is not an easy task. I recently found old problems of this sort, though probably not quite as hard, in a box in my attic that I had worked on at a math program for high school students at Berkeley about 45 years ago. I gave copies of the problems to UW Math because they deal with talented kids and I knew that creating new problems is not easy.

$\frac{a_0 + a_1 + \cdots + a_n}{n}$ is almost an average. I want to work with averages, so I’ll replace it with

$\frac{n+1}{n}\,\mu(a_0, \dots, a_n)$.

(Here $\mu$ denotes the arithmetic mean.)

Actually I compressed it a little further and wrote

$a_n < \frac{n+1}{n}\,\mu(a_0, \dots, a_n) \le a_{n+1}$.

Then I wrote two columns, one containing the $a_n$’s and one containing the expressions $\frac{a_0 + a_1 + \cdots + a_n}{n}$. With a little inspection, it was clear what was going on: at the start, the first column is less than the second column. But the sequence is increasing, so eventually, the first column must grow bigger than the second column. And once it does, it will stay that way. The crossing-over point corresponds to the unique integer in the problem.
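The two-column experiment is easy to reproduce; a quick sketch (the sequence is an arbitrary illustration of mine):

```python
# First column: a_n.  Second column: the almost-average (a_0 + ... + a_n) / n.
a = [4, 6, 7, 9, 12, 15, 19, 24]
for n in range(1, len(a)):
    almost_avg = sum(a[: n + 1]) / n
    marker = "<" if a[n] < almost_avg else ">="
    print(f"n={n}: a_n={a[n]:2d} {marker} {almost_avg:6.2f}")
```

For this sequence the first column overtakes the second at n = 3, so the last row showing `<` (n = 2) is the unique integer of the problem.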

1) Is it a statement about averages?

2) Ohh, I see. Maybe it will be useful to write the left inequality as $n\,a_n < a_0 + a_1 + \cdots + a_n$?

3) Is the last *n* to satisfy the left inequality the unique one to satisfy the right?

4) Could computer experimentation help here (and in other Olympiad problems)?

5) Can we automate a proof? More generally, can we create a good computerized IMO participant?

6) Is it part of a more general interesting question? If yes, what? If it is not interesting, are the problems I study more interesting? Why?

7) Did I know this question? (Or a very similar one.)

8) Am I still interested in these kinds of questions? Should I be? I was quite interested when I was 16 years old, but should I be committed to such questions now (along with other interests from that time)? Is keeping interested a way to keep one’s youth?
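On question 4: at least for sanity-checking, yes – a brute-force experiment over random increasing sequences finds exactly one solution every time. A throwaway sketch (all parameters are arbitrary choices of mine):

```python
import random

random.seed(1)
for trial in range(1000):
    # random strictly increasing sequence of positive integers
    a = [random.randint(1, 10)]
    for _ in range(40):
        a.append(a[-1] + random.randint(1, 10))
    # all n with  a_n < (a_0 + ... + a_n)/n <= a_{n+1}  (denominators cleared)
    hits = [n for n in range(1, len(a) - 1)
            if n * a[n] < sum(a[: n + 1]) <= n * a[n + 1]]
    assert len(hits) == 1
print("exactly one n in each of 1000 trials")
```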

$a_1 < \frac{a_0 + a_1}{1} \le a_2$

or else

$a_2 < \frac{a_0 + a_1 + a_2}{2} \le a_3$

or else

$a_3 < \frac{a_0 + a_1 + a_2 + a_3}{3} \le a_4$

or else …

It immediately jumped out at me that if the RHS of the first thing was false, then that automatically made the LHS of the next thing true, and similarly between the next pair in a way that would obviously extend inductively.

So I tried a couple of trivial rearrangements, such as writing down the first differences of the sequence of not-quite-averages, and didn’t go far before I thought to multiply up all the statements to get rid of the denominators:

$a_1 < a_0 + a_1 \le a_2$

or else

$2 a_2 < a_0 + a_1 + a_2 \le 2 a_3$

or else

$3 a_3 < a_0 + a_1 + a_2 + a_3 \le 3 a_4$

or else …

and then subtracted $a_1$ from all three sides of the first statement, $a_1 + a_2$ from all three sides of the second, $a_1 + a_2 + a_3$ from the third, and so on, to get

$0 < a_0 \le a_2 - a_1$

or else

$a_2 - a_1 < a_0 \le (a_3 - a_1) + (a_3 - a_2)$

or else

$(a_3 - a_1) + (a_3 - a_2) < a_0 \le (a_4 - a_1) + (a_4 - a_2) + (a_4 - a_3)$

or else …

and now it’s clear that what we’re really saying is that $a_0$ must lie between exactly one pair of consecutive terms of this secondary sequence, so if we can show the sequence is increasing then we’re done. And it obviously is: if you set $b_n = (a_n - a_1) + (a_n - a_2) + \cdots + (a_n - a_{n-1})$ (for $n \ge 1$, so we include $b_1 = 0$), then $b_{n+1} - b_n = n\,(a_{n+1} - a_n) > 0$ by increasingness of the $a_i$. So clearly there is a unique $n$ such that $b_n < a_0 \le b_{n+1}$.
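For completeness, the increasingness computation can be written out in full. Writing $b_n = (a_n - a_1) + \cdots + (a_n - a_{n-1}) = \sum_{i=1}^{n}(a_n - a_i)$ for the secondary sequence (the name $b_n$ is my own), the difference telescopes:

```latex
b_{n+1} - b_n
  = \sum_{i=1}^{n+1} (a_{n+1} - a_i) - \sum_{i=1}^{n} (a_n - a_i)
  = \sum_{i=1}^{n} \bigl( (a_{n+1} - a_i) - (a_n - a_i) \bigr)
  = n \, (a_{n+1} - a_n) \;\ge\; n,
```

since the $i = n+1$ term of the first sum is zero, and a strictly increasing integer sequence has $a_{n+1} - a_n \ge 1$. In particular $b_n$ is strictly increasing and unbounded, which is exactly what the uniqueness argument needs.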

So this is saying that the mean of the first n terms is between the nth term and the (n+1)th term. Well that can’t be right, obviously the mean is smaller than the biggest term, hmm, must have missed something…

…

…Oh ok, there are actually n+1 terms in the sum so it’s not the mean, just something a bit like the mean… in fact I suppose it sort of tends towards the mean

Decided to call the mean-like term y_i. Did some vague thinking about what the y_is were for different sequences – 1,2,3…; 101,102,103… etc. Started thinking about the y_is in terms of whether they’re “too small” (y_i <= x_i), “too big” (y_i > x_(i+1)) or ”just right” (between x_i and x_(i+1), as required). Imagined the sort of standard thing that might be possible where you start with an (in)equality and replace terms with things bigger/smaller to get a new, more useful inequality. Thought about things like how a large x in the sequence makes the next y be large.

Now with pencil and paper, tried some of this, and in among some confusing myself, got that if y_i is too small, y_(i+1) is too small, and that if y_i is just right, y_(i+1) is too small (actually these were identical arguments but I did it twice). So there can’t be more than one just-right y, and once it gets too small it stays there. But y starts too big (y_1 = x_0+x_1 > x_1) so I’d better check it doesn’t stay too big forever.

Some more faffing and thinking: it seems to make sense that the ys can’t stay too big, since that means the xs are all bounded by the ys, which are bearing down on the mean, so eventually the whole thing will get trapped. But the same sort of thing I was doing above wasn’t telling me anything (in retrospect for very obvious reasons). After a while I got my head around what needed doing and started doing the right things. It seemed that for y_2 to be too big, x_3 needed to be less than y_1. Similarly, x_4 then needed to be less than y_1. Okay, this seemed to be it. The xs increase, so they can’t all be less than y_1. I formalised this into an induction argument, showing that the ys can’t stay too big forever. Done!

No wait, not done. The ys could go straight from too big to too small, right? Hmm, I could do with proving that if y_i is too big, y_(i+1) can’t be too small. Thankfully this worked as an easy argument along the same lines as the too-small-implies-too-small one. Done!
