$$\sum_{i_1=1}^{n} 1 = \frac{n}{1!}$$

$$\sum_{i_1=1}^{n}\sum_{i_2=1}^{i_1} 1 = \frac{n(n+1)}{2!}$$

$$\sum_{i_1=1}^{n}\sum_{i_2=1}^{i_1}\sum_{i_3=1}^{i_2} 1 = \frac{n(n+1)(n+2)}{3!}$$

$$\vdots$$

$$\sum_{i_1=1}^{n}\sum_{i_2=1}^{i_1}\sum_{i_3=1}^{i_2}\cdots\sum_{i_k=1}^{i_{k-1}} 1 = \frac{n(n+1)(n+2)\cdots(n+k-1)}{k!}$$

Since the LHS is just summing 1 multiple times, it is clearly an integer. Hence k! divides n(n+1)(n+2)…(n+k-1).
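These nested sums are easy to check numerically. A minimal Python sketch (the helper names `nested_sum` and `rising` are mine):

```python
from math import factorial

def nested_sum(n, k):
    """The k-fold nested sum: sum_{i_1=1}^{n} sum_{i_2=1}^{i_1} ... 1."""
    if k == 0:
        return 1
    return sum(nested_sum(i, k - 1) for i in range(1, n + 1))

def rising(n, k):
    """The rising product n(n+1)...(n+k-1)."""
    result = 1
    for j in range(k):
        result *= n + j
    return result

# Each nested sum equals n(n+1)...(n+k-1)/k!, so multiplying by k! recovers
# the rising product; in particular k! divides n(n+1)...(n+k-1).
for n in range(1, 8):
    for k in range(1, 5):
        assert nested_sum(n, k) * factorial(k) == rising(n, k)
```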

As to the original question, I suspect that the two proofs are incomparable (i.e. have different generalizations).

inaccessible (or at least not obvious) beforehand. It seems clear that this

point will not be well-defined.

be worth making the following remark. If we work with cyclotomic polynomials (and we understand enough about them beforehand), then the proof using cyclotomic polynomials is genuinely simpler. Specializing to $x = 1$ gives the first proof; for the second proof, using the polynomials (then specializing to $x = 1$) is more straightforward in a way, because you get the necessary power of each cyclotomic polynomial the first time, so there is no need to worry about having missed higher powers, as was necessary with rational primes. I won’t try to put the LaTeX here.

follow up to Gil Kalai.

to give something like what Gil Kalai might have had in mind. It is necessary to have a fairly minimal understanding of how to factorise these into cyclotomic polynomials, but once one assumes that, I think one

can reason as follows to show that $(x-1)(x^2-1) \ldots (x^k-1)$ divides $(x^n - 1)(x^{n+1}-1) \ldots (x^{n+k-1} - 1)$ as a polynomial for any positive integer $n$.

For any positive integer $m$, the cyclotomic polynomial $\phi_{m}(x)$ divides $x^{r} - 1$ whenever $m$ divides the positive integer $r$. Essentially, the original second argument then tells us that

for each positive integer $b$ less than or equal to $k$, the cyclotomic polynomial $\phi_{b}(x)$ divides the second $k$-long product at least as often as it does the first. Hence the second

product of polynomials is an integral polynomial times the first

(I don’t think one actually needs uniqueness of factorization here, just the “natural” factorization of the given polynomials as products of cyclotomic polynomials.) Then one can specialize to $x = 1$ to get the integer result (after taking out a factor of $(x-1)^{k}$).
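If SymPy is available, the polynomial divisibility claimed here can be spot-checked directly; a sketch, not part of the original comment (the function names `q_factorial_poly` and `shifted_product` are mine):

```python
import sympy as sp

x = sp.symbols('x')

def q_factorial_poly(k):
    """(x - 1)(x^2 - 1)...(x^k - 1)."""
    return sp.prod([x**j - 1 for j in range(1, k + 1)])

def shifted_product(n, k):
    """(x^n - 1)(x^(n+1) - 1)...(x^(n+k-1) - 1)."""
    return sp.prod([x**(n + j) - 1 for j in range(k)])

# The remainder on polynomial division should always vanish.
for n in (1, 2, 5):
    for k in (1, 2, 3, 4):
        quotient, remainder = sp.div(sp.expand(shifted_product(n, k)),
                                     sp.expand(q_factorial_poly(k)), x)
        assert remainder == 0
```

The quotient here is a Gaussian binomial coefficient, so evaluating it at $x = 1$ recovers the ordinary binomial-coefficient integrality.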

Whether this says anything about the “sameness” of the proofs is

another question. It seems to suggest maybe that the

second proof is more general, and somehow “stronger”.

for the power of a given prime p dividing n!, using the integer part function,

dividing n by successively higher powers of p, which is more or less what goes on in the second proof originally given. (One can also, of course, do it apparently more arithmetically and in one step, by subtracting from n the sum of the digits in the p-adic expansion of n and dividing the result by p-1.) But, as was realised by Chebyshev, once you can do that, the fact that there are other ways to estimate n! (e.g., Stirling’s formula) tells you quite a lot about the distribution of prime numbers.
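The two ways of computing the power of a prime $p$ in $n!$ mentioned here can be sketched in Python (the helper names are mine):

```python
from math import factorial

def vp(m, p):
    """Exponent of the prime p in the factorization of m."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

def legendre_floors(n, p):
    """Legendre's formula: sum of floor(n / p^i) over i >= 1."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def legendre_digits(n, p):
    """One-step variant: (n - digit sum of n in base p) / (p - 1)."""
    s, m = 0, n
    while m:
        s += m % p
        m //= p
    return (n - s) // (p - 1)

# All three computations agree.
for n in (10, 25, 100):
    for p in (2, 3, 5, 7):
        assert vp(factorial(n), p) == legendre_floors(n, p) == legendre_digits(n, p)
```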


For what it’s worth, here is an argument which combines features of those above.

It is, of course, longer than either.

\medskip

\noindent {\bf PROOF 1:} We prove by induction on $k$ that the product of any $k$

consecutive integers is divisible by $k!$. This is true when $k = 1$, so suppose

that $k > 1$ and that the result is established for smaller positive integers.

\medskip

Let $n,n+1,\ldots, n+k-1$ be any $k$ consecutive integers. Choose any $t$

with $1 \leq t \leq k-1$. Then (by induction) the product of the first $t$ of these

is divisible by $t!$, and the product of the next $k-t$ is divisible by $(k-t)!$.

Hence the product of all $k$ of them is divisible by $t!(k-t)!$. Suppose then that

the product is not divisible by $k!$. Then there is a prime $p$ such that the

power of $p$ dividing $k!$ is strictly greater than the power of $p$ dividing

$t!(k-t)!$ for each $t$ with $1 \leq t \leq k-1$. In particular, any such $p$

divides $k$, otherwise taking $t = 1$ gives a contradiction. But now set

$t = p^{a}b$ where $a,b$ are positive integers and $p$ does not divide $b.$

Then the binomial coefficient $\frac{t!}{p^{a}!(t-p^{a})!}$ is not divisible

by $p$, as it’s $$\frac{t(t-1)\ldots (t-(p^{a} -1))}{p^{a}(p^{a}-1) \ldots 2 \times 1},$$

and for $0 \leq i \leq p^{a}-1$ the highest power of $p$ dividing $p^{a}-i$ and $t-i$ is the same.

This last argument is used in one proof of Sylow’s theorem

(by H. Wielandt, and by G. A. Miller before him). Hence we are done by induction \emph{unless} $k = p^{a}$,

and in that case, we need a different argument to show that the power of $p$ dividing $k!$

also divides our $k$-long product. Suppose then that $k = p^{a}$ for some prime $p$ and

positive integer $a$. Then no two of the integers $n,n+1,\ldots ,n+k-1$ are congruent (mod $k$). But there are $k$ of them, so every congruence class (mod $k$) is represented

once and only once in the list. Since the power of $p$ dividing $m_{i}k+i$ is at least as great

as the power of $p$ dividing $i$ for $1 \leq i \leq k$ and any integer $m_{i}$, we see that the power

of $p$ dividing $k!$ divides the product $n(n+1)\ldots (n+k-1)$ in this case too.
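Both the theorem itself and the key binomial-coefficient step in the proof above are easy to spot-check numerically; a small Python sketch:

```python
from math import comb, factorial, prod

# The theorem: k! divides the product of any k consecutive integers.
for k in range(1, 8):
    for n in range(-5, 10):
        assert prod(n + i for i in range(k)) % factorial(k) == 0

# The Wielandt/Miller step: for t = p^a * b with p not dividing b,
# the binomial coefficient C(t, p^a) is not divisible by p.
for p in (2, 3, 5):
    for a in (1, 2):
        for b in (1, 3, 7):
            if b % p == 0:
                continue
            t = p**a * b
            assert comb(t, p**a) % p != 0
```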

The Jacobi triple product identity is a classical result, one of the most commonly used identities. In his survey, Sylvester presented three proofs: a bijective one, an involutive one (using an explicit sign-reversing involution), and another bijective one which he attributed to Hathaway. In the next 120 years, over a dozen bijective proofs have been published. In my bijection survey I argued that all these proofs are **exactly** the same up to a change of variables. This includes Sylvester’s and Hathaway’s proofs, which previous authors confused anyway (sometimes very loudly). Later on, I discovered that the involutive proof is in fact a “projection” of the bijective proof, a claim which I was able to make completely formal here. In other words, we can now **prove** that Sylvester’s three proofs (and all subsequent combinatorial proofs) are the same.
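For reference (the comment does not state it), one standard form of the Jacobi triple product identity is:

```latex
\prod_{m=1}^{\infty}\left(1-q^{2m}\right)\left(1+q^{2m-1}z\right)\left(1+q^{2m-1}z^{-1}\right)
  \;=\; \sum_{n=-\infty}^{\infty} q^{n^{2}} z^{n}.
```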

I should mention that the basic “projection” idea goes back to Andrews (1979), who informally observed that Franklin’s celebrated involution for Euler’s pentagonal number theorem follows from a bijection for a more involved identity, also due to Sylvester. Also, just because two proofs are the same does not mean they are redundant – their different presentations can inspire different generalizations and other ideas.

A couple of more recent articles about proof identity are below. I don’t understand them but they look interesting anyway. They are from Wikipedia.

That last comment is a bit thin – a constant sequence has the multiplicative property we’re looking for without satisfying f(a).f(b) divides f(a.b). The powers sequence satisfies the multiplicative property without satisfying any of the conditions I was thinking about in which some members of the sequence are forced to be coprime.

Fergal, the condition you give does not work for a sequence which is constantly 2.

To get any useful condition, divide out first by the hcf of the sequence.

The sequence of powers of 2 (or r) has the multiplicative property, each member of the sequence divides every subsequent member, and the hcf of members is 1.

OK let me try to see where my argument is wrong. Let’s go for the sequence 1,2,4,2,1,4,1, which is the minimal sequence that starts 1,2,4 and has the property that always divides I’ll take Then and My mistake is the simple one that it just isn’t true that the number of multiples of 2 in an interval of length 3 is minimized when that interval starts at 1.

I think perhaps what I need is the condition that if and only if but I haven’t checked. This stronger property does at least apply to the Fibonacci sequence.

Consider f(1) = 1, f(2) = 6, f(3) = 6, f(4) = 6, f(5) = 5, f(6) = 6, f(7) = 7

f(1)f(2)f(3) = 1.6.6 = 36,

but

f(5)f(6)f(7) = 5.6.7 = 210, which is not a multiple of 36.

With f(n)=n the numerator gains new factors just fast enough to keep up with the denominator but nothing in your condition forces that.

I think f(a).f(b) divides f(a.b) is the condition you want. It would force f(6) to be a multiple of 36 in the counterexample above.
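The numbers in this counterexample, and the failure of the proposed condition at a = 2, b = 3, check out; a quick Python sketch:

```python
f = {1: 1, 2: 6, 3: 6, 4: 6, 5: 5, 6: 6, 7: 7}

# f(1)f(2)f(3) = 36 does not divide f(5)f(6)f(7) = 210 ...
assert (f[5] * f[6] * f[7]) % (f[1] * f[2] * f[3]) != 0

# ... and the condition f(a)f(b) | f(ab) indeed fails at a = 2, b = 3,
# where it would force f(6) to be a multiple of 36.
assert f[2 * 3] % (f[2] * f[3]) != 0
```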

First, I would like to thank you for editing my LaTeX time and again, if there should be anything at all at which I do reasonably well, then it’s definitely not using computers. Also, before writing anything at all, I should admit that I’m currently suffering from a really bad cold, and have taken a slight overdose of painkillers, so maybe it’s going to be nonsensical.

Consider the following sequence: , , , , , , for all . At the moment, I believe that divides whenever divides , but that equals and is not dividing , i.e. , contrary to expectation.

One of the examples in the algorithms paper is a way of showing non-transitivity of any potential notion of “sameness”. The idea is to take a task that can be performed in stages, with two different ways of doing each stage. Suppose the task takes stages. Then one can define algorithms by picking between 0 and and using the first method for the first stages and the second method for the remaining stages. Then one is inclined (for certain examples) to say that any two consecutive algorithms are essentially the same, but that the first and last algorithms are not essentially the same.

A natural question, therefore, is whether a sorites-type argument can be used to show something similar about proofs. I’m sure it can, but it would be nice to have a good example rather than, say, something artificial that has an algorithm embedded into it.

Maybe one approach would be a proof by induction where there are two very different ways of proving the inductive step. Then one could prove the first steps of the induction in one way and the remaining steps in the other. Of course, the intermediate proofs would be extremely artificial, but a general notion of sameness of proofs should be able to cope with artificial proofs as well as natural ones.

Having said all that, it seems fairly obvious, especially after reading that paper, that the right approach to the question “When are two proofs the same?” is not to struggle to find an equivalence relation, and probably not even to find a metric, but rather to find good ways of refining the question. By that I mean that there are many interesting ways that two proofs can be related to each other, and one should not try to cram all those potential relationships into one single yes/no relationship.

Here are three examples of the kinds of relationships I am talking about.

1. Sometimes two proofs have the same general structure, but at some point you need to construct an object with certain properties, and sometimes there are many genuinely different ways of constructing that object. In such a situation it seems natural to say that the proofs use basically the same idea but that there is some flexibility about that detail. (Of course, if constructing the object in question is the main difficulty of the argument, then one would not want to say that the two proofs are the same.)

2. Sometimes one proof follows the same steps as another, except that it does one of the steps in a more laborious way (perhaps for good reasons such as wanting a more explicit argument).

3. Sometimes one proof is nonconstructive but in a way that can easily be made constructive, and the other proof is the constructive version.

More generally, proofs have internal structure, and there seems to be more hope of talking about when components of two proofs are essentially the same than when two proofs in their entirety are the same. (For instance, thinking about components of proofs immediately deals with the sorites-type examples.)

The natural common generalization of the Fibonacci example and the positive integers seems to me to be the following, which has number theory woven into the question itself. Let be a function such that implies Then for every

Let me just check that that’s the right condition. Let be a prime and for each let be minimal such that (with the convention that this is infinite if there is no such ). Then and the numbers such that is a multiple of but not of are precisely the multiples of that are not multiples of If is an interval, then the number of times goes into is the sum over all of the number of multiples of that lie in So the second proof carries over straightforwardly.

It doesn’t seem completely obvious that one couldn’t give a more combinatorial proof, but first one would probably want a more combinatorial characterization of functions with the above property and I don’t at the moment have any ideas about that.
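For the Fibonacci case specifically, both the strong divisibility property and the resulting integrality (the “fibonomial” coefficients) are easy to verify numerically; a minimal Python sketch (helper names mine):

```python
from math import gcd, prod

def fib(n):
    """The n-th Fibonacci number, with fib(1) = fib(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Strong divisibility: gcd(F_a, F_b) = F_gcd(a, b).
for a in range(1, 15):
    for b in range(1, 15):
        assert gcd(fib(a), fib(b)) == fib(gcd(a, b))

# Consequence: F_1 F_2 ... F_k divides any product of k consecutive
# Fibonacci numbers F_{n+1} F_{n+2} ... F_{n+k}.
for k in range(1, 8):
    for n in range(0, 10):
        top = prod(fib(n + i) for i in range(1, k + 1))
        bottom = prod(fib(i) for i in range(1, k + 1))
        assert top % bottom == 0
```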

Questions: (1) Does anybody see a purely combinatorial proof of this (not necessarily involving copulating rabbits)?

(2) Is there any sense in which we can say that the first proof for the integrality of binomial coefficients also shows the above assertion?

“Thus, when one considers a specific device and asks whether an algorithm’s space requirements are an essential

characteristic of the algorithm, that is, whether one should count two

algorithms as different just because one uses much more space than the

other, then the answer is likely to be “yes” once the space requirements

are large enough but “no” if they are small.”

If you have axioms A1, …, AN and you can show that your statement is undecidable, but adding axiom B makes it provable (with proof PB) and adding axiom C also makes it provable (with proof PC), and axiom B is not equivalent to axiom C, then PB and PC cannot be the same proof.

Is there an example of this? Things that can be proven with and without the axiom of choice seem like good candidates.
