Basic logic — relationships between statements — converses and contrapositives | Gowers's Weblog

Damn. OK, I’ll add to (i) that n can be 1.

1 is a positive integer. 1 is not prime. 0! + 1 = 2, which is divisible by 1. Hence, 1 is a counterexample.

I intended to write what I actually did write, but your suggestion is clearer, so I’ve adopted it.

“But then $latex |x - y|$ is a positive number that is not greater than $latex |x - y|$.”

it seems like you may have intended to write something else, like

“But then $latex |x - y|$ is a positive number, which shows that…”

Thanks — I see your point about integrating now. To get maths you do exactly as you are doing but insert “latex” after the first dollar with no space between the dollar and “latex”. So if you want, for example, to write $latex \int_0^1f(x)dx$, then what you actually type in is £latex \int_0^1f(x)dx£ except that the pound sign should be a dollar sign.

Uh, there are a lot of interesting points here. (By the way, what do I have to write to get math notation?)

We have a meta-theorem for first-order logic which says that a formula $latex \phi(x)$ with a free parameter $latex x$ has a proof iff $latex \forall x\, \phi(x)$ has a proof.

In other words, as far as proving a statement goes, it does not matter whether the outer universal quantifiers are there. This is the precise meaning of “$latex x$ is implicitly quantified”, I think. Also, this meta-theorem is sometimes misunderstood as saying that $latex \phi(x)$ is equivalent to $latex \forall x\, \phi(x)$, which is nonsense.

Suppose I tell you that I integrated a function on $latex [0,1]$ and got some number. Can you tell which function it was? No. Suppose I tell you that I universally quantified a statement over some domain and got true. Can you tell me which statement it was? No. We must not confuse the truth value of $latex \forall x\, \phi(x)$ with the expression $latex \forall x\, \phi(x)$. A fair analogy would be this: if I show you the expression $latex \int_0^1 f(x)\,dx$ then you can tell what $latex f$ is. Likewise, if I show you the expression $latex \forall x\, \phi(x)$ then you can tell what $latex \phi$ is.

Yes, what you say about having free parameters in the middle of the proof is essentially what I am trying to say. Speaking as a logician, you simply must have free parameters, because the rule of inference for universal quantifiers requires you to use them. Namely, in order to prove $latex \forall x\,(P(x) \Rightarrow Q(x))$ you must do the following: pick a letter which has not been used so far, say $latex y$, assume $latex P(y)$ and prove $latex Q(y)$. Here the fresh letter $latex y$ is a free parameter, and it cannot be eliminated without significant changes to how we write things down.
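A minimal sketch of this rule of inference in Lean, assuming predicates P and Q on the natural numbers (the hypothesis `step` stands in for whatever argument actually establishes the implication):

```lean
-- ∀-introduction: to prove ∀ x, P x → Q x, pick a fresh letter y,
-- assume P y, and prove Q y.
example (P Q : Nat → Prop) (step : ∀ n, P n → Q n) : ∀ x, P x → Q x := by
  intro y           -- the fresh letter y: a free parameter in mid-proof
  intro hPy         -- assume P y
  exact step y hPy  -- prove Q y
```

The `intro y` step is exactly the point: the proof cannot proceed without naming a free parameter.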

1. Write out the proof. We start with P(n), then after several lines get to R(n), then after more lines to S(n), then eventually to Q(n). There is no binding of n in any of this.

2. Use the deduction theorem several times, with liberal application of brackets, to convert the lines of proof into one gigantic sequence of nested conditionals.

3. Now we have a single open formula, with free n. Put one last big pair of brackets round it.

4. Finally, put the universal quantifier, for all n, on the front.

Then no logician will be able to accuse a mathematician of any sloppiness. But we do need to be sure that the deduction theorem applies to the logic we implicitly invoke in writing proofs. (I guess it does.)
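The four steps can be sketched in Lean, with the intermediate deductions supplied as hypotheses h₁–h₃ for illustration:

```lean
-- Steps 1–4: the body of the proof works with a free n; function
-- abstraction plays the role of the deduction theorem, packaging
-- the lines as P n → Q n, and ∀-introduction puts the quantifier
-- on the front.
example (P R S Q : Nat → Prop)
    (h₁ : ∀ n, P n → R n) (h₂ : ∀ n, R n → S n) (h₃ : ∀ n, S n → Q n) :
    ∀ n, P n → Q n := by
  intro n hP                   -- n stays free throughout the body
  exact h₃ n (h₂ n (h₁ n hP))  -- P n, then R n, then S n, then Q n
```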

As a matter of fact, I do have a half-written post in which I discuss at some length the difference between free and bound variables. However, I’m not sure I agree with your second sentence. I think that when such a sentence is uttered, the variable is sometimes free and sometimes universally quantified — but only implicitly. For instance, if I say, “I discovered a cool fact yesterday: if $latex p$ is a prime of the form $latex 4m+1$ then it can be written as a sum of two squares,” then I would maintain that what I actually mean is that *every* prime of the form $latex 4m+1$ can be written as a sum of two squares. However, if I start to prove this result and begin by saying, “Let $latex p$ be a prime of the form $latex 4m+1$,” then $latex p$ has that mysterious fixed-but-arbitrary status and perhaps it’s better to call it free.

I’m not sure I completely buy the function/integral analogy, but you may be able to persuade me. If I take a statement $latex P(n)$ that involves a parameter $latex n$ (presumably the analogue of the function) and form the statement $latex \forall n\, P(n)$ then I get a statement without parameters, just as if I integrate $latex f$ between 0 and 1 then I get a number that doesn’t depend on the variable that the function takes. But in the first case I get a statement from which I can deduce all the individual statements $latex P(n)$, whereas from the definite integral I can’t say anything about individual values of $latex f$. So it seems to me that integrating is throwing away information in a way that universally quantifying isn’t.
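The asymmetry can be made concrete in Lean: a universally quantified statement yields every one of its instances, which is exactly the kind of information a definite integral discards about its integrand.

```lean
-- From ∀ n, P n we recover each individual instance, e.g. P 5;
-- no analogous recovery of f's values is possible from ∫₀¹ f.
example (P : Nat → Prop) (h : ∀ n, P n) : P 5 := h 5
```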

I’m also interested by your last three words. I can see that having free parameters in the middle of proofs is extremely natural, and of course I do it myself, but why can’t one avoid it by simply universally quantifying everything? For instance, if I want to prove that $latex P(x) \Rightarrow Q(x)$ for every $latex x$, I would normally say, “Let $latex x$ be arbitrary and suppose that $latex P(x)$,” and proceed to deduce $latex Q(x)$. But …

Hmm … honesty compels me to leave that last paragraph there, but I think I now see the answer to my question. If, for example, my argument went $latex P(n) \Rightarrow R(n) \Rightarrow Q(n)$, then the first line of my proof basically has to be the assumption that $latex P(n)$ holds, with $latex n$ free. I certainly can’t start with $latex \forall n\, P(n)$ as my initial assumption (since I need to deduce $latex R(n)$ from $latex P(n)$ and not from the quite possibly false statement that $latex P(n)$ holds for all $latex n$). So if I “universally quantify over everything”, what I’m doing is taking $latex P(n)$ as my premise and then saying, “Oh, and by the way, this deduction works for all $latex n$,” at the end of the proof. Is that what you mean when you say that free parameters are unavoidable?
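The distinction shows up cleanly in Lean: from the quantified implications one can derive (∀ n, P n) → (∀ n, Q n), but taking ∀ n, P n itself as the initial assumption only ever yields that weaker conditional, not the intended ∀ n, P n → Q n.

```lean
-- ∀ n, (P n → Q n) entails (∀ n, P n) → (∀ n, Q n), but not
-- conversely: starting from the possibly false "∀ n, P n" as a
-- premise proves a genuinely weaker statement.
example (P Q : Nat → Prop) (h : ∀ n, P n → Q n) :
    (∀ n, P n) → (∀ n, Q n) :=
  fun hP n => h n (hP n)
```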

*Many thanks — corrected now.*