Having a blog gives me a chance to defend myself against a number of people who took issue with a passage in Mathematics: A Very Short Introduction, where I made the tentative suggestion that an abstract approach to mathematics could sometimes be better, pedagogically speaking, than a concrete one — even at school level. This was part of a general discussion about why many people come to hate mathematics.
The example I chose was logarithms and exponentials. The traditional method of teaching them, I would suggest, is to explain what they mean and then derive their properties from this basic meaning. So, for example, to justify the rule that x^(a+b) = x^a x^b, one would say something like this: if you have a copies of x followed by b copies of x and you multiply them all together, then you are multiplying a+b copies of x together. Then, having established this rule, you would turn to the rule log(ab) = log(a) + log(b) and justify it by raising 10 (or e, if you had got on to that) to both sides, arguing that you get ab in both cases. This would itself be justified by the rule for exponentiation and the rule that 10^(log a) = a. To justify that last rule you would shout at the children that this is what log means.
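The three rules in the traditional account are easy to spot-check numerically. Here is a small illustrative Python sketch (the particular values of x, a, b, p and q are arbitrary choices of mine, and floating-point equality is tested only up to a small tolerance):

```python
import math

# Rule 1: x^(a+b) = x^a * x^b, checked at one arbitrary point.
x, a, b = 2.0, 3.0, 4.5
assert math.isclose(x ** (a + b), (x ** a) * (x ** b))

# Rule 2: log(ab) = log(a) + log(b), here with base-10 logs.
p, q = 7.0, 11.0
assert math.isclose(math.log10(p * q), math.log10(p) + math.log10(q))

# The defining relation: 10^(log a) = a.
assert math.isclose(10 ** math.log10(p), p)

print("all three identities hold (numerically)")
```

Of course, a numerical check at a few points is not a proof; the point of the passage above is precisely about where the justification of these rules should come from.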
And yet, amazingly, at least 90 percent of children will be lost by that explanation, and will go on to make mistakes such as log(a+b) = log(a) + log(b). Could there be a better way?
Here’s a different way, at any rate. Perhaps it shouldn’t replace the traditional way, but it could certainly supplement it. It is to admit, frankly, that the notion of multiplying a string of a copies of x together does not make sense when a is anything other than a positive integer, and to focus, from then on, on the properties of the exponential and logarithmic functions, and especially the rules x^(a+b) = x^a x^b and log(ab) = log(a) + log(b). If these were presented as in some sense defining the concepts of exponentiation and logarithms, then a number of traditional mistakes would be less likely to occur (and when they did, you could just say, “You’ve forgotten the definition,” rather than, “You don’t understand the meaning of this concept if you could make a mistake like that”). Moreover, when pupils went on to do simple exercises, like working out the logarithm of the square root of x, they would be more likely to be guided to the correct proof: that the defining property of the square root y of x is that y^2 = x; that this in turn means that y·y = x; that the log rule then tells you that log(y) + log(y) = log(x); and so log(y) = log(x)/2.
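The chain of reasoning at the end of that paragraph can be set out as a short derivation, in which the log rule is used exactly once and does all the work:

```latex
\begin{align*}
y \cdot y &= x
  && \text{(defining property of } y = \sqrt{x}\text{)} \\
\log(y) + \log(y) &= \log(x)
  && \text{(by the rule } \log(ab) = \log(a) + \log(b)\text{)} \\
2\log(y) &= \log(x) \\
\log(y) &= \tfrac{1}{2}\log(x).
\end{align*}
```

Note that nowhere in this derivation does one need to say what log "really is": every step is licensed either by the defining property of the square root or by the defining rule for log.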
Those who criticized this view tended to think that I was advocating pure rote learning rather than understanding. Actually, I was suggesting that a true understanding of a sophisticated concept such as the exponential function involves letting go of the intuitive meaning (once it has served its purpose of telling you the rules you want the function to satisfy) and using the defining properties instead.
Behind that suggestion is a more general claim, which is that mathematicians greatly underestimate the extent to which they think syntactically rather than semantically. When you work with the log function, it feels as though you are somehow in direct contact with the function, but this feeling is as much of an illusion as the feeling that you are actually seeing a cube when you decide to visualize it. (You don’t agree that that is an illusion? Then what colour was the one you just visualized?) When you actually write an argument using logs, you almost always use the familiar properties of the function (including more advanced properties, such as that it grows slowly) and not this direct contact with the meaning of the function. To put that more precisely, you don’t have to say to yourself things like, “This is the number x such that, if I raise e to the power x, I get a.” You don’t even need to say that when you take exp of a log—you just use the syntactic rule that exp and log cancel.
I write this expecting that it will still be an unpopular view. But I think I can defend it (or some suitably reexpressed version of it).