On Frege seeing what is in front of his nose

Take a mathematician of Frege’s generation, accustomed to writing the likes

(1) x^2 - y^2 = (x + y)(x - y)

(2) If x^2 - 4x + 3 = 0, then x = 1 or x = 3,

— and fancier things, of course!

Whatever unclear thoughts about ‘variables’ people may or may not have had once upon a time, they had surely been dispelled well before the 1870s, if not by Bolzano’s 1817 Rein analytischer Beweis (though perhaps that was not widely enough read?), then at least by Cauchy’s 1821 Cours d’analyse, which everyone serious will have read. Both Bolzano and Cauchy take claims like (1) and (2) to be true just if each instance is true, plain and simple, and clearly gloss various claims written with variables as claims holding ‘for any value of x’. Maybe it is worth noting, then, that the mathematicians of the day, at least when on their best behaviour, could be very decently clear about this!

But then it seems to be only the tiniest of steps to say outright that an ideal notation for such claims might have the form ‘for any value of x, C(…x…x…)’, and that such an explicit form is true when ‘C(…n…n…)’ is true whatever ‘n’ names. So — looked at from this angle — the wonder is not that Frege came up with his basic account of the logical form of expressions of mathematical generality like (1) and (2), but that no one had quite said as much before.
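To make the step vivid, here is what fully explicit versions of (1) and (2) would look like in the now standard notation (not, of course, Frege’s own two-dimensional script):

(1′) ∀x∀y (x^2 - y^2 = (x + y)(x - y))

(2′) ∀x (x^2 - 4x + 3 = 0 → (x = 1 ∨ x = 3))

And each counts as true because every instance, e.g. 5^2 - 3^2 = (5 + 3)(5 - 3), is true.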

If you go back to Bolzano’s 1810 Beyträge, what happens there (with hindsight) seems very odd indeed. After all, here is someone who within a few years — in his 1816 Der binomische Lehrsatz and (even more) the 1817 Rein analytischer Beweis — is very clear indeed about variables and their use, when making essential practical use of quantification in talking about continuity etc. Yet in the earlier Beyträge, his Contributions to a Better-Grounded Presentation of Mathematics, when Bolzano turns to talking about logic and the principles of deduction, he looks quite antediluvian — and as far as I know he never revisited the logical basics to try to do better. OK, he says the mathematician will need more principles than we’ll find in the traditional syllogistic, and your hopes rise just for a moment: but the additional principles he comes up with are such a very limited and disappointing lot — the likes of A is an M, A is an N, so A is an M-and-N. It seems that the Bolzano of the Beyträge is — despite his disagreements and amendments — so thoroughly soaked in Kant and in broadly Aristotelian logic that he just can’t see what is right in front of his nose in the mathematician’s use of variables.

At least here, then, Frege’s originality — it might be said — is not depth of insight but his unabashed willingness to take the mathematician’s usage of the likes of (1) and (2) at face value, and not to try to shoehorn such generalizations into the canonical forms sanctioned by received wisdom. It is as if his now standard treatment of generality is a very happy result of Frege’s knowing rather little of previous logic and philosophy.

I have often wondered, then, whether Frege’s philosophical commentators have seen his basic discovery of a quantifier(-for-scoping)/variable notation as more exotic than it is. Take the pre-Frege usage of the mathematician’s variables-of-generality at face value, without preconceptions (“don’t think, look”, as Wittgenstein might say). Then note that we must distinguish generalizing a negation (x ≠ x + 1) from negating a generalization (it’s not true in general that 2x = x^2) — so if we are going to use a symbol for negation, we are somehow going to have to mark relative scopes. And we are already more or less there!
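To spell out the scope point in the same explicit style (my rendering of the two examples just given, not anything Frege himself writes):

∀x ¬(x = x + 1)      generalizing a negation: true

¬∀x (2x = x^2)       negating a generalization: also true, since e.g. 2·3 ≠ 3^2

The only extra machinery needed is a convention for whether the ‘for any value of x’ prefix governs the negation sign or falls inside its scope.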

Now, in Begriffsschrift and pieces written around that time, Frege’s concern seems to be very much with regimenting mathematical language (formalizing the logical bits of it to go along with the common formal expressions we use for the non-logical bits, showing how adding the logical bits allows us to neatly cut down on the non-logical primitives by giving us the resources to define more complex concepts out of simpler ones, etc. etc.). He says remarkably little — except in using a few toy examples like the ‘Cato killed Cato’ one — about ordinary, non-mathematical language more generally. So e.g. Dummett’s reading of Frege from the very beginning as aiming for a story about the real underlying structure of ordinary language generalizations is arguably considerable over-interpretation: Frege seems at least in Begriffsschrift to be more in the business of giving us a tidied-up replacement for informal ways of talking, useful in regimenting science — one modelled, to borrow his phrase, upon the formula language of arithmetic.

Saying all this is of course not for a moment to underplay the depth of Frege’s reflections consequent on his discovery of the quantifier/variable notation! But I would very much like to know if there are any discussions in the history of maths or history of logic literatures on how variables were regarded in 19th-century mathematics.

4 thoughts on “On Frege seeing what is in front of his nose”

  1. There was a long debate in the second half of the 19th century among a group we would now call applied statisticians, about what was the relevant definition of “the population” in any inference from a sample to a population. It strikes me that that debate bears on your question, because it concerns what variable instantiation is appropriate in any statistical problem. One of the protagonists was the medical researcher and uncertainty theorist Johannes von Kries. He was in the dissident, anti-probability-theory tradition of modelling uncertainty that started with Leibniz and continued in the 20th century with Shackle, Hamblin, Dempster, Shafer and many people in AI since the 1970s.

  2. David Makinson

    There is room for a good book, indeed, a pressing need for one, on the history of the use and concept of a variable, in mathematics and logic from antiquity to the present. Or is there such a book already out there somewhere, and I am showing my ignorance? Anybody reading Peter’s blog who is thinking of initiating a big research project in the history of logic could well turn in that direction…

    1. I suspect that if such a book does exist, it’s out of print or costs over £100.

      However, it’s an interesting and (it seems) neglected subject. For example, the Stanford Encyclopedia of Philosophy has an article on logical constants, but not (it seems) on variables.

  3. I would guess that the 19th century saw some unclarity about the notion of *instance* that you use. After all, your ‘whatever ‘n’ names’ is a wee bit unclear too. (Are we quantifying over the names, or are we assigning to the name ‘n’ whatever… If the latter, which is what we really need, a little more machinery is needed, no?)
