Year: 2016

Apple nerd note: Duet Display

If you have e.g. a MacBook of some description (or indeed a Windows machine) and an iPad, you can use the iPad as an additional display. Duet Display works over a USB cable, so it is much, much smoother and less flaky in operation than older implementations of the general idea using Bluetooth. Works a treat. OK, it won’t magically increase your “productivity”, but it assuredly reduces the irritations of window-juggling when working. You can get Duet Display on the App Store or from their site. If you don’t know it, a warmly recommended bargain.

Encore #6: Gödel vs Turing

And from the same collection of articles, a link to a paper that I (for once!) unreservedly praised and agreed with.

Church’s Thesis 13: Gödel on Turing (June 14, 2007)

Phew! At last, I can warmly recommend another paper in Church’s Thesis After 70 Years.

Some headline background: Although initially Gödel was hesitant, by about 1938 he was praising Turing’s work as establishing the “correct definition” of computability. Yet in 1972 he wrote a short paper on undecidability, which includes a section headed “A philosophical error in Turing’s work”.

So an issue arises. Has Gödel changed his mind?

Surely not. What Gödel was praising in 1938 was Turing’s analysis of a finite step-by-step computational procedure. (Recall the context: Turing was originally fired up by the Entscheidungsproblem, which is precisely the question whether there is a finitistic procedure that can be mechanically applied to decide whether a sentence is a first-order logical theorem. So it is analysis of such procedures that is called for, and that was of concern to Gödel too.)

What the later Gödel was resisting in 1972 was Turing’s further thought that, in the final analysis, human mental procedures cannot go beyond such finite mechanical procedures. Gödel was inclined to think that, in the words of his Gibbs Lecture, the human mind “infinitely surpasses the powers of any finite machine”. So, surely, there is no change of mind, just an important change of topic.

That’s at any rate what I have previously taken to be the natural view. But I confess I’d never done any careful homework to support it. Perhaps because it chimes with the view I’ve been at pains to stress in various comments on this collection of articles — namely that there is a very important distinction between (1) the “classic” Church-Turing thesis that the effectively computable functions (step-by-small-step algorithmically computable functions) are exactly the recursive functions, and (2) various bolder theses about what can be computed by (idealised) machines and/or by minds.

And so it is very good to see that now Oron Shagrir has given a careful and convincing defence of this natural view of Gödel’s thoughts on Turing, with lots of detailed references, in his (freely downloadable!) paper “Gödel on Turing on computability”. Good stuff!

Encore #5: Church’s Thesis and open texture

At various times, I have blogged a series of posts as I read through a book, often en route to writing a review. One of the first books to get this treatment was the collection of articles Church’s Thesis After 70 Years edited by Adam Olszewski et al. This was a very mixed bag, as is often the way with such collections. But some pieces stood out as worth thinking about. Here’s one (which I initially posted about in 2007, but returned to a bit later when we read it one Thursday in the Logic Seminar).

Stewart Shapiro, “Computability, Proof, and Open-Texture” (January 18, 2008)

Let me say straight away that it is a very nice paper, written with Stewart Shapiro’s characteristic clarity and good sense.

Leaving aside all considerations about physical computability, there are at least three ideas in play in the vicinity of the Church-Turing Thesis. Or better: there is first a cluster of inchoate, informal, open-ended, vaguely circumscribed ideas of computability, shaped by some paradigm examples of everyday computational exercises. Then second there is the semi-technical idea of effective computability (with quite a carefully circumscribed though still informal definition, as given in various texts, such as Hartley Rogers’ classic). Then third there is the idea of Turing computability (and along with that, of course, the other provably equivalent characterizations of computability as recursiveness, etc.).

It will be agreed on all sides that our original inchoate, informal, open-ended ideas could and can be sharpened up in various ways. Hence, the notion of effective computability takes some strands in inchoate notion and refines and radically idealizes them in certain ways (e.g. by abstracting from practical considerations of the amount of time or memory resources a computation would use). But there are other notions, e.g. of feasible computability, that can also be distilled out. Or notions of what is computable by a physically realisable set-up in this or other worlds. It isn’t that the notion of effective computability is — so to speak — the only clear concept waiting to be revealed as the initial fog clears.

So I think that Shapiro’s rather Lakatosian comments in his paper about how concepts get refined and developed and sharpened in mathematical practice are all well-taken, as comments about how we get from our initial inchoate preformal ideas to, in particular, the semi-technical notion of effective computability. And yes, I agree, it is important to emphasize that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision — guided of course by the desire for interesting, fruitful concepts — in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes very well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept (or concept-cluster) and arriving at something more sharply bounded.
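(For definiteness, here is the Ackermann function just mentioned, in its standard Ackermann–Péter form; this is the usual textbook presentation, nothing specific to Shapiro’s paper:

\begin{align*}
A(0, n) &= n + 1\\
A(m+1, 0) &= A(m, 1)\\
A(m+1, n+1) &= A(m, A(m+1, n))
\end{align*}

Each step of the recursion is mechanically trivial; yet already $A(4,3) = 2^{2^{65536}} - 3$, whose decimal expansion has vastly more digits than there are particles in the observable universe. So the function counts as effectively computable by the idealized standard, while being wildly unfeasible.)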

But another question arises about the relation between the semi-technical notion of effective computability, once we’ve got there, and the notion of Turing computability. Now, Shapiro writes as if the move onwards from the semi-technical notion is (as it were) just more of the same. In other words, the same Lakatosian dynamic (rational conceptual development under the pressure of proof-development) is at work in first getting from the original inchoate notion of computability to the notion of effective computability, as in then going on eventually to refine out the notion of Turing computability. Well, that’s a good picture of what is going on at the conceptual level. But Shapiro seems to assume that this conceptual refinement goes along with a narrowing of extension (in getting our concepts sharper, we are drawing tighter boundaries). But that doesn’t obviously follow. An alternative picture is that once we have got as far as the notion of effectively computable functions, we do have a notion which, though informal, is subject to sufficient constraints to ensure that it does indeed have a determinate extension (the class of Turing-computable functions). We can go on to say more about that extension, in coming up with various co-extensive technical notions of computability, but still the semi-technical notion of effective computability does enough to fix the class of functions we are talking about. For some exploration of the latter view, see for example Robert Black’s 2000 Philosophia Mathematica paper.

So a key issue here is this: is further refinement of “open texture” in the notion of effective computability required to determine a clear extension? Shapiro seems to think so. But looking at his paper, it is in fact difficult to discern any argument for supposing that things go his way. He is good and clear about how the notion of effective computability gets developed. But he seems to assume, rather than argue, that we need more of the same kind of conceptual development before we are entitled to settle on the Turing computable/the recursive as a canonically privileged class of effectively computable functions. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

Encore #4: Amethe von Zeppelin

Reading Carnap’s Logical Syntax I was intrigued by the question of who the translator was, and wondered aloud in a blogpost here. A correspondent kindly filled in some detail, which I added in a second post. Afterwards, I was contacted by a member of the family; but they couldn’t add much to the story, which is however still surely interesting enough for an encore.

Amethe von Zeppelin (January 5, 2012)

I’ve had Carnap’s The Logical Syntax of Language on my shelves for over forty years. I can’t say it was ever much consulted; but I’ve been reading large chunks of it today, in connection with Gödel. Carnap’s book is often credited with the first statement of the general Diagonalization Lemma, and I wanted to double check this. (The credit seems to be somewhat misplaced in fact, but that’s for another post!)
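(For reference, the lemma in its usual modern form, which is my formulation rather than Carnap’s: for any formula $\varphi(x)$ with one free variable of a suitable arithmetical theory $T$ (one in which the relevant syntactic operations are representable), there is a sentence $G$ such that $T \vdash G \leftrightarrow \varphi(\ulcorner G\urcorner)$, where $\ulcorner G\urcorner$ is the numeral for the Gödel number of $G$.)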

Reading the early pages of the book, I’ve been struck by how good the 1937 translation seems. Well, I can’t vouch for its accuracy — for I don’t have the German to check — but it’s very readable, and seems sure-footed with the logical ideas. The translation was organized by Ogden as editor of The International Library; it went slowly, and Quine suggested a very appropriate Harvard academic who was keen to do the job, but Ogden would not make the change. True, Carnap writes to Quine: ‘Ogden sent me Ch.I and II of [the translation]. I had to spend much work in revising and correcting them; I found a lot of mistakes, misunderstandings and unsuitable expressions.’ But perhaps the experience got better after that, since Carnap doesn’t return to make later complaints, and the translator is warmly thanked in the Preface to the English Edition.

But who was Amethe Smeaton, the translator? A Countess von Zeppelin, no less. (But no, not the Countess who later threatened to sue Led Zeppelin for illegal use of her family name.) Which makes me wonder: what led the Countess to become a translator, and why did she become involved in this project? What was her background that made her an apt choice for translating this book of all books?

She’d translated before, a history book by Paul Frischauer, Prince Eugene: A Man and a Hundred Years of History, first published in German in 1933, translated in 1934, and still in print. Later she translated Schlick’s Philosophy of Nature (1949), Walter Schubart’s Russia and Western Man (1950), Bruno Freytag’s Philosophical Problems of Mathematics (1951), and a book by Karl Kobald, Springs of Immortal Sound (1950), on places in Austria associated with composers. She also co-translated Werner Heisenberg’s Nuclear Physics (1953). That’s a really rather remarkable catalogue! And she wasn’t “just” a translator: she was competent enough to be asked to review a group of logic and philosophy of science books for Nature in 1938 (she writes the composite review in a way that indicates she was very much up with developments). How did all this come about?

Amethe Smeaton was the daughter of a colonial administrator (later a liberal MP) called Donald Smeaton. She was born in the late 1890s and was at Girton College during WW1 but left without graduating because of ill health. She corresponded briefly with Russell about Principia in 1917. She married a Scottish army officer called Ian McEwen in 1919: they had a son who served in the Scots Guards and who was killed in WW2. In 1924 she published in the Morning Post an adulatory account of an interview she had with Mussolini (apparently she spent time in Italy as a child and therefore spoke Italian). Graf von Zeppelin was cited as co-respondent in her divorce in 1929: it was said that they had been “found living as Count and Countess von Zeppelin” at Mentone. She married the count in Cap Martin, France in August that year. (He had been a German army officer during WW1, then had travelled in the forests of Bolivia, publishing an account of his adventures in 1926. According to A J Ayer, he chased Otto Neurath through the streets of Munich with a revolver at one point.) They bought a house called Schloss Mauerbach near Vienna in 1939. She died around 1966.

An interesting life, then. And her translating work was all done after her marriage to the Count, when presumably she could pick and choose what she did. Which suggests an interesting mind too.

Encore #3: Paris then and now

Paris Hilton once seemed to be ubiquitous. She even made an appearance in Logic Matters:

Paris 1967, Paris 2007 (May 16, 2007)

No doubt you’ve all supported the campaign to absolve Paris Hilton from her prison sentence: after all, in the words of the petition, “She provides hope for young people all over the U.S. and the world. She provides beauty and excitement to (most of) our otherwise mundane lives.” I couldn’t have put it better.

But Guy Debord did, forty years ago (albeit in a French style which isn’t quite mine!):

Behind the glitter of spectacular distractions, a tendency toward making everything banal dominates modern society the world over, even where the more advanced forms of commodity consumption have seemingly multiplied the variety of roles and objects to choose from. … The celebrity, the spectacular representation of a living human being, embodies this banality by embodying the image of a possible role. As specialists of apparent life, stars serve as superficial objects that people can identify with in order to compensate for the fragmented working lives that they actually live. Celebrities exist to act out in an unfettered way various styles of living … They embody the inaccessible result of social labour by dramatizing its by-products of power and leisure (magically projecting them). The celebrity who stars in the spectacle is the opposite of the individual, the enemy of the individual in herself as well as in others. Passing into the spectacle as a model for identification, the celebrity renounces all autonomous qualities …

And there is much more in the same vein, in his La société du spectacle, which I’ve been looking at again (so long after my misspent youth in various leftist groups). Paris 1967 anticipates Paris 2007.

Encore #2: University “reforms”

I am now doubly removed from the impact of the various “reforms” to universities in the UK and beyond. For one thing, those of us in Cambridge are well protected in various ways from the worst effects. And for another, I am now officially retired from the fray. Which doesn’t stop me getting distressed by reports of what is happening.

One of the most acute writers recently on what is befalling universities in the UK  is a Cambridge colleague, the intellectual historian and cultural critic Stefan Collini. But telling though his analyses are, I’m not sure that he has much more idea than I have about what is then to be done. As I ruefully reflected in this piece:

Universities, galleries and Stefan Collini (February 26, 2012)

I read Stefan Collini’s What are Universities For? last week with very mixed feelings. In the past, I’ve much admired his polemical essays on the REF, “impact”, the Browne Report, etc. in the London Review of Books and elsewhere: they speak to my heart.

But to be honest, I found the book a disappointment. Perhaps the trouble is that Collini is too decent, too high-minded, and has too unrealistically exalted a view of what actually happens in universities, a view too coloured by attitudes appropriate to the traditional humanities. And he is optimistic to the point of fantasy if he thinks that people are so susceptible to “the romance of ideas and the power of beauty” that they will want, or can be brought to want, lots and lots of universities in order to promote these ideas (as if they would suppose that the task of “conserving, understanding, extending and handing on to subsequent generations the intellectual, scientific, and artistic heritage of mankind” was clearly ill-served when there were only forty universities in England, as opposed to a hundred and whatever).

The cultural goods that Collini extols, perhaps enough will agree, are not to be measured in crassly economic terms and should be publicly sustained. But that thought falls so very short of helping us to think about what should be happening with the mass university education of almost half the age cohort, about what should be taught and how it should be funded. Collini’s considerations — if they push anywhere — might indeed suggest the ring-fencing of a relatively few elite institutions, to be protected (as in the old days of the UGC) from quotidian government interference and direction. He himself mentions the Californian model (layers of different kinds of tertiary institutions, with a lot of movement between, but with a sharp intellectual hierarchy, with research concentrated at the top). But Collini doesn’t say if that is where he wants us to go. The myths of basic-equality-of-institutions that continue to be endorsed so often in public discourse about the universities in the UK are quite inimical to official moves in that direction.

I am still musing about Collini’s book, which I’d finished on the train down, while in the Central Hall of the National Gallery. I am looking at one of their greatest paintings, Giovanni Battista Moroni’s The Tailor. The tailor’s gaze is challenging, appraising: I sit for a while to gaze back. It is a busy weekday afternoon. But in ten minutes or more not one other visitor walking through the Hall pauses to give him a glance.

A bit later, I go to see once more the painting I’d perhaps most like to smuggle home and put over the mantelpiece, Fra Filippo Lippi’s wonderful Annunciation. I spend another ten minutes in the room where it hangs. One other person wanders in, and rather rapidly leaves again. Even Vermeer’s Young Woman Standing at a Virginal — and isn’t she on the ‘must see’ list in every pocket guide? — is surprisingly lonely, and almost no one stops to keep her company.

Take away all the school parties, take away all the overseas visitors, and who is left? You might reasonably imagine that the English don’t really care, or at any rate don’t care very much, about the art which is on show here. Oh, to be sure, we chattering classes know which blockbuster exhibitions are the done things to see: Leonardo in London or Vermeer in Cambridge will, for a while, be chock full of people. But from day to ordinary day? Some are vaguely glad to know that the National Gallery and the Fitzwilliam Museum are there. But, to be honest, a nice National Trust garden is really more our cup of tea (with scones and jam to follow, thank you).

I’m sure it’s not that we are more philistine as a nation than others (the Uffizi in December isn’t suddenly full of Italians glad to take advantage of the absence of tourists). But equally, I wouldn’t overrate the interest of even the more educated English in Culture with a capital ‘C’. When I was a Director of Studies, I used to ask my students towards the end of their time in Cambridge if they’d ever visited the Fitzwilliam: almost no-one ever had.

Collini, to return to him, reflects that “Some, at least, of what lies at the heart of a university is closer to the nature of a museum or gallery than is usually allowed or than most of today’s spokespersons for universities would be comfortable with.” And I rather agree. But where does the thought take us? I doubt that even Collini really thinks that the tepid interest of the English — just some of the English — in museums and galleries can be parlayed into a wide public enthusiasm for spending a lot more on universities in these difficult times. So what’s to be done?

Stefan Collini has written more on what has been happening to universities since his book appeared. In a later post, I noted another piece of his from the London Review of Books, about the privatisation disasters befalling British universities. Let me quote again his acerbic peroration:

Future historians, pondering changes in British society from the 1980s onwards, will struggle to account for the following curious fact. Although British business enterprises have an extremely mixed record (frequently posting gigantic losses, mostly failing to match overseas competitors, scarcely benefiting the weaker groups in society), and although such arm’s length public institutions as museums and galleries, the BBC and the universities have by and large a very good record (universally acknowledged creativity, streets ahead of most of their international peers, positive forces for human development and social cohesion), nonetheless over the past three decades politicians have repeatedly attempted to force the second set of institutions to change so that they more closely resemble the first. Some of those historians may even wonder why at the time there was so little concerted protest at this deeply implausible programme. But they will at least record that, alongside its many other achievements, the coalition government took the decisive steps in helping to turn some first-rate universities into third-rate companies. If you still think the time for criticism is over, perhaps you’d better think again.

And Collini’s latest entirely admirable piece in the LRB turns to analyse, deconstruct, and shred the latest government proposals on teaching assessment. Not very cheering reading, I am afraid.

Encore #1: The single best bit of advice on writing

The Logic Matters blog is ten years old next month, and is fast approaching its thousandth post too. Heavens above! — doesn’t time fly when you are having fun?

So I’ve been revisiting old posts, partly to recall some of what I’ve been up to logically and otherwise, since I don’t keep any other diary. And over the next few weeks I will repost a few efforts that, for one reason or another, strike me as interesting enough, no doubt editing a little and occasionally adding a few second thoughts. 

I’ll probably dart around between different years of the back catalogue, but let me start with an encore of one of the very first posts. This is pleasingly short, but contains the single best bit of actually useful advice for authors I know, which I happily pass on at no charge. You’re welcome …

Broad’s advice for writers (April 15, 2006)

I’m ploughing on as fast as I can to get my Gödel book finished. I try to keep in mind the good advice that C.D. Broad used to give. Leave your work at the end of the day in the middle of a paragraph or two which you know roughly how to finish. That way, you can pick up the threads the next morning and get straight down to writing again. So much better than starting the day with a dauntingly blank sheet of paper — or nowadays, a blank screen — as you ponder how to kick off the next section or next chapter. Instead, with luck, you face that next hurdle while on a roll, with the ideas already flowing again.

Well, it works for me …

Five things I learnt this week …

So, apart from more unwanted confirmation that we are going to hell in a handcart, what did I learn this last week?

  1. Tom Leinster’s Basic Category Theory is in lots of ways really good. It is full of illuminating explanations and very helpful connection-making. But — rather cheeringly for me! — it isn’t quite so basic and quite so good as to make it redundant to continue trying to work on a Gentle Introduction  for a somewhat different audience. I had looked at much of Leinster’s book when it came out: but now a few of us are reading through it together, which concentrates the mind on its expositional successes and possible flaws. And, for one thing,  we are not that convinced by the organization of the book (I can see why Leinster wants to start talking about adjunctions very early on — that way, we get to interesting stuff fast — but it does seem to mean that some things are initially more puzzling than they need be, as their rationale can only really become clear later). Still, I’m really enjoying and learning from the reading group.
  2. I was a bit staggered to find (having not looked at the analytics before) that the front page of the LaTeX for Logicians section here was visited over 100K times last year. Which is also cheering in a small way, if only because it shows that the past effort wasn’t wasted; but this also made me feel more than a bit guilty about neglecting those pages for quite a while. Which is why I gave over a day or so to sorting things out there (making one or two interesting discoveries along the way).
  3. Talking of analytics, mine ceased to work for a fortnight on academia.edu. Well, no matter; but it does seem to be symptomatic of some trouble they are having in getting basic things to work reliably. Indeed, they now seem to have broken the ability to show a list of papers in a given research area (and we’ve never been able to do basic things like order the results of searches by e.g. number of views/downloads in the last 30 days, so we can spot papers that colleagues are deeming worth looking at). Yet despite the seeming shakiness of the current offering, I discover that academia.edu are now inviting us to sign up for a premium service at (would you believe!?) $9.99 a month for what seem to be trivial benefits. That is ridiculous. Their efforts so far at monetisation aren’t going to end well.
  4. The Juilliard Quartet in their present incarnation are pretty good but not stunningly so. They performed here in Cambridge last week in the often terrific Camerata Musica series, playing Mozart’s “Dissonance” Quartet, the Debussy quartet, and Beethoven’s last quartet. Maybe I wasn’t initially in the right mood, but their Mozart just didn’t work for me: dully uninspired. Fortunately, things then got a lot better.
  5. Virginia Woolf’s To the Lighthouse, however,  is consistently amazing. Yes, I know, how did I get this far without having read it before?

LaTeX for Logicians refresh

As a bit of distraction from all the other logic-related things I really ought to be doing, I’ve just been tidying the LaTeX for Logicians pages for the first time in a good while. After a bit of re-arrangement here, some renewing of broken links there, then working a few suggestions from comments (thanks!) into the main pages, and doing some searching on e.g. tex.stackexchange, here we are.

Whatever one’s issues and reservations about LaTeX for more general use, it is still surely quite invaluable for logicians. So do please let me know, then, how these pages can be improved, what new LaTeX packages of use to logicians I have missed, etc.

What is the modern conception of logic? #1

Three and a half years ago, there were a few blogposts here (and also a follow-up document) about whether there is a canonical modern story about how we should conceive of our Ps and Qs, whether we should define validity primarily for schemas or interpreted sentences, and that kind of thing. Just for the fun of discovery (and because I suspect we rush too fast to suppose that there is a uniform ‘contemporary conception’ of such matters) I’m going to return to the issue over some coming blogposts — developing, correcting, adding to, and sometimes retracting what I said before. This kind of nit-picking won’t be to everyone’s taste; but hopefully some readers might be as intrigued as I am by the variety of views that have been on offer out there in the modern canon of logic texts.

I can’t expect people to remember the previous discussion! — so I’ll start again from scratch. Here then is Episode #1, even if I have said much of it before.

1. A contemporary conception?

Warren Goldfarb, in his paper ‘Frege’s conception of logic’ in The Cambridge Companion to Frege (2010), announces that his ‘first task is that of delineating the differences between Frege’s conception of logic and the contemporary one’. And it is not a new idea that there are important contrasts to be drawn between Frege’s approach and some modern views of logic. But one thing that immediately catches the eye in Goldfarb’s prospectus is his reference to the contemporary conception of logic. And that should surely give us some pause, even before reading on.

So how does Goldfarb characterize this contemporary conception? It holds, supposedly, that

the subject matter of logic consists of logical properties of sentences and logical relations among sentences. Sentences have such properties and bear such relations to each other by dint of their having the logical forms they do. Hence, logical properties and relations are defined by way of the logical forms; logic deals with what is common to and can be abstracted from different sentences. Logical forms are not mysterious quasi-entities, a la Russell. Rather, they are simply schemata: representations of the composition of the sentences, constructed from the logical signs (quantifiers and truth-functional connectives, in the standard case) using schematic letters of various sorts (predicate, sentence, and function letters). Schemata do not state anything and so are neither true nor false, but they can be interpreted: a universe of discourse is assigned to the quantifiers, predicate letters are replaced by predicates or assigned extensions (of the appropriate arities) over the universe, sentence letters can be replaced by sentences or assigned truth-values. Under interpretation, a schema will receive a truth-value. We may then define: a schema is valid if and only if it is true under every interpretation; one schema implies another, that is, the second schema is a logical consequence of the first, if and only if every interpretation that makes the first true also makes the second true. A more general notion of logical consequence, between sets of schemata and a schema, may be defined similarly. Finally, we may arrive at the logical properties or relations between sentences thus: a sentence is logically true if and only if it can be schematized by a schema that is valid; one sentence implies another if they can be schematized by schemata the first of which implies the second. (pp. 64–65)

Note an initial oddity here (taking up a theme that Timothy Smiley has remarked on in another context). It is said that a ‘logical form’ just is a schema. What is it then for a sentence to have a logical form? Presumably it is for the sentence to be an instance of the schema. But the sentence ‘Either grass is green or grass is not green’ — at least once we pre-process it as ‘Grass is green $\lor$ $\neg$\,grass is green’ — is an instance of both the schema $P \lor \neg P$ and the schema $Q \lor \neg Q$. These are two different schemata (if we indeed think of schemata, as Goldfarb describes them, as expressions ‘constructed from logical signs … using schematic letters’): but surely we don’t want to say that the given sentence, for this reason at any rate, has two different logical forms. So something is amiss.

But let’s not worry about this detail for the moment. Let’s ask: is Goldfarb right that contemporary logic always (or at least usually) proceeds by defining notions like validity as applying in the first instance to schemata?
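To fix ideas, the schema-first definitions at issue can be put like this (my notation, abstracted from Goldfarb’s prose, not his own symbolism):

$$\vDash S \quad\text{iff}\quad I \vDash S \text{ for every interpretation } I$$

$$\Gamma \vDash S \quad\text{iff}\quad \text{for every interpretation } I, \text{ if } I \vDash T \text{ for all } T \in \Gamma, \text{ then } I \vDash S$$

And then derivatively: a sentence is logically true iff it can be schematized by some schema $S$ such that $\vDash S$.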

Some other writers on the history of logic take the same line about modern logic. Here, for example, is David Bostock, in his Russell’s Logical Atomism (2012), seeking to describe what he supposes is the ‘nowadays usual’ understanding of elementary logic, again in order to contrast it with the view of one of the founding fathers:

In logic as it is now conceived we are concerned with what follows from what formally, where this is understood in terms of the formal language just introduced, i.e. one which uses ‘P’, ‘Q’, … as schematic letters for any proposition, ‘a’, ‘b’, … as schematic letters for any reference to a singular subject, and ‘F’, ‘G’, … as schematic letters for any predicate. So we first explain validity for such schemata. An interpretation for the language assigns some particular propositions, subjects or predicates to the schematic letters involved. It also assigns some domain for the quantifiers to range over …. Then a single schematic formula counts as valid if it always comes out true, however its schematic letters are interpreted, and whatever the domain of quantification is taken to be. A series of such formulae representing an argument … counts as a valid sequent if in all interpretations it is truth-preserving, i.e. if all the interpretations which make all the premises true also make the conclusions true. …

We now add that an actual proposition counts as ‘formally valid’ if and only if it has a valid form, i.e. is an instance of some schematic formula that is valid. Similarly, an actual argument is ‘formally valid’ if and only if it has a valid form, i.e. is an instance of some schematic sequent that is valid. Rather than ‘formally valid’ it would be more accurate to say ‘valid just in virtue of the truth functors and first-level quantifiers it contains’. This begs no question about what is to count as the ‘logical form’ of a proposition or an argument, but it does indicate just which ‘forms’ are considered in elementary logic.

Finally, the task of logic as nowadays conceived is the task of finding explicit rules of inference which allow one to discover which formulae (or sequents) are the valid ones. … What is required is just a set of rules which is both ‘sound’ and ‘complete’, in the sense (i) that the rules prove only formulae (or sequents) that are valid, and (ii) that they can prove all such formulae (or sequents). (pp. 8–10)

Bostock here evidently takes very much the same line as Goldfarb, except that he avoids the unhappy outright identification of logical forms with schemata. And he goes on to say that not only do we define semantic notions like validity in the first place for schemata but proof-systems too deal in schemata — i.e. are in the business of deriving schematic formulae (or sequents) from other schematic formulae (or sequents).

It isn’t difficult to guess a major influence on Goldfarb. His one-time colleague W.V.O. Quine’s Methods of Logic was first published in 1950, and in that book — much used at least by philosophers — logical notions like consistency, validity and implication are indeed defined in the first instance for schemata. Goldfarb himself takes the same line in his own later book Deductive Logic (2003). Bostock’s own book Intermediate Logic is perhaps a little more nuanced, but again takes basically the same line.

But the obvious question is: are Goldfarb and Bostock right that the conception of logic they describe, and which they themselves subscribe to in their respective logic books, is so widely prevalent? I have certainly heard it said that a view of their kind is ‘canonical’: but what does the canon actually say?

Philosophers being a professionally contentious lot, we wouldn’t usually predict happy consensus about anything much! If we are going to find something like a shared canonical modern conception, it is more likely to be an unreflective party line of mathematical logicians, who might be disposed to speed past preliminary niceties en route to the more interesting stuff. At any rate, what I propose to do here is to concentrate on the mathematical logicians rather than the philosophers. So let’s take some well-regarded mathematical logic textbooks from the modern canon.

How far, going back, should we cast the net? I start with Mendelson’s classic Introduction to Mathematical Logic (first published in 1964), and some books from the same era. Now, you might reasonably say that — although these books are ‘contemporary’ in the loose sense that they are still used, still recommended — they aren’t sufficiently up-to-date to chime with Goldfarb’s and Bostock’s intentions when they talk about logic as it is ‘nowadays conceived’. Fair enough. It could turn out that, beginning with an initially messy variation in approaches in the ‘early modern’ period (if we can call it that! — I mean the 1960s and 1970s, some seventy and more years after the first volume of Grundgesetze), there does indeed later emerge some convergence on a single party line in the ‘modern modern’ period. Well, that will be interesting if so. And it will be interesting too to try to discern whether any such convergence (if such there has been) is based on principled reasons for settling on one dominant story.

So what we’ll be doing is considering e.g. how various authors have regarded formal languages, what they take logical relations to hold between, how they regard the letters which appear in logical formulas, what accounts they give of logical laws and logical consequence, and how they regard formal proofs. To be sure, we will expect to find recurrent themes running through the different treatments (after all, there is only a limited number of options). But will we eventually find enough commonality to make it appropriate to talk of ‘the’ canonical contemporary conception of logic among working logicians? And if so, will it be as Goldfarb and Bostock describe it?

Let’s look and see …

[To be continued]
