# Book Note: The Open Logic Text, #1

It has been nine months since I really looked at the Teach Yourself Logic 2015 Study Guide. It’s time to start thinking about a 2016 update. So over the coming weeks I’ll be tinkering with the current version, while reading, dipping, skimming and skipping through various logic books old and new (I have far too long a list!). I’ll be posting here some Book Notes on individual texts as I go along. So here’s the first instalment on the first book, the new Open Logic Text.

This is a collaborative, open-source textbook, and very much a work in progress. The book’s website describes the project like this:

The Open Logic Text is an open-source, collaborative textbook of formal (meta)logic and formal methods, starting at an intermediate level (i.e., after an introductory formal logic course). It is aimed at a non-mathematical audience (in particular, students of philosophy and computer science), but is completely rigorous.

The project team is distinguished: Aldo Antonelli, Andy Arana, Jeremy Avigad, Gillian Russell, Nicole Wyatt, Audrey Yap, and Richard Zach. You can get a current version of the complete text (pp. 229) and also a sample course text Intermediate Logic selected and remixed from OLT (pp. 138) from this download page.

I don’t have any settled view on the likely merits or otherwise of this sort of collaborative project, but of course wish it well, and will be very interested to see how things develop. My comments here are on the complete text as retrieved on 2 Oct. 2015.

Part I: First Order Logic (pp. 9-64) What do these opening chapters cover? Ch. 1 is on the syntax and semantics of first-order languages. Ch. 2 talks about theories and their models. Ch. 3 presents the LK sequent calculus and proves its soundness. Ch. 4 proves the completeness theorem by Henkin’s method (and touches on compactness and the downward LS theorem).

That’s a packed program for less than sixty pages. And a non-mathematical student who tackles it will have to bring more to the party than she is likely to have picked up from a typical introductory formal logic course. Five pages in, it is assumed that the reader knows what an induction hypothesis is and how proofs by induction work. A couple of pages later, we are told that a certain definition is a ‘recursive’ definition of a function, again without explanation. When it comes to talking about first-order theories, we get as first examples the theory of strict linear orders and the theory of groups, with no explanation of what such things are. A bit later, Cantor’s theorem is mentioned as if already familiar (in remarking that the formal language of first-order set-theory suffices to express it). And so it goes. Actually, a mathematically rather ept audience seems to be presupposed!

As noted, the proof system considered is the sequent calculus LK. There isn’t a word to link this up to what a student may have encountered in her introductory formal course (very likely a tree-based system, or a Fitch-style proof system), or to explain why we are doing things this way. We get a few examples, and then a three-page soundness proof. Again, tough for the non-mathematical, I’d have thought. So far, in fact, this all reads like slightly souped up lecture hand-outs covering some of the fiddly bits of a course, very respectable if accompanied by a lot of lecture-room chat around and about, but not amounting yet to a stand-alone text book.

Having adjusted to the level of the enterprise, however, the chapter on completeness is pretty clearly done, with the strategy of a Henkin proof (and the pitfalls that have to be negotiated to get the proof-idea to fly) being well explained. This could make useful revision material for students doing courses based on other books as it isn’t tied to LK.

Part I, continued: Beyond First Order Logic (pp. 65-79) After a short overview, the sections of Ch. 5 are titled ‘Many-sorted logic’, ‘Second-order logic’, ‘Higher-order logic’, ‘Intuitionistic logic’, ‘Modal logics’, ‘Other logics’. Yes, in just fifteen pages. The pages on second-order logic are probably useful, enough to give a student a flavour of the issues. Otherwise, the discussions are necessarily pretty perfunctory — place-holders for later developments.

Part II: Model Theory (pp. 81-103) The website tells us that these twenty-four pages come from Aldo Antonelli’s notes. The assumed sophistication of the reader goes up another notch or two. These are pretty terse maths notes in the conventional style (and none the worse for that! — but perhaps again not quite fitting the advertised brief).

Ch. 6 comprises just ten pages on ‘The Basics of Model Theory’ (including the usual proof that countable dense linear orderings without end points are isomorphic, and a brisk discussion of non-standard models of first-order True Arithmetic). Then Ch. 7 proves the interpolation theorem and derives the Beth definability theorem. Ch. 8 proves Lindström’s theorem.

Now I would not have counted full proofs of interpolation and Lindström’s theorem as entry-level topics on model theory. And that’s not me being idiosyncratic. Cross-checking, I find that Manzano’s nice elementary text on model theory doesn’t get to either. Even the bible, Hodges’s Shorter Model Theory, never gets to Lindström’s theorem — and we don’t meet interpolation until halfway into that book.

So I wouldn’t yet advise tackling these pages as initial reading on model theory. However, taken as standalone treatments for those with enough background, Chs. 7 and 8 are actually crisply and clearly done, and so could well be recommended e.g. as supplements to Manzano or other introductions.

To be continued.


### 14 Responses to Book Note: The Open Logic Text, #1

1. Richard Zach says:

Thanks Peter! The order in which the chapters are collected in the “complete PDF” version is not optimal, I agree. When used as a text (eg in the Intermediate Logic PDF) it is preceded by the part on naive set theory (“Sets, Relations, Functions”), which is meant to introduce the student to basic mathematical talk, and how proofs work, including induction. To make logic accessible to a beginning non-maths student is one of the main aims of the project, but it’ll take someone working through the material for a course and adding the wordy explanations and examples to get there. The model theory part isn’t there yet; the first-order logic part is probably halfway there; and I’m working on an improved part on Turing machines right now which I’ll teach next term. There will also be alternative chapters on other proof systems (axiomatic & natural deduction) so the instructor can pick and choose which ones they want to cover: some think it’s better to stick with what’s familiar from the intro course, some think it’s important students see something new. The “beyond first order logic” chapter is intended for a course where these topics can’t be covered, to point the students in the direction of what’s out there. Eventually, I hope there will be entire parts that cover each of them (Aldo’s notes on modal logic are waiting to be incorporated).

The whole thing is open source, and the open source mantra is “release early, release often.” Release early so people have an early opportunity to criticize, contribute, use, and improve the material. Release often so the text grows and improves over time.

• Rowsety Moid says:

It might be a good idea to make it clearer that the Intermediate Logic PDF contains material that isn’t in the “complete PDF”. The Download page says it is a “sample course text selected and remixed from OLT,” which sounds like it won’t contain anything that isn’t in the complete PDF.

2. Rowsety Moid says:

While Lindström’s theorems don’t seem to be covered by many books, they are covered in Ebbinghaus and Flum’s mathematical logic text and in Chang and Keisler’s Model Theory (which, if we’re making Bible analogies, might at least count as an Old Testament); and, in a way, covering something that most books don’t is a point in The Open Logic Text’s favour, so long as the topic is sufficiently interesting.

I wonder what they have in mind as a “non-mathematical audience” if they’re including CS students. CS students don’t seem a very non-mathematical audience to me, but perhaps they are what the authors have in mind. CS students will probably have done A-Level maths (or equivalent) and I think that includes proof by induction. CS students would probably also have some understanding of recursive definitions. Strict linear orders and groups seem more problematic, but CS students are bound to learn at least that much mathematics at some point.

• Richard Zach says:

You’d be surprised what little math you can count on North American CS students to know. By “non-mathematical” we mean: not mathematics students.

• Peter Smith says:

I am indeed minded to recommend the chapters on interpolation and on Lindström in TYL2016 as supplements to Manzano. I found them admirable. So my concern wasn’t about how the content was handled but about the somewhat misleading flag under which it flies. This part of OLT is at the moment quite a way from being a substitute for other, more conventionally published, introductions to model theory (whether stand-alone or parts of longer texts).

3. “My comments here are on the complete text as retrieved on 25 Oct. 2015.” Wow, backwards time travel works!

The last time the Open Logic Text was discussed on the Logic Matters blog ( http://www.logicmatters.net/2015/05/10/the-open-logic-text/ ), there was some discussion of the merits of Git, and Richard Zach put up some material on the OLT blog about the use of Git. Since then I have experimented on GitLab, starting from zero knowledge, and can confirm that it is a wonderful system, even for a one-author project. So I encourage everyone to have a go.

The only cautionary note I would sound about Git is this. If you use it to see what you have changed since a previous commit, in order to reconsider whether your changes were improvements, and you want all of your changes to come up for reconsideration, you need to make sure you compare the current version with the version that, last time you compared, played the role of the current version. This might not be the version as at the latest commit. So care would be needed to track what had been compared with what, and commit messages stating what had happened might not be enough if you committed several files at once.
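For what it’s worth, one way to avoid tracking this by hand is a movable lightweight tag marking the last-reviewed commit. A sketch only — the tag name `last-reviewed` and the file name are my own inventions, not a Git convention:

```shell
#!/bin/sh
# Sketch: remember the "last reviewed" point with a movable tag, so
# later comparisons never need to identify the right commit by hand.
# All names here (last-reviewed, chapter1.txt) are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "draft text" > chapter1.txt
git add chapter1.txt
git commit -qm "Sunday evening state"

# A review session ends: move the marker to the commit just reviewed.
git tag -f last-reviewed

# Monday and Tuesday commits pile up without any review...
echo "Monday change" >> chapter1.txt
git commit -qam "Monday"
echo "Tuesday change" >> chapter1.txt
git commit -qam "Tuesday"

# The next review always uses the same command, however many
# commits intervened since the marker was last moved:
git diff last-reviewed HEAD
# ...and the review ends by moving the marker again:
#   git tag -f last-reviewed
```

With one such tag per chapter file, chapters reviewed on different schedules each keep their own marker, which addresses the several-files worry above.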

• Rowsety Moid says:

If I don’t use Git, however, then I don’t need to know about or understand such issues; and there’s a long list of other things I don’t need to learn either.

So for me there are significant costs to using Git, and the benefits don’t seem all that great for something like a one-person project. I accept that there are significant benefits to using Git — or one of the alternatives — for large, collaborative projects, and perhaps if someone is regularly using Git anyway, they may find it easy to use it for everything. But if I’m not in one of those situations?

When I’m working on a one-person project, I keep a log file with dated entries that describe the changes, the reasons for them, problems I’ve found, ideas, and so on. I don’t need to break up my notes into separate “commit messages” or figure out what to do with the parts that wouldn’t go with any “commit”. I keep copies of significant versions. I can use ‘diff’ when I want to compare two versions (which isn’t very often). If I need a more fine grained record of past versions, there’s Time Machine.

So what would Git give me that would be worth the cost of changing how I work to fit with what Git requires, understanding Git well enough to be able to understand and recover from any mistakes I make in how I use it, and learning the complicated, poorly designed command-line interface or a GUI that tries to hide the complications?

In any case, if I started using Git in a very simple way, I would soon want to do more. I would want to create branches, and one of the big selling points of Git is supposed to be the way it does branches and merging. But I would find Git’s way awkward and annoying more often than not. I want to have different branches in different directories, not a system that changes what’s in my current directory depending on which branch I’ve made current. I don’t want to have to put things in a “stash”, and I don’t even want to have to designate a branch as the current one, rather than being able to work on different branches simultaneously. (I know it’s possible to use Git in a separate-directory way, but that would be working around an approach I don’t want in the first place.)

So if I were going to start using a revision control system, I would pick Mercurial or Bazaar ahead of Git. They’re a better fit to the ways I’d want to work, and they seem to be much easier to learn and to use.

• In response to Rowsety Moid, 04 October at 1641, I agree that Git may well not be for everyone. I only meant to encourage those who have not tried it to play with it and see how they find it. And recommendations for alternatives to try, such as Mercurial and Bazaar as mentioned by Rowsety, are always welcome.

I also agree that it is perfectly possible to devise a system of folders and a routine for moving versions to archives that will give you comparable facilities. It sounds like Rowsety already has one, and there is another I recently wrote up here: http://www.rbphilo.com/authors.html

For anyone puzzled by my reference to time travel, Peter has now amended the date from 25 to 2 October.

• I agree that the Mercurial interface is well-designed, while Git’s interface… wasn’t designed at all initially, and evolved slowly to something that you can use after a steep learning curve. (I use git anyway for social reasons — I had to learn it well for shared projects, and GitHub is ages ahead of its competitors).

Apart from that, almost all your complaints have some merit, but apply equally to Mercurial (except maybe stashing).

But at least once you know Git, I think it requires far less discipline than copying files by hand. I know few developers who don’t get screwed up with that; it’s worse if you experiment with changes, test them and want to go back to some version (which you maybe don’t do on text).

One correction though: even bad VCSs, and especially good ones (say, Mercurial), are orders of magnitude more robust than the alternatives, starting from two-person projects. If you collaborate on a project closely by sending files by email, you’re usually in for a huge amount of pain. (Judging from CS students who haven’t learned Git — I know of none who don’t end up regretting it.)

Separating changes into logically distinct commits is not a Git requirement, it’s just considered good practice, for reasons that make more sense for software projects — you can put what you’d put in your log file there, and use tags instead of “copies of significant revisions”.

I don’t see the difference between “different branches in different directories with Git/Mercurial” and a system designed for that. (Well, except that SVN encodes branching through directories, *internally*, which turns out to be a poor technical choice, and which they didn’t execute well until well after Git made SVN irrelevant).

Disclaimer: I never mentioned Bazaar because I never used it, and I’m a CS guy; learning Git was still painful, but probably learning Mercurial was easier for me than for a non-CS guy.

• Rowsety Moid says:

Git and similar systems are the first version control systems that have had any appeal to me, but people who like version control systems seemed to like them even back when they were things like SCCS and RCS.

Although I can think of situations where I would find a VCS invaluable, I have never actually been in one. For the things I’ve actually done, I’ve always felt a VCS would be more trouble than it was worth, and those things include moderately large, small group (2-4 people) software projects. (An example of “moderately large” would be 100s of Java files.)

I don’t find it difficult to experiment with changes, test them, and then go back if I run into trouble or don’t like the results.

On some specific points:

If I use tags instead of copies, I immediately run into the problem that there are two kinds of tags in Git. I’d have to learn how tags work and the commands needed to use them. I already know how to use files and how to write scripts that help me use files.

Re branching, I find Mercurial a bit confusing there, because it seems there are at least three different ways to do branching. That discourages me. However, it appears that Git now allows multiple “working trees”, which looks like it might be close enough to what I want, though I’d have to learn a lot more about them to be sure.
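For what it’s worth, the multiple-working-trees feature mentioned above is a single command. A sketch (`git worktree` has been in Git since version 2.5; the branch and directory names below are made up):

```shell
#!/bin/sh
# Sketch: two branches checked out in two directories at once,
# via "git worktree" (Git >= 2.5). All names are illustrative.
set -e
base=$(mktemp -d)
mkdir "$base/main"
cd "$base/main"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "shared text" > notes.txt
git add notes.txt
git commit -qm "initial"

# Create a new branch "experiment", checked out in its own
# directory; the current directory's checkout is left untouched:
git worktree add ../experiment -b experiment

# Both directories can now be edited side by side:
ls ../experiment
```

Commits made in either directory go into the same underlying repository, so this is close to the “different branches in different directories” way of working, without a second clone.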

I think my comment about my log / notes vs “commit messages” may have been too brief to be sufficiently clear. My notes include things that would make sense as commit messages, but they also include things that wouldn’t; and breaking the notes up into commit messages would make them less useful to me.

• Sorry, I’m confused by your note. I agree that to compare two versions, you had better compare those two versions (git diff v1..v2), but I’m less sure about the rest.

Maybe your scenario is that you often want to review what you changed since your last review. For that scenario, I’d recommend working on an “unreviewed” branch (often called “develop”); you can update the master branch with those changes only after you reviewed them, so that your review command is always “git diff master..unreviewed”. To keep things simple (and avoid merges), I’d recommend only committing on unreviewed. (Merges on text are much more painful than on software, because text is often not modular, and because you can’t automate rereading the result).

There are two complexities:
– of course, you still need a couple commands for other actions in the workflow
– what I’ve just described is a Git development workflow, and designing one isn’t trivial. Most existing workflows are designed for software, not text, and text and programs usually have different properties. I use tons of branches for software, and only master for papers, because as mentioned merging text fundamentally doesn’t work, especially for the text quality and cohesion that is required from high-quality papers. This, BTW, seems a useful reason to have a distinct “Git for philosophers” guide.
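A minimal sketch of the “unreviewed” workflow described above (file and commit names are illustrative; `symbolic-ref` is used only so the example doesn’t depend on Git’s default initial branch name):

```shell
#!/bin/sh
# Sketch of the workflow above: day-to-day commits go on an
# "unreviewed" branch; master advances only after a review.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master   # pin the initial branch name
git config user.email "you@example.com"
git config user.name "You"
echo "chapter draft" > ch1.txt
git add ch1.txt
git commit -qm "reviewed baseline"

git checkout -qb unreviewed
echo "Monday work" >> ch1.txt
git commit -qam "Monday"
echo "Tuesday work" >> ch1.txt
git commit -qam "Tuesday"

# The review command never changes:
git diff master..unreviewed

# After reviewing, fold the changes into master. --ff-only keeps
# this a pointer move, so no merge commit (and no merge conflict)
# can arise, matching the "avoid merges" advice above:
git checkout -q master
git merge -q --ff-only unreviewed
```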

• Sorry if my comment was confusing (I assume you meant mine). The first thing to say is that I was thinking entirely of text, not software. What I was envisaging was something like this:

I am nearly at the end of a piece of work, and making corrections. Every so often I see how I am getting on. But I don’t want to re-check the whole piece every time. So I need a system to check all the changes that have been made, to pick up both the mistakes (as when I deleted the wrong word or corrected a spelling in the wrong way) and the complete accidents where I never meant to make any change at all, but just hit a key at the wrong moment.

I commit twice a day, morning and afternoon, starting on Monday morning, but don’t stop to check all my changes every time.

On Tuesday afternoon, I commit and then compare the version at that time with the version as it existed on Sunday evening. That picks up all the Monday and Tuesday changes, and any amendments I make then will be presented for reconsideration next time.

Just after the Thursday morning commit, I do another comparison. This is where the risk arises. I had better not compare with the most recent commit (Wednesday afternoon), or the one before it. I must make sure that I go back to the version at the time of the Tuesday afternoon commit. I might handle that with a commit message on Tuesday flagging the version to use. But if I have several chapters running at once, in separate files, and did not do comparisons on all of them on Tuesday afternoon, it might be that for some chapters I would need to go back to some much earlier version.

As you say, there are systems and there is scope for a bit of discipline, like separate commits for separate files. But if the system doesn’t make you do it, it is easy to slip into bad habits.

• Rowsety Moid says:

Re Git workflow and also using branches — I thought this page looked very useful: http://nvie.com/posts/a-successful-git-branching-model/

For a small project, it wouldn’t all be needed, and it can be simplified, but it looks like a good structure to use as a model. Note the part about using the “–no-ff” option with “git merge”. I would often want to do that, if I were using Git branches, and I’m not sure I’d ever have spotted that option, or realised its significance, by myself.
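For anyone wondering what “–no-ff” actually changes, a small sketch: when the branch being merged is strictly ahead, a plain merge just moves the pointer (a “fast-forward”), leaving no trace in history that the branch ever existed; `--no-ff` forces an explicit merge commit instead. Branch and file names here are made up:

```shell
#!/bin/sh
# Sketch: --no-ff forces a merge commit even when a fast-forward
# would be possible, so the branch stays visible in history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email "you@example.com"
git config user.name "You"
echo "base" > f.txt
git add f.txt
git commit -qm "base"

git checkout -qb feature
echo "feature work" >> f.txt
git commit -qam "feature work"

git checkout -q master
# master has not moved, so "git merge feature" alone would just
# fast-forward. --no-ff records an explicit merge commit instead:
git merge -q --no-ff -m "Merge branch 'feature'" feature

# History now shows the branch structure:
git log --oneline --graph
```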