First model theory seminar today. We were just limbering up, reading the first chapter and a bit into the second chapter. It fell to me to try to say something to introduce the reading — difficult, as nothing very exciting is going on yet: the interesting stuff starts next week. So I offered a few arm-waving thoughts about the way Hodges defines ‘structure’ and about his (and other model theorists’) stretched use of ‘language’. Here is what I said.
[Later: In the light of comments, I’ve slightly revised the intro piece to make what I was saying clearer at a few points. And of course, I was struggling a bit to find anything thought-provoking to say about the very introductory opening pages of Hodges’s book! — so I’m no doubt slightly over-stating the points too.]
Many thanks for all that. Well, I confess I was struggling just a bit to find anything even slightly interesting/thought-provoking to say by way of introducing the opening pages of Hodges’s book (it seemed too much to ask people to read over 70 pages in one go, so we didn’t get past the really elementary stuff for the first session). So I was probably over-stating the case, and it is good to have this counterbalance! So thanks again.
Thanks for the reply!
A few things, taken in a somewhat different order:
“Sure, Hodges gives some reasons — though they don’t seem overwhelming to me.”
I don’t think there’ll be overwhelming reasons for any way of doing it, but Hodges says in effect that he’s telling us how model theorists think: “Change the name and you change the structure” — as opposed to the way group theorists normally see things, for example.
I think you’ve raised interesting issues, but I also think your discussion ends up making Hodges’s approach seem more idiosyncratic than it is.
I’ve found it interesting to look at some of the other texts and, since I’ve now done that, I’ll make a few remarks.
There’s been some evolution of terminology, and Hodges’s “structures” are what often used to be called just “models”. I looked in Chang and Keisler, and even though this was the third edition (1990), I couldn’t find “structure” in the index (though they did use the word at times). However, their models were the same as Hodges’s structures: a domain plus an interpretation that maps relation symbols to relations and so on.
In some other, presumably more “modern” texts, the equivalent is “L-structures”, though some use a different letter than “L”. Some texts, Marker for example, start with pure structures, calling them “structures”, then move to L-structures; but they still end up there pretty quickly.
In effect, Hodges’s “structures” are L-structures from the start; they just aren’t called that right away. Enderton (2nd ed) also just says “structure”.
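(For concreteness, here is a minimal sketch of the definition these books share, in my notation rather than any one book’s. A signature L supplies constant, relation, and function symbols, and an L-structure is then

\[
\mathfrak{A} \;=\; \langle\, \mathrm{dom}(\mathfrak{A});\ (c^{\mathfrak{A}})_{c \in L},\ (R^{\mathfrak{A}})_{R \in L},\ (f^{\mathfrak{A}})_{f \in L} \,\rangle
\]

where each \(c^{\mathfrak{A}} \in \mathrm{dom}(\mathfrak{A})\), each n-ary \(R^{\mathfrak{A}} \subseteq \mathrm{dom}(\mathfrak{A})^n\), and each n-ary \(f^{\mathfrak{A}} : \mathrm{dom}(\mathfrak{A})^n \to \mathrm{dom}(\mathfrak{A})\).)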
Bell and Slomson (Models and Ultraproducts), which dates from the late 1960s, goes further with the other approach: starting with “relational structures” and then relating them to languages. They say the relational structure is a “realisation” of a suitable language and that the language is “appropriate” for the structure.
(BTW, it would be nice if Dover would reprint Sacks’s Saturated Model Theory.)
__________
“Well, just what content is left to the notion of ‘language’ in this idealized sense, I wonder!”
It at least still has syntax and semantics and is structured in a compositional way.
I think that there’s also a similar “idealisation” issue for pure structures. Consider an L-structure for an L that has a transfinite number of function symbols, for example. The corresponding pure structure will have the same transfinite number of distinguished functions. It’s quite an idealisation, imo, to regard them as distinguishable.
Then, to talk about those functions, we might say F_i, where i ranges over the members of a transfinite index set I. Those i are not all that different from names.
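In symbols (my notation, not Hodges’s, and taking the function symbols unary for simplicity): the signature might have function symbols

\[
\{\, F_i : i \in I \,\}, \qquad I \text{ transfinite, say } |I| = \aleph_1,
\]

and an interpretation then supplies a function \(F_i^{\mathfrak{A}} : \mathrm{dom}(\mathfrak{A}) \to \mathrm{dom}(\mathfrak{A})\) for each \(i \in I\); the subscript i is already doing the individuating work of a name.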
__________
“I’m not sure what is being said here — though it sounds a bit alarmingly like the suggestion that we should be careless about the distinction between names and what they name! I’m not sure that that can ever be the clear way to go!”
I was just thinking that when people write a description of a structure, they tend to use “+”, for example, to refer to the addition function even if “+” isn’t meant to be a name in that place. So it may work better to have names in the picture from the start.
Hodges does have the distinction, of course, but it’s done by having c, for instance, be a name – a constant, let’s say – and c^A (c with A as a superscript) being the element of the domain of A that’s designated by c. (Which seems reasonably standard.)
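Schematically, then (my gloss of the standard convention, not a quotation):

\[
c \in L \ \text{(a mere symbol)}, \qquad c^{\mathfrak{A}} \in \mathrm{dom}(\mathfrak{A}) \ \text{(the element it names)},
\]

so the name/named distinction is carried entirely by the superscript.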
…
BTW, one thing Hodges does that’s unusual (and that I don’t think you’ve discussed yet) is to allow domains to be empty. It’s never been clear to me why the nonempty restriction is usually made. So I’m hoping you’ll say something about that at some point.
“I think he explains the reasons for the way he’s presenting things pretty well at the start of chapter 1.” Sure, Hodges gives some reasons — though they don’t seem overwhelming to me.
“Then when Hodges defines structures in section 1.1, the distinguished entities (elements, relations, etc) do have names, but the entities needn’t be naming themselves, and the number of distinguished entities is no greater than the number of symbols in the language, which seems reasonable. There can still be more elements and so on that aren’t named.” Indeed: and I didn’t say that in Hodges’s labelled structures elements do have to self-name. I only made the point that allowing self-naming is what ensures that for every pure structure there is a labelled structure (or rather, lots of them).
“It seems to me that what most model theory texts are primarily defining is L-structures in which there’s an underlying set plus mappings from constants, relation symbols, etc, to entities. Some talk briefly about structures in which things aren’t (or at least we aren’t told they are) named; but then Hodges also talks of ‘mathematical objects’ and says that an object can be interpreted as a structure in several different ways. The difference may just be terminological.” Well, yes, we are in the business of fixing terminology: I was just stressing that in building labelling into the structures themselves we mustn’t lose sight of what we are doing here in yoking together a pure structure and a way of referring to its parts.
“When people talk about structures, they typically name the distinguished entities, even if they are names like R_i and similar. Of course, sometimes we’re not supposed to think of some of the names as names. Some books try to distinguish, such as by using different fonts, the names that are names from the ones that we’re meant to think of as the entities named. Hodges’s approach may well be simpler or clearer.” I’m not sure what is being said here — though it sounds a bit alarmingly like the suggestion that we should be careless about the distinction between names and what they name! I’m not sure that that can ever be the clear way to go!
“In any case, if we accept that there’s a transfinite number of objects in a structure’s domain, what’s the problem with having the same transfinite number of constants, especially if the constants are the very same objects? How might it be an idealisation too far? Is it just the term ‘language’ that’s making it seem questionable?” Well, just what content is left to the notion of “language” in this idealized sense, I wonder!
“One minor thing: I can’t work out what you mean when saying, in your next to last paragraph, that given an atomic fact, a corresponding atomic sentence will be true (ok so far) if it exists.” That was badly put, sorry: I was just making the trivial point that if there is such an atomic sentence as Pa then it is made true by a corresponding fact.
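In the usual notation, the trivial point is just this (my sketch, not a quotation from Hodges):

\[
\mathfrak{A} \models Pa \iff a^{\mathfrak{A}} \in P^{\mathfrak{A}},
\]

so if the atomic sentence Pa exists in the language, the composite of the element \(a^{\mathfrak{A}}\) and the property \(P^{\mathfrak{A}}\) settles its truth.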
“I also think, btw, that talk of facts is often unclear. What exactly is a fact? Introducing them into the ontology of an explanation (so to speak) is a risky move.” I take the general point — but in this context we have entities that can serve as facts — e.g. the entity constituted by an element a and property P, the very composite that Hodges calls a ‘sentence’ (when the element and property are self-labelling) — so I am certainly not engaged in anything more ontologically mysterious than he is!
Thanks for the comments, and when I have a moment, I’ll rewrite bits of what I posted.
I think your comments about “Hodges-structures” may make things sound weirder than they are.
I think he explains the reasons for the way he’s presenting things pretty well at the start of chapter 1.
Then when Hodges defines structures in section 1.1, the distinguished entities (elements, relations, etc) do have names, but the entities needn’t be naming themselves, and the number of distinguished entities is no greater than the number of symbols in the language, which seems reasonable. There can still be more elements and so on that aren’t named.
It seems to me that what most model theory texts are primarily defining is L-structures in which there’s an underlying set plus mappings from constants, relation symbols, etc, to entities. Some talk briefly about structures in which things aren’t (or at least we aren’t told they are) named; but then Hodges also talks of “mathematical objects” and says that an object can be interpreted as a structure in several different ways. The difference may just be terminological.
When people talk about structures, they typically name the distinguished entities, even if they are names like R_i and similar. Of course, sometimes we’re not supposed to think of some of the names as names. Some books try to distinguish, such as by using different fonts, the names that are names from the ones that we’re meant to think of as the entities named. Hodges’s approach may well be simpler or clearer.
In any case, if we accept that there’s a transfinite number of objects in a structure’s domain, what’s the problem with having the same transfinite number of constants, especially if the constants are the very same objects? How might it be an idealisation too far? Is it just the term “language” that’s making it seem questionable?
One minor thing: I can’t work out what you mean when saying, in your next to last paragraph, that given an atomic fact, a corresponding atomic sentence will be true (ok so far) if it exists.
I also think, btw, that talk of facts is often unclear. What exactly is a fact? Introducing them into the ontology of an explanation (so to speak) is a risky move. :)
Sorry if “potentially infinite” was distracting (though a common usage): I just meant no upper limit on length. Indeed, as you say, no computation in the usual sense is actually infinite.
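In symbols, all I meant was this (where c ranges over computations and len is my ad hoc notation for the length of a computation):

\[
\forall n \in \mathbb{N}\ \exists c\ (\mathrm{len}(c) > n), \qquad \text{yet} \qquad \forall c\ (\mathrm{len}(c) \in \mathbb{N}),
\]

i.e. the lengths are unbounded, though every computation is itself finite.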
You write:
“the informal idea of computation to remove any arbitrary finite ceiling on the length of computations (treat computations as potentially infinite)”
Isn’t your parenthetical remark somewhat inadequate? According to the idea stated, no computation is infinite, nor could it be (though it could be of an arbitrary finite length). So why say computation is “potentially infinite”?
(I am not much of a logic buff, so maybe I am missing something very basic…)
“Still, there are liberal idealizations and liberal idealizations.” (the first sentence of page 3)
I suppose it reads roughly like “…liberal idealizations and ‘transliberal’ idealizations”?