# And who is for 0-ary function expressions?

In defining a first order syntax, there’s a choice-point at which we can go two ways.

Option (A): we introduce a class of sentence letters (as it might be, $latex A, A', A'', \ldots$) together with a class of predicate letters for different arities $latex n > 0$ (as it might be $latex P_1, P'_1, P''_1, \ldots$, $latex P_2, P'_2, P''_2, \ldots$, $latex P_3, P'_3, P''_3, \ldots$). The rule for atomic wffs is then that any sentence letter is a wff, as also is an $latex n$-ary predicate letter $latex P_n$ followed by $latex n$ terms.

Option (B): we just have a class of predicate letters for each arity $latex n \geq 0$ (as it might be $latex P_0, P'_0, P''_0, \ldots, P_1, P'_1, P''_1, \ldots, P_2, P'_2, P''_2, \ldots$). The rule for atomic wffs is then that any $latex n$-ary predicate letter followed by $latex n$ terms is a wff.

What’s to choose? In terms of resulting syntax, next to nothing. On option (B) the expressions which serve as unstructured atomic sentences are decorated with subscripted zeros, on option (A) they aren’t. Big deal. But option (B) is otherwise that bit tidier. One syntactic category, predicate letters, rather than two categories, sentence letters and predicate letters: one simpler rule. So if we have a penchant for mathematical neatness, that will encourage us to take option (B).
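The two formation rules can be sketched as wellformedness checks. This is a minimal sketch under assumed representations of my own devising (the class names and functions here are hypothetical, not anyone's official definition): option (A) needs two syntactic categories and a two-clause rule, option (B) collapses them into one.

```python
from dataclasses import dataclass

# Option (A): two categories of basic expression.
@dataclass
class SentenceLetter:          # e.g. A, A', A'' -- a wff all by itself
    name: str

@dataclass
class PredicateApplication:    # an n-ary predicate letter plus its terms
    predicate: str
    arity: int
    args: list                 # terms, here just represented as strings

def is_atomic_wff_a(expr) -> bool:
    """Option (A): a sentence letter is a wff; so is an n-ary predicate
    letter (n > 0) followed by exactly n terms. Two clauses."""
    if isinstance(expr, SentenceLetter):
        return True
    if isinstance(expr, PredicateApplication):
        return expr.arity > 0 and len(expr.args) == expr.arity
    return False

def is_atomic_wff_b(predicate: str, arity: int, args: list) -> bool:
    """Option (B): any n-ary predicate letter (n >= 0) followed by exactly
    n terms is a wff. One clause; arity 0 recovers the sentence letters."""
    return arity >= 0 and len(args) == arity
```

On option (B), a sentence letter is just the special case `is_atomic_wff_b("P", 0, [])`; on option (A), the 0-ary case is not even grammatical, which is the tidiness trade-off at issue.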

However, philosophically (or, if you like, conceptually) option (B) might well be thought to be unwelcome. At least by the many of us who follow Great-uncle Frege. For us, there is a very deep difference between sentences, which express complete thoughts, and sub-sentential expressions which get their content from the way they contribute to fix the content of the sentences in which they feature. Wittgenstein’s Tractatus 3.3 makes the Fregean point in characteristically gnomic form: ‘Only the proposition has sense; only in the context of a proposition has a name [or predicate] meaning’.

Now, in building the artificial languages of logic, we are aiming for ‘logically perfect’ languages which mark deep semantic differences in their syntax. Thus, in a first-order language we most certainly think we should mark in our syntax the deep semantic difference between quantifiers (playing the role of e.g. “no one” in the vernacular) and terms (playing the role of “Nemo”, which in the vernacular can usually be substituted for “no one” salve congruitate, even if not always so as myth would have it). Likewise, we should mark in syntax the difference between a sentence (apt to express a stand-alone Gedanke) and a predicate (which taken alone expresses no complete thought, but whose sense is fixed in fixing how it contributes to the sense of the complete sentences in which it appears). Option (B) doesn’t quite gloss over the distinction — after all, there’s still the difference between having subscript zero and having some other subscript. However, this doesn’t exactly point up the key distinction, but rather minimises it, and for that reason taking option (B) is arguably to be deprecated.

It is pretty common, though, to officially set up first-order syntax without primitive sentence letters at all, so the choice of options doesn’t arise. Look at Mendelson or Enderton for classic examples. (I wonder if they ever asked their students to formalise an argument involving e.g. ‘If it is raining, then everyone will go home’?) Still, there’s an analogous issue on which a choice is made in all the textbooks. For in defining a first-order syntax, there’s another forking path.

Option (C): we introduce a class of constants (as it might be, $latex a, a', a'', \ldots$); we also have a class containing function letters for each arity $latex n > 0$ (as it might be $latex f_1, f'_1, f''_1, \ldots$, $latex f_2, f'_2, f''_2, \ldots, f_3, f'_3, f''_3, \ldots$). The rule for terms is then that any constant is a term, as also is an $latex n$-ary function letter followed by $latex n$ terms for $latex n > 0$.

Option (D): we only have a class of function letters for each arity $latex n \geq 0$ (as it might be $latex f_0, f'_0, f''_0, \ldots, f_1, f'_1, f''_1, \ldots, f_2, f'_2, f''_2, \ldots$). The rule for terms is then that any $latex n$-ary function letter followed by $latex n$ terms is a term, for $latex n \geq 0$.

What’s to choose? In terms of resulting syntax, again next to nothing. On option (D) the expressions which serve as unstructured terms are decorated with subscripted zeros, on option (C) they aren’t. Big deal. But option (D) is otherwise that bit tidier. One syntactic category, function letters, rather than two categories, constants and function letters: one simpler rule. So mathematical neatness encourages many authors to take option (D).
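Option (D)'s one-clause rule for terms is easy to render as a recursive check. Again a minimal sketch under an assumed representation of my own (the signature and pair encoding are hypothetical): a term is a function letter paired with its argument list, and a ‘constant’ is nothing but the 0-ary case.

```python
# A hypothetical signature assigning each function letter its arity;
# 'a' plays the role of a constant, i.e. a 0-ary function letter.
ARITIES = {"a": 0, "f": 1, "g": 2}

def is_term_d(term) -> bool:
    """Option (D): a pair (letter, args) is a term iff the letter's
    declared arity equals len(args) and each argument is itself a term.
    One clause covers constants and complex terms alike."""
    letter, args = term
    return ARITIES.get(letter, -1) == len(args) and all(is_term_d(t) for t in args)
```

So the constant $latex a$ becomes `("a", [])`, and a complex term like $latex g(a, f(a))$ becomes `("g", [("a", []), ("f", [("a", [])])])`; the single recursion never needs a separate base clause for constants, which is exactly the tidiness that tempts authors towards (D).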

But again, we might wonder about the conceptual attractiveness of this option: does it really chime with the aim of constructing a logically perfect language where deep semantic differences are reflected in syntax? Arguably not. Isn’t there, as Great-uncle Frege would insist, a very deep difference between directly referring to an object $latex a$ and calling a function $latex f$ (whose application to one or more objects then takes us to some object as value)? So again, shouldn’t a logically perfect notation sharply mark the difference in the devices it introduces for referring to objects and calling functions respectively? Option (D), however, downplays the very distinction we should want to highlight. True, there’s still the difference between having subscript zero and having some other subscript. However, this again surely minimises a distinction that a logically perfect language should aim to highlight. That seems a good enough reason to me for deprecating option (D).

### 8 thoughts on “And who is for 0-ary function expressions?”

1. Curiously, I came to the exact opposite conclusion, but for the same Fregean reasons.

The difference between a sentence and a predicate (or a name and a functor) is that the former stands for something saturated, has a complete sense, and stands for a thought (or object). It needs nothing further — and what conveys that sense better than a ‘0’?

No, the real mistake is saying that ‘$latex P_1$’ is a predicate. Frege would deprecate any language which calls something a predicate (or a functor) when the unsaturated nature of its denotation is left to be indicated by a positive subscript. Rather, we should say ‘$latex P_1\xi$’ is a predicate or ‘$latex f_2\xi\zeta$’ is a function.

I think this helps to preserve at the level of syntax the thought that an n-ary predicate combined with n names expresses a thought every bit as much as an atomic sentential constant.

1. That’s interesting, Brian. Though I doubt that we disagree about the point underlying my grumbling — i.e. surface slickness of notation is one thing, perspicuously tracking ‘logical form’ (or underlying semantic structure, if you prefer) is something else. I guess we can in the end take different views about what’s ideally perspicuous …

2. Another argument for deprecating (D), it seems to me, is the following, at least on a view that reduces functions to their graphs, and their graphs to sets of n-tuples. On such a view, a 1-place function is a set of ordered pairs; but then a 0-place function is a set of 1-tuples, which is, on anybody’s account, an object of type higher than 0. We might of course harmlessly identify $latex a$ and $latex \langle a \rangle$, but in so doing we would lose an important conceptual distinction.
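The commenter's point can be made concrete with a toy example (the particular object and names here are hypothetical, purely for illustration): on the graph-reduction view, an n-place function is a set of (n+1)-tuples, so a 0-place function picking out $latex a$ is the singleton set containing the 1-tuple $latex \langle a \rangle$, which is not the object $latex a$ itself.

```python
a = 42            # some object, type 0
f0 = {(a,)}       # the 0-place function's graph: a set of one 1-tuple

# The constant-as-0-ary-function is a higher-type object, distinct from
# the object it picks out, even though it determines that object uniquely.
assert f0 != a
assert next(iter(f0))[0] == a
```

Identifying `a` with `(a,)` (and so with `{(a,)}`) would restore option (D)'s tidiness, but only by legislating away the type distinction the comment is pointing to.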

1. Yep, it is a complete mystery to me why LaTeX works after a fashion in posts (with base-line shifting which isn’t very pretty) but not in comments. I’ll ask on the host’s forum! [Later: Ah — got LaTeX working in comments, and improved baseline shift. Memo to self: RTFM!]

But yes, I like the point!!

1. May I object to the ‘on anybody’s account’? It’s only if we view a tuple as a kind of set that we think of a 1-tuple as anything other than an individual. Not that I’m happy reducing functions to graphs, mind, but if I were, I’d want them reduced in such a way that the tuples weren’t defined in anything like Kuratowski’s way.

3. Ah! But of course “the deep semantic difference between quantifiers … and terms” disappears, e.g., in Montague grammar, where terms are a particular kind of quantifier (the so-called Montagovian individuals). It seems to me that this gives us a deep insight into the way (or perhaps a way) language works.

1. I knew someone would bring up Montague! Well, let’s not argue the case, though I guess I’m more of a Fregean traditionalist. Because in a way, the example makes my point.

If, contra Frege, you do think names and quantifiers semantically belong together in some deep sense, then — in so far as you seek a perspicuous logical language where syntactic typing goes with semantic typing — you presumably will want your logical grammar to be different from the Fregean’s grammar, which sharply demarcates names and quantifiers in the syntax. Similarly, if for some reason you do want to think, strongly contra Frege again, of using a name and calling a function as semantically belonging together in some deep sense, then to be sure you’ll positively want to treat names and function expressions as belonging to the same basic syntactic type. And if you are Fregean you certainly won’t. I’m just moaning about letting the syntactic choice be driven by mere mathematical convenience, given we also want our formal languages to aim for ‘logical perfection’, i.e. for syntax to track semantics.
