Two-place functions aren’t one-place functions, are they?

Here’s a small niggle that’s arisen in rewriting a very early chapter of my Gödel book, and also in reading a couple of terrific blog posts by Tim Gowers (here and here).

We can explicitly indicate that we are dealing with e.g. a one-place total function from natural numbers to natural numbers by using the standard notation for giving domain and codomain thus: f\colon\mathbb{N}\to\mathbb{N}. What about two-place total functions from numbers to numbers, like addition or multiplication?

“Easy-peasy, we indicate them thus: f\colon\mathbb{N}^2\to\mathbb{N}.”

But hold on! \mathbb{N}^2 is standard shorthand for \mathbb{N}\times \mathbb{N}, the cartesian product of \mathbb{N} with itself, i.e. the set of ordered pairs of numbers: and an ordered pair is standardly regarded as one thing with two members, not two things. So a function from \mathbb{N}^2 to \mathbb{N} is in fact a one-place function that maps one argument, an ordered pair object, to a value, not (as we wanted) a two-place function mapping two arguments to a value.

“Ah, don’t be so pernickety! Given two objects, we can find a pair-object that codes for them, and we can without loss trade in a function from two objects to a value to a related function from the corresponding pair-object to the same value.”

Yes, sure, we can eventually do that. And standard notational choices can make the trade invisible. For suppose we use `(m, n)' as our notation for the ordered pair of m with n: then `f(m, n)' can be parsed either way, as representing a two-place function with arguments m and n, or as a corresponding one-place function with the single argument (m, n). But the fact that the trade between the two-place and the one-place function is glossed over doesn’t mean that it isn’t being made. And the fact that the trade can be made (even staying within arithmetic, using a pairing function) is a result, and not quite a triviality. So if we are doing things from scratch — including proving that there is a pairing function that matches two things with one thing in such a way that we can then extract the two objects we started with — then we do need to talk about two-place functions, no? For example, in arithmetic, we show how to construct a pairing function from the ordinary school-room two-place addition and multiplication functions, not from some surrogate one-place functions!
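To make the point concrete, a variant of Cantor’s pairing function does the job using nothing beyond school-room addition and multiplication. Here is a minimal Haskell sketch (the function names are my own, and `isqrt` naively goes via floating point, which is fine for modest numbers):

```haskell
-- A variant of Cantor's pairing function, built only from two-place
-- addition and multiplication: it enumerates pairs along the diagonals
-- of the N x N grid.
pair :: Integer -> Integer -> Integer
pair m n = (m + n) * (m + n + 1) `div` 2 + m

-- The inverse recovers both arguments from the single code number,
-- witnessing that the trade-in loses no information.
unpair :: Integer -> (Integer, Integer)
unpair z = (m, n)
  where
    w = (isqrt (8 * z + 1) - 1) `div` 2   -- the diagonal index m + n
    t = w * (w + 1) `div` 2               -- code of the diagonal's first pair
    m = z - t
    n = w - m
    isqrt = floor . sqrt . fromInteger    -- naive integer square root
```

The definition of `pair` is given in terms of the two-place functions `+` and `*` themselves, which is just the point at issue: the construction presupposes the school-room two-place functions rather than replacing them.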

So what should be our canonical way of indicating the domains (plural) and codomain of e.g. a two-place numerical function? An obvious candidate notation is f\colon\mathbb{N}, \mathbb{N} \to\mathbb{N}. But I haven’t found this used, nor anything else.

Assuming it’s not the case that I (and one or two mathmos I’ve asked) have just missed a widespread usage, this raises the question: why is there this notational gap?


8 Responses to Two-place functions aren’t one-place functions, are they?

  1. Aldo Antonelli says:

    Some people might argue that there is no notational gap, and that in fact currying provides the preferred way to represent functions of more than one argument, at the cost of an escalation in type (but if you take the universe to be type-free then there is no escalation at all). Would you count a curried function as adequate representation?

  2. Jose Antonio Ortega says:

    In the functional programming arena it’s common to use the notation

    f: N -> N -> N

  3. Aatu Koskensilta says:

    Currying gives us a *representation* of n-ary functions as 1-ary functions. But this doesn’t answer Peter’s question: just what is it that these unary functions are representations /of/?

    On the usual set-theoretic definitions, all functions are unary, and the same is true of the currying approach adopted in e.g. Haskell and ML. This is just one of those little oddities, those peculiar technical artifacts, that result from a set-theoretic reduction of (arguably) more basic (in practice) notions. Now, there are (functional) programming languages where functions are, by default, neither curried nor unary functions taking tuples as arguments (Scala is such a language, if my memory does not play me false), and naturally, or so one would expect at least, there would be some notation for types of truly n-ary functions, as opposed to functions of tuples or functions yielding functions yielding functions, etc.

  4. Peter Smith says:

    Aatu puts it nicely. Sure, we can in some sense trade in a two-place function from two numbers to numbers, of type N, N -> N, for a one-place function from numbers to (one-place functions from numbers to numbers), i.e. a function of type N -> (N -> N). But what is that type-changing dodge supposed to reveal? In what sense does the dodge give us an “adequate representation” of what’s going on? It’s adequate for certain particular formal purposes, no doubt: but is it supposed to reveal what two-place functions are “really”?

    Here’s a revealing quote from A Gentle Introduction to Haskell on the site:

    Consider this definition of a function which adds its two arguments:

    add :: Integer -> Integer -> Integer
    add x y = x + y

    So we have the declaration of type — and we are told how the curried function acts, by what? Appeal to our prior understanding of the familiar school-room two-place function! Ordinary two-place functions are rungs on the ladder by which we climb to an understanding of what’s going on in the likes of Haskell …
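    To make the trade explicit in Haskell’s own terms, here is a minimal sketch (the names `addPair` and `addCurried` are my own) of the same addition in both guises, with the standard Prelude combinators `curry` and `uncurry` mediating between them:

```haskell
-- Addition as a one-place function on an ordered pair ...
addPair :: (Integer, Integer) -> Integer
addPair (x, y) = x + y

-- ... and addition in curried form, as in the Gentle Introduction's example.
addCurried :: Integer -> Integer -> Integer
addCurried x y = x + y

-- The standard combinators convert between the two forms.
-- Note that curry and uncurry are themselves unary higher-order
-- functions: the trade is made, but never outside the unary regime.
addPair' :: (Integer, Integer) -> Integer
addPair' = uncurry addCurried

addCurried' :: Integer -> Integer -> Integer
addCurried' = curry addPair
```

    Each direction of the trade is available, but neither form is privileged by the language itself; both are ultimately explained by appeal to the familiar two-place addition.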

  5. It’s interesting to contrast this situation with the situation of 0-ary functions. When it comes to 0-ary functions, there are two commonly used roads: allow 0-ary functions which take no argument and play the role of constant symbols; or forbid 0-ary functions and introduce constant symbols as wholly separate from function symbols. Of course, these two ways are totally interchangeable, just like f:N^2->N and f:N,N->N are interchangeable. The difference, though, is that in the case of binary functions, as you point out, for some reason society has unambiguously settled on one of the two ways, and the other (the one you bring up) is never used.

    Come to think of it, writing this comment might have made me realize *why* f:N,N->N is never used. If you’re defining first-order syntax, if you wanted, you could introduce “function symbols” and “binary function symbols” as two separate classes of things (just as you can, if you desire, make “constant symbols” separate from “function symbols”). Having done so, where would you draw the line? Would you do the same for ternary function symbols and so on all the way up? The result would be a little unwieldy: “A first-order language is a set of constant symbols together with a set of unary function symbols, together with a set of binary function symbols, together with a set of ternary function symbols, together with … (forever)” And then when you go to define, say, terms, you have infinitely many cases in your definition.

    • Peter Smith says:

      The first point, about 0-ary functions, is interesting: it’s worth a separate blog post, coming shortly!

      I’m not sure about the second point though (so I might be missing something). The naive response is that standard first-order syntax assigns each function symbol an arity and the recursive part of the rule for terms is that a function symbol of arity n followed by n terms is a term. There’s nothing unwieldy about that, even though we are allowing unary, binary, ternary etc. function symbols (i.e. function symbols of arity 1, 2, 3 …). What’s wrong with the naive response?
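      The naive response can itself be sketched in a few lines of Haskell (an illustrative toy, not from any particular textbook): one recursive term-forming clause, with each function symbol carrying an arity, handles unary, binary, ternary … symbols uniformly.

```haskell
-- A toy syntax of first-order terms: a variable is a term, and a
-- function symbol applied to a list of argument terms is a term.
data Term
  = Var String
  | App String [Term]
  deriving (Eq, Show)

-- Well-formedness against an arity assignment: a single clause requires
-- an arity-n symbol to be followed by exactly n terms -- no separate
-- cases for unary, binary, ternary ... symbols.
wellFormed :: (String -> Maybe Int) -> Term -> Bool
wellFormed _     (Var _)    = True
wellFormed arity (App f ts) =
  arity f == Just (length ts) && all (wellFormed arity) ts
```

      Note too that a symbol of arity 0 applied to the empty list of terms falls under the same clause, which bears on the 0-ary-functions-versus-constants point above.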

  6. Brian King says:

    I think the issue here is the reification of ordered tuples, i.e. treating them as a particular kind of set. If we instead treat functions as mappings from classes to classes, then we can use Quine’s virtual class notation to give us ‘f: A×B -> C’, explicitly defined as ∀x∀y((Ax & By) -> ∃z(Cz & ∀w(Fxyw ↔ w = z))), where F is f’s relational counterpart. The problem is that we use the same notation for two different things: to the philosopher a bug, to the mathematician a feature.

  7. a says:

    My two cents. If one considers functions as a special kind of relation, then 2-ary functions are 3-ary relations, and 1-ary functions are 2-ary relations. So in this context the problem arises earlier, with 1-ary functions (which are 2-ary relations). Now, are 2-ary relations just properties of ordered pairs, or relationships between things? While there are certain technical advantages to the former, philosophically the latter seems to have the advantage of naturalness. Extending the reasoning to 3-ary relations, and thus 2-ary functions, one can conclude that it is more natural to say that 2-ary functions have two arguments, and not one ordered-pair argument.
