Most arguments are not arguments

Here’s a strange claim — or rather, something that ought to strike the uncorrupted mind as strange!

An argument consists of a set of declarative sentences (the premisses) and a declarative sentence (the conclusion) marked as the concluded sentence. (Halbach, The Logic Manual)

We are told more or less exactly the same by e.g. Bergmann, Moor and Nelson’s The Logic Book, Tennant’s Natural Logic, and Teller’s A Modern Formal Logic Primer. Benson Mates says the same in Elementary Logic, except he talks of systems rather than sets.

Now isn’t there something odd about this? And no, I’m not fussing about the unnecessary invocation of sets or systems, nor about the assumption that the constituents of arguments are declarative sentences. So let’s consider versions of the definition that drop explicit talk of sentences and sets. What I want to highlight is what Halbach’s definition shares with, say, these modern definitions:

(L)et’s say that an argument is any series of statements in which one (called the conclusion) is meant to follow from, or be supported by, the others (called the premises). (Barker-Plummer, Barwise, Etchemendy, Language, Proof, and Logic)

In our usage, an argument is a sequence of propositions. We call the last proposition in the argument the conclusion: intuitively, we think of it as the claim that we are trying to establish as true through our process of reasoning. The other propositions are premises: intuitively, we think of them as the basis on which we try to establish the conclusion. (Nick Smith, Logic: The Laws of Truth)

And the shared ingredient is there too in e.g. Lemmon’s Beginning Logic, Copi’s Symbolic Logic, Hurley’s Concise Introduction to Logic, and many more.

Still, nothing strikes you as odd?

Well, note that on this sort of definition an argument can only have one inference step. There are premisses, a signalled final conclusion, and nothing else. Which seems to “overlook the fact that arguments are generally made up of a number of steps” (as Shoesmith and Smiley are very unusual in explicitly noting in their Multiple Conclusion Logic). Most real-world arguments have initial given premisses, a final conclusion, and stuff in-between.

In other words, most real-world arguments are not arguments in the textbook sense.

“Yeah, yeah, of course,” you might yawn in reply, “the textbook authors are in the business of tidying up ordinary chat — think how they lay down the law about ‘valid’ and ‘sound’, ‘imply’ and ‘infer’ and so on. So what’s the beef here? Sure they use ‘argument’ for one-step cases, and in due course probably use ‘proof’ for multi-step cases. So what? Where’s the problem?”

Well, there is of course no problem at all about stipulating usage for some term in a logic text when it is clearly signalled that we are recruiting a term which has a prior familiar usage and giving it a new (semi)-technical sense. That’s of course what people explicitly do with e.g. “valid”, which is typically introduced with overt warnings about no longer talking about propositions as valid, as we ordinarily do, and so on. But oddly the logic texts never (almost never? — have I missed some?) seem to give a comparable explicit warning when arguments are being officially restricted to one-step affairs.

In The Argument Sketch, Monty Python know what an argument in the ordinary sense is: “An argument is a connected series of statements intended to establish a proposition.” Nothing about only initial premisses and final conclusions being allowed in that connected series!

So: I wonder how and why the logic texts’ restricted definition of argument, which makes most ordinary arguments no longer count as such, has continued to be propagated with almost no comment. Any suggestions?

14 thoughts on “Most arguments are not arguments”

  1. I think this issue raises a question of what introductions to symbolic logic are for and what the intended readership is. But for now I’ll just list some more data points:

    * Mark Zegarelli’s Logic for Dummies defines “argument” to include intermediate steps, so it can be added to the short list of introductory logic books that use “argument” in the ordinary way.

    * Theodore Sider’s Logic for Philosophy has one sentence that refers to the “premises” and “conclusion” of one argument, but it doesn’t define “argument” or even make much use of the word. In the index, the entry for “argument” is about the arguments of a function.

    * Bell et al, Logical Options, on the other hand, defines an argument to consist of premises and conclusions (more than one conclusion is allowed), which at least doesn’t appear to allow for intermediate steps. (I haven’t looked through the book to see whether they ever use “argument” in a different way.)

    * In Logical Labyrinths, Smullyan doesn’t explicitly define “argument” and seems to use the word primarily for the intermediate steps (for the proof or other reasoning used to argue for the conclusion).

    * Enderton’s A Mathematical Introduction to Logic uses “argument” in a similar way; the word “premise” does not occur in the book at all (he uses “hypothesis” instead); and when he gives examples of simple arguments in the introduction, he calls them “deductions”.

    * Chiswell and Hodges, Mathematical Logic, seems to be another example of that sort. For example, on page 26 we find “A close inspection of this argument shows that we prove the theorem by assuming (its negation) and deducing an absurdity. … This form of argument is known as reductio ad absurdum.” Or, on page 29, “There is a well-known proof when n is positive. But this argument will not work when n is negative.”

    * In philosophy, when the word “argument” is used, it isn’t always easy to tell whether intermediate steps are included; but it usually is easy to tell in some areas, such as discussions of free will (of the “consequence argument”, for example) or of arguments for or against the existence of God (“ontological argument”, etc). It seems to me that intermediate steps normally are included.

    1. Critical thinking texts are another category that tends to use the 1-step definition. (I haven’t so far spotted an exception.)

      So it’s primarily introductory texts of various sorts that use that definition. More advanced texts, and mathematical logic texts, may not use “argument” in any significant way at all or may even use “argument” primarily for the intermediate steps.

      There’s something odd about this, because it’s the introductory texts that are focused much more on applications to ordinary arguments in natural language. Why do so many of them adopt a definition of “argument” that’s so at odds with ordinary usage?

      It can’t be as preparation for more advanced or mathematical logic texts, since they don’t use that definition; it isn’t even how “argument” is generally used in philosophy. I suppose it is simpler, at some points, to define “argument” just as premises plus a conclusion, but it’s not a lot simpler, and it doesn’t fit the many arguments “in the wild” that have intermediate steps.

      I have a similar question about using trees. Why do introductory texts use trees? I don’t think I’ve ever seen an argument in a maths text or in ordinary works of philosophy or politics or anything else that was presented using trees. Wouldn’t it make much more sense to use natural deduction?

  2. I suspect it’s done for reasons of simplicity.

    Many logical properties can be defined in terms of satisfiability. (A tautology is satisfied by every valuation, a contradiction by none, an argument is valid when the premises together with the negation of the conclusion are unsatisfiable, and so on.) And if we’re introducing these logical properties to students in this way, then the intermediate steps of the argument don’t matter. What does matter is the initial set of premises, along with the conclusion, and we don’t need to take notice of anything else, as the toy check sketched below illustrates.

    (In a way, this merely rephrases David Auerbach’s comment, since trees are particularly suited for proving things via satisfiability.)
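
    To make this concrete, here is a minimal sketch of that way of checking validity (my own illustration in Python; none of the texts mentioned present it this way): brute-force the truth table and test whether the premises together with the negated conclusion are unsatisfiable.

    ```python
    from itertools import product

    # Toy propositional formulas: a sentence letter like "A", or a nested tuple
    # such as ("not", X), ("and", X, Y), ("or", X, Y), ("->", X, Y).

    def evaluate(formula, valuation):
        """Truth value of a formula under a valuation (dict mapping letters to bools)."""
        if isinstance(formula, str):
            return valuation[formula]
        op, *args = formula
        if op == "not":
            return not evaluate(args[0], valuation)
        if op == "and":
            return evaluate(args[0], valuation) and evaluate(args[1], valuation)
        if op == "or":
            return evaluate(args[0], valuation) or evaluate(args[1], valuation)
        if op == "->":
            return (not evaluate(args[0], valuation)) or evaluate(args[1], valuation)
        raise ValueError(f"unknown connective: {op}")

    def letters(formula):
        """The set of sentence letters occurring in a formula."""
        if isinstance(formula, str):
            return {formula}
        return set().union(*(letters(part) for part in formula[1:]))

    def satisfiable(formulas):
        """True iff some valuation makes every formula in the list true."""
        atoms = sorted(set().union(*(letters(f) for f in formulas)))
        return any(
            all(evaluate(f, dict(zip(atoms, row))) for f in formulas)
            for row in product([True, False], repeat=len(atoms))
        )

    def valid(premises, conclusion):
        """An argument is valid iff premises plus the negated conclusion are unsatisfiable."""
        return not satisfiable(list(premises) + [("not", conclusion)])

    # Modus ponens is valid; affirming the consequent is not.
    print(valid([("->", "A", "B"), "A"], "B"))  # True
    print(valid([("->", "A", "B"), "B"], "A"))  # False
    ```

    Note that the intermediate steps of any derivation play no role here: only the initial premises and the final conclusion are consulted.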

  3. I think there are actually several reasons for preferring the standard textbook definition of an argument.

    (a) We want to distinguish between presenting the argument itself (either accept the conclusion or reject a premise you must!) and showing that the argument is valid. Intermediate steps are useful for the latter task, but not the former. Thus, the textbook definition easily finds a role for intermediate steps while still leaving them out of the definition.

    (b) With intermediate steps we get very fine-grained identity criteria. (1) to (3) are the same argument according to the textbook definition, but not according to your multi-step definition (a quick check of this is sketched after (3)):
    (1) A; if A and B, then C; if C, then D; therefore, if B, then D
    (2) A; if A and B, then C; therefore, if B, then C; if C, then D; therefore, if B, then D
    (3) if A and B, then C; if C, then D; therefore, if A and B, then D; A; therefore, if B, then D
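
    Borrowing the toy “valid” checker sketched in an earlier comment above (again my own illustration, not part of the original exchange), one can confirm that once the intermediate steps are dropped, (1) to (3) all share the same premises and conclusion, and that the flattened textbook argument is valid:

    ```python
    # Premises and conclusion common to (1)-(3) once intermediate steps are dropped.
    premises = ["A", ("->", ("and", "A", "B"), "C"), ("->", "C", "D")]
    conclusion = ("->", "B", "D")
    print(valid(premises, conclusion))  # True
    ```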

    (c) The textbook conception of an argument differs from ordinary conceptions in multiple ways. For example, a research article and its abstract, or a literary work and its summary in a single tweet, or a rambling rant and its punchline, are all arguments in the sense of the textbook definition. An argument is just anything for which validity is a nice thing to have. Thus, I don’t think that intermediate steps are ubiquitous; it’s just one type of argument that often contains intermediate steps.

    (d) Some languages distinguish between the two. German for example has both “Argument” and “Argumentation”. An argumentation may contain a lot of steps, but an argument is a one-step affair.

    (e) Traditionally logic is about inferences and entailments. The textbook definition of an argument is closer to that tradition.

    So I totally agree that in general textbooks should do a better job when explaining their definition of an argument, but I still think that it has a lot of advantages that speak in favour of sticking to it.

    1. For (a): We can just distinguish instead between presenting the premises and conclusion and showing the inference is valid.

      For (b): Don’t we have the identity issue anyway, if not for arguments, then for proofs or whatever we call the intermediate steps? There can also be different but equivalent ways to formulate the same premise or conclusion.

      For (c): That looks more like an argument against the textbook definition to me. I’d find it very odd to say a literary work and its summary in a single tweet constitutes an argument.

      (d) is interesting but doesn’t seem a reason to use the textbook definition in English.

      For (e), I don’t see how the textbook definition is closer.

      In any case, since Peter’s book presumably doesn’t / won’t use that textbook definition, doesn’t that show that the alternative definition is viable for textbook use and doesn’t lead to serious problems?

      1. Thanks for your helpful replies! I took Peter to be arguing not just that there are two equally useful ways of defining “argument”, but that his way is actually the better one (because closer to the commonsense notion). My impression is that the two approaches are now tied. As far as I’m concerned, I’m happy with a tie. For then I don’t have to change anything in my intro to logic course :-)

        Although I can’t defend it in full here, let me add at least something about my motivation for sticking to the definition of an argument criticised by Peter: I find it difficult to motivate first-order logic when introducing it as a serious attempt at explaining how reasoning or arguing or offering reasons for a belief works. For many (valid) arguments don’t offer reasons at all (e.g. P, therefore P) and most reason-offering arguments are not meant to be deductively valid. But I find it quite easy to introduce the notion of (deductive) validity as something that is interesting in its own right (if this is true, what *must* be true as well?). I then explain to my students that this notion has a lot of interesting applications. Some kinds of reasoning strive for validity, summaries strive for validity, working out an opponent’s implicit commitments strives for validity, and so on. Summaries especially are a paradigm example of something that must be deductively valid, for there is no such thing as an inductive or ampliative summary.

        That’s why I think of an argument as anything which can usefully be evaluated as valid or invalid. Whereas some teachers of logic ask “what do we want arguments to be?” (and surprisingly reply: valid!), others ask “what kind of thing do we want to be valid?” (and reply: a lot of things! e.g. some small subset of reasoning, summaries). (If you think the idea that summaries are arguments is completely nuts, you may want to take a look at Strawson: “Introduction to Logical Theory”, 1952, p. 14.)

  4. An irritating definition, no doubt. Sure, philosophers and mathematicians have always employed their licence to create their own non-everyday and even non-intuitive definitions, but this one is quite far-out. If an *inference* were being defined here (or an argument step, if you prefer the term), I don’t think we’d have any problems. But this sort of very pedantic definition, which ignores the multi-step nature of most abstract (formal) and informal argument, doesn’t help anyone – on the contrary, I’d argue that it can only be misleading.

  5. Here’s a different possibility:

    The book Just the Arguments: 100 of the Most Important Arguments in Western Philosophy certainly seems to use “argument” to mean the whole thing, including intermediate inference steps — but when an argument is set out in that book, each line, including intermediate steps, is labelled either “P” or “C” followed by a number. Lines that result from applying an inference rule such as modus ponens are labelled “C”. So every line is either a premise or a conclusion.

    In this view, it seems that an argument can have sub-arguments that reach intermediate conclusions; and I wonder how the books that define “argument” in the one-step way deal with intermediate steps when they come to cases that include them. Do they see those cases as complex or compound arguments that contain sub-arguments, or do they have some other way to talk about them?

    1. I followed my wondering above by looking at a book I happened to have, Logic: Techniques of Formal Reasoning, 2nd ed, by Kalish, Montague, and Mar. It defines “argument” as premises and a conclusion, but it then defines “derivation” as

      “a sequence of steps that leads from the premises to the conclusion of a symbolic argument. Each intermediate step should be the conclusion of an intuitively valid argument whose premises are among the preceding steps. These intermediate steps are generated in accordance with inference rules.”

      If the book has a term for the combination of an outer argument and the derivation of its conclusion, I haven’t so far spotted it.

  6. In contrast to the texts cited, compare the following from David Makinson’s Sets, Logic and Maths for Computing (2nd edition 2012, opening paragraph of Chapter 10):

    “In the last two chapters we learned quite a lot about propositional and quantificational logic and in particular their relations of logical implication. In this chapter, we look at how simple implications may be put together to make a deductively valid argument, or proof. At first glance, this may seem trivial: just string them together! But although it starts like that, it goes well beyond, and is indeed quite subtle.”

  7. In my opinion the trouble with a definition of argument is simply that it needs to address (although sometimes in a hidden way) the ontological issue of when two arguments are the same one or different ones. The usual definition of argument (where the intermediate steps are ignored) can be seen as a quotient (under the obvious equivalence relation) of the “real-world arguments” (where the intermediate steps are taken into account).

    In my opinion (as a mathematician working in logic) this ontological issue is not a serious problem at all. Let me note that this is something that happens very often in mathematics. When do we consider that two graphs are the same one? When do we consider that two surfaces are the same one (topologically speaking)?

  8. David Auerbach

    Here’s a candidate reason (in some cases, perhaps only unconscious or charitably attributable). If you’re aiming at using trees for establishing logical properties (e.g., validity), then “steps” are irrelevant.

    On the other hand, if you are using a proof system, then steps will naturally arise.

    In the latter case there’s more of a reason to be clear about the departure, or be clear about not departing too much, from the vernacular.

    Maybe that’s all just a slightly more verbose version of “yeah, yeah…”?

    1. I think that’s a good point, but at least some of the authors in question (such as Copi) use natural deduction rather than trees. Perhaps they just want to separate proof from argument.
