Speak with the vulgar.

Think with me.

How do we understand formal languages?


Consider a sentence from some formal language; for example, a sentence of quantified modal logic:

  • \exists x(Fx \land \lozenge \neg Fx)
    “There is an F which could have been a non-F.”

What fixes the meaning of this sentence? How do we make sense of it? And then, what is the status of our judgments about truth conditions, validity, consequence, etc. for formal sentences?

A candidate answer. (Peter van Inwagen defends this view in “Meta-Ontology”, 1998.) The symbols in a formal language are defined in terms of some natural language, like English. For instance, \exists is defined to mean “there is”, \lozenge to mean “possibly”, and so on. We understand the formal sentence by replacing each symbol with its English definiens, and we understand the English sentence directly. On this view, formal languages are just handy abbreviations for the natural languages we have already mastered, perhaps with some extra syntactic markers to remove ambiguity.

Suppose A, an English speaker, claims that \phi is intuitively valid. If B wants to argue that \phi is in fact invalid, she has only three options. (1) Use a different English translation from A. In this case, though, B would merely be talking past A. (2) Deny that A correctly understands the English sentence—so B is controverting a datum of natural language semantics. (3) Deny A’s logical intuition. So B’s only options are pretty drastic: to deny a native speaker’s authority on the meaning of her own language, or to deny a (let’s say pretty strong) logical intuition.

I’m pretty sure the candidate answer is wrong. First, because the obvious English translations for a logical symbol often turn out to be wrong—witness the logician’s conditional, or the rigid “actually” operator—and we can go on understanding the symbol even before we have found an adequate translation. Also, we don’t typically explain the use of a symbol by giving a direct English translation: rather, we describe (in English, or in another formal language) in general terms how the symbol is to be used. Furthermore, we can have non-trivial arguments over whether a certain English gloss of a formal sentence is the right one.
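To illustrate the first point, here is the standard truth clause for the material conditional (a textbook formulation, not anything from the post), which notoriously diverges from the English “if, then”:

```latex
% Material conditional: false only when the antecedent is true and the consequent false.
v(\phi \to \psi) =
  \begin{cases}
    F & \text{if } v(\phi) = T \text{ and } v(\psi) = F,\\
    T & \text{otherwise.}
  \end{cases}
% This validates, e.g., \neg\phi \vDash \phi \to \psi: "snow is not purple" entails
% "if snow is purple, then grass is red" -- fine for \to, odd for English "if".
```

We operate with \to perfectly competently while still debating what, if anything, in English it translates.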

Here’s an alternative picture. In order to do some theoretical work, we introduce a regimented language as a tool. What we need for the job is some sentences that satisfy certain semantic constraints. \phi should mean that snow is white. \psi should be valid. \alpha should have \beta as a consequence. We generally won’t have codified these constraints, but we internalize them in our capacity as theorists using a particular language; someone who doesn’t use the language in accordance with the constraints doesn’t really understand it. (This view is like conceptual role semantics, except that constraints that specify the meaning directly, in other languages, are allowed.)

In using the language, we assume that some interpretation satisfies our constraints—to use a formal language is, in effect, to make a theoretical posit. Insofar as our constraints underdetermine what such an interpretation would be, our language’s interpretation is in fact underdetermined. If no language satisfies all the constraints, then we’ve been speaking incoherently; we need to relax some constraints. The constraints are partly a matter of convention, but also partly a matter of theory: internally, in using the language, we commit ourselves to its coherence, and thus to the existence of an interpretation that satisfies the constraints; and externally, the constraints are determined by the theoretical requirements we have of the language.

Say A judges \phi to be valid. What this involves is A’s judgment that “\phi is valid” is a consequence of a certain set of implicit semantic constraints on the language. Again suppose that B denies A’s validity intuition. Now there are two ways to go. (1) Deny A’s logic: B might agree on the relevant constraints, but disagree that they have “\phi is valid” as a consequence. (2) Deny A’s constraints: B might say that some of the constraints A imposes are not appropriate for the language in question. This might be based on an internal criticism—some of A’s constraints are inconsistent—or, more likely, external criticism: some of A’s constraints don’t adequately characterize the role the language is intended to play. The important upshot is that, unlike on van Inwagen’s view, B can disagree not only on linguistic grounds or logical grounds, but also on theoretical grounds. (Of course, since on my view the constraints also fix the meaning of the language, there is no bright line between the linguistic and theoretical grounds for disagreement—this is Quine’s point.)

Written by Jeff

December 10, 2008 at 3:34 pm

Posted in Language, Logic


5 Responses


  1. Hi Jeff,

    Thanks for the post, that was thought provoking. I have a couple questions about how it is supposed to work though.

    Suppose A and B espouse two semantic interpretations that satisfy the constraints. Now suppose A and B agree that both satisfy the constraints, and that the constraints are appropriate in this context. Let’s say what they disagree about is some inference in a portion of the language that has no natural language equivalent; it’s not supposed to represent one (for example, you mentioned the @ operator). How is this disagreement to be settled? It strikes me that they’re no better off than with the original van Inwagen proposal.

    I also find it quite hard to see how anyone can have any “intuitions” about validity in a language, if it was just introduced to do some theoretical work (it seems weird to say “I can just intuit that this semantics will do the work I want it to better than that one.”)

    Andrew

    December 11, 2008 at 1:37 pm

  2. So how about the following, which seems to be in a similar spirit to van Inwagen. Obviously, the abbreviation view is a bit extreme, but it seems natural to assume that we begin with a “base” language whose purpose is to characterise a fragment of English.

    We then try to assign a formal semantics to that language, with various constraints, for example, that an inference is valid iff its English translate is. But the problem, which you mentioned, is that you can introduce new vocabulary to your formal language, corresponding to something natural in the formal semantics, without corresponding to any English equivalent. How can you possibly argue over inferences involving the new vocabulary?

    I think the basic idea is this: being adequate as a semantic theory is more than just making the right sentences true; it’s somehow representing the intended interpretation of the English sentences it was supposed to capture. On this view, there really is a natural (non-gruesome) operator out there, @; it’s just that English speakers haven’t yet gotten around to naming it.

    Actually, I think @ is a bad example, since we have a pretty good English equivalent, so let me try a few more examples. Suppose you think the Kripke semantics has pretty much got it right for modal talk. Then you can introduce a new operator \Diamond^{<\omega}p, true iff p is true in at most finitely many accessible worlds. Or suppose you think the closeness semantics for counterfactuals gets it right; then you can extend your base counterfactual language with the operator \bigcirc p, true iff p is true in an open set of worlds. These are clearly just logicians’ toys; the point is that if you’ve got the semantics for the base language “right”, then you can make good sense of this new vocabulary.
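    The clauses for these toy operators can be spelled out Kripke-style (my formalization of the informal glosses above, with the obvious notation):

    ```latex
    % p holds at only finitely many R-accessible worlds:
    M, w \Vdash \Diamond^{<\omega} p \iff
      |\{\, v : w R v \text{ and } M, v \Vdash p \,\}| < \aleph_0
    % p holds throughout an open set of worlds (given some topology on worlds):
    M, w \Vdash \bigcirc p \iff
      \{\, v : M, v \Vdash p \,\} \text{ is open}
    ```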

    Even if A and B disagree about which semantics is the “right” one, there’s still some room for dispute. If you’ve given a decent semantics, then any new vocabulary you introduce in terms of it should be conservative over the base language. (So, for example, suppose I think the right interpretation for the language of PA is the standard model, and you think it’s a non-standard one. If you introduce a new “non-standard(x)” predicate, you can invalidate induction.)
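    “Conservative over the base language” can be made precise; in one standard formulation (my gloss, not Andrew’s wording):

    ```latex
    % T' in the extended language L' is conservative over T in the base language L
    % iff every L-sentence provable in T' was already provable in T:
    \forall \phi \in \mathrm{Sent}(L):\quad T' \vdash \phi \;\implies\; T \vdash \phi
    ```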

    Andrew

    December 11, 2008 at 1:40 pm

  3. Sorry, but aren’t there sentences of formal languages that we can perfectly well understand without there being an English paraphrase?

    For example, what about countable disjunctions in infinitary languages? Or how about nth-order predication for some particularly large (but finite) n?
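    For what it’s worth, the usual clause for countable disjunction in, say, L_{\omega_1\omega} is simple to state even without an English paraphrase of the disjunction itself (a standard textbook clause):

    ```latex
    % A countable disjunction is true iff at least one disjunct is true:
    M \vDash \bigvee_{n < \omega} \phi_n \iff M \vDash \phi_n \text{ for some } n < \omega
    ```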

    Seems to me that van Inwagen’s proposal is pretty limited from the start, no?

    acotnoir

    December 11, 2008 at 9:30 pm

    Yeah, it seems van Inwagen’s proposal does pretty badly. Although in your example, you can always say things like “the countable conjunction is true iff all of the conjuncts are”, so there is some way of elucidating the truth conditions (indeed, that’s how you’d do it metalinguistically).

    By the way, I think van Inwagen was thinking about this stuff in the context of first order logic, where his proposal is plausibly true. It’s a bit unfair to hold him to this view for arbitrary formal languages.

    Andrew

    December 12, 2008 at 7:01 am

  5. Sorry I’ve been out of touch for a bit. Some replies.

    1. It’s true that it’s a bit unfair for me to pin the more general view on van Inwagen–but I do think that his proposal doesn’t work even for first-order logic. Look for instance at the Strawson-Grice debate about natural language logical connectives. It’s debatable whether English “\phi and \psi” is really true iff both conjuncts are true (e.g., Strawson argues that it also requires the right temporal ordering)–but how that debate is resolved has absolutely no bearing on the truth conditions for the conjunction of first-order logic. Ergo, we don’t understand the latter by translating it into the former.
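    To make the contrast explicit: the first-order clause for conjunction is symmetric, while the Strawsonian reading of English “and” is not (a standard clause; the annotation is mine):

    ```latex
    % Classical conjunction: order-insensitive.
    v(\phi \land \psi) = T \iff v(\phi) = T \text{ and } v(\psi) = T
    % So \phi \land \psi and \psi \land \phi always match in truth value, whereas
    % "they married and had a child" and "they had a child and married"
    % can differ in English (on Strawson's reading).
    ```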

    2. “I also find it quite hard to see how anyone can have any “intuitions” about validity in a language, if it was just introduced to do some theoretical work…”

    Having an intuition that \phi is valid in a language L amounts to this: having, first, an intuitive grip on the constraints that L is answerable to, and second, an intuitive belief that \phi follows from these constraints. It isn’t a primitive kind of insight (“I can just intuit…”). It’s something like mathematical intuition.

    3. In the case where there are two theories, both consistent with all the semantic constraints, one of which validates an inference and the other of which doesn’t, I’d say it’s genuinely undetermined whether the inference is valid. We resolve the dispute by adding new constraints. This resolution won’t be a discovery of some new matter of fact, but rather a stipulation of some new matter of meaning–it’s like astronomers disagreeing over whether Pluto is a planet.

    4. Certainly we can introduce formal languages specifically to model certain fragments of natural language. But I don’t think that’s what most formal languages are for. Instead, they’re meant to get at something abstracted away from our natural way of putting things–and so our judgments about them are responsive to different issues than our judgments about natural language. The issue is complicated, I think, because informal philosophical language has precisely the same feature: its meaning is responsive to different concerns than natural language. This is too condensed, though; I’ll try to post more soon.

    Jeff

    December 17, 2008 at 7:58 pm

