Speak with the vulgar.

Think with me.

Archive for the ‘Language’ Category

Questions

I’ve been thinking a bit about the semantics of questions. I know hardly any of the literature on this, but I’ve worked out a little view that seems to have some nice features. If you know more I’d be interested to hear what you think.

The semantic value (SV) of a question has two jobs to do. First, it should fit nicely into the rest of our semantics: it should help us get the right truth conditions for sentences with embedded questions. Second, it should fit nicely into the rest of our pragmatics: it should help us explain what a speaker does when she asks a question. Ideally both of these should require minimal revision to the rest of what we were doing in those projects.

As I see it, the standard account (the SV of a question is a set of propositions that partition logical space) does a mediocre job on both counts. You can get things to work, but the account doesn’t really make it easy, and you end up having to build a lot of new machinery in other places, like attitude verbs. I think I might be able to do better.
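
To fix ideas, here is the standard account as I understand it (the notation and examples are my own gloss): the semantic value of a question is the set of its complete answers, and those answers carve up logical space without overlap.

  • “Is it raining?” \mapsto \{\, \{w : \text{rain in } w\},\ \{w : \text{no rain in } w\} \,\}
  • “Who came?” \mapsto \{\, \{w : \text{exactly the people in } X \text{ came in } w\} : X \subseteq D \,\}, where D is the domain of people

Each cell is a complete answer; the cells are mutually exclusive and jointly exhaustive, so they partition logical space.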

Written by Jeff

May 25, 2009 at 8:42 am

Magnetic laws

I’ve been thinking more about the problem of fit I posted last week. Specifically, I’m trying to work out how a response appealing to reference magnetism would go.

Recall the puzzle: how is it that when we select hypotheses that best exemplify our theoretical values, we so often hit on the truth? A simple example: emeralds, even those we haven’t observed, are green, rather than grue. And lo, we believe they are green, rather than believing they are grue. It seems things could have been otherwise, in either of two ways:

  1. There might be people who project grue rather than green, in a world like ours.
  2. Or there might be people who (like us) project green, in a world where emeralds are grue.

Those people are in for a shock. Why are we so lucky?

A response in the Lewisian framework goes like this. Not all properties are created equal: green is metaphysically more natural than grue. In particular, it is semantically privileged: it is easier to have a term (or thought) about green than it is to have a term (or thought) about grue. This should take care of possibility 1. For people to theorize in terms of grue rather than green, their practices would have to be perverse enough to overcome the force of green’s reference magnetism. There are details to fill in, but plausibly it would be hard for natural selection to produce creatures with such perverse practices.

But this still leaves possibility 2. Even granting that our theories are attracted to the natural properties, why should a theory framed in terms of natural properties be true? The green-projectors in the world of grue emeralds have just as natural a theory as ours, to no avail.

But even though 2 is possible, we can still explain why it doesn’t obtain. What we need to explain is why emeralds are green—and we shouldn’t try to explain that by appeal to general metaphysics, but by something along these lines: the electrons in a chromium-bearing beryl crystal can absorb photons only at certain energies. That is, we explain why emeralds are green by appeal to the natural laws of our world.

Generalizing: “joint-carving” theories yield true predictions because their predictions are supported by natural laws. Why is this? On the Lewisian “best system” account of laws, it is partly constitutive of a natural law that it carve nature at the joints: naturalness is one of the features that distinguish laws from mere accidental generalizations. So, much as reference magnetism makes it harder to have a theory that emeralds are grue than to have a theory that emeralds are green, the best system account makes it harder to have a law that emeralds are grue than to have a law that emeralds are green. The idea, then, is that since our theories and our laws are drawn to the same source, they are likely to line up; and since the laws explain the facts, that explains why our theories fit the facts.

Something isn’t right about this story; I’m having a hard time getting it clear, but here’s a stab. There’s a general tension in the best system account: on the one hand, the laws are supposed to explain the (non-nomic) facts; on the other hand, the (non-nomic) facts are metaphysically prior to the laws. But metaphysical priority is also an explanatory relation, and so it looks like we’re in a tight explanatory circle. (Surely this point has been made? I don’t know much of the literature on laws, so I’d welcome any pointers.)

This is relevant because the answer to the problem of fit relies on the explanatory role of laws—a role the best system account seems hard-pressed to bear. But I feel pretty shaky on this, and would appreciate help.

Written by Jeff

March 2, 2009 at 8:44 pm

The “strict philosophical sense”

Here’s an inconsistent triad:

  • The question of the ontological status of ordinary material objects is a serious question: its answer isn’t obvious.
  • Obviously there is a chair I’m sitting on.
  • Ontology is about what there is. (So, specifically, the question of the ontological status of ordinary material objects is just the question of whether there are such objects (chairs being among them).)

All three principles are pretty compelling. How can we resolve their inconsistency?

I suggest that there is an equivocation on “there is”. When we say ontology is about what there is, we are using “there is” in a different way than when we say there is a chair I’m sitting on. It is responsive to different constraints.

This is Quine’s picture: to find out what there is, we look at what we quantify over in our simplest theory of the world. The quantifiers are the symbols that figure in certain inference patterns: if a is a \phi, then there is a \phi; and if we can infer, from premises not involving a, that a isn’t a \phi, then we can infer from the same premises that there isn’t a \phi. These rules, or something like them, constrain what we mean by “there is” when we are doing our philosophical theory-building.
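
Put as inference rules (this formalization is mine, not a quotation from Quine; \Gamma is any set of premises):

  • From Fa, infer \exists x\, Fx.
  • If \Gamma \vdash \neg Fa, where a does not occur in \Gamma, then \Gamma \vdash \neg \exists x\, Fx.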

But the natural meaning of “there is” is constrained by the facts of English usage (perhaps together with some facts about the natural properties out there for us to talk about). There’s no reason to think beforehand that the constraints of theory-building are going to coincide with the constraints of ordinary usage. Clearly there’s an etymological relationship between the “strict philosophical” sense of “there is” and the ordinary English sense, but it looks plausible to me that they aren’t quite the same thing.

An analogy. We have an ordinary use of “animal” that excludes human beings. But biologists have discovered that there is a more useful category for systematic theory-building, one which mostly coincides with ordinary “animal”, but which includes human beings. This “strict biological sense” of the word “animal” doesn’t mean that a sign that says “No animals are allowed on the bus” is (strictly speaking) wrong. It’s just employing a different sense of “animal”.

I think a lot of philosophers think that when they say “strictly speaking”, they are manipulating the pragmatics of the discourse: the “strict philosophical sense” is the most literal sense. If what I’m saying is right, then this is a mistake. The strict philosophical sense isn’t any more literal than the ordinary sense; it is simply a sense that belongs to a different, philosophical register.

Written by Jeff

December 21, 2008 at 2:26 pm

How do we understand formal languages?

Consider a sentence from some formal language; for example, a sentence of quantified modal logic:

  • \exists x(Fx \land \lozenge \neg Fx)
    “There is an F which could have been a non-F.”

What fixes the meaning of this sentence? How do we make sense of it? And then, what is the status of our judgments about truth conditions, validity, consequence, etc. for formal sentences?

A candidate answer. (Peter van Inwagen defends this view in “Meta-Ontology”, 1998.) The symbols in a formal language are defined in terms of some natural language, like English. For instance, \exists is defined to mean “there is”, \lozenge to mean “possibly”, and so on. We understand the formal sentence by replacing each symbol with its English definiens, and we understand the English sentence directly. On this view, formal languages are just handy abbreviations for the natural languages we have already mastered, perhaps with some extra syntactic markers to remove ambiguity.

Suppose A, an English speaker, claims that \phi is intuitively valid. If B wants to argue that \phi is in fact invalid, she has only three options. (1) Use a different English translation than A does. In this case, though, B would merely be talking past A. (2) Deny that A correctly understands the English sentence—so B is controverting a datum of natural language semantics. (3) Deny A’s logical intuition. So B’s only real options are pretty drastic: to deny a native speaker’s authority on the meaning of her own language, or to deny a (let’s say pretty strong) logical intuition.

I’m pretty sure the candidate answer is wrong. First, because the obvious English translations for a logical symbol often turn out to be wrong—witness the logician’s conditional, or the rigid “actually” operator—and we can go on understanding the symbol even before we have found an adequate translation. Also, we don’t typically explain the use of a symbol by giving a direct English translation: rather, we describe generally (in English, or in another formal language) how the symbol is to be used. Furthermore, we can have non-trivial arguments over whether a certain English gloss of a formal sentence is the right one.
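
A standard illustration of the first point (the example is mine, writing \supset for the material conditional):

  • \neg P \vdash P \supset Q (valid for the material conditional)
  • “It isn’t raining; so if it’s raining then it’s Tuesday.” (not an inference ordinary speakers accept)

If \supset just abbreviated the English “if”, it would be hard to see how the two could come apart; yet we learn to use \supset perfectly well.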

Here’s an alternative picture. In order to do some theoretical work, we introduce a regimented language as a tool. What we need for the job is some sentences that satisfy certain semantic constraints. \phi should mean that snow is white. \psi should be valid. \alpha should have \beta as a consequence. We generally won’t have codified these constraints, but we internalize them in our capacity as theorists using a particular language; someone who doesn’t use the language in accordance with the constraints doesn’t really understand it. (This view is like conceptual role semantics, except that constraints that specify the meaning directly, in other languages, are allowed.)

In using the language, we assume that some interpretation satisfies our constraints—to use a formal language is, in effect, to make a theoretical posit. Insofar as our constraints underdetermine what such an interpretation would be, our language’s interpretation is in fact underdetermined. If no interpretation satisfies all the constraints, then we’ve been speaking incoherently; we need to relax some constraints. The constraints are partly a matter of convention, but also partly a matter of theory: internally, in using the language, we commit ourselves to its coherence, and thus to the existence of an interpretation that satisfies the constraints; and externally, the constraints are determined by the theoretical requirements we have of the language.

Say A judges \phi to be valid. What this involves is A’s judgment that “\phi is valid” is a consequence of a certain set of implicit semantic constraints on the language. Again suppose that B denies A’s validity intuition. Now there are two ways to go. (1) Deny A’s logic: B might agree on the relevant constraints, but disagree that they have “\phi is valid” as a consequence. (2) Deny A’s constraints: B might say that some of the constraints A imposes are not appropriate for the language in question. This might be based on an internal criticism—some of A’s constraints are inconsistent—or, more likely, external criticism: some of A’s constraints don’t adequately characterize the role the language is intended to play. The important upshot is that, unlike on van Inwagen’s view, B can disagree not only on linguistic grounds or logical grounds, but also on theoretical grounds. (Of course, since on my view the constraints also fix the meaning of the language, there is no bright line between the linguistic and theoretical grounds for disagreement—this is Quine’s point.)

Written by Jeff

December 10, 2008 at 3:34 pm

Posted in Language, Logic

More fatalism

In response to my previous post, Jonathan Ichikawa offered a more puzzling variant of the fatalist argument. International tensions are high, and a captain spies a foreign frigate off to starboard. He reasons:

  1. Either there will be a battle or there won’t be a battle.
  2. If there will be a battle, there’s no harm in firing the cannons now.
  3. If there won’t be a battle, there’s no harm in firing the cannons now.
  4. So, there’s no harm in firing the cannons now.

Something is wrong here.

“There’s no harm in firing the cannons now” means something like “Firing the cannons now will lead to no worse consequences than doing otherwise.” Now let’s suppose that “if” expresses a material conditional. Then the argument is valid. But the world could be like this:

  • The captain fires the cannons. It starts a battle, which leads to a terrible war and thousands of ugly deaths. If the captain hadn’t fired the cannons, none of this would have happened.

In this case, the first premise is true. The third premise is also true, because the antecedent is false. But the second premise is false: there will be a battle, but there’s very great harm in firing the cannons now. Moreover, whatever “if” means, it means something at least as strong as the material conditional. So Premise 2 really is false in the war-world.
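
Here is a quick way to check both claims at once (a sketch of my own, not from the original exchange), writing B for “there will be a battle” and H for “there’s no harm in firing the cannons now”, and reading “if” as the material conditional:

    from itertools import product

    def material_if(p, q):
        # material conditional: "if p then q" is just (not p) or q
        return (not p) or q

    # B: there will be a battle; H: there's no harm in firing the cannons now
    for B, H in product([True, False], repeat=2):
        premises = [B or not B, material_if(B, H), material_if(not B, H)]
        if all(premises):
            assert H  # conclusion holds in every row where all premises do: the argument is valid

    # The war-world: a battle occurs, and firing the cannons does great harm.
    B, H = True, False
    print(material_if(B, H))      # False: premise 2 fails in the war-world
    print(material_if(not B, H))  # True: premise 3 is vacuously true there

So the argument is truth-functionally valid, and the thing to reject is premise 2.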

Then why does it sound true? Probably because we naturally hear it as saying something like “If there will be a battle anyway, there’s no harm in firing the cannons now.” That is, we implicitly give the antecedent some kind of modal force.

  • If (necessarily, there will be a battle), then there’s no harm in firing the cannons now.

This is true: if the battle is inevitable, the captain might as well take the first shot. But the reasoning fails precisely because the battle is not inevitable: the antecedent of this conditional is false.

There may be something more general going on here: maybe when someone says “It will be the case that P”, in general we hear this as saying “No matter what, it will be the case that P”. Maybe “will” even has this modal force as part of its content. On this alternative story, Premise 2 is true after all, and the argument shows roughly what the deniers of excluded middle take it to show: sentences like “There will be a battle or there won’t be a battle” are, in general, false! But this does not mean that the future is “open”: rather, it means that this is false:

  • Necessarily, there WILL be a battle, or, necessarily, there WILL not be a battle.

Here “WILL” is an artificial version of “will” with all of the modal overtones stripped out: it means merely “at some future time”. If something like this semantic story is right, it would explain a lot of our confusion about the future: our language naturally leads us to conflate tense with modality.

Written by Jeff

March 2, 2008 at 12:00 am

Fatalism

This is well-trod ground, but I was thinking about this old puzzle this afternoon and I wanted to work through it myself.

There’s an argument that Aristotle discusses, I think, to the effect that our present actions make no difference to future events. It goes like this. Let B be the proposition “There will be a sea battle tomorrow”, and let A be the proposition “The captain starts the attack.”

  1. Either B or not B. (Premise)
  2. Suppose B.
  3. In that case, whether or not A, B.
  4. So A makes no difference as to whether B.
  5. Now drop the assumption that B, and suppose instead not-B.
  6. In that case, whether or not A, not-B.
  7. So again A makes no difference as to whether B.
  8. So in any case, A makes no difference as to whether B.

That is, the captain’s decision makes no difference as to whether there will be a sea battle. But fatalism like this is crazy, isn’t it?

This argument, or something like it, has led some people to deny excluded middle for at least some sentences about the future. They say that there is no fact of the matter whether there will be a sea battle tomorrow, and only when tomorrow comes will the proposition become either true or false. Otherwise, they reason, how could we make free decisions that affect the future?

This response is unnecessary. We should stop and ask, what do we mean by the expression “Whether or not P, Q”? Here’s a reasonable thing to mean by it:

  • (If P then Q, and if not-P then Q) or (If P then not-Q, and if not-P then not-Q).

In other words, either P and not-P equally well imply Q, or else P and not-P equally well imply not-Q. Intuitively, in no case does Q’s truth value depend on P’s.

But now we need to be careful about what we mean by “if”. In classical logic we take “If P then Q” to be logically equivalent to “Q or not-P.” (This meaning of “if” is called “the material conditional”.) On that understanding of “whether or not” and “if”, (3) logically follows from (2), and (6) logically follows from (5).
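
Spelled out (my reconstruction), with \supset for the material conditional: a material conditional with a true consequent is automatically true, so

  • B \vdash (A \supset B) \land (\neg A \supset B)

and the right-hand side is the first disjunct of the definition of “Whether or not A, B”; likewise \neg B \vdash (A \supset \neg B) \land (\neg A \supset \neg B), which is the second disjunct, and that is how (6) follows from (5).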

But in ordinary English, what we usually mean by “If P then Q” is something stronger than just “Q or not-P”. For instance, both of the inferences “It isn’t raining; so if it’s raining then it’s Tuesday” and “It’s raining; so if it’s Tuesday then it’s raining” sound weird at best, false at worst. Something closer to what we usually mean by “If P then Q” is “Necessarily, not-P or Q” (or “In every relevant case, not-P or Q”)—this may be too strong, but we’ll work with it. If we read “if” this way, then the analysis I gave for “Whether or not P, Q” comes out equivalent to “Necessarily, Q, or necessarily, not-Q” (or the same thing with “in every relevant case” in place of “necessarily”). And that sounds about right: on this reading, Q’s truth value doesn’t depend on P’s at all.
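
Here is the small calculation behind that last claim (mine, in any normal modal logic), writing \Box for “necessarily” (or “in every relevant case”):

  • \Box(\neg P \lor Q) \land \Box(P \lor Q) \equiv \Box((\neg P \lor Q) \land (P \lor Q)) \equiv \Box Q

and similarly the second disjunct of the definition collapses to \Box \neg Q, so the whole analysis comes out as \Box Q \lor \Box \neg Q.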

If we understand “whether or not” in the natural way, then the inference from “B” to “Whether or not A, B” is no good. It’s just like reasoning “B, therefore necessarily B.” On the other hand, if we insist on understanding “whether or not” in terms of the “if” of classical logic, then we shouldn’t allow the inference from “Whether or not A, B” to “A makes no difference as to whether B”. It sounds okay, but that’s just because we’re using the words “whether or not” in a funny artificial way. On neither of the two ways of understanding “whether or not” does the argument go through. We don’t have to deny that there are objective facts about the future in order to avoid fatalism.

Written by Jeff

January 27, 2008 at 12:00 am