Speak with the vulgar.

Think with me.

Magnetic laws

with 7 comments

I’ve been thinking more about the problem of fit I posted last week. Specifically, I’m trying to work out how a response appealing to reference magnetism would go.

Recall the puzzle: how is it that when we select hypotheses that best exemplify our theoretical values, we so often hit on the truth? A simple example: emeralds, even those we haven’t observed, are green, rather than grue. And lo, we believe they are green, rather than believing they are grue. It seems things could have been otherwise, in either of two ways:

  1. There might be people who project grue rather than green, in a world like ours.
  2. Or there might be people who (like us) project green, in a world where emeralds are grue.

Those people are in for a shock. Why are we so lucky?
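To make the underdetermination vivid, here is a toy sketch (the cutoff year, the data, and the encoding are all invented for illustration): both the "green" hypothesis and the "grue" hypothesis fit every emerald observed so far, and they diverge only on the unobserved ones.

```python
# Toy model of the grue predicate (hypothetical cutoff year T):
# an emerald is "grue" iff it is first observed before T and green,
# or not observed before T and blue.
T = 2030  # made-up cutoff for illustration

def green(emerald):
    return emerald["color"] == "green"

def grue(emerald):
    if emerald["observed"] < T:
        return emerald["color"] == "green"
    return emerald["color"] == "blue"

# Every emerald observed so far (all before T, all green):
observed = [{"observed": y, "color": "green"} for y in (1900, 1950, 2000, 2020)]

# Both hypotheses fit all the evidence equally well:
assert all(green(e) for e in observed)
assert all(grue(e) for e in observed)

# But they disagree about an emerald first observed after T:
future = {"observed": 2040, "color": "green"}
print(green(future), grue(future))  # → True False
```

The point of the sketch is just that no amount of pre-T evidence distinguishes the two hypotheses; our preference for projecting green over grue has to come from somewhere else.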

A response in the Lewisian framework goes like this. Not all properties are created equal: green is metaphysically more natural than grue. In particular, it is semantically privileged: it is easier to have a term (or thought) about green than it is to have a term (or thought) about grue. This should take care of possibility 1. If there are people who theorize in terms of grue rather than green, their practices would have to be sufficiently perverse to overcome the force of green’s reference magnetism. There are details to fill in, but plausibly it would be hard for natural selection to produce creatures with such perverse practices.

But this still leaves possibility 2. Even granting that our theories are attracted to the natural properties, why should a theory in terms of natural properties be true? The green-projectors in the world of grue emeralds have just as natural a theory as ours, to no avail.

But even though 2 is possible, we can still explain why it doesn’t obtain. What we need to explain is why emeralds are green—and we shouldn’t try to explain that by appeal to general metaphysics, but by something along these lines: the chromium impurities in a beryl crystal can absorb only photons with certain amounts of energy. That is, we explain why emeralds are green by appeal to the natural laws of our world.

Generalizing: “joint-carving” theories yield true predictions because their predictions are supported by natural laws. Why is this? On the Lewisian “best system” account of laws, it is partly constitutive of a natural law that it carve nature at the joints: naturalness is one of the features that distinguishes laws from mere accidental generalizations. So, much as reference magnetism makes it harder to have a theory that emeralds are grue than it is to have a theory that emeralds are green, so the best system account makes it harder to have a law that emeralds are grue than it is to have a law that emeralds are green. Then the idea is that, since our theories and our laws are both drawn to the same source, this makes it likely that they line up. Furthermore, since the laws explain the facts, this explains why our theories fit the facts.

Something isn’t right about this story; I’m having a hard time getting it clear, but here’s a stab. There’s a general tension in the best system account: on the one hand, the laws are supposed to explain the (non-nomic) facts; on the other hand, the (non-nomic) facts are metaphysically prior to the laws. But metaphysical priority is also an explanatory relation, and so it looks like we’re in a tight explanatory circle. (Surely this point has been made? I don’t know much of the literature on laws, so I’d welcome any pointers.)

This is relevant because the answer to the problem of fit relies on the explanatory role of laws, a role the best system account seems ill-equipped to bear. But I feel pretty shaky on this, and would appreciate help.

Written by Jeff

March 2, 2009 at 8:44 pm

The possibility of resurrection

with 5 comments

Say next week I am utterly destroyed, body and mind. There is no immortal trace of me that survives the destruction (at least in the short term). Could God, even so, raise me up at the last day? Certainly in the distant future he could make a person who was like me in various respects, but could he ensure that the person he made then was really me? Here’s a story about how this might be, given three controversial premises.

  1. The concept person is an ethical concept. To be the same person as X is, roughly, to be responsible for X’s actions, to have a special stake in what happens to X, to have special obligations and rights in how you treat X.

  2. Ethical value is grounded in God’s evaluative attitudes. In broad strokes, to be good is to be loved by God. More complicated ethical concepts similarly come down to God’s attitudes in suitably complicated ways.

These two imply that X and Y’s being the same person amounts to God’s having the right evaluative attitudes—for short, it amounts to God regarding X and Y as the same person. Then the final premise is that God’s attitudes are not unduly constrained:

  3. It is possible for God to regard X and Y as the same person, even if X is destroyed long before Y is created.

It’s clear how the conclusion follows.

You can count me among the skeptics of the second premise, but I think there’s something interesting here even if we give it up. If personhood is answerable to questions of value, then it isn’t clear why traditional metaphysical issues like physical or causal continuity would be relevant to survival. If those connections are severed, then the conceptual obstacles to resurrection seem much less threatening.

Written by Jeff

February 24, 2009 at 1:50 pm

Theory choice and God

with 3 comments

We have a lot of true beliefs. A few examples: there are dogs; every set has a power set; the universe is around 13 billion years old; it is generally wrong to torture children for fun; there are stars at space-like separation from us; a bus will go up First Avenue tomorrow. How did we get so many true beliefs about so many subjects?

First, how did we get our beliefs? The rough story is something like this: we have gathered evidence and formed hypotheses to account for that evidence, lending our belief to the hypotheses that best exemplify our theoretical values: fit, simplicity, power.

Second, why are so many of the beliefs we got this way true? It is easy to imagine creatures whose theoretical values are very bad guides to the truth. (Two versions: there could be people in a universe like ours who are defective theory-choosers—(e.g.) they favor the grue-theory over the green-theory; or there could be people with values like ours in a universe where (e.g.) emeralds are really grue.) So it sure looks like it could have been otherwise for us; why isn’t it? I see six main lines of response.

  1. Skepticism. The alleged datum is false: our theoretical values really aren’t a good guide to the truth. This would be very bad.

  2. Anti-realism. Somehow or another, our theoretical values constitute the facts in question. I assume this isn’t plausible for most of the subjects I mentioned.

  3. No reason. We’re just lucky. I think this response leads to skepticism (even though the problem didn’t start as a skeptical worry): if we find out there’s no reason for our theory choices to track the truth, then we shouldn’t be confident that they really do.

  4. Evolution. I’m sure this is part of the story, but it can’t be all of it. Selection might account for reliable theory-choosing about middle-sized objects of the sort our ancestors interacted with (I think Plantinga’s worries about this case can be answered); but this doesn’t account for our more exotic true beliefs about (e.g.) math, morality, cosmology, or the future. There could be creatures subject to the same constraints on survival as our ancestors (at whatever level of detail is selectively relevant), and yet who choose bad theories at selectively neutral scales and distances. Why aren’t we like them?

  5. Reference magnetism: certain theories are naturally more “eligible” than others; our beliefs are attracted to eligible theories; furthermore, the eligible theories are likely to be true. On the Lewisian version of this story, though, there is no reason why the last part should hold: we might very well be in a universe where the natural properties are distributed in a chaotic or systematically misleading way. In that case, reference magnetism would systematically attract our beliefs toward false theories. There might be a better non-Lewisian variant, where a property’s eligibility is somehow tied to its distribution, but I don’t know how this would go.

  6. Theism. Someone (I won’t be coy—it’s God) who has certain theoretical values is responsible both for the way the universe is, and also for the theoretical values we have. He made the universe as he saw fit, which means it has the kind of simplicity he likes; and he made us in his image, which means we like the same kind of simplicity. So when we judge Fness as a point in favor of a theory, this makes it likely that God favors Fness, which in turn makes it likely that the universe is F.

    (Why isn’t the fit perfect? The theist already needed an answer to the problem of evil—why the universe doesn’t perfectly fit our moral values; presumably that answer will also apply to our theoretical values.)

So it looks to me like the theist has a better explanation for an important fact than her rivals; this counts as evidence in theism’s favor.

Any good alternative explanations I’m missing?

Written by Jeff

February 22, 2009 at 2:03 pm

Indefinite divisibility

with 2 comments

If you’re interested, I’ve written a short paper on my nominalistic indefinite extensibility arguments. (This is also my way of making good on my offer in the comments to discuss a sort of consistency result for supergunk—it’s in the appendix.)

Written by Jeff

February 17, 2009 at 8:49 pm

The “strict philosophical sense”

with 4 comments

Here’s an inconsistent triad:

  • The question of the ontological status of ordinary material objects is a serious question: its answer isn’t obvious.
  • Obviously there is a chair I’m sitting on.
  • Ontology is about what there is. (So, specifically, the question of the ontological status of ordinary material objects is just the question of whether there are such objects (chairs being among them).)

All three principles are pretty compelling. How can we resolve their inconsistency?

I suggest that there is an equivocation on “there is”. When we say ontology is about what there is, we are using “there is” in a different way than when we say there is a chair I’m sitting on. It is responsive to different constraints.

This is Quine’s picture: to find out what there is, we look at what we quantify over in our simplest theory of the world. The quantifiers are the symbols that appear in certain inferences: if a is a \phi, then there is a \phi; and if we can infer that a isn’t a \phi from premises not involving a, then we can infer from the same premises that there isn’t a \phi. These rules, or something like them, constrain what we mean by “there is” when we are doing our philosophical theory-building.
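Displayed in the usual natural-deduction style (my regimentation, not the post’s), the two rules are an existential-introduction rule and its negative counterpart:

```latex
% EI: from an instance, infer the existential claim.
% The second rule: if \neg\phi(a) follows from premises \Gamma
% not involving a, then "there is no \phi" follows from \Gamma.
\frac{\phi(a)}{\exists x\,\phi(x)}
\qquad
\frac{\Gamma \vdash \neg\phi(a)}{\Gamma \vdash \neg\exists x\,\phi(x)}
\quad (a \text{ does not occur in } \Gamma)
```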

But the natural meaning of “there is” is constrained by the facts of English usage (perhaps together with some facts about the natural properties out there for us to talk about). There’s no reason to think beforehand that the constraints of theory-building are going to coincide with the constraints of ordinary usage. Clearly there’s an etymological relationship between the “strict philosophical” sense of “there is” and the ordinary English sense, but it looks plausible to me that they aren’t quite the same thing.

An analogy. We have an ordinary use of “animal” that excludes human beings. But biologists have discovered that there is a more useful category for systematic theory-building, one which mostly coincides with ordinary “animal”, but which includes human beings. The existence of this “strict biological sense” of “animal” doesn’t make a sign that says “No animals are allowed on the bus” (strictly speaking) wrong. The sign is just employing a different sense of “animal”.

I think a lot of philosophers think that when they say “strictly speaking”, they are manipulating the pragmatics of the discourse: the “strict philosophical sense” is the most literal sense. If what I’m saying is right, then this is a mistake. The strict philosophical sense isn’t any more literal than the ordinary sense; it is simply a sense that belongs to a different, philosophical register.

Written by Jeff

December 21, 2008 at 2:26 pm

How do we understand formal languages?

with 5 comments

Consider a sentence from some formal language; for example, a sentence of quantified modal logic:

  • \exists x(Fx \land \lozenge \neg Fx)
    “There is an F which could have been a non-F.”

What fixes the meaning of this sentence? How do we make sense of it? And then, what is the status of our judgments about truth conditions, validity, consequence, etc. for formal sentences?

A candidate answer. (Peter van Inwagen defends this view in “Meta-Ontology”, 1998.) The symbols in a formal language are defined in terms of some natural language, like English. For instance, \exists is defined to mean “there is”, \lozenge to mean “possibly”, and so on. We understand the formal sentence by replacing each symbol with its English definiens, and we understand the English sentence directly. On this view, formal languages are just handy abbreviations for the natural languages we have already mastered, perhaps with some extra syntactic markers to remove ambiguity.

Suppose A, an English speaker, claims that \phi is intuitively valid. If B wants to argue that \phi is in fact invalid, she has only three options. (1) Use a different English translation than A does; in this case, though, B would merely be talking past A. (2) Deny that A correctly understands the English sentence, thereby controverting a datum of natural language semantics. (3) Deny A’s logical intuition. So B’s only real options are pretty drastic: to deny a native speaker’s authority on the meaning of her own language, or to deny a (let’s say pretty strong) logical intuition.

I’m pretty sure the candidate answer is wrong. First, because the obvious English translations for a logical symbol often turn out to be wrong—witness the logician’s conditional, or the rigid “actually” operator—and we can go on understanding the symbol even before we have found an adequate translation. Also, we don’t typically explain the use of a symbol by giving a direct English translation: rather, we describe (in English, or another formal language) generally how the symbol is to be used. Furthermore, we can have non-trivial arguments over whether a certain English gloss of a formal sentence is the right one.

Here’s an alternative picture. In order to do some theoretical work, we introduce a regimented language as a tool. What we need for the job is some sentences that satisfy certain semantic constraints. \phi should mean that snow is white. \psi should be valid. \alpha should have \beta as a consequence. We generally won’t have codified these constraints, but we internalize them in our capacity as theorists using a particular language; someone who doesn’t use the language in accordance with the constraints doesn’t really understand it. (This view is like conceptual role semantics, except that constraints that specify the meaning directly, in other languages, are allowed.)

In using the language, we assume that some interpretation satisfies our constraints—to use a formal language is, in effect, to make a theoretical posit. Insofar as our constraints underdetermine what such an interpretation would be, our language’s interpretation is in fact underdetermined. If no interpretation satisfies all the constraints, then we’ve been speaking incoherently; we need to relax some constraints. The constraints are partly a matter of convention, but also partly a matter of theory: internally, in using the language, we commit ourselves to its coherence, and thus to the existence of an interpretation that satisfies the constraints; and externally, the constraints are determined by the theoretical requirements we have of the language.
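Here is a toy illustration of constraints underdetermining an interpretation (the connective and both constraints are invented for the example, not drawn from the post): enumerate all sixteen binary truth functions as candidate interpretations of a connective “*”, impose two constraints, and observe that more than one candidate survives.

```python
from itertools import product

# Each candidate interpretation of "*" is a truth table:
# a map from a pair of truth values (p, q) to a truth value.
candidates = []
for outputs in product([False, True], repeat=4):
    table = dict(zip(product([False, True], repeat=2), outputs))
    candidates.append(table)

# Two made-up semantic constraints on "*":
# (i)  p, q together entail p * q  -- so table[(True, True)] is True;
# (ii) p * q entails p             -- so table[(False, _)] is False.
def satisfies(table):
    if not table[(True, True)]:
        return False
    if table[(False, False)] or table[(False, True)]:
        return False
    return True

survivors = [t for t in candidates if satisfies(t)]
# The constraints leave table[(True, False)] free, so both conjunction
# and the "ignore q, return p" function satisfy them.
print(len(survivors))  # → 2
```

The analogy: as long as we only impose constraints (i) and (ii), nothing about our usage settles which surviving interpretation “*” has, and in that sense the language’s interpretation is underdetermined.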

Say A judges \phi to be valid. What this involves is A’s judgment that “\phi is valid” is a consequence of a certain set of implicit semantic constraints on the language. Again suppose that B denies A’s validity intuition. Now there are two ways to go. (1) Deny A’s logic: B might agree on the relevant constraints, but disagree that they have “\phi is valid” as a consequence. (2) Deny A’s constraints: B might say that some of the constraints A imposes are not appropriate for the language in question. This might be based on an internal criticism—some of A’s constraints are inconsistent—or, more likely, external criticism: some of A’s constraints don’t adequately characterize the role the language is intended to play. The important upshot is that, unlike on van Inwagen’s view, B can disagree not only on linguistic grounds or logical grounds, but also on theoretical grounds. (Of course, since on my view the constraints also fix the meaning of the language, there is no bright line between the linguistic and theoretical grounds for disagreement—this is Quine’s point.)

Written by Jeff

December 10, 2008 at 3:34 pm

Posted in Language, Logic


Getting in touch with the universe

with 7 comments

In my last post I argued that the set-theoretic problems with “absolutely everything” carry over even for those who don’t believe in sets, by appealing to the possibility of “supergunk”. There’s another route to the same conclusion by way of some principles about contact. I think it’s kind of neat.

Let’s take contact to be a two-place relation between objects; it is reflexive (we count overlap as contact), symmetric, and monotonic: if X touches a part of Y, then X touches Y. These are all standard so far.

The following additional principles seem jointly possible:

  1. A pretty weak separation principle: if X and Y don’t touch, then there is some further Z that doesn’t touch either of them. (Think of Z as being located between X and Y, keeping them apart.)

  2. A very strong distribution principle: if X touches the fusion of the \phi’s, then X touches some \phi. (Since the last post, I’ve switched from plural quantification to schemes, since I think it helps avoid some issues.) We might call this contact supervenience: what touches the whole touches some part.

The finite version of distribution is completely tame and standard: if X touches Y + Z, then X touches Y or X touches Z. It’s very hard to imagine the finite version failing. It turns out that the general version can fail, though. For instance, none of the intervals \left[\frac{1}{n}, 1\right] touches the interval \left[-1, 0\right]; but their fusion does (under ordinary topology). But this is pretty counterintuitive (John Hawthorne has written a whole paper about the principle’s failure). And so, even if it turns out that actually contact doesn’t supervene on parts, it still strikes me as a way things could have been.
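A minimal numeric sketch of this failure, modeling contact between closed intervals of reals as zero distance between them (which matches the ordinary-topology gloss; the interval encoding and the distance function are mine):

```python
from fractions import Fraction

def distance(a, b):
    """Distance between closed intervals a = [a0, a1] and b = [b0, b1]."""
    (a0, a1), (b0, b1) = a, b
    if a1 < b0:
        return b0 - a1
    if b1 < a0:
        return a0 - b1
    return Fraction(0)  # the intervals overlap

X = (Fraction(-1), Fraction(0))

# No interval [1/n, 1] touches [-1, 0]: each sits at distance 1/n > 0.
pieces = [(Fraction(1, n), Fraction(1)) for n in range(1, 1001)]
assert all(distance(X, p) > 0 for p in pieces)

# But the fusion of all the [1/n, 1] is (0, 1], whose closure [0, 1]
# meets [-1, 0] at the point 0 -- so the fusion does touch X.
fusion_closure = (Fraction(0), Fraction(1))
assert distance(X, fusion_closure) == 0
```

The distances to the individual pieces shrink toward zero without any piece ever reaching it; only the fusion, which accumulates the whole sequence, makes contact.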

But these two principles together give rise to another extensibility argument. Suppose that something doesn’t touch X. Given any \phi’s that don’t touch X, their fusion doesn’t touch X by (2), and so by (1) there is some further thing Z that touches neither X nor the fusion. (Z can’t be one of the \phi’s: since overlap counts as contact, any part of the fusion touches the fusion.) So the \phi’s, whatever they may be, don’t exhaust the things that don’t touch X: the non-X-touchers are indefinitely extensible. Thus, in a world where (1) and (2) hold, it doesn’t make sense to talk about absolutely everything there is.
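Schematically (my regimentation, writing T(x, y) for “x touches y” and \mathrm{Fu}(\phi) for the fusion of the \phi’s):

```latex
% Schematic form of the extensibility argument from (1) and (2).
\begin{aligned}
&\text{Assume } \neg T(a, X) \text{ for some } a.\\
&\text{Let } \phi \text{ be any condition with } \forall y\,(\phi y \to \neg T(y, X)),
  \text{ and let } f = \mathrm{Fu}(\phi).\\
&\text{By (2), contraposed: } \neg T(X, f); \text{ by symmetry, } \neg T(f, X).\\
&\text{By (1), applied to } X \text{ and } f\text{: some } z \text{ satisfies }
  \neg T(z, X) \wedge \neg T(z, f).\\
&\text{If } \phi z, \text{ then } z \text{ is part of } f,
  \text{ so } T(z, f) \text{ (overlap counts as contact): contradiction.}\\
&\text{So } z \text{ is a non-}X\text{-toucher not among the } \phi\text{'s:}
  \text{ no } \phi \text{ exhausts the non-}X\text{-touchers.}
\end{aligned}
```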

To sum up: (1) and (2) are jointly possible; therefore, generality absolutism is possibly false. Since generality absolutism isn’t contingent, generality absolutism is actually false.

Written by Jeff

December 6, 2008 at 7:59 pm