Speak with the vulgar.

Think with me.

Oops

with 2 comments

I’ve been cleaning up my “Indefinite Divisibility” paper from last year. One of my arguments in it concerned supergunk: X is supergunk iff for every chain of parts of X, there is some y which is a proper part of each member of the chain. I claimed that supergunk was possible, and argued on that basis against absolutely unrestricted quantification. I even thought I had a kind of consistency proof for supergunk: in particular, a (proper class) model that satisfied the supergunk condition as long as the plural quantifier was restricted to set-sized collections. Call something like this set-supergunk.

Well, I was wrong. I’ve been suspicious for a while, and I finally proved it today: set-supergunk is impossible. So I thought I’d share my failure. In fact, an even stronger claim holds:

Theorem. If x_0 is atomless, then x_0 has a countable chain of parts such that nothing is a part of each of them.

Proof. Since x_0 is atomless, there is a (countable) sequence x_0 > x_1 > x_2 > \dots. For each positive integer k, let y_k be x_{k-1} - x_k. Then let z_k be the sum of y_k, y_{k+1}, y_{k+2}, \dots. Note that the z_k’s are a countable chain. Note also that each z_k is part of x_{k-1}.

Now suppose that z_\omega is a part of each z_k. In that case, z_\omega is part of each x_k. But since x_k is disjoint from y_k, this means that z_\omega is disjoint from each y_k, and so by the definition of a mereological sum, z_\omega is disjoint from each z_k. This is a contradiction.
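Here is the construction in compact symbols, plus a toy instance. The instance is my own illustration, not part of the original argument; I write \leq for parthood and + for mereological sum.

  • y_k := x_{k-1} - x_k, \quad z_k := y_k + y_{k+1} + y_{k+2} + \dots, \quad so z_1 \geq z_2 \geq \dots and z_k \leq x_{k-1}.

  • Toy instance, in the mereology of regular open subsets of the real line: take x_k = (0, 2^{-k}), so that y_k = (2^{-k}, 2^{-k+1}) and z_k comes out equal to x_{k-1} = (0, 2^{-k+1}). No region fits inside every (0, 2^{-k}), so this chain of z_k’s has no common part, exactly as the theorem requires.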

Written by Jeff

January 15, 2010 at 6:22 pm

Questions

with 4 comments

I’ve been thinking a bit about the semantics of questions. I know hardly any of the literature on this, but I’ve worked out a little view that seems to have some nice features. If you know more I’d be interested to hear what you think.

The semantic value (SV) of a question has two jobs to do. First, it should fit nicely into the rest of our semantics: it should help us get the right truth conditions for sentences with embedded questions. Second, it should fit nicely into the rest of our pragmatics: it should help us explain what a speaker does when she asks a question. Ideally both of these should require minimal revision to the rest of what we were doing in those projects.

As I see it, the standard account (the SV of a question is a set of propositions that partition logical space) does a mediocre job on both counts. You can get things to work, but the account doesn’t really make it easy, and you end up having to build a lot of new machinery in other places, like attitude verbs. I think I might be able to do better.

Read the rest of this entry »

Written by Jeff

May 25, 2009 at 8:42 am

Composition as abstraction

with 9 comments

“The whole is nothing over and above the parts.” This is a nice thought, but it turns out to be difficult to make precise. One attempt is the “composition as identity” thesis: if the Xs compose y, the Xs are y.

This won’t work, at least not without great cost. The United States is composed of fifty states, and it is also composed of 435 congressional districts. If composition is identity, then the U.S. is the states, and the U.S. is the districts; thus the states are the districts. This is bad: composition-as-identity collapses mereologically coextensive pluralities, which means now your plural logic can be no more powerful than your mereology. So you lose the value of even having plural quantifiers. That’s a big sacrifice. (This argument is basically Ted Sider’s, in “Parthood”.)

But the problem here isn’t that the fusion of the Xs is something more than the mere Xs: rather, the fusion is something less. Mereological sums are less fine-grained than pluralities, so if we require each plurality to be identical to a particular sum, we lose the (important!) distinctions that plural logic makes.

This suggests a better way: mereological sums are abstractions from pluralities. Roughly speaking, sums are pluralities with some distinctions ignored. In particular, sums are what you get by abstracting from pluralities on the relation of being coextensive. (Analogously: colors are what you get when you abstract from objects on the same-color relation. Numbers are what you get when you abstract from pluralities on equinumerosity.)

Let’s polish this up a bit. Take overlap as primitive, and define parthood in the standard way:

  • x is part of y iff everything that overlaps x overlaps y.

This has a natural plural generalization:

  • The Xs are covered by the Ys iff everything that overlaps some X overlaps some Y.

Parthood is the limiting case of being covered when there’s just one X and one Y. (I’ll identify each object with its singleton plurality.) We can also define an equivalence relation:

  • The Xs are coextensive with the Ys iff the Xs cover the Ys and the Ys cover the Xs.

Now we can state an abstraction principle. Let Fus be a new primitive function symbol taking one plural argument.

  1. Fus X = Fus Y iff the Xs are coextensive with the Ys.

(Compare Hume’s Principle: #X = #Y iff the Xs are equinumerous with the Ys.) This is the main principle governing composition. It isn’t the only principle we’ll need. For all I’ve said so far, fusions could live in Platonic heaven; but we need them to participate in mereological relations:

  2. The following are equivalent:
    1. The Ys cover the Xs.
    2. The Ys cover Fus X.
    3. Fus Y covers the Xs.

This guarantees that Fus X really is the fusion of the Xs by the standard definition of “fusion”. There is one final assumption needed to ensure that our mereology is standard:

  3. Parthood is antisymmetric. (If x is part of y and y is part of x, then x = y.)

Equivalently: Fus x = x. In the singular case, composition really is identity.
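As a quick sanity check that the three principles hang together, here is a toy finite model in which they can be verified mechanically. This is my own illustration, and it only illustrates the formal principles, not the indefinite-reference reading I favor (in the model, Fus X is just another object, namely a union): the objects are the non-empty subsets of a small set, overlap is non-empty intersection, and Fus is union.

    from itertools import combinations

    # Toy model (illustration only): objects are the non-empty subsets of a
    # finite universe, overlap is non-empty intersection, Fus is union.
    UNIVERSE = {1, 2, 3}
    OBJECTS = [frozenset(c) for r in range(1, len(UNIVERSE) + 1)
               for c in combinations(sorted(UNIVERSE), r)]

    def overlaps(x, y):
        return bool(x & y)

    def covered_by(Xs, Ys):
        # The Xs are covered by the Ys iff everything that overlaps some X
        # overlaps some Y.
        return all(any(overlaps(z, y) for y in Ys)
                   for z in OBJECTS if any(overlaps(z, x) for x in Xs))

    def coextensive(Xs, Ys):
        return covered_by(Xs, Ys) and covered_by(Ys, Xs)

    def fus(Xs):
        return frozenset().union(*Xs)

    # All pluralities of up to three objects.
    PLURALITIES = [list(c) for r in range(1, 4) for c in combinations(OBJECTS, r)]

    for Xs in PLURALITIES:
        for Ys in PLURALITIES:
            # Principle 1: Fus X = Fus Y iff the Xs are coextensive with the Ys.
            assert (fus(Xs) == fus(Ys)) == coextensive(Xs, Ys)
            # Principle 2: covering is unaffected by swapping a plurality for its fusion.
            assert covered_by(Xs, Ys) == covered_by([fus(Xs)], Ys) == covered_by(Xs, [fus(Ys)])

    # Principle 3 (antisymmetry) also holds here: distinct non-empty subsets
    # always differ on some singleton, so they are never coextensive.
    print("Principles 1 and 2 verified over", len(PLURALITIES), "pluralities.")

Nothing philosophical hangs on the model, of course; it just shows that the package is jointly satisfiable.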

These three principles imply all of standard mereology. So just how innocent are they?

I think they’re fairly innocent, given the right conception of how abstraction works. I like a “tame” account of abstraction which doesn’t introduce any new ontological commitments. (This means tame abstraction is too weak for Frege arithmetic or for Frege’s Basic Law V—this is a good thing.) The basic idea is that abstract terms refer indefinitely to each of their instances. For example, the singular term “red” refers indefinitely to each red thing: we consider all red instances as if they were a single thing, without being specific as to which. (Semantically, you can understand indefinite reference in terms of supervaluations.) Red has the properties that all red things must share. E.g., if any red thing must be rosier than any taupe thing, then we can also say that red is rosier than taupe. Speaking of red doesn’t commit to any new entity—it’s just speaking of the old entities a new way.

As for colors, so for fusions. “The fusion of the Xs” doesn’t refer to some new exotic thing: it refers indefinitely to each plurality coextensive with the Xs. You could say it refers to the Xs, as long as you don’t mind the difference between coextensive pluralities. Furthermore, since whenever the Xs are coextensive with the Ys they stand in exactly the same covering relations, Principle 2 is justified.

Principle 3, on the other hand, is not entirely innocent. Given the definition of parthood, it amounts to extensionality: no distinct (singular) objects are coextensive. I think it’s right to consider this a separate, serious commitment, one that (unlike the rest of mereology) doesn’t flow from the mere conception of a mereological sum. It might, however, flow from the conception of an object. If you aren’t too worried about speaking completely fundamentally, antisymmetry can be had cheaply, by considering “objects” to be coextension-abstractions from the basic objects, in just the same way that sums are coextension-abstractions from the basic pluralities.

So, indeed, the whole is nothing more than its parts. It can’t be identified with any particular plurality of its parts, but it can be identified indefinitely with every plurality of its parts.

[There’s a technical issue for the semantics I’ve alluded to here. I’m treating Fus X as semantically plural (it refers indefinitely to pluralities), but it is syntactically singular. In particular, as a singular term it can be ascribed membership in pluralities. But this means that I need the semantics to allow pluralities to be members of pluralities—and so on—and this isn’t ordinarily allowed. So it looks like I’ll need to give the semantics in terms of “superplurals”. (See section 2.4 of the SEP article on plural quantifiers.) Whether this semantic richness should be reflected in the language is a separate issue—I’m inclined to think not, but I haven’t really thought it through.]

Written by Jeff

April 25, 2009 at 12:21 pm

Posted in Logic, Metaphysics


Laws before facts

with 3 comments

Does the universe come “facts first” or “laws first”? That is, in terms of metaphysical priority, do the non-nomic facts determine what the laws of nature are, or are the laws at the ground floor determining what the non-nomic facts are? (Or maybe neither grounds the other; I’ll ignore this view for now.) The best-known example of a facts-first theory is Lewis’s “best system” account: to be a law of nature is to be a member of the set of generalizations over the non-nomic facts that has the best balance of simplicity and strength. Here are two rough-and-ready arguments against an account like that. The first is the circularity argument I gestured at a few weeks ago:

  1. The laws explain the non-nomic facts.
  2. If Y explains X, then X does not explain Y.
  3. If X grounds Y, then X explains Y.
  4. So the non-nomic facts don’t ground the laws.

And this is the second:

  1. The non-nomic facts are many and disparate; the laws are simple and few.
  2. Prefer metaphysical theories that are simpler and more parsimonious at the fundamental level.
  3. So prefer laws-first to facts-first metaphysics.

The second premise is a methodological principle, rather than a general metaphysical claim (hence the imperative). It’s a ceteris paribus principle, and so the conclusion is a ceteris paribus conclusion: there is a presumption in favor of laws-first accounts.

Thoughts?

Written by Jeff

April 18, 2009 at 6:57 pm

Another quick objection to Tooley

leave a comment »

In “The Nature of Laws” (1977), Michael Tooley claims that it is nomologically true that Fs are Gs just in case a certain relation R—the nomological relation—holds between the universal Fness and the universal Gness. He claims further that this relation-symbol “R” is a theoretical term whose referent is fixed in Ramsey-Lewis style: we specify some constraints C, and then stipulate that R is the unique relation that satisfies C. These are his constraints:

  1. R is a two-place relation on universals.
  2. R is contingent: there are universals Fness and Gness such that it is neither necessary that R(Fness, Gness) nor necessary that not R(Fness, Gness).
  3. R(Fness, Gness) logically entails that all Fs are Gs.

(I’ve left out the complications that are introduced to deal with laws that have different forms, such as “All non-F’s are G’s or H’s” (Tooley doesn’t think there are negative or disjunctive universals). But Tooley thinks there is a different nomological relation associated with each syntactic construction, so this doesn’t make a difference here.)

But it doesn’t look at all plausible to me that these constraints pick out a unique relation (assuming anything satisfies them at all). Look, here’s a non-nomological relation that satisfies conditions 1–4: the relation denoted by the two-place quantifier “All”—that is, the relation that holds between Fness and Gness just in case all Fs are Gs. Tooley hasn’t said anything that would distinguish his nomological relation from such run-of-the-mill categorical relations. This strikes me as a serious problem. Am I missing something?

(I’m working through David Armstrong’s What is a law of nature? now—I’ll see if he adds anything helpful.)


EDIT: I inadvertently left out one of Tooley’s constraints:

  4. R is irreducibly second-order.

You might think this helps. In particular, you might say that the “All” relation is in fact reducible to less-than-second-order universals only—since, after all, “All(F, G)” holds iff for every x, if Fx then Gx. But this “reduction” involves the concept “for every”, which plausibly involves the “All”-relation in disguise. (Analogously, one might “reduce” a purported nomological relation R by pointing out that “R(F, G)” holds iff it is nomologically necessary that for every x, if Fx then Gx.) I guess I’m not really sure what the rules are for reducing universals.

Armstrong makes a conjecture along the same lines: “I speculate that the laws of nature constitute the only irreducibly second-order relations between universals” (84). So presumably he thinks that either there is no “All”-relation, or else that it is reducible to a lower order. Does anyone have an idea why he would think this?

Written by Jeff

March 18, 2009 at 9:26 am

Improbable laws

with 3 comments

One of my goals over spring break is to get familiar with some of the literature on laws of nature. I may blog some thoughts on it as I go.

This afternoon I read Michael Tooley’s “The Nature of Laws” (in the anthology edited by John Carroll). In the section on the epistemology of laws, Tooley shows how we could become confident that a certain law holds, in a Bayesian framework. He then argues that this confirmation story is a distinctive benefit of his account (the DTA account):

[T]here is a crucial assumption that seems reasonable if relations among universals are the truth-makers for laws, but not if facts about particulars are the truth-makers. This is the assumption that m and n [the prior probabilities of certain statements of laws] are not equal to zero. If one takes the view that it is facts about the particulars falling under a generalization that make it a law, then, if one is dealing with an infinite universe, it is hard to see how one can be justified in assigning any non-zero probability to a generalization, given evidence concerning only a finite number of instances. For surely there is some non-zero probability that any given particular will falsify the generalization, and this entails, given standard assumptions, that as the number of particulars becomes infinite, the probability that the generalization will be true is, in the limit, equal to zero.

In contrast, if relations among universals are the truth-makers for laws, the truth-maker for a given law is, in a sense, an “atomic” fact, and it would seem perfectly justified, given standard principles of confirmation theory, to assign some non-zero probability to this fact’s obtaining.

This can’t be right. If Tooley is right in the first paragraph that the probability of any universal generalization over particulars is zero, then appealing to the “atomicity” of nomological facts is no help. The problem is that, on his own view, the nomological relation between universals logically entails the corresponding universal generalization over particulars. But this means that, by monotonicity, the probability of the relation can be no greater than the probability of the generalization. So if the generalization has zero probability, so too does the relation.
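To put the monotonicity point in symbols (my notation, not Tooley’s): let L be the proposition that R(Fness, Gness), and G the generalization that all Fs are Gs.

  • By Tooley’s own constraint 3, L logically entails G, and probability is monotone over entailment; so P(L) \leq P(G).

  • One way of filling in the “standard assumptions” of the first paragraph: if each of n instances has an independent falsification probability of at least \epsilon > 0, then P(G) \leq (1 - \epsilon)^n \to 0 as n \to \infty.

  • So P(L) = 0 as well, “atomic” truth-maker or not.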

The upshot is that if Tooley’s point in the first paragraph is right, then it’s devastating for just about any account of the epistemology of laws—because any account of laws will have it that a generalization being true-by-law entails it being plain-old-true. So we’d better figure out why Tooley’s point is wrong.

Written by Jeff

March 14, 2009 at 3:27 pm

Posted in Epistemology, Metaphysics, Science


Non-distributive explanations

leave a comment »

This distribution principle looks awfully plausible:

  • A explains (B and C) iff (A explains B and A explains C).

But I think it might be false, at least in the right-to-left direction.

A potential counterexample comes from Aristotle. A bandit is lingering by a road, and a farmer is walking home down the same road. By chance (as we would say), they meet at place X at time T. There is a telic explanation in terms of the bandit’s purposes for his being at X at T, and there is a telic explanation in terms of the farmer’s purposes for his being at X at T. But there is no telic explanation for their meeting, even though I take it that their meeting just consists in both of them being at X at T. The meeting is just a coincidence.

Slightly more carefully: let BP be the bandit’s purposes and FP be the farmer’s purposes, and let BXT and FXT be the relevant location facts. Assuming that antecedent strengthening holds for “explains”, this means that (BP and FP) telically explains BXT, and (BP and FP) telically explains FXT. But since their meeting is coincidental, it seems plausible that (BP and FP) does not telically explain (BXT and FXT).

If distribution fails for telic explanation, then perhaps it fails for nomic explanations as well, for the same kind of reason.

Why does this matter? It’s relevant to my criticism last week of Lewis’s “best system” account of laws (not just my criticism). Briefly, I said: on Lewis’s account, the qualitative facts explain what the laws are, but the laws should explain the qualitative facts. That makes a very tight explanatory circle, and that’s bad.

A response might go: it’s true that the conjunction of the qualitative facts explains the laws. It’s also true that the laws explain each individual qualitative fact. But it doesn’t follow that the laws explain the conjunction of the qualitative facts—since the distributive principle fails—and so there is no bad explanatory circle.

Written by Jeff

March 10, 2009 at 4:38 pm

Posted in Epistemology, Metaphysics, Science


Magnetic laws

with 7 comments

I’ve been thinking more about the problem of fit I posted last week. Specifically, I’m trying to work out how a response appealing to reference magnetism would go.

Recall the puzzle: how is it that when we select hypotheses that best exemplify our theoretical values, we so often hit on the truth? A simple example: emeralds, even those we haven’t observed, are green, rather than grue. And lo, we believe they are green, rather than believing they are grue. It seems things could have been otherwise, in either of two ways:

  1. There might be people who project grue rather than green, in a world like ours.
  2. Or there might be people who (like us) project green, in a world where emeralds are grue.

Those people are in for a shock. Why are we so lucky?

A response in the Lewisian framework goes like this. Not all properties are created equal: green is metaphysically more natural than grue. In particular, it is semantically privileged: it is easier to have a term (or thought) about green than it is to have a term (or thought) about grue. This should take care of possibility 1. If there are people who theorize in terms of grue rather than green, their practices would have to be sufficiently perverse to overcome the force of green’s reference magnetism. There are details to fill in, but plausibly it would be hard for natural selection to produce creatures with such perverse practices.

But this still leaves possibility 2. Given that our theories are attracted to the natural properties, even so, why should a theory in terms of natural properties be true? The green-projectors in the world of grue emeralds have just as natural a theory as ours, to no avail.

But even though 2 is possible, we can still explain why it doesn’t obtain. What we need to explain is why emeralds are green—and we shouldn’t try to explain that by appeal to general metaphysics, but by something along these lines: the electrons in a chromium-doped beryl crystal can only absorb photons with certain amounts of energy. That is, we explain why emeralds are green by appeal to the natural laws of our world.

Generalizing: “joint-carving” theories yield true predictions because their predictions are supported by natural laws. Why is this? On the Lewisian “best system” account of laws, it is partly constitutive of a natural law that it carve nature at the joints: naturalness is one of the features that distinguishes laws from mere accidental generalizations. So, much as reference magnetism makes it harder to have a theory that emeralds are grue than it is to have a theory that emeralds are green, so the best system account makes it harder to have a law that emeralds are grue than it is to have a law that emeralds are green. Then the idea is that, since our theories and our laws are both drawn to the same source, this makes it likely that they line up. Furthermore, since the laws explain the facts, this explains why our theories fit the facts.

Something isn’t right about this story; I’m having a hard time getting it clear, but here’s a stab. There’s a general tension in the best system account: on the one hand, the laws are supposed to explain the (non-nomic) facts; on the other hand, the (non-nomic) facts are metaphysically prior to the laws. But metaphysical priority is also an explanatory relation, and so it looks like we’re in a tight explanatory circle. (Surely this point has been made? I don’t know much of the literature on laws, so I’d welcome any pointers.)

This is relevant because the answer to the problem of fit relies on the explanatory role of laws—a role that the best system account seems hard-pressed to sustain. But I feel pretty shaky on this, and would appreciate help.

Written by Jeff

March 2, 2009 at 8:44 pm

The possibility of resurrection

with 5 comments

Say next week I am utterly destroyed, body and mind. There is no immortal trace of me that survives the destruction (at least in the short term). Could God, even so, raise me up at the last day? Certainly in the distant future he could make a person who was like me in various respects, but could he ensure that the person he made then was really me? Here’s a story about how this might be, given three controversial premises.

  1. The concept person is an ethical concept. To be the same person as X is, roughly, to be responsible for X’s actions, to have a special stake in what happens to X, to have special obligations and rights in how you treat X.

  2. Ethical value is grounded in God’s evaluative attitudes. In broad strokes, to be good is to be loved by God. More complicated ethical concepts similarly come down to God’s attitudes in suitably complicated ways.

These two imply that for X and Y to be the same person amounts to God having the right evaluative attitudes—for short, it amounts to God regarding X and Y as the same person. Then the final premise is that God’s attitudes are not unduly constrained:

  3. It is possible for God to regard X and Y as the same person, even if X is destroyed long before Y is created.

It’s clear how the conclusion follows.

You can count me among the skeptics of the second premise, but I think there’s something interesting here even if we give it up. If personhood is answerable to questions of value, then it isn’t clear why traditional metaphysical issues like physical or causal continuity would be relevant to survival. If those connections are severed, then the conceptual obstacles to resurrection seem much less threatening.

Written by Jeff

February 24, 2009 at 1:50 pm

Theory choice and God

with 3 comments

We have a lot of true beliefs. A few examples: there are dogs; every set has a power set; the universe is around 13 billion years old; it is generally wrong to torture children for fun; there are stars at space-like separation from us; a bus will go up First Avenue tomorrow. How did we get so many true beliefs about so many subjects?

First, how did we get our beliefs? The rough story is something like this: we have gathered evidence and formed hypotheses to account for that evidence, lending our belief to the hypotheses that best exemplify our theoretical values: fit, simplicity, power.

Second, why are so many of the beliefs we got this way true? It is easy to imagine creatures whose theoretical values are very bad guides to the truth. (Two versions: there could be people in a universe like ours who are defective theory-choosers—(e.g.) they favor the grue-theory over the green-theory; or there could be people with values like ours in a universe where (e.g.) emeralds are really grue.) So it sure looks like it could have been otherwise for us; why isn’t it? I see six main lines of response.

  1. Skepticism. The alleged datum is false: our theoretical values really aren’t a good guide to the truth. This would be very bad.

  2. Anti-realism. Somehow or another, our theoretical values constitute the facts in question. I assume this isn’t plausible for most of the subjects I mentioned.

  3. No reason. We’re just lucky. I think this response leads to skepticism (even though the problem didn’t start as a skeptical worry): if we find out there’s no reason for our theory choices to track the truth, then we shouldn’t be confident that they really do.

  4. Evolution. I’m sure this is part of the story, but it can’t be all of it. Selection might account for reliable theory-choosing about middle-sized objects of the sort our ancestors interacted with (I think Plantinga’s worries about this case can be answered); but this doesn’t account for our more exotic true beliefs about (e.g.) math, morality, cosmology, or the future. There could be creatures subject to the same constraints on survival as our ancestors (at whatever level of detail is selectively relevant), and yet who choose bad theories at selectively neutral scales and distances. Why aren’t we like them?

  5. Reference magnetism: certain theories are naturally more “eligible” than others; our beliefs are attracted to eligible theories; furthermore, the eligible theories are likely to be true. On the Lewisian version of this story, though, there is no reason why the last part should hold: we might very well be in a universe where the natural properties are distributed in a chaotic or systematically misleading way. In that case, reference magnetism would systematically attract our beliefs toward false theories. There might be a better non-Lewisian variant, where a property’s eligibility is somehow tied to its distribution, but I don’t know how this would go.

  6. Theism. Someone (I won’t be coy—it’s God) who has certain theoretical values is responsible both for the way the universe is, and also for the theoretical values we have. He made the universe as he saw fit, which means it has the kind of simplicity he likes; and he made us in his image, which means we like the same kind of simplicity. So when we judge Fness as a point in favor of a theory, this makes it likely that God favors Fness, which in turn makes it likely that the universe is F.

    (Why isn’t the fit perfect? The theist already needed an answer to the problem of evil—why the universe doesn’t perfectly fit our moral values; presumably that answer will also apply to our theoretical values.)

So it looks to me like the theist has a better explanation for an important fact than her rivals; this counts as evidence in theism’s favor.

Any good alternative explanations I’m missing?

Written by Jeff

February 22, 2009 at 2:03 pm