Speak with the vulgar.

Think with me.

Archive for the ‘Logic’ Category

Oops

with one comment

I’ve been cleaning up my “Indefinite Divisibility” paper from last year. One of my arguments in it concerned supergunk: X is supergunk iff for every chain of parts of X, there is some y which is a proper part of each member of the chain. I claimed that supergunk was possible, and argued on that basis against absolutely unrestricted quantification. I even thought I had a kind of consistency proof for supergunk: in particular, a (proper class) model that satisfied the supergunk condition as long as the plural quantifier was restricted to set-sized collections. Call something like this set-supergunk.

Well, I was wrong. I’ve been suspicious for a while, and I finally proved it today: set-supergunk is impossible. So I thought I’d share my failure. In fact, an even stronger claim holds:

Theorem. If x_0 is atomless, then x_0 has a countable chain of parts such that nothing is a part of each of them.

Proof. Since x_0 is atomless, there is a (countable) sequence x_0 > x_1 > x_2 > \dots. For each positive integer k, let y_k be x_{k-1} - x_k. Then let z_k be the sum of y_k, y_{k+1}, y_{k+2}, \dots. Note that the z_k’s are a countable chain. Note also that each z_k is part of x_{k-1}.

Now suppose that some z_\omega is a part of each z_k. In that case, z_\omega is part of each x_k (since z_{k+1} is part of x_k). But since x_k is disjoint from y_k, this means that z_\omega is disjoint from each y_k, and so by the definition of a mereological sum, z_\omega is disjoint from each z_k. This contradicts the supposition that z_\omega is a part of (and so overlaps) each z_k.
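The construction in the proof can be made concrete in a toy model where parts of the atomless whole are nonempty sets of natural numbers and parthood is the subset relation (this model is my illustration, not the proper-class model from the paper):

```python
# Toy instance of the proof's construction: parts are nonempty subsets of
# the naturals (given as membership predicates), parthood is subset.

def x(k):
    """x_0 > x_1 > x_2 > ...: a strictly descending chain of parts."""
    return lambda n: n >= k

def y(k):
    """y_k = x_{k-1} - x_k; in this model, the singleton {k-1}."""
    return lambda n: n == k - 1

def z(k):
    """z_k = sum of y_k, y_{k+1}, ...; in this model, {k-1, k, k+1, ...}."""
    return lambda n: n >= k - 1

# The z_k form a countable chain, but nothing is part of every z_k: any
# candidate atom n is already missing from z_{n+2}.
for n in range(100):
    assert not z(n + 2)(n)
```

Since every nonempty subset contains some natural number, and each number fails to belong to some z_k, no nonempty set is a common part of all the z_k, just as the theorem says.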


Written by Jeff

January 15, 2010 at 6:22 pm

Composition as abstraction

with 9 comments

“The whole is nothing over and above the parts.” This is a nice thought, but it turns out to be difficult to make precise. One attempt is the “composition as identity” thesis: if the Xs compose y, the Xs are y.

This won’t work, at least not without great cost. The United States is composed of fifty states, and it is also composed of 435 congressional districts. If composition is identity, then the U.S. is the states, and the U.S. is the districts; thus the states are the districts. This is bad: composition-as-identity collapses mereologically coextensive pluralities, which means now your plural logic can be no more powerful than your mereology. So you lose the value of even having plural quantifiers. That’s a big sacrifice. (This argument is basically Ted Sider’s, in “Parthood”.)

But the problem here isn’t that the fusion of the Xs is something more than the mere Xs: rather, the fusion is something less. Mereological sums are less fine-grained than pluralities, so if we require each plurality to be identical to a particular sum, we lose the (important!) distinctions that plural logic makes.

This suggests a better way: mereological sums are abstractions from pluralities. Roughly speaking, sums are pluralities with some distinctions ignored. In particular, sums are what you get by abstracting from pluralities on the relation of being coextensive. (Analogously: colors are what you get when you abstract from objects on the same-color relation. Numbers are what you get when you abstract from pluralities on equinumerosity.)

Let’s polish this up a bit. Take overlap as primitive, and define parthood in the standard way:

  • x is part of y iff everything that overlaps x overlaps y.

This has a natural plural generalization:

  • The Xs are covered by the Ys iff everything that overlaps some X overlaps some Y.

Parthood is the limiting case of being covered when there’s just one X and one Y. (I’ll identify each object with its singleton plurality.) We can also define an equivalence relation:

  • The Xs are coextensive with the Ys iff the Xs cover the Ys and the Ys cover the Xs.

Now we can state an abstraction principle. Let Fus be a new primitive function symbol taking one plural argument.

  1. Fus X = Fus Y iff the Xs are coextensive with the Ys.

(Compare Hume’s Principle: #X = #Y iff the Xs are equinumerous with the Ys.) This is the main principle governing composition. It isn’t the only principle we’ll need. For all I’ve said so far, fusions could live in Platonic heaven; but we need them to participate in mereological relations:

  2. The following are equivalent:
    1. The Ys cover the Xs.
    2. The Ys cover Fus X.
    3. Fus Y covers the Xs.

This guarantees that Fus X really is the fusion of the Xs by the standard definition of “fusion”. There is one final assumption needed to ensure that our mereology is standard:

  3. Parthood is antisymmetric. (If x is part of y and y is part of x, then x = y.)

Equivalently: Fus x = x. In the singular case, composition really is identity.
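The three principles can be checked in a toy model where objects are nonempty sets of atoms and Fus X is represented by the set of all atoms of the Xs, i.e. a canonical representative of the coextension class (this is my illustrative sketch, not the indefinite-reference semantics the post goes on to describe):

```python
# Toy check of Principles 1-3: objects are nonempty frozensets of atoms,
# and Fus X is modeled by the set of all atoms of the Xs.

def atoms_of(Xs):
    return frozenset().union(*Xs)

def fus(Xs):
    # canonical representative of the coextension class of the Xs
    return atoms_of(Xs)

def covered_by(Xs, Ys):
    # in the atom model: every atom of the Xs is an atom of some Y
    return atoms_of(Xs) <= atoms_of(Ys)

states = {frozenset({1, 2}), frozenset({3})}
districts = {frozenset({1}), frozenset({2, 3})}

# Principle 1: Fus X = Fus Y iff the Xs are coextensive with the Ys.
assert fus(states) == fus(districts)

# Principle 2: the Ys covering the Xs, the Ys covering Fus X, and Fus Y
# covering the Xs all come to the same thing.
Ys = {frozenset({1, 2, 3})}
assert covered_by(states, Ys) == covered_by({fus(states)}, Ys) \
       == covered_by(states, {fus(Ys)})

# Principle 3, singular case: Fus x = x.
x = frozenset({1, 2})
assert fus({x}) == x
```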

These three principles imply all of standard mereology. So just how innocent are they?

I think they’re fairly innocent, given the right conception of how abstraction works. I like a “tame” account of abstraction which doesn’t introduce any new ontological commitments. (This means tame abstraction is too weak for Frege arithmetic or for Frege’s Basic Law V—this is a good thing.) The basic idea is that abstract terms refer indefinitely to each of their instances. For example, the singular term “red” refers indefinitely to each red thing: we consider all red instances as if they were a single thing, without being specific as to which. (Semantically, you can understand indefinite reference in terms of supervaluations.) Red has the properties that all red things must share. E.g., if any red thing must be rosier than any taupe thing, then we can also say that red is rosier than taupe. Speaking of red doesn’t commit to any new entity—it’s just speaking of the old entities a new way.
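The supervaluational reading of indefinite reference can be sketched as follows; the things and rosiness values below are invented purely for illustration:

```python
# Supervaluational sketch of indefinite reference (names and rosiness
# values are invented). "Red" refers indefinitely to each red thing, so a
# claim about red is true iff it holds however the indefinite reference is
# resolved, i.e. of every red instance.

rosiness = {"rose": 0.9, "brick": 0.7, "wall": 0.2, "sofa": 0.3}
red_things = ["rose", "brick"]
taupe_things = ["wall", "sofa"]

def supertrue(claim, instances):
    # true of the abstract term iff true on every resolution
    return all(claim(i) for i in instances)

# "Red is rosier than taupe": every red thing must be rosier than every
# taupe thing.
assert supertrue(
    lambda r: all(rosiness[r] > rosiness[t] for t in taupe_things),
    red_things,
)
```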

As for colors, so for fusions. “The fusion of the Xs” doesn’t refer to some new exotic thing: it refers indefinitely to each plurality coextensive with the Xs. You could say it refers to the Xs, as long as you don’t mind the difference between coextensive pluralities. Furthermore, since whenever the Xs are coextensive with the Ys they stand in exactly the same covering relations, Principle 2 is justified.

Principle 3, on the other hand, is not entirely innocent. Given the definition of parthood, it amounts to extensionality: no distinct (singular) objects are coextensive. I think it’s right to consider this a separate, serious commitment, one that (unlike the rest of mereology) doesn’t flow from the mere conception of a mereological sum. It might, however, flow from the conception of an object. If you aren’t too worried about speaking completely fundamentally, antisymmetry can be had cheaply, by considering “objects” to be coextension-abstractions from the basic objects, in just the same way that sums are coextension-abstractions from the basic pluralities.

So, indeed, the whole is nothing more than its parts. It can’t be identified with any particular plurality of its parts, but it can be identified indefinitely with every plurality of its parts.

[There’s a technical issue for the semantics I’ve alluded to here. I’m treating Fus X as semantically plural (it refers indefinitely to pluralities), but it is syntactically singular. In particular, as a singular term it can be ascribed membership in pluralities. But this means that I need the semantics to allow pluralities to be members of pluralities—and so on—and this isn’t ordinarily allowed. So it looks like I’ll need to give the semantics in terms of “superplurals”. (See section 2.4 of the SEP article on plural quantifiers.) Whether this semantic richness should be reflected in the language is a separate issue—I’m inclined to think not, but I haven’t really thought it through.]

Written by Jeff

April 25, 2009 at 12:21 pm

Posted in Logic, Metaphysics


Indefinite divisibility

with 2 comments

If you’re interested, I’ve written a short paper on my nominalistic indefinite extensibility arguments. (This is also my way of making good on my offer in the comments to discuss a sort of consistency result for supergunk—it’s in the appendix.)

Written by Jeff

February 17, 2009 at 8:49 pm

The “strict philosophical sense”

with 4 comments

Here’s an inconsistent triad:

  • The question of the ontological status of ordinary material objects is a serious question: its answer isn’t obvious.
  • Obviously there is a chair I’m sitting on.
  • Ontology is about what there is. (So, specifically, the question of the ontological status of ordinary material objects is just the question of whether there are such objects (chairs being among them).)

All three principles are pretty compelling. How can we resolve their inconsistency?

I suggest that there is an equivocation on “there is”. When we say ontology is about what there is, we are using “there is” in a different way than when we say there is a chair I’m sitting on. It is responsive to different constraints.

This is Quine’s picture: to find out what there is, we look at what we quantify over in our simplest theory of the world. The quantifiers are the symbols that appear in certain inferences: If a is a \phi, then there is a \phi; If we can infer that a isn’t a \phi from premises not involving a, then we can infer from the same premises that there isn’t a \phi. These rules, or something like them, constrain what we mean by “there is”, when we are doing our philosophical theory-building.
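In standard notation, the two quantifier rules just mentioned come to this (my rendering of the constraints, not a quotation from Quine):

```latex
% Existential introduction: if a is a \phi, then there is a \phi.
\frac{\phi(a)}{\exists x\,\phi(x)}
\qquad
% If we can infer that a isn't a \phi from premises \Gamma not
% involving a, we can infer from \Gamma that there isn't a \phi.
\frac{\Gamma \vdash \neg\phi(a) \quad a \text{ not in } \Gamma}
     {\Gamma \vdash \neg\exists x\,\phi(x)}
```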

But the natural meaning of “there is” is constrained by the facts of English usage (perhaps together with some facts about the natural properties out there for us to talk about). There’s no reason to think beforehand that the constraints of theory-building are going to coincide with the constraints of ordinary usage. Clearly there’s an etymological relationship between the “strict philosophical” sense of “there is” and the ordinary English sense, but it looks plausible to me that they aren’t quite the same thing.

An analogy. We have an ordinary use of “animal” that excludes human beings. But biologists have discovered that there is a more useful category for systematic theory-building, one which mostly coincides with ordinary “animal”, but which includes human beings. This “strict biological sense” of the word “animal” doesn’t mean that a sign that says “No animals allowed on the bus” is (strictly speaking) wrong. It’s just employing a different sense of “animal”.

I think a lot of philosophers think that when they say “strictly speaking”, they are manipulating the pragmatics of the discourse: the “strict philosophical sense” is the most literal sense. If what I’m saying is right, then this is a mistake. The strict philosophical sense isn’t any more literal than the ordinary sense; it is simply a sense that belongs to a different, philosophical register.

Written by Jeff

December 21, 2008 at 2:26 pm

How do we understand formal languages?

with 5 comments

Consider a sentence from some formal language; for example, a sentence of quantified modal logic:

  • \exists x(Fx \land \lozenge \neg Fx)
    “There is an F which could have been a non-F.”

What fixes the meaning of this sentence? How do we make sense of it? And then, what is the status of our judgments about truth conditions, validity, consequence, etc. for formal sentences?

A candidate answer. (Peter van Inwagen defends this view in “Meta-Ontology”, 1998.) The symbols in a formal language are defined in terms of some natural language, like English. For instance, \exists is defined to mean “there is”, \lozenge to mean “possibly”, and so on. We understand the formal sentence by replacing each symbol with its English definiens, and we understand the English sentence directly. On this view, formal languages are just handy abbreviations for the natural languages we have already mastered, perhaps with some extra syntactic markers to remove ambiguity.

Suppose A, an English speaker, claims that \phi is intuitively valid. If B wants to argue that \phi is in fact invalid, she has only three options. (1) Use a different English translation from A. In this case, though, B would merely be talking past A. (2) Deny that A correctly understands the English sentence—so B is controverting a datum of natural language semantics. (3) Deny A’s logical intuition. So B’s only options are pretty drastic: to deny a native speaker’s authority on the meaning of her own language, or to deny a (let’s say pretty strong) logical intuition.

I’m pretty sure the candidate answer is wrong. First, because the obvious English translations for a logical symbol often turn out to be wrong—witness the logician’s conditional, or the rigid “actually” operator—and we can go on understanding the symbol even before we have found an adequate translation. Also, we don’t typically explain the use of a symbol by giving a direct English translation: rather, we describe (in English, or another formal language) generally how the symbol is to be used. Furthermore, we can have non-trivial arguments over whether a certain English gloss of a formal sentence is the right one.

Here’s an alternative picture. In order to do some theoretical work, we introduce a regimented language as a tool. What we need for the job is some sentences that satisfy certain semantic constraints. \phi should mean that snow is white. \psi should be valid. \alpha should have \beta as a consequence. We generally won’t have codified these constraints, but we internalize them in our capacity as theorists using a particular language; someone who doesn’t use the language in accordance with the constraints doesn’t really understand it. (This view is like conceptual role semantics, except that constraints that specify the meaning directly, in other languages, are allowed.)

In using the language, we assume that some interpretation satisfies our constraints—to use a formal language is, in effect, to make a theoretical posit. Insofar as our constraints underdetermine what such an interpretation would be, our language’s interpretation is in fact underdetermined. If no language satisfies all the constraints, then we’ve been speaking incoherently; we need to relax some constraints. The constraints are partly a matter of convention, but also partly a matter of theory: internally, in using the language, we commit ourselves to its coherence, and thus to the existence of an interpretation that satisfies the constraints; and externally, the constraints are determined by the theoretical requirements we have of the language.

Say A judges \phi to be valid. What this involves is A’s judgment that “\phi is valid” is a consequence of a certain set of implicit semantic constraints on the language. Again suppose that B denies A’s validity intuition. Now there are two ways to go. (1) Deny A’s logic: B might agree on the relevant constraints, but disagree that they have “\phi is valid” as a consequence. (2) Deny A’s constraints: B might say that some of the constraints A imposes are not appropriate for the language in question. This might be based on an internal criticism—some of A’s constraints are inconsistent—or, more likely, external criticism: some of A’s constraints don’t adequately characterize the role the language is intended to play. The important upshot is that, unlike on van Inwagen’s view, B can disagree not only on linguistic grounds or logical grounds, but also on theoretical grounds. (Of course, since on my view the constraints also fix the meaning of the language, there is no bright line between the linguistic and theoretical grounds for disagreement—this is Quine’s point.)

Written by Jeff

December 10, 2008 at 3:34 pm

Posted in Language, Logic
