Speak with the vulgar.

Think with me.

Posts Tagged ‘mereology’

Oops

I’ve been cleaning up my “Indefinite Divisibility” paper from last year. One of my arguments in it concerned supergunk: X is supergunk iff for every chain of parts of X, there is some y which is a proper part of each member of the chain. I claimed that supergunk was possible, and argued on that basis against absolutely unrestricted quantification. I even thought I had a kind of consistency proof for supergunk: in particular, a (proper class) model that satisfied the supergunk condition as long as the plural quantifier was restricted to set-sized collections. Call something like this set-supergunk.
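
In symbols, the condition is roughly this (a sketch; I write \leq for parthood, < for proper parthood, and z \prec cc for “z is one of the cc”, where a chain is a plurality of parts of X any two of which are comparable by \leq):

X \text{ is supergunk} \iff \text{for every chain } cc \text{ of parts of } X,\ \exists y\,\forall z\,(z \prec cc \rightarrow y < z)

Set-supergunk is the result of restricting the chain variable cc to set-sized collections of parts.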

Well, I was wrong. I’ve been suspicious for a while, and I finally proved it today: set-supergunk is impossible. So I thought I’d share my failure. In fact, an even stronger claim holds:

Theorem. If x_0 is atomless, then x_0 has a countable chain of parts such that nothing is a part of each of them.

Proof. Since x_0 is atomless, there is a (countable) descending sequence x_0 > x_1 > x_2 > \dots . For each positive integer k, let y_k be x_{k-1} - x_k. Then let z_k be the sum of y_k, y_{k+1}, y_{k+2}, \dots . Note that the z_k’s are a countable chain (each z_{k+1} is a sum of some of the pieces that make up z_k). Note also that each z_k is part of x_{k-1} (each y_j with j \geq k is part of x_{j-1}, which is part of x_{k-1}).

Now suppose, for contradiction, that some z_\omega is a part of each z_k. In that case, z_\omega is part of each x_k. But since x_k is disjoint from y_k, this means that z_\omega is disjoint from each y_k, and so, by the definition of a mereological sum, z_\omega is disjoint from each z_k. That is a contradiction: nothing is both a part of and disjoint from the same thing.
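
For reference, the construction in symbols (a sketch; − is mereological difference and \sqcup is mereological summation):

y_k = x_{k-1} - x_k, \qquad z_k = y_k \sqcup y_{k+1} \sqcup y_{k+2} \sqcup \cdots, \qquad z_1 \geq z_2 \geq z_3 \geq \cdots, \qquad z_k \leq x_{k-1}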

Written by Jeff

January 15, 2010 at 6:22 pm

Composition as abstraction

“The whole is nothing over and above the parts.” This is a nice thought, but it turns out to be difficult to make precise. One attempt is the “composition as identity” thesis: if the Xs compose y, the Xs are y.

This won’t work, at least not without great cost. The United States is composed of fifty states, and it is also composed of 435 congressional districts. If composition is identity, then the U.S. is the states, and the U.S. is the districts; thus the states are the districts. This is bad: composition-as-identity collapses mereologically coextensive pluralities, which means now your plural logic can be no more powerful than your mereology. So you lose the value of even having plural quantifiers. That’s a big sacrifice. (This argument is basically Ted Sider’s, in “Parthood”.)

But the problem here isn’t that the fusion of the Xs is something more than the mere Xs: rather, the fusion is something less. Mereological sums are less fine-grained than pluralities, so if we require each plurality to be identical to a particular sum, we lose the (important!) distinctions that plural logic makes.

This suggests a better way: mereological sums are abstractions from pluralities. Roughly speaking, sums are pluralities with some distinctions ignored. In particular, sums are what you get by abstracting from pluralities on the relation of being coextensive. (Analogously: colors are what you get when you abstract from objects on the same-color relation. Numbers are what you get when you abstract from pluralities on equinumerosity.)

Let’s polish this up a bit. Take overlap as primitive, and define parthood in the standard way:

  • x is part of y iff everything that overlaps x overlaps y.

This has a natural plural generalization:

  • The Xs are covered by the Ys iff everything that overlaps some X overlaps some Y.

Parthood is the limiting case of being covered when there’s just one X and one Y. (I’ll identify each object with its singleton plurality.) We can also define an equivalence relation:

  • The Xs are coextensive with the Ys iff the Xs cover the Ys and the Ys cover the Xs.
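
In symbols (a sketch; \circ is overlap, x \prec X means “x is one of the Xs”, and \sqsubseteq and \approx are my shorthand for “is covered by” and “is coextensive with”):

x \leq y \iff \forall z\,(z \circ x \rightarrow z \circ y)

X \sqsubseteq Y \iff \forall z\,\bigl(\exists x\,(x \prec X \wedge z \circ x) \rightarrow \exists y\,(y \prec Y \wedge z \circ y)\bigr)

X \approx Y \iff X \sqsubseteq Y \,\wedge\, Y \sqsubseteq X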

Now we can state an abstraction principle. Let Fus be a new primitive function symbol taking one plural argument.

  1. Fus X = Fus Y iff the Xs are coextensive with the Ys.

(Compare Hume’s Principle: #X = #Y iff the Xs are equinumerous with the Ys.) This is the main principle governing composition. It isn’t the only principle we’ll need. For all I’ve said so far, fusions could live in Platonic heaven; but we need them to participate in mereological relations:

  2. The following are equivalent:
    1. The Ys cover the Xs.
    2. The Ys cover Fus X.
    3. Fus Y covers the Xs.
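
As a quick check, take the Ys here to be the Xs themselves. The Xs trivially cover the Xs, so all three clauses hold, and clauses 2 and 3 together unpack (in the notation of the sketch above) to:

\forall z\,\bigl(z \circ \mathrm{Fus}\,X \;\leftrightarrow\; \exists x\,(x \prec X \wedge z \circ x)\bigr)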

This guarantees that Fus X really is the fusion of the Xs by the standard definition of “fusion”. There is one final assumption needed to ensure that our mereology is standard:

  3. Parthood is antisymmetric. (If x is part of y and y is part of x, then x = y.)

Equivalently: Fus x = x. In the singular case, composition really is identity.
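
To see the equivalence (a sketch, in my notation from above): by Principle 2 with both pluralities taken to be just x, x and Fus x cover each other, so x \leq Fus x and Fus x \leq x, and antisymmetry gives Fus x = x. Conversely, suppose Fus z = z for every z. Mutual parthood is mutual covering, so Principle 1 gives:

x \leq y \,\wedge\, y \leq x \;\Rightarrow\; \mathrm{Fus}\,x = \mathrm{Fus}\,y \;\Rightarrow\; x = y

That is antisymmetry, so the two formulations match.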

These three principles imply all of standard mereology. So just how innocent are they?

I think they’re fairly innocent, given the right conception of how abstraction works. I like a “tame” account of abstraction which doesn’t introduce any new ontological commitments. (This means tame abstraction is too weak for Frege arithmetic or for Frege’s Basic Law V—this is a good thing.) The basic idea is that abstract terms refer indefinitely to each of their instances. For example, the singular term “red” refers indefinitely to each red thing: we consider all red instances as if they were a single thing, without being specific as to which. (Semantically, you can understand indefinite reference in terms of supervaluations.) Red has the properties that all red things must share. E.g., if any red thing must be rosier than any taupe thing, then we can also say that red is rosier than taupe. Speaking of red doesn’t commit to any new entity—it’s just speaking of the old entities a new way.

As for colors, so for fusions. “The fusion of the Xs” doesn’t refer to some new exotic thing: it refers indefinitely to each plurality coextensive with the Xs. You could say it refers to the Xs, as long as you don’t mind the difference between coextensive pluralities. Furthermore, since whenever the Xs are coextensive with the Ys they stand in exactly the same covering relations, Principle 2 is justified.

Principle 3, on the other hand, is not entirely innocent. Given the definition of parthood, it amounts to extensionality: no distinct (singular) objects are coextensive. I think it’s right to consider this a separate, serious commitment, one that (unlike the rest of mereology) doesn’t flow from the mere conception of a mereological sum. It might, however, flow from the conception of an object. If you aren’t too worried about speaking completely fundamentally, antisymmetry can be had cheaply, by considering “objects” to be coextension-abstractions from the basic objects, in just the same way that sums are coextension-abstractions from the basic pluralities.

So, indeed, the whole is nothing more than its parts. It can’t be identified with any particular plurality of its parts, but it can be identified indefinitely with every plurality of its parts.

[There’s a technical issue for the semantics I’ve alluded to here. I’m treating Fus X as semantically plural (it refers indefinitely to pluralities), but it is syntactically singular. In particular, as a singular term it can be ascribed membership in pluralities. But this means that I need the semantics to allow pluralities to be members of pluralities—and so on—and this isn’t ordinarily allowed. So it looks like I’ll need to give the semantics in terms of “superplurals”. (See section 2.4 of the SEP article on plural quantifiers.) Whether this semantic richness should be reflected in the language is a separate issue—I’m inclined to think not, but I haven’t really thought it through.]

Written by Jeff

April 25, 2009 at 12:21 pm

Posted in Logic, Metaphysics

Indefinite divisibility

If you’re interested, I’ve written a short paper on my nominalistic indefinite extensibility arguments. (This is also my way of making good on my offer in the comments to discuss a sort of consistency result for supergunk—it’s in the appendix.)

Written by Jeff

February 17, 2009 at 8:49 pm

Getting in touch with the universe

In my last post I argued that the set-theoretic problems with “absolutely everything” carry over even for those who don’t believe in sets, by appealing to the possibility of “supergunk”. There’s another route to the same conclusion by way of some principles about contact. I think it’s kind of neat.

Let’s take contact to be a two-place relation between objects; it is reflexive (we count overlap as contact), symmetric, and monotonic: if X touches a part of Y, then X touches Y. These are all standard so far.
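
In symbols (a sketch; C is contact, \leq is parthood):

C(X, X), \qquad C(X, Y) \rightarrow C(Y, X), \qquad \bigl(Z \leq Y \wedge C(X, Z)\bigr) \rightarrow C(X, Y)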

The following additional principles seem jointly possible:

  1. A pretty weak separation principle: if X and Y don’t touch, then there is some further Z that doesn’t touch either of them. (Think of Z as being located between X and Y, keeping them apart.)

  2. A very strong distribution principle: if X touches the fusion of the \phi’s, then X touches some \phi. (Since the last post, I’ve switched from plural quantification to schemes, because I think it helps avoid some issues.) We might call this contact supervenience: what touches the whole touches some part.

The finite version of distribution is completely tame and standard: if X touches Y + Z, then X touches Y or X touches Z. It’s very hard to imagine the finite version failing. It turns out that the general version can fail, though. For instance, none of the intervals \left[\frac{1}{n}, 1\right] (for n = 1, 2, 3, \dots) touches the interval \left[-1, 0\right]; but their fusion does (under ordinary topology). But such failures are pretty counterintuitive (John Hawthorne has written a whole paper about the principle’s failure). And so, even if it turns out that contact doesn’t actually supervene on parts, it still strikes me as a way things could have been.
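
Spelled out, the interval example goes like this, on one natural reading of “touches” as “closures share a point” (my gloss, not something the principles themselves require):

\bigcup_{n \geq 1} \left[\tfrac{1}{n}, 1\right] = (0, 1], \qquad \overline{(0,1]} \cap [-1, 0] = \{0\} \neq \varnothing, \qquad \left[\tfrac{1}{n}, 1\right] \cap [-1, 0] = \varnothing \ \text{for every } n

Each interval \left[\tfrac{1}{n}, 1\right] sits at distance \tfrac{1}{n} from \left[-1, 0\right], so none of them touches it; but the closure of their fusion reaches all the way down to 0.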

But these two principles together give rise to another extensibility argument. Suppose that something doesn’t touch X. Given any \phi’s that don’t touch X, their fusion doesn’t touch X by (2), and so by (1) there is some further thing that touches neither X nor the fusion. And that thing can’t be one of the \phi’s, since each \phi touches itself and so, being part of the fusion, touches the fusion (by monotonicity). So the \phi’s, whatever they may be, don’t exhaust the things that don’t touch X: the non-X-touchers are indefinitely extensible. Thus, in a world where (1) and (2) hold, it doesn’t make sense to talk about absolutely everything there is.

To sum up: (1) and (2) are jointly possible; therefore, generality absolutism is possibly false. Since generality absolutism isn’t contingent, generality absolutism is actually false.

Written by Jeff

December 6, 2008 at 7:59 pm

All things great and small

This is a blog-sized summary of a paper I’m working on.

For more than a century now, there’s been a problem with “everything”. Here’s a simple version: say you have all of the sets. Then there ought to be a set of just those things—a set X that contains all the sets. But in that case X is a member of itself, which no set can be. Paradox!

In 1906 Bertrand Russell writes,

[T]he contradiction results from the fact that…there are what we may call self-reproducing processes and classes. That is, there are some properties such that, given any class of terms all having such a property, we can always define a new term also having the property in question. Hence we can never collect all of the terms having the said property into a whole; because, whenever we hope we have them all, the collection which we have immediately proceeds to generate a new term also having the said property.

Michael Dummett (1993) calls properties like this indefinitely extensible—the main example is “set”, but related paradoxes also show up for “cardinal number”, “order-type”, “property”, and “proposition”. Because of this a lot of philosophers are driven to conclude that we can’t speak intelligibly of all the sets (cardinals, properties, etc.). Whenever we think we’ve caught them all, another pops up to defy us. And if we can’t talk about every set, then we also can’t talk about plain everything—since that would have to include all the sets.

This kind of argument leaves open an escape to somebody with enough nerve: one way out is to deny outright that there are any sets (cardinals, properties, etc.). This is kind of an attractive view anyway, since sets are a lot spookier than, say, tables and chairs and galaxies and electrons—even without the paradoxes. The strong-nerved people who deny the existence of such things are called nominalists (contrasted with platonists or realists).

I have a way to close off the nominalists’ escape route. What we need is a new indefinitely extensible property that isn’t “abstract” (like “set”, etc.): instead, it applies to concrete, material objects. (Even nominalists don’t want to deny those!) I don’t claim that there actually are any such things, though: instead I claim that there could be. This is enough, because it would be very odd if it turned out that “absolutely everything”-talk was intelligible just by luck. The people who think it makes sense to talk that way think that it necessarily makes sense to talk that way. If they’re right, then it shouldn’t even be possible for something to be the way I suggest.

Here’s the idea. Material things could be made of atoms: they might have smallest parts that cannot be divided any further. Alternatively, they could be made of “atomless gunk” (David Lewis’s term (1991)): any piece of it contains ever-smaller bits. Inside our “atoms” we find protons, in the protons we find quarks, and it never stops. Gunk has a long pedigree as a theory of how the world is—and even if it happens to be false about our world, it sure seems like a way a world could possibly be.

But gunk doesn’t by itself give us what we need: it could be that the parts of a gunky material object eventually run out. If you follow finite chains of decreasing objects, there is always something further down—but if you follow infinite chains, you may succeed in getting all the way to the bottom, with nothing smaller below. But it also seems that this might not happen: as you go further and further down to smaller and smaller parts, there are always smaller parts further on. An object with parts like this I’ll call supergunk.[1]

More precisely, an object X is supergunk iff it satisfies the following condition:

  • For any parts of X, the x’s, such that each of the x’s is a part of or has as a part each of the x’s (that is, the x’s form a chain under parthood), there is something that is a proper part of each of the x’s.
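
In symbols (a sketch; \leq is parthood, < is proper parthood, y \prec xx means “y is one of the x’s”, and xx ranges over pluralities of parts of X):

\forall xx\,\Bigl[\,\forall y\,\forall z\,\bigl(y \prec xx \wedge z \prec xx \rightarrow (y \leq z \vee z \leq y)\bigr) \;\rightarrow\; \exists w\,\forall y\,(y \prec xx \rightarrow w < y)\,\Bigr]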

From this condition it follows that “part of X” is an indefinitely extensible property: X is indefinitely divisible. So if there’s trouble for the sets, there is just as much trouble for supergunk. And it sure seems like there could be supergunk (even if there isn’t any in the actual world). So the nominalist has a problem with “everything”, too.


  1. Daniel Nolan (2004) describes something he calls “hypergunk”, but unfortunately that’s a bit different. ↩

Written by Jeff

November 16, 2008 at 12:00 am