Well, I was wrong. I’ve been suspicious for a while, and I finally proved it today: set-supergunk is impossible. So I thought I’d share my failure. In fact, an even stronger claim holds:
Theorem. If x is atomless, then x has a countable chain of parts such that nothing is a part of each of them.
Proof. Since x is atomless, there is a (countable) descending sequence of parts x ≥ a₁ > a₂ > a₃ > …. For each positive integer n, let bₙ be the remainder aₙ − aₙ₊₁. Then let cₙ be the sum of {bₘ : m ≥ n}. Note that the cₙ’s are a countable chain. Note also that each cₙ is part of x.
Now suppose that z is a part of each cₙ. In that case, z is part of each aₙ. But since aₘ₊₁ is disjoint from bₘ, this means that z is disjoint from each bₘ, and so by the definition of a mereological sum, z is disjoint from each cₙ. This is a contradiction.
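In symbols (one way of spelling out the construction; the labels are not forced on us, but the structure is):

```latex
\begin{align*}
& x \ge a_1 > a_2 > a_3 > \cdots
    && \text{(by atomlessness)}\\
& b_n := a_n - a_{n+1}
    && \text{(nonzero; part of } a_n\text{; disjoint from } a_{n+1}\text{)}\\
& c_n := \operatorname{Sum}\{\, b_m : m \ge n \,\}
    && \text{(so } c_1 \ge c_2 \ge \cdots \text{ and each } c_n \le a_n \le x\text{)}
\end{align*}
```

If z were part of every cₙ, then z would be part of every aₙ, hence disjoint from every bₘ, hence disjoint from every cₙ, contradicting the fact that z is part of (and so overlaps) c₁.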
The semantic value (SV) of a question has two jobs to do. First, it should fit nicely into the rest of our semantics: it should help us get the right truth conditions for sentences with embedded questions. Second, it should fit nicely into the rest of our pragmatics: it should help us explain what a speaker does when she asks a question. Ideally both of these should require minimal revision to the rest of what we were doing in those projects.
As I see it, the standard account (the SV of a question is a set of propositions that partition logical space) does a mediocre job on both counts. You can get things to work, but the account doesn’t really make it easy, and you end up having to build a lot of new machinery in other places, like attitude verbs. I think I might be able to do better.
Let’s start with a semantic constraint. Consider the sentence “Alice knows who won”. This evidently means something close to the following: for some x, Alice knows that x won.
So here’s the first interesting observation: it looks like the embedded question introduces a quantifier, and it looks like the quantifier takes scope over the attitude verb.
Chris Barker and Chung-Chieh Shan give a nice way of doing this sort of thing compositionally, with a handy notation. Semantic values are represented by “towers”, where the higher levels of the tower take scope over the lower levels. For “who won”, the tower has the existential quantifier on its top story and the open sentence “x won” on the bottom.
(The empty brackets are a convenient shorthand: “f []” abbreviates “λx. f x”, where x has the appropriate type.) We compose towers of the same height by combining (by function application) the expressions at each level of the tower. There are also two type-shifters: “Lift”, which adds an empty-bracket story to the top of a tower, and “Lower”, which applies an upper-story function to the argument beneath it. (There are syntactic constraints on when to do these things, but I’m leaving that out for simplicity. I’m also leaving out Barker and Shan’s slightly more complicated story about binding.)
Here’s how it works in the example: Lift the tower on the left (“Alice knows”) so the two towers have the same height. Then compose the two (by function application at each level), and Lower the result to get: for some x, Alice knows that x won. (The underlying machinery for this notation uses continuations. This can all be worked out in regular old typed lambda calculus.)
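The continuation machinery can be sketched as follows; the particular lambda terms are my gloss, not Barker and Shan’s official definitions (K_a abbreviates “Alice knows that”):

```latex
\begin{align*}
\mathrm{Lift}(a) &= \lambda k.\, k(a)
  \qquad\qquad \mathrm{Lower}(M) = M(\lambda p.\, p)\\
\mathrm{SV}(\text{who won}) &= \lambda k.\, \exists x.\, k(\mathit{won}(x))
  \qquad \text{(quantifier on the top story)}\\
\text{composed with } \lambda p.\, K_a(p)
  &: \quad \lambda k.\, \exists x.\, k\bigl(K_a(\mathit{won}(x))\bigr)\\
\text{after Lowering}
  &: \quad \exists x.\, K_a(\mathit{won}(x))
\end{align*}
```

The existential ends up scoping over the attitude verb, which is exactly the reading we wanted.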
That’s almost the complete account. There’s one extra ingredient: the question clause “who won” also carries the presupposition that someone won.
So here’s the theory. Suppose a question clause has the underlying form “what x: φ(x)”. Here “what” is a slightly idealized version of English “what” (it doesn’t assume the answer is inanimate). Then the clause has the SV “for some x, φ(x)”, with the quantifier on the top story so that it can scope over whatever governs the question, and it carries the presupposition that for some x, φ(x). That’s the account.
We handle “Alice guessed who won” in the same way as “knows”: we get the presupposition that someone won, and the content that for some x, Alice guessed that x won. The nice thing to note is that the x of Alice’s guess need not be the same as the x who actually won. (I’m told Groenendijk and Stokhof have some trouble with this.)
Different question words can be understood in terms of the idealized “what”: “who” means “what person”, “when” means “at what time”, and “whether” means “what truth value”. The last case is worth fleshing out: the SV of “whether p” is a tower with the disjunction on the top story. Effectively, “whether p” means “p or not p”—except that the disjunction can take scope over operators that govern the question. So consider the sentence “It’s unclear whether p”. The semantic value for “unclear” splits into a top-floor “un-” and a bottom-floor “clear” (this should be well-motivated for other reasons). The result: it’s not the case that (it is clear that p, or it is clear that not-p).
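Spelling out the computation (my notation; I’m simplifying the towers to their continuation forms):

```latex
\begin{align*}
\mathrm{SV}(\text{whether } p) &= \lambda k.\, k(p) \lor k(\lnot p)\\
\text{“unclear”} &: \ \text{top floor } \lnot[\,]\text{, bottom floor } \mathit{clear}\\
\text{compose and Lower} &: \ \lnot\bigl(\mathit{clear}(p) \lor \mathit{clear}(\lnot p)\bigr)
\end{align*}
```

The disjunction scopes between “un-” and “clear”, so the sentence says that neither p nor not-p is clear.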
I think this is a nice result, and it comes out very cleanly.
A reasonable pragmatics of questions falls out of this story, too.
Consider an unembedded question: “Who won?” Rendering this as “what person x: x won”, it gets the SV “someone won”, and it carries the presupposition “someone won”. So “who won?” turns out to be a peculiar kind of assertion, one that presupposes exactly what it asserts. The question is guaranteed not to add any new information to the common ground. What conversational purpose can it serve? Well, by the maxim of quantity, if I know Bill won, I shouldn’t just say “Someone won”—I should say “Bill won”. So if I say “Someone won”, this lets my interlocutors infer that I don’t know who won. So on my account, when I ask “Who won?”, I am adding no new information to the common ground—in that sense I’m not really asserting anything—but I’m still conveying the information that I don’t know who won. Finally, the question intonation has the effect of a hopeful pleading look: “Help me out here!” I am pointing out a lacuna in my state of information, and then I wait for someone to helpfully fill it.
The nice thing about this story is that it involves absolutely no new pragmatic machinery: it just applies regular stuff about presupposition, common ground, and implicature to the independently-motivated semantics.
I only really dealt with one kind of context for embedded questions: contexts like those under “know”, “guess”, and “clear” that also accept that-clauses. But question clauses also occur in places where that-clauses can’t. I’m not sure what the best way of approaching this is.
One strategy is to try to uncover propositional attitudes in these cases. For instance, “Alice wonders who won” might turn out to mean something like “Alice wishes to know who won”. The SV of “wonders” would be something like “wish” composed with “know”, which yields the interpretation “Alice wishes that (presupposing someone won) some x is such that Alice knows that x won”. This seems ok, but it’s not clear how far we can push the strategy.
(Quine in “Quantifiers and Propositional Attitudes” pulls similar shenanigans to deal with sentences like “I want a sloop” and “Ctesias is hunting unicorns”: superficially the attitude verbs take NP-complements, but they turn out to disguise propositional attitudes. I remember being annoyed at this when I first read it, and I thought you should just let wanting take a quantifier as its argument. But there’s a certain admirable economy in Quine’s approach, constraining our psychological repertoire to just propositional attitudes. Similar considerations weigh against positing new sorts of attitudes that have a question-y object.)
The case of “wonder” looks like it might be assimilated with what I’ll call subject-matter contexts.
I think the standard account of questions was aimed primarily at these kinds of cases. Indeed, David Lewis initially presented partitions of logical space as a theory of subject matter, not a semantic account of questions. And I don’t really want to start from scratch on a new theory of subject matter. Rather, I ought to be able to retrieve subject matters from the SVs of question clauses.
And indeed I can. Consider this conversion function: take the SV of a question and replace the top-story existential quantifier with set formation, collecting the instances into a set of propositions. If we apply this to the SV of a question (and Lower), we get back the set of propositions that comprise the corresponding subject matter. For instance, applying it to the SV of “who won”, we get the set of propositions {that x won : x a person}.
Accordingly, we can give an account of “about” that applies to my SVs for question clauses. Here’s Lewis’s definition of “p is (entirely) about Q” (where p is a proposition and Q is a set of propositions): for each q in Q, q determines p—i.e., either q entails p or q entails not-p. (This isn’t really quite what we want—for most purposes we want something more like “partly about” rather than “entirely about”. But the point here is just to show how the basic pieces fit together.) Putting this together with the subject-matter conversion, we get an SV for “about” on which a proposition p is about a question clause just in case every proposition in the question’s converted subject matter either entails p or entails not-p.
Applying this function to a question clause (and Lowering) returns the property of being a proposition about the subject matter of the question.
This part is a little more cumbersome than the standard account, but it isn’t all that bad. One way to proceed from here would be to try to reduce the various subject-matter contexts to combinations of propositional attitudes and propositional aboutness. Maybe it’s more complicated than that, but it seems worth a shot.
[Does anyone know how to do display-style formulas in wordpress?]
This won’t work, at least not without great cost. The United States is composed of fifty states, and it is also composed of 435 congressional districts. If composition is identity, then the U.S. is the states, and the U.S. is the districts; thus the states are the districts. This is bad: composition-as-identity collapses mereologically coextensive pluralities, which means now your plural logic can be no more powerful than your mereology. So you lose the value of even having plural quantifiers. That’s a big sacrifice. (This argument is basically Ted Sider’s, in “Parthood”.)
But the problem here isn’t that the fusion of the Xs is something more than the mere Xs: rather, the fusion is something less. Mereological sums are less fine-grained than pluralities, so if we require each plurality to be identical to a particular sum, we lose the (important!) distinctions that plural logic makes.
This suggests a better way: mereological sums are abstractions from pluralities. Roughly speaking, sums are pluralities with some distinctions ignored. In particular, sums are what you get by abstracting from pluralities on the relation of being coextensive. (Analogously: colors are what you get when you abstract from objects on the same-color relation. Numbers are what you get when you abstract from pluralities on equinumerosity.)
Let’s polish this up a bit. Take overlap as primitive, and define parthood in the standard way: x is part of y iff everything that overlaps x overlaps y.
This has a natural plural generalization: the Xs are covered by the Ys iff everything that overlaps one of the Xs overlaps one of the Ys.
Parthood is the limiting case of being covered when there’s just one X and one Y. (I’ll identify each object with its singleton plurality.) We can also define an equivalence relation: the Xs are coextensive with the Ys iff the Xs are covered by the Ys and the Ys are covered by the Xs.
Now we can state an abstraction principle. Let Fus be a new primitive function symbol taking one plural argument. Principle 1: Fus X = Fus Y iff the Xs are coextensive with the Ys.
(Compare Hume’s Principle: #X = #Y iff the Xs are equinumerous with the Ys.) This is the main principle governing composition. It isn’t the only principle we’ll need. For all I’ve said so far, fusions could live in Platonic heaven; but we need them to participate in mereological relations. Principle 2: Fus X is coextensive with the Xs.
This guarantees that Fus X really is the fusion of the Xs by the standard definition of “fusion”. There is one final assumption needed to ensure that our mereology is standard. Principle 3: if x is coextensive with y, then x = y.
Equivalently: Fus x = x. In the singular case, composition really is identity.
These three principles imply all of standard mereology. So just how innocent are they?
I think they’re fairly innocent, given the right conception of how abstraction works. I like a “tame” account of abstraction which doesn’t introduce any new ontological commitments. (This means tame abstraction is too weak for Frege arithmetic or for Frege’s Basic Law V—this is a good thing.) The basic idea is that abstract terms refer indefinitely to each of their instances. For example, the singular term “red” refers indefinitely to each red thing: we consider all red instances as if they were a single thing, without being specific as to which. (Semantically, you can understand indefinite reference in terms of supervaluations.) Red has the properties that all red things must share. E.g., if any red thing must be rosier than any taupe thing, then we can also say that red is rosier than taupe. Speaking of red doesn’t commit to any new entity—it’s just speaking of the old entities a new way.
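In supervaluational terms, the idea might be sketched like this (a gloss, not a full semantics):

```latex
\begin{align*}
&\text{“red” refers indefinitely to each member of } \{x : x \text{ is red}\}.\\
&\varphi(\text{red}) \text{ is true} \iff \varphi(a) \text{ is true for every red } a;\\
&\varphi(\text{red}) \text{ is false} \iff \varphi(a) \text{ is false for every red } a;\\
&\text{otherwise } \varphi(\text{red}) \text{ is indeterminate.}
\end{align*}
```

So “red is rosier than taupe” comes out true because every red thing must be rosier than every taupe thing, and no new entity is required as the referent of “red”.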
As for colors, so for fusions. “The fusion of the Xs” doesn’t refer to some new exotic thing: it refers indefinitely to each plurality coextensive with the Xs. You could say it refers to the Xs, as long as you don’t mind the difference between coextensive pluralities. Furthermore, since whenever the Xs are coextensive with the Ys they stand in exactly the same covering relations, Principle 2 is justified.
Principle 3, on the other hand, is not entirely innocent. Given the definition of parthood, it amounts to extensionality: no distinct (singular) objects are coextensive. I think it’s right to consider this a separate, serious commitment, one that (unlike the rest of mereology) doesn’t flow from the mere conception of a mereological sum. It might, however, flow from the conception of an object. If you aren’t too worried about speaking completely fundamentally, antisymmetry can be had cheaply, by considering “objects” to be coextension-abstractions from the basic objects, in just the same way that sums are coextension-abstractions from the basic pluralities.
So, indeed, the whole is nothing more than its parts. It can’t be identified with any particular plurality of its parts, but it can be identified indefinitely with every plurality of its parts.
[There’s a technical issue for the semantics I’ve alluded to here. I’m treating Fus X as semantically plural (it refers indefinitely to pluralities), but it is syntactically singular. In particular, as a singular term it can be ascribed membership in pluralities. But this means that I need the semantics to allow pluralities to be members of pluralities—and so on—and this isn’t ordinarily allowed. So it looks like I’ll need to give the semantics in terms of “superplurals”. (See section 2.4 of the SEP article on plural quantifiers.) Whether this semantic richness should be reflected in the language is a separate issue—I’m inclined to think not, but I haven’t really thought it through.]
And this is the second:
The second premise is a methodological principle, rather than a general metaphysical claim (hence the imperative). It’s a ceteris paribus principle, and so the conclusion is a ceteris paribus conclusion: there is a presumption in favor of laws-first accounts.
Thoughts?
(I’ve left out the complications that are introduced to deal with laws that have different forms, such as “All non-F’s are G’s or H’s” (Tooley doesn’t think there are negative or disjunctive universals). But Tooley thinks there is a different nomological relation associated with each syntactic construction, so this doesn’t make a difference here.)
But it doesn’t look at all plausible to me that these constraints pick out a unique relation (assuming anything satisfies them at all). Look, here’s a non-nomological relation that satisfies conditions 1–4: the relation denoted by the two-place quantifier “All”—that is, the relation that holds between Fness and Gness just in case all Fs are Gs. Tooley hasn’t said anything that would distinguish his nomological relation from such run-of-the-mill categorical relations. This strikes me as a serious problem. Am I missing something?
(I’m working through David Armstrong’s What is a law of nature? now—I’ll see if he adds anything helpful.)
EDIT: I inadvertently left out one of Tooley’s constraints: the nomological relation must be irreducibly second-order, i.e. not reducible to less-than-second-order universals.
You might think this might help. You might say in particular that the “All” relation is in fact reducible to less-than-second-order universals only—since, after all, “All(F, G)” holds iff for every x, if Fx then Gx. But this “reduction” involves the concept “for every”, which plausibly involves the “All”-relation in disguise. (Analogously, one might “reduce” a purported nomological relation R by pointing out that “R(F, G)” holds iff it is nomologically necessary that for every x, if Fx then Gx.) I guess I’m not really sure what the rules are for reducing universals.
Armstrong makes a conjecture along the same lines: “I speculate that the laws of nature constitute the only irreducibly second-order relations between universals” (84). So presumably he thinks that either there is no “All”-relation, or else that it is reducible to a lower order. Does anyone have an idea why he would think this?
This afternoon I read Michael Tooley’s “The Nature of Laws” (in the anthology edited by John Carroll). In the section on the epistemology of laws, Tooley shows how we could become confident that a certain law holds, in a Bayesian framework. He then argues that this confirmation story is a distinctive benefit of his account (the DTA account):
[T]here is a crucial assumption that seems reasonable if relations among universals are the truth-makers for laws, but not if facts about particulars are the truth-makers. This is the assumption that m and n [the prior probabilities of certain statements of laws] are not equal to zero. If one takes the view that it is facts about the particulars falling under a generalization that make it a law, then, if one is dealing with an infinite universe, it is hard to see how one can be justified in assigning any non-zero probability to a generalization, given evidence concerning only a finite number of instances. For surely there is some non-zero probability that any given particular will falsify the generalization, and this entails, given standard assumptions, that as the number of particulars becomes infinite, the probability that the generalization will be true is, in the limit, equal to zero.
In contrast, if relations among universals are the truth-makers for laws, the truth-maker for a given law is, in a sense, an “atomic” fact, and it would seem perfectly justified, given standard principles of confirmation theory, to assign some non-zero probability to this fact’s obtaining.
This can’t be right. If Tooley is right in the first paragraph that the probability of any universal generalization over particulars is zero, then appealing to the “atomicity” of nomological facts is no help. The problem is that, on his own view, the nomological relation between universals logically entails the corresponding universal generalization over particulars. But this means that, by monotonicity, the probability of the relation can be no greater than the probability of the generalization. So if the generalization has zero probability, so too does the relation.
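In symbols, with R(F, G) for the nomological relation (and assuming, with Tooley’s first paragraph, that each instance carries some independent falsification probability ε > 0):

```latex
\begin{align*}
R(F,G) &\models \forall x\,(Fx \supset Gx)
  && \text{(Tooley's own view)}\\
\Longrightarrow\quad P\bigl(R(F,G)\bigr) &\le P\bigl(\forall x\,(Fx \supset Gx)\bigr)
  && \text{(monotonicity)}\\
P\bigl(\forall x\,(Fx \supset Gx)\bigr) &\le \lim_{n\to\infty}\,(1-\varepsilon)^n = 0
  && \text{(his first-paragraph argument)}\\
\Longrightarrow\quad P\bigl(R(F,G)\bigr) &= 0.
\end{align*}
```

So the “atomicity” of the nomological fact buys nothing: its probability is squeezed down to zero along with the generalization’s.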
The upshot is that if Tooley’s point in the first paragraph is right, then it’s devastating for just about any account of the epistemology of laws—because any account of laws will have it that a generalization being true-by-law entails it being plain-old-true. So we’d better figure out why Tooley’s point is wrong.
But I think it (the distributive principle for explanation) might be false, at least in the right-to-left direction.
A potential counterexample comes from Aristotle. A bandit is lingering by a road, and a farmer is walking home down the same road. By chance (as we would say), they meet at place X at time T. There is a telic explanation in terms of the bandit’s purposes for his being at X at T, and there is a telic explanation in terms of the farmer’s purposes for his being at X at T. But there is no telic explanation for their meeting, even though I take it that their meeting just consists in both of them being at X at T. The meeting is just a coincidence.
Slightly more carefully: let BP be the bandit’s purposes and FP be the farmer’s purposes, and let BXT and FXT be the relevant location facts. Assuming that antecedent strengthening holds for “explains”, this means that (BP and FP) telically explains BXT, and (BP and FP) telically explains FXT. But since their meeting is coincidental, it seems plausible that (BP and FP) does not telically explain (BXT and FXT).
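Schematically, writing E(X, Y) for “X telically explains Y” (the notation is mine):

```latex
\begin{gather*}
E(BP \land FP,\ BXT) \qquad\qquad E(BP \land FP,\ FXT)\\
\text{but not: } E(BP \land FP,\ BXT \land FXT)
\end{gather*}
```

If that is right, the bandit and the farmer are a counterexample to the distribution principle: from E(X, A) and E(X, B), infer E(X, A and B).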
If distribution fails for telic explanation, then perhaps it fails for nomic explanations as well, for the same kind of reason.
Why does this matter? It’s relevant to my criticism last week of Lewis’s “best system” account of laws (not just my criticism). Briefly, I said: on Lewis’s account, the qualitative facts explain what the laws are, but the laws should explain the qualitative facts. That makes a very tight explanatory circle, and that’s bad.
A response might go: it’s true that the conjunction of the qualitative facts explains the laws. It’s also true that the laws explain each individual qualitative fact. But it doesn’t follow that the laws explain the conjunction of the qualitative facts—since the distributive principle fails—and so there is no bad explanatory circle.
Recall the puzzle: how is it that when we select hypotheses that best exemplify our theoretical values, we so often hit on the truth? A simple example: emeralds, even those we haven’t observed, are green, rather than grue. And lo, we believe they are green, rather than believing they are grue. It seems things could have been otherwise, in either of two ways: (1) there could have been people who, with evidence like ours, theorized that emeralds are grue rather than green; or (2) we could have lived in a world where emeralds really are grue, so that the green-theory is false.
Those people are in for a shock. Why are we so lucky?
A response in the Lewisian framework goes like this. Not all properties are created equal: green is metaphysically more natural than grue. In particular, it is semantically privileged: it is easier to have a term (or thought) about green than it is to have a term (or thought) about grue. This should take care of possibility 1. If there are people who theorize in terms of grue rather than green, their practices would have to be sufficiently perverse to overcome the force of green’s reference magnetism. There are details to fill in, but plausibly it would be hard for natural selection to produce creatures with such perverse practices.
But this still leaves possibility 2. Given that our theories are attracted to the natural properties, even so, why should a theory in terms of natural properties be true? The green-projectors in the world of grue emeralds have just as natural a theory as ours, to no avail.
But even though 2 is possible, we can still explain why it doesn’t obtain. What we need to explain is why emeralds are green—and we shouldn’t try to explain that by appeal to general metaphysics, but by something along these lines: the electrons in a chromium-beryllium crystal can only absorb photons with certain amounts of energy. That is, we explain why emeralds are green by appeal to the natural laws of our world.
Generalizing: “joint-carving” theories yield true predictions because their predictions are supported by natural laws. Why is this? On the Lewisian “best system” account of laws, it is partly constitutive of a natural law that it carve nature at the joints: naturalness is one of the features that distinguishes laws from mere accidental generalizations. So, much as reference magnetism makes it harder to have a theory that emeralds are grue than it is to have a theory that emeralds are green, so the best system account makes it harder to have a law that emeralds are grue than it is to have a law that emeralds are green. Then the idea is that, since our theories and our laws are both drawn to the same source, this makes it likely that they line up. Furthermore, since the laws explain the facts, this explains why our theories fit the facts.
Something isn’t right about this story; I’m having a hard time getting it clear, but here’s a stab. There’s a general tension in the best system account: on the one hand, the laws are supposed to explain the (non-nomic) facts; on the other hand, the (non-nomic) facts are metaphysically prior to the laws. But metaphysical priority is also an explanatory relation, and so it looks like we’re in a tight explanatory circle. (Surely this point has been made? I don’t know much of the literature on laws, so I’d welcome any pointers.)
This is relevant because the answer to the problem of fit relies on the explanatory role of laws—a role that seems difficult for the best systems account to bear up. But I feel pretty shaky on this, and would appreciate help.
The concept person is an ethical concept. To be the same person as X is, roughly, to be responsible for X’s actions, to have a special stake in what happens to X, to have special obligations and rights in how you treat X.
Ethical value is grounded in God’s evaluative attitudes. In broad strokes, to be good is to be loved by God. More complicated ethical concepts similarly come down to God’s attitudes in suitably complicated ways.
These two imply that for X and Y to be the same person amounts to God having the right evaluative attitudes—for short, it amounts to God regarding X and Y as the same person. Then the final premise is that God’s attitudes are not unduly constrained: God is free to regard X and Y as the same person, whatever the physical and causal relations between X and Y.
It’s clear how the conclusion follows.
You can count me among the skeptics of the second premise, but I think there’s something interesting here even if we give it up. If personhood is answerable to questions of value, then it isn’t clear why traditional metaphysical issues like physical or causal continuity would be relevant to survival. If those connections are severed, then the conceptual obstacles to resurrection seem much less threatening.
First, how did we get our beliefs? The rough story is something like this: we have gathered evidence and formed hypotheses to account for that evidence, lending our belief to the hypotheses that best exemplify our theoretical values: fit, simplicity, power.
Second, why are so many of the beliefs we got this way true? It is easy to imagine creatures whose theoretical values are very bad guides to the truth. (Two versions: there could be people in a universe like ours who are defective theory-choosers—(e.g.) they favor the grue-theory over the green-theory; or there could be people with values like ours in a universe where (e.g.) emeralds are really grue.) So it sure looks like it could have been otherwise for us; why isn’t it? I see six main lines of response.
Skepticism. The alleged datum is false: our theoretical values really aren’t a good guide to the truth. This would be very bad.
Anti-realism. Somehow or another, our theoretical values constitute the facts in question. I assume this isn’t plausible for most of the subjects I mentioned.
No reason. We’re just lucky. I think this response leads to skepticism (even though the problem didn’t start as a skeptical worry): if we find out there’s no reason for our theory choices to track the truth, then we shouldn’t be confident that they really do.
Evolution. I’m sure this is part of the story, but it can’t be all of it. Selection might account for reliable theory-choosing about middle-sized objects of the sort our ancestors interacted with (I think Plantinga’s worries about this case can be answered); but this doesn’t account for our more exotic true beliefs about (e.g.) math, morality, cosmology, or the future. There could be creatures subject to the same constraints on survival as our ancestors (at whatever level of detail is selectively relevant), and yet who choose bad theories at selectively neutral scales and distances. Why aren’t we like them?
Reference magnetism: certain theories are naturally more “eligible” than others; our beliefs are attracted to eligible theories; furthermore, the eligible theories are likely to be true. On the Lewisian version of this story, though, there is no reason why the last part should hold: we might very well be in a universe where the natural properties are distributed in a chaotic or systematically misleading way. In that case, reference magnetism would systematically attract our beliefs toward false theories. There might be a better non-Lewisian variant, where a property’s eligibility is somehow tied to its distribution, but I don’t know how this would go.
Theism. Someone (I won’t be coy—it’s God) who has certain theoretical values is responsible both for the way the universe is, and also for the theoretical values we have. He made the universe as he saw fit, which means it has the kind of simplicity he likes; and he made us in his image, which means we like the same kind of simplicity. So when we judge Fness as a point in favor of a theory, this makes it likely that God favors Fness, which in turn makes it likely that the universe is F.
(Why isn’t the fit perfect? The theist already needed an answer to the problem of evil—why the universe doesn’t perfectly fit our moral values; presumably that answer will also apply to our theoretical values.)
So it looks to me like the theist has a better explanation for an important fact than her rivals; this counts as evidence in theism’s favor.
Any good alternative explanations I’m missing?