One of my goals over spring break is to get familiar with some of the literature on laws of nature. I may blog some thoughts on it as I go.
This afternoon I read Michael Tooley’s “The Nature of Laws” (in the anthology edited by John Carroll). In the section on the epistemology of laws, Tooley shows how, in a Bayesian framework, we could become confident that a certain law holds. He then argues that this confirmation story is a distinctive benefit of his account (the DTA account):
[T]here is a crucial assumption that seems reasonable if relations among universals are the truth-makers for laws, but not if facts about particulars are the truth-makers. This is the assumption that m and n [the prior probabilities of certain statements of laws] are not equal to zero. If one takes the view that it is facts about the particulars falling under a generalization that make it a law, then, if one is dealing with an infinite universe, it is hard to see how one can be justified in assigning any non-zero probability to a generalization, given evidence concerning only a finite number of instances. For surely there is some non-zero probability that any given particular will falsify the generalization, and this entails, given standard assumptions, that as the number of particulars becomes infinite, the probability that the generalization will be true is, in the limit, equal to zero.
In contrast, if relations among universals are the truth-makers for laws, the truth-maker for a given law is, in a sense, an “atomic” fact, and it would seem perfectly justified, given standard principles of confirmation theory, to assign some non-zero probability to this fact’s obtaining.
This can’t be right. If Tooley is right in the first paragraph that the probability of any universal generalization over particulars is zero, then appealing to the “atomicity” of nomological facts is no help. The problem is that, on his own view, the nomological relation between universals logically entails the corresponding universal generalization over particulars. But this means that, by monotonicity, the probability of the relation can be no greater than the probability of the generalization. So if the generalization has zero probability, so too does the relation.
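To make the trouble vivid (the notation is mine, not Tooley’s): write N(F,G) for the second-order nomological relation and ∀x(Fx → Gx) for the generalization it underwrites. Granting Tooley’s “standard assumptions” (an independent, non-zero falsification chance ε per particular), his two paragraphs can’t both stand:

```latex
% Tooley's first paragraph: over n independent particulars,
P\big(\forall x\,(Fx \to Gx)\big) \;\le\; (1-\epsilon)^n \;\longrightarrow\; 0
  \quad \text{as } n \to \infty.
% But on Tooley's own view the relation entails the generalization,
N(F,G) \;\models\; \forall x\,(Fx \to Gx),
% so by monotonicity of probability,
P\big(N(F,G)\big) \;\le\; P\big(\forall x\,(Fx \to Gx)\big) \;=\; 0.
```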
The upshot is that if Tooley’s point in the first paragraph is right, it’s devastating for just about any account of the epistemology of laws, because any account of laws will have it that a generalization’s being true-by-law entails its being plain-old-true. So we’d better figure out why Tooley’s point is wrong.
This distribution principle looks awfully plausible:
- A explains (B and C) iff (A explains B and A explains C).
But I think it might be false, at least in the right-to-left direction.
A potential counterexample comes from Aristotle. A bandit is lingering by a road, and a farmer is walking home down the same road. By chance (as we would say), they meet at place X at time T. There is a telic explanation in terms of the bandit’s purposes for his being at X at T, and there is a telic explanation in terms of the farmer’s purposes for his being at X at T. But there is no telic explanation for their meeting, even though I take it that their meeting just consists in both of them being at X at T. The meeting is just a coincidence.
Slightly more carefully: let BP be the bandit’s purposes and FP be the farmer’s purposes, and let BXT and FXT be the relevant location facts. Assuming that antecedent strengthening holds for “explains”, this means that (BP and FP) telically explains BXT, and (BP and FP) telically explains FXT. But since their meeting is coincidental, it seems plausible that (BP and FP) does not telically explain (BXT and FXT).
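In the obvious shorthand (writing E for “telically explains”), the structure of the counterexample is:

```latex
% Right-to-left distribution, the direction under threat:
(A \mathbin{E} B) \wedge (A \mathbin{E} C) \;\Rightarrow\; A \mathbin{E} (B \wedge C).
% The Aristotle case appears to satisfy the left-hand side:
(BP \wedge FP) \mathbin{E} BXT, \qquad (BP \wedge FP) \mathbin{E} FXT,
% while, the meeting being a coincidence, the right-hand side fails:
\neg\,\big[(BP \wedge FP) \mathbin{E} (BXT \wedge FXT)\big].
```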
If distribution fails for telic explanation, then perhaps it fails for nomic explanation as well, for the same kind of reason.
Why does this matter? It bears on the criticism I made last week of Lewis’s “best system” account of laws (a criticism that isn’t only mine). Briefly, I said: on Lewis’s account, the qualitative facts explain what the laws are, but the laws are supposed to explain the qualitative facts. That makes for a very tight explanatory circle, and that’s bad.
A response might go: it’s true that the conjunction of the qualitative facts explains the laws. It’s also true that the laws explain each individual qualitative fact. But it doesn’t follow that the laws explain the conjunction of the qualitative facts, since the distribution principle fails, and so there is no bad explanatory circle.
I’ve been thinking more about the problem of fit I posted last week. Specifically, I’m trying to work out how a response appealing to reference magnetism would go.
Recall the puzzle: how is it that when we select hypotheses that best exemplify our theoretical values, we so often hit on the truth? A simple example: emeralds, even those we haven’t observed, are green, rather than grue. And lo, we believe they are green, rather than believing they are grue. It seems things could have been otherwise, in either of two ways:
1. There might be people who project grue rather than green, in a world like ours.
2. Or there might be people who (like us) project green, in a world where emeralds are grue.
Those people are in for a shock. Why are we so lucky?
A response in the Lewisian framework goes like this. Not all properties are created equal: green is metaphysically more natural than grue. In particular, it is semantically privileged: it is easier to have a term (or thought) about green than it is to have a term (or thought) about grue. This should take care of possibility 1. If there are people who theorize in terms of grue rather than green, their practices would have to be sufficiently perverse to overcome the force of green’s reference magnetism. There are details to fill in, but plausibly it would be hard for natural selection to produce creatures with such perverse practices.
But this still leaves possibility 2. Granting that our theories are attracted to the natural properties, why should a theory in terms of natural properties be true? The green-projectors in the world of grue emeralds have just as natural a theory as ours, to no avail.
But even though 2 is possible, we can still explain why it doesn’t obtain. What we need to explain is why emeralds are green, and we shouldn’t try to explain that by appeal to general metaphysics but by something along these lines: the chromium ions in a beryl crystal can only absorb photons with certain amounts of energy. That is, we explain why emeralds are green by appeal to the natural laws of our world.
Generalizing: “joint-carving” theories yield true predictions because their predictions are backed by natural laws. Why is that? On the Lewisian “best system” account, it is partly constitutive of a law of nature that it carve nature at the joints: naturalness is one of the features that distinguish laws from merely accidental generalizations. So, just as reference magnetism makes it harder to have a theory that emeralds are grue than a theory that emeralds are green, the best system account makes it harder to have a law that emeralds are grue than a law that emeralds are green. Since our theories and our laws are drawn to the same source, they are likely to line up; and since the laws explain the facts, this explains why our theories fit the facts.
Something isn’t right about this story; I’m having a hard time getting it clear, but here’s a stab. There’s a general tension in the best system account: on the one hand, the laws are supposed to explain the (non-nomic) facts; on the other hand, the (non-nomic) facts are metaphysically prior to the laws. But metaphysical priority is also an explanatory relation, and so it looks like we’re in a tight explanatory circle. (Surely this point has been made? I don’t know much of the literature on laws, so I’d welcome any pointers.)
This is relevant because the answer to the problem of fit relies on the explanatory role of laws, a role the best system account seems hard-pressed to underwrite. But I feel pretty shaky on this, and would appreciate help.
We have a lot of true beliefs. A few examples: there are dogs; every set has a power set; the universe is around 13 billion years old; it is generally wrong to torture children for fun; there are stars at space-like separation from us; a bus will go up First Avenue tomorrow. How did we get so many true beliefs about so many subjects?
First, how did we get our beliefs? The rough story is something like this: we have gathered evidence and formed hypotheses to account for that evidence, lending our belief to the hypotheses that best exemplify our theoretical values: fit, simplicity, power.
Second, why are so many of the beliefs we got this way true? It is easy to imagine creatures whose theoretical values are very bad guides to the truth. (Two versions: there could be people in a universe like ours who are defective theory-choosers, favoring (e.g.) the grue-theory over the green-theory; or there could be people with values like ours in a universe where (e.g.) emeralds are really grue.) So it sure looks like it could have been otherwise for us; why isn’t it? I see six main lines of response.
- Skepticism. The alleged datum is false: our theoretical values really aren’t a good guide to the truth. This would be very bad.
- Anti-realism. Somehow or other, our theoretical values constitute the facts in question. I assume this isn’t plausible for most of the subjects I mentioned.
- No reason. We’re just lucky. I think this response collapses into skepticism (even though the problem didn’t start as a skeptical worry): if we find out there’s no reason for our theory choices to track the truth, then we shouldn’t be confident that they really do.
- Evolution. I’m sure this is part of the story, but it can’t be all of it. Selection might account for reliable theory-choosing about middle-sized objects of the sort our ancestors interacted with (I think Plantinga’s worries about this case can be answered); but it doesn’t account for our more exotic true beliefs about (e.g.) math, morality, cosmology, or the future. There could be creatures subject to the same constraints on survival as our ancestors (at whatever level of detail is selectively relevant) who nevertheless choose bad theories at selectively neutral scales and distances. Why aren’t we like them?
- Reference magnetism. Certain theories are naturally more “eligible” than others; our beliefs are attracted to eligible theories; and the eligible theories are likely to be true. On the Lewisian version of this story, though, there is no reason why the last part should hold: we might very well be in a universe where the natural properties are distributed in a chaotic or systematically misleading way, in which case reference magnetism would systematically attract our beliefs toward false theories. There might be a better non-Lewisian variant, on which a property’s eligibility is somehow tied to its distribution, but I don’t know how this would go.
- Theism. Someone (I won’t be coy: it’s God) who has certain theoretical values is responsible both for the way the universe is and for the theoretical values we have. He made the universe as he saw fit, which means it has the kind of simplicity he likes; and he made us in his image, which means we like the same kind of simplicity. So when we judge Fness a point in favor of a theory, this makes it likely that God favors Fness, which in turn makes it likely that the universe is F.
(Why isn’t the fit perfect? The theist already needed an answer to the problem of evil—why the universe doesn’t perfectly fit our moral values; presumably that answer will also apply to our theoretical values.)
So it looks to me like the theist has a better explanation for an important fact than her rivals; this counts as evidence in theism’s favor.
Any good alternative explanations I’m missing?