# Speak with the vulgar.

Think with me.

## Improbable laws

One of my goals over spring break is to get familiar with some of the literature on laws of nature. I may blog some thoughts on it as I go.

This afternoon I read Michael Tooley’s “The Nature of Laws” (in the anthology edited by John Carroll). In the section on the epistemology of laws, Tooley shows how we could become confident that a certain law holds, in a Bayesian framework. He then argues that this confirmation story is a distinctive benefit of his account (the DTA account):

> [T]here is a crucial assumption that seems reasonable if relations among universals are the truth-makers for laws, but not if facts about particulars are the truth-makers. This is the assumption that m and n [the prior probabilities of certain statements of laws] are not equal to zero. If one takes the view that it is facts about the particulars falling under a generalization that make it a law, then, if one is dealing with an infinite universe, it is hard to see how one can be justified in assigning any non-zero probability to a generalization, given evidence concerning only a finite number of instances. For surely there is some non-zero probability that any given particular will falsify the generalization, and this entails, given standard assumptions, that as the number of particulars becomes infinite, the probability that the generalization will be true is, in the limit, equal to zero.
>
> In contrast, if relations among universals are the truth-makers for laws, the truth-maker for a given law is, in a sense, an “atomic” fact, and it would seem perfectly justified, given standard principles of confirmation theory, to assign some non-zero probability to this fact’s obtaining.
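Tooley's limit argument is easy to see numerically. Here is a minimal sketch (my own illustration, not Tooley's): assume each particular independently has some fixed probability `eps > 0` of falsifying the generalization, so the probability that all `n` particulars conform is `(1 - eps)**n`, which tends to zero as `n` grows.

```python
def prob_generalization_holds(eps: float, n: int) -> float:
    """Probability that none of n independent particulars falsifies
    the generalization, if each falsifies it with probability eps."""
    return (1 - eps) ** n

# Even a tiny falsification probability drives the product to zero:
for n in (10, 100, 1000, 10000):
    print(n, prob_generalization_holds(0.01, n))
```

The independence assumption is doing the work here; it is one of the "standard assumptions" Tooley gestures at, and weakening it is one place a response to his argument might start.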

This can’t be right. If Tooley is right in the first paragraph that the probability of any universal generalization over particulars is zero, then appealing to the “atomicity” of nomological facts is no help. The problem is that, on his own view, the nomological relation between universals logically entails the corresponding universal generalization over particulars. But this means that, by monotonicity, the probability of the relation can be no greater than the probability of the generalization. So if the generalization has zero probability, so too does the relation.
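The monotonicity step can be written out explicitly, using only the probability axioms. Writing $N(F,G)$ for the nomological relation (my notation for illustration) and supposing, as on Tooley's view, that it entails the generalization:

```latex
N(F,G) \models \forall x\,(Fx \to Gx)
\quad\Longrightarrow\quad
P\bigl(N(F,G)\bigr)
= P\bigl(N(F,G) \wedge \forall x\,(Fx \to Gx)\bigr)
\le P\bigl(\forall x\,(Fx \to Gx)\bigr) = 0.
```

So the "atomicity" of the nomological fact is irrelevant: entailment caps its probability at that of the generalization.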

The upshot is that if Tooley’s point in the first paragraph is right, then it’s devastating for just about any account of the epistemology of laws, because on any account of laws, a generalization’s being true-by-law entails its being plain-old-true. So we’d better figure out why Tooley’s point is wrong.

Written by Jeff

March 14, 2009 at 3:27 pm

Posted in Epistemology, Metaphysics, Science


### 3 Responses

1. David Armstrong attributes roughly this point to Peter Forrest, and endorses it (What Is a Law of Nature? p. 105). But he doesn’t seem to think the problem of zero (in his version, infinitesimal) priors is very serious.

Jeff

March 18, 2009 at 4:17 pm

2. If you subscribe to a Bayesian account, it seems it should be very hard to justify assigning zero (or one) prior probability to anything at all, provided you don’t have reason to believe it impossible; and you certainly don’t have a reason to believe any particular universal generalization which fits all the facts you are aware of to be impossible.

Alexey Romanov

August 26, 2009 at 8:41 pm

3. Something like that seems right. It’s a bit subtler, though, since really a Bayesian can assign a possible hypothesis zero probability: standard examples are the hypothesis that a dart exactly hits point X on a dartboard, or the hypothesis of an infinite sequence of heads. On the right evidence I can even come to assign positive probability to such a hypothesis.

Moreover, I’d guess any sensible prior will assign zero probability to a particular universal generalization about, say, the distribution of blackness over an infinite population of ravens. There are too many alternative possibilities to sensibly do much else. The question is what structure those hypotheses have (like the continuous structure of points on a dartboard, or the product structure of coin flip sequences) that allows us to assign positive probabilities to regions of probability space, and thus make reasonable inferences. It would be nice to think about this a bit more.
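The coin-flip case makes the structure point concrete. A minimal sketch (my own illustration): under a fair coin, the singleton hypothesis "heads forever" is squeezed below $2^{-n}$ for every $n$, so it gets probability zero; but each "cylinder" hypothesis that fixes only the first $n$ flips gets positive probability, and it is these regions that support inference.

```python
def prob_first_n_heads(n: int) -> float:
    """P(first n flips are all heads) under a fair coin: 2**-n.
    This is the measure of the cylinder set fixing the first n flips."""
    return 0.5 ** n

# Every finite cylinder has positive probability...
print(prob_first_n_heads(3))
# ...but the all-heads sequence lies inside all of them, so its
# probability is at most inf_n 2**-n = 0.
```

The analogous question for laws would be what plays the role of the cylinder sets: which coarse-grained regions of hypothesis space deserve positive prior mass.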

Jeff

August 27, 2009 at 12:20 am