The Doomsday Argument is Alive and Kicking
- A critical note on Korb and Oliver's attempted refutation
- (c) Nick Bostrom
- Dept. of Philosophy, Logic and Scientific Method
- London School of Economics; Houghton St.; WC2A 2AE; London; UK
- Email: [email protected]
- Homepage: http://www.hedweb.com/nickb
ABSTRACT
A recent paper by Korb and Oliver in this journal attempts to refute the so-called Carter-Leslie Doomsday argument. Here I organize their remarks into four objections and find that they all fail. Thus further efforts are called for to find out what, if anything, is wrong with Carter's and Leslie's disturbing reasoning.
Objection 1
Korb and Oliver propose the following constraint on any good inductive method:
Targeting Truth (TT) Principle: No good inductive method should --- in this world --- provide no more guidance to the truth than does flipping a coin. (Korb & Oliver, p. %%)
They claim that the Doomsday argument violates this reasonable principle. In support of the claim they ask us to consider
a population of size 1000 (i.e., a population that died out after a total of 1000 individuals) and retrospectively apply the Argument to the population when it was of size 1, 2, 3 and so on. Assuming that the Argument supports the conclusion that the total population is bounded by two times the sample value (we shall give some reason for this interpretation below), then 499 inferences using the Doomsday Argument form are wrong and 501 inferences are right, which we submit is a lousy track record for an inductive inference schema. Hence, in a perfectly reasonable metainduction we should conclude that there is something very wrong with this form of inference. (p. %%)
In this purported counterexample to the Doomsday argument, the TT principle is not violated: 501 right and 499 wrong guesses is strictly better than what one would expect to get by a random procedure such as flipping a coin. The reason why the track record in the above example is only marginally better than chance is that it assumes that the doomsdayers bet on the most stringent hypothesis that they would be willing to bet on at even odds. This means, of course, that their expected utility will be minimal. It is not remarkable, then, that in this case a person who applies the Doomsday reasoning will only do slightly better than a person who doesn't use such reasoning. If the bet were on the proposition not that the total population is bounded by two times the sample value but instead that it is bounded by, say, three times the sample value, then the doomsdayer's advantage would be more drastic. And the doomsdayer can be even more certain that the total value will not be more than thirty times the sample value.
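This can be checked directly. The following sketch tallies the retrospective track record for bounds of two, three and thirty times the sample value, on the betting scheme Korb and Oliver describe; the only assumption added here is that every rank from 1 to 1000 places the corresponding bet.

```python
# Tally of the retrospective "track record" in Korb and Oliver's example:
# a population dies out after N = 1000 individuals, and each individual with
# birth rank r bets that the total will be no more than k times r.

N = 1000  # total population in the example

def track_record(k, N=N):
    """Count correct and incorrect bets 'total <= k * rank' over all ranks."""
    right = sum(1 for r in range(1, N + 1) if k * r >= N)
    return right, N - right

for k in (2, 3, 30):
    right, wrong = track_record(k)
    print(f"bound = {k} x sample: {right} right, {wrong} wrong")

# Output:
# bound = 2 x sample: 501 right, 499 wrong
# bound = 3 x sample: 667 right, 333 wrong
# bound = 30 x sample: 967 right, 33 wrong
```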
Conclusion: Objection 1 does not show that the Doomsday argument violates the TT principle, nor does it show that the Doomsday reasoning at best improves the chances of being right only slightly.
Objection 2
The second objection that I can discern in the Korb-Oliver paper is the following:
[I]f you number the minutes of your life starting from the first minute you were aware of the applicability of the Argument to your life span to the last such minute and if you then attempt to estimate the number of the last minute using your current sample of, say, one minute, then according to the Doomsday Argument, you should expect to die before you finish reading this article. (fn. 2, p. %%)
The claim that the Doomsday argument implies that you should expect to die before you've finished reading the article is incorrect. The only thing that the Doomsday argument implies is that in certain circumstances you should make a shift in your probability assignments due to the fact that in certain respects you can regard yourself as a random sample from a certain population. As Leslie stresses again and again in his book (Leslie 1996), what probability assignment you end up with after you have made this shift depends on your "priors", i.e. the probability assignments you started out with before taking the Doomsday argument into account. In the case of the survival of the human race or its intelligent machine descendants, the priors may be based on your estimates of how likely it is that we will be extinguished as a result of an all-out nuclear war, a nanotechnological disaster, germ warfare, etc. In the case of your own life expectancy, you will consider factors such as the average human life span, your state of health, and what physical dangers there are in your environment that could cause your demise before you finish reading the article. Such considerations would presumably lead you to think that the risk that you will die within the next half hour is extremely small. If so, then even a considerable probability shift due to doomsday-like reasoning should not make you expect to die before finishing the article.
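To see the arithmetic, here is a minimal sketch in which both numbers are made up purely for illustration: a prior of one in a million for dying within the next half hour, and a Bayes factor of 100 in favour of imminent death (a far larger shift than any the Doomsday argument would plausibly license here).

```python
# Illustrative only: the prior (one in a million) and the Bayes factor of 100
# are hypothetical numbers, chosen just to show how a doomsday-style shift
# interacts with a very small prior probability.

prior_die_soon = 1e-6     # hypothetical prior for dying within the next 30 minutes
bayes_factor = 100.0      # hypothetical shift in favour of the "die soon" hypothesis

odds_prior = prior_die_soon / (1 - prior_die_soon)
odds_post = odds_prior * bayes_factor
posterior = odds_post / (1 + odds_post)

print(f"posterior probability of dying within 30 minutes: {posterior:.6f}")
# ~0.0001, i.e. still only about one in ten thousand
```

Even under this exaggerated shift, the posterior stays at roughly one in ten thousand, nowhere near the confident prediction Korb and Oliver attribute to the doomsdayer.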
I think, however, that we concede too much if we grant that even a modest probability shift should be made in this case. I have two reasons for this (which I will only outline here).
First, the example presupposes a specific solution to what has been called the problem of the reference class (Bostrom 1998). Briefly stated, this is the problem of what class of entities you should consider yourself to be a random sample from. Is it the class of all conscious entities? Or all entities that have a conception of their birth rank? Or all entities that are intelligent enough to be able to understand the Doomsday argument if it were explained to them? Or all entities who are in fact aware of the Doomsday argument? In my opinion, the problem of the reference class is still unsolved, and it is a serious one for the doomsdayer. The objection under consideration presupposes that the reference class problem is resolved in favor of the last alternative, that the reference class consists of exactly those beings who are aware of the Doomsday argument. This might not be the most plausible solution.
The second reason not to agree that a probability shift should be made in the above example is that it violates the no-outsider requirement (Bostrom 1998). Consider the case of the survival of the human race. What the no-outsider requirement comes down to in this case is that there are no (or not too many) aliens. For suppose there were very many aliens. Then the doomsdayer could say that the fact that a longer-lasting human race makes it more probable that you are a human rather than an alien (because the ratio of humans to aliens would be higher) compensates for the fact that a shorter-lasting human race makes it more probable that you are an early member of Homo sapiens (roughly the 100 billionth) rather than a later one. It can be shown (Dieks 1992; Bostrom 1998) that in the limiting case where the number of aliens is infinite, this compensation is exact. (And the more aliens there are, the more nearly exact the compensation will be.)
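The compensation can be illustrated with a small sketch. The model below follows the simple self-sampling treatment (the likelihood of being a particular early human is one over the total number of observers); the population figures and alien counts are stand-in numbers chosen purely for illustration, not figures from the paper.

```python
# Two equiprobable hypotheses about the total number of humans, plus A alien
# observers that exist either way. Conditioning on being a human with a given
# (early) birth rank, the likelihood of that datum under a hypothesis with n
# humans is 1/(n + A) on the simple self-sampling model used here.
# As A grows, the posterior drifts back toward the 50/50 prior.

def posterior_short(n_short, n_long, aliens, prior_short=0.5):
    like_short = 1.0 / (n_short + aliens)   # P(I am this early human | short-lasting race)
    like_long = 1.0 / (n_long + aliens)     # P(I am this early human | long-lasting race)
    num = prior_short * like_short
    return num / (num + (1 - prior_short) * like_long)

n_short, n_long = 200e9, 200e12   # 200 billion vs 200 trillion humans in total
for aliens in (0, 1e12, 1e15, 1e18):
    print(f"aliens = {aliens:.0e}: P(short) = {posterior_short(n_short, n_long, aliens):.4f}")

# With no aliens the posterior for the short-lasting race is ~0.999;
# with vastly many aliens it returns to ~0.5.
```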
If one wants to apply the Doomsday argument to predict an individual's life span, one has to consider person-segments ("consciousness-moments") rather than persons. What the no-outsider requirement means in this case is that for the Doomsday argument to result in a net probability shift, there must be no (or at least not too many) person-segments other than those belonging to the person who is making the prediction. In the real world, we know that this condition is not satisfied, since there are many more humans on Earth than just the person making the prediction. It follows that we cannot use the Doomsday argument to predict an individual's life span, even if we assume that the argument itself is fundamentally sound.
Conclusion: Objection 2 fails to take the prior probabilities into account. These would be extremely small for the hypothesis that you will die within the next thirty minutes. Thus, contrary to what Korb and Oliver claim, even if the doomsdayer thought the Doomsday argument applied to this case, he would not make the prediction that you would die within 30 minutes. However, the doomsdayer should not think that the Doomsday argument is applicable in this case, because it violates the no-outsider requirement and it presupposes an arguably implausible solution to the reference class problem.
Objection 3
The third objection starts off with the claim that a sample size of one is too small to make any substantial difference to one's rational beliefs.
It is quite simple: a sample size of one is "catastrophically" small. That is, whatever the sample evidence in this case may be, the prior distribution over population sizes is going to dominate the computation. The only way around this problem is to impose extreme artificial constraints on the hypothesis space. (p. %%)
They follow this claim by conceding that in a case where the hypothesis space contains only two hypotheses, a substantial shift can occur:
If we consider the two urn case described by Bostrom, we can readily see that he is right about the probabilities. (p. %%)
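For concreteness, here is the calculation for the two-urn case in the form it is usually presented; the specific numbers (one urn with ten numbered balls, one with a million, a fair coin deciding which urn you draw from, and ball number 7 being drawn) are assumed here for illustration.

```python
# The two-urn case: a fair coin decides whether you draw from an urn with 10
# numbered balls or one with 1,000,000 numbered balls; the ball you draw
# happens to be number 7. Bayes' theorem gives the posterior for the small urn.

prior_small = 0.5                 # prior that you face the 10-ball urn
like_small = 1 / 10               # P(drawing ball #7 | 10-ball urn)
like_big = 1 / 1_000_000          # P(drawing ball #7 | 1,000,000-ball urn)

posterior_small = (prior_small * like_small) / (
    prior_small * like_small + (1 - prior_small) * like_big
)
print(f"P(10-ball urn | ball #7) = {posterior_small:.5f}")   # ~0.99999
```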
(The probability in this example shifted from 50% to 99.999%, which is surely "substantial", and a similar result would be obtained for a broad range of distributions of prior probabilities.) But Korb and Oliver seem to think that such a substantial shift can only occur if we "impose extreme artificial constraints on the hypothesis space" by considering only two rival hypotheses rather than many more. By increasing the number of hypotheses about the ultimate size of the human species that we choose to consider, we can, according to Korb and Oliver, make the probability shift that the Doomsday argument induces arbitrarily small:
In any case, if an expected population size for homo sapiens ... seems uncomfortably small, we can push the size up, and so the date of our collective extermination back, to an arbitrary degree simply by considering larger hypothesis spaces. (p. %%)
The argument is that if we use a uniform prior over the chosen hypothesis space (h_1, h_2, ..., h_n, where h_i is the hypothesis that there will have existed a total of i humans), then the expected number of humans that will have lived will depend on n: the greater the value we choose for n, the greater the expected future population. Korb and Oliver compute the expected size of the human population given some different values of n and find that the result does indeed vary. But what is the justification for using a uniform prior? No justification is given. In fact, the assumption is unjustifiable, since the priors required by the Doomsday argument are the probabilities that we assign to hypotheses before we take the Doomsday argument into account. These probabilities will depend on one's knowledge and guesses about a wide variety of empirical factors, such as the likelihood that in the future it will be possible for a terrorist group to manufacture a doomsday virus, the risks of nanotechnological disaster or of some as yet unimagined danger, as well as on estimates of future birth rates, natural death rates, etc. These are the priors that the Doomsday argument says (Leslie 1996, pp. 238 ff.) we should feed into Bayes's formula, together with our birth rank, in order to estimate how likely it is that intelligent life will go extinct in the next century. Showing that implausible consequences follow from the assumption of a uniform prior does nothing to discredit the Doomsday argument, since the latter does not use a uniform prior. It uses a prior informed by and dependent on empirical considerations.
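The computation Korb and Oliver gesture at can be sketched as follows. The stand-in numbers (a birth rank of 70 in place of roughly 70 billion, and various choices of n) are purely illustrative, and the likelihood model is the usual self-sampling one (probability 1/i of having rank r under h_i, for i >= r).

```python
# Uniform prior over hypotheses h_1, ..., h_n ("there will have been i humans
# in total"), conditioned on having birth rank r. The posterior expectation of
# the total depends on the arbitrary choice of n.

def expected_total(r, n):
    """Posterior expectation of the total population, uniform prior over
    h_1..h_n, likelihood 1/i of rank r under h_i for i >= r (else 0)."""
    weights = {i: 1.0 / i for i in range(r, n + 1)}
    norm = sum(weights.values())
    return sum(i * w for i, w in weights.items()) / norm

r = 70                                  # stand-in for a birth rank of ~70 billion
for n in (200, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}: E[total | rank {r}] ~ {expected_total(r, n):,.0f}")

# The expectation keeps climbing as n does, which is Korb and Oliver's point;
# the reply above is that nothing in the Doomsday argument licenses this
# uniform prior in the first place.
```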
The assumption of a uniform prior is completely gratuitous. I think it is also highly implausible even as an approximation of the real empirical prior. Personally I think it is fairly obvious that, given what I know, the probability that there will have existed between 100 billion and 200 billion humans is much greater than the probability that there will have existed between 10^20 and (10^20 + 100 billion) humans.
Notice that even if the doomsdayer had to use a uniform prior (which she emphatically doesn't), it would still be the case that she would have to make a big shift in her probability estimates due to the fact that she has a birth rank of about 70 billion. The Doomsday argument is completely neutral as to the priors; it simply modifies whatever probabilities we put in, to take account of the purported fact that you should consider yourself as a random sample from some suitably defined reference class.
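The same toy model as in the sketch above makes the point: even with the uniform prior in place, conditioning on one's rank moves a great deal of probability mass onto hypotheses on which the total is not much larger than that rank.

```python
# Prior vs posterior probability that the total is within a factor of k of
# one's birth rank, in the same illustrative toy model as above.

def mass_within(r, n, k):
    """Prior and posterior probability of 'total <= k * r'."""
    prior = (k * r) / n                                 # uniform prior mass on h_1..h_{kr}
    norm = sum(1.0 / i for i in range(r, n + 1))        # posterior normalizer
    post = sum(1.0 / i for i in range(r, k * r + 1)) / norm
    return prior, post

r, n, k = 70, 100_000, 2
prior, post = mass_within(r, n, k)
print(f"P(total <= {k}r) prior: {prior:.4f}, posterior: {post:.4f}")
# The posterior mass on an early end is dozens of times larger than the prior mass.
```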
Conclusion: Objection 3 fails. A sample of size one can make a big probability shift. It is incorrect to impute a uniform prior distribution over the hypotheses to which we apply the Doomsday argument. And even with a uniform prior, the Doomsday argument implies a substantial shift.
Objection 4
The fourth and last objection that the target paper raises is that we are not random samples from the human species (or the human species cum its intelligent robot descendants) because there is a systematic correlation between our genetic makeup and our personal identity:
[T]he notion that anyone is uniformly randomly selected from among the total population of the species is beyond far fetched. The bodies that we are, or supervene upon, have a nearly fixed position in the evolutionary order; for example, given what we know of evolution it is silly to suppose that someone's DNA could precede that of her or his ancestors (p. %%)
The doomsdayer will grant all this. But even if the exact order of all humans that will ever have lived could be retrieved from a table of their genomes, the only thing this would show is that there would be more than one way of finding out somebody's birth rank. In addition to the normal way of determining it (observing what year it is and combining that information with our knowledge of past population figures of the human species), there would now be an additional way of obtaining the same number, namely by analyzing somebody's DNA and comparing it to a list correlating DNA with birth rank.
The same holds for other correlations that may obtain. For example, the fact that I am wearing contact lenses indicates that I am living after the year 1900 A.D. This gives me another way of estimating my birth rank: checking whether I have contact lenses and, if I do, drawing the conclusion that it is past the year 1900 A.D. and combining this insight with information about past population figures. None of these correlations adds anything new once I have found at least one way of determining my birth rank.
In what sense, then, can I be said to be a random sample from the set of all humans that will ever have lived, even if many aspects of my being tie me to the late twentieth century? In the same sense as that in which I can be said to figure as a random sample in other cases of anthropic reasoning, cases where it would be highly implausible to deny the correctness of that supposition. I can imagine (this is the so-called amnesia chamber thought experiment (Bostrom 1998) in outline) that I have amnesia and am unable to recall any specific facts about myself that would settle my birth rank. If, while in this state, I work out the conditional probabilities of various hypotheses about the prospects of the human species given that I have this or that birth rank, and I then find out my birth rank and update my views by conditionalizing on this information, then the probability assignment I arrive at should be the same as what I should rationally believe if I had never suffered from amnesia in the first place. This is so because the information that I do in fact have is the same as the information that I would have if I had first suffered from amnesia and then rediscovered all the facts that I had forgotten. So in order to determine what I should in fact believe now, a useful heuristic is to think in terms of what I should have come to believe had I first forgotten certain facts about myself and then later rediscovered them.
Refusing to regard yourself as a random sample from a group just because your genes determine that you are a specific member of that group leads to implausible consequences, as the following example by John Leslie shows:
A firm plan was formed to rear humans in two batches: the first batch to be of three humans of one sex, the second of five thousand of the other sex. The plan called for rearing the first batch in one century. Many centuries later, the five thousand humans of the other sex would be reared. Imagine that you learn you're one of the humans in question. You don't know which centuries the plan specified, but you are aware of being female. You very reasonably conclude that the large batch was to be female, almost certainly. If adopted by every human in the experiment, the policy of betting that the large batch was of the same sex as oneself would yield only three failures and five thousand successes. ... [Y]ou mustn't say: My genes are female, so I have to observe myself to be female, no matter whether the female batch was to be small or large. Hence I can have no special reason for believing it was to be large. (Leslie 1996, pp. 222-23)
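The Bayesian arithmetic behind Leslie's example is straightforward; the only assumption added below is that you treat yourself as a random member of the 5003 people in the experiment, with a 50/50 prior on which sex got the large batch.

```python
# Leslie's two-batch example: 3 humans of one sex, 5000 of the other, 50/50
# prior on which sex got the large batch, and you find yourself female.

prior = 0.5
p_female_if_large_female = 5000 / 5003   # P(I am female | large batch is female)
p_female_if_large_male = 3 / 5003        # P(I am female | large batch is male)

posterior = (prior * p_female_if_large_female) / (
    prior * p_female_if_large_female + (1 - prior) * p_female_if_large_male
)
print(f"P(large batch is female | I am female) = {posterior:.4f}")   # ~0.9994
# This is the "almost certainly" of the quotation, and matches the betting
# record of 5000 successes to 3 failures.
```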
So the presence of a correlation between my genes and my birth rank, or in the above example my birth group, does not obviate the need to consider myself as a random sample from the set of observers. However, such correlations are relevant to the extent that they influence how many members there are in the reference class according to the rival hypotheses. For example, suppose for the sake of illustration that the correct solution to the problem of the reference class is that the reference class consists of all and only those humans who will ever have thought about the Doomsday argument. Suppose that on empirical grounds we come to believe that there are two major scenarios and that they are approximately equally likely: either the human species is extinguished after 100 billion humans have been born, or else the human species is extinguished after a million trillion humans have been born. Now, at first blush, given these assumptions, it might look as though the doomsdayer is committed to the claim that the fact that her own birth rank turns out to be roughly 70 billion gives her excellent reason for believing that humankind will soon become extinct. If, however, some good empirical reason were forthcoming why only those humans who are living near the end of the twentieth century will be aware of the Doomsday argument, while the people that might live in the distant future would never think about the argument, and this empirical insight didn't change our estimate of the prior probability of the two scenarios under consideration, then the doomsdayer would abstain from making any probability shift. For in this case, the rival hypotheses would be on a par as to what fraction of all members of the reference class they imply would observe what I am observing. Each hypothesis would imply (together with the other assumptions made) that any member of the reference class would find herself living before 100 billion humans have been born. Hence, finding oneself alive at such a time would not help one to arbitrate between the hypotheses.
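The contrast can be put in numbers. In the sketch below the figures (100 billion versus a million trillion humans in total, equal priors, and the datum of finding oneself among the first 100 billion) are taken from the illustration just given; the likelihood of the datum is modelled as the fraction of the reference class located that early.

```python
# Two equiprobable hypotheses about the total number of humans, and the datum
# that a member of the reference class finds herself among the first
# 100 billion. The likelihood of the datum is the fraction of the reference
# class located that early under each hypothesis.

def posterior_doom_soon(frac_early_if_soon, frac_early_if_late, prior=0.5):
    num = prior * frac_early_if_soon
    return num / (num + (1 - prior) * frac_early_if_late)

# Reference class spread through the whole population under each hypothesis:
print(posterior_doom_soon(1.0, 1e11 / 1e18))   # ~1.0: the full Doomsday shift

# Reference class = only people aware of the argument, all of whom (by the
# empirical supposition in the text) live before the 100-billionth birth:
print(posterior_doom_soon(1.0, 1.0))           # 0.5: no shift at all
```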
This line of reasoning does not lead to a rejection of the Doomsday argument, but depending on how the reference class problem is resolved and on one's guesses about the mental activity of possible future life forms, it might lead to a drastic modification of the conclusion that the "Doomsday argument" (if correct) establishes. In any case, Korb and Oliver do not discuss the relation between the empirical correlations and the reference class problem, so we don't have to pursue this line of reasoning further here.
Conclusion: It is true that there is a systematic correlation between, e.g., our genetic makeup and our birth rank. The presence of such a correlation gives us an alternative (though impractical) way of ascertaining our birth rank, but it does not affect the evidential relation between this birth rank and any general hypothesis about humankind's future. This is shown by the amnesia chamber thought experiment and by Leslie's example (neither of which Korb and Oliver attempt to criticize). Thus the fourth objection fails to refute the Doomsday argument.
NOTES
1. Korb & Oliver (1998). Page references will refer to that paper.
2. John Leslie, though, argues against the no-outsider requirement. See e.g. Leslie (1996) pp. 229-30.
3. Strictly speaking, the no-outsider requirement need not be satisfied; it need only be satisfied to the best of the subject's knowledge. If the subject is unsure whether or to what extent there are outsiders, then his rational probability shift (provided that the Doomsday argument is basically sound) is given by the generalized Doomsday argument formula in Bostrom (1998).
4. We can also add the assumption that the world is deterministic so as to steer clear of the controversy that exists about how the Doomsday argument works in an indeterministic world. Cf. Leslie (1996), pp. 233ff. and Eckhardt (1997) pp. 245ff.
References
Bostrom, N. (1998) "Investigations into the Doomsday Argument". Forthcoming.
Delahaye, J-P. (1996) "Recherche de modèles pour l'argument de l'Apocalypse de Carter-Leslie". Unpublished manuscript.
Dieks, D. (1992) "Doomsday Or: the Dangers of Statistics". Philosophical Quarterly, 42 (166), pp. 78-84.
Eckhardt, W. (1997) "A Shooting-Room view of Doomsday". Journal of Philosophy, Vol. XCIV, No. 5, pp. 244-259.
Korb, K. & Oliver, J. (1998) "A Refutation of the Doomsday Argument". Mind, %%
Leslie, J. (1996) The End of the World: The Ethics and Science of Human Extinction. Routledge. London.