John S. Wilkins1, Wesley R. Elsberry2
Unedited version. Published as: Wilkins, John S, and Wesley R Elsberry. 2001. The advantages of theft over toil: the design inference and arguing from ignorance. Biology and Philosophy 16 (November):711-724.
Intelligent design theorist William Dembski has proposed an "explanatory filter" for distinguishing between events due to chance, lawful regularity or design. We show that if Dembski's filter were adopted as a scientific heuristic, some classical developments in science would not be rational, and that Dembski's assertion that the filter reliably identifies rarefied design requires ignoring the state of background knowledge. If background information changes even slightly, the filter's conclusion will vary wildly. Dembski fails to overcome Hume's objections to arguments from design.
Sam Spade enters his office to find "Fingers" Finagle, a reformed safecracker, standing in front of his open safe holding the priceless artifact, the Cretan Sparrow, that Spade was looking after for a client. "Fingers" insists he did not crack the safe, but merely spun the combination dial a few times idly, and it opened by itself. Spade knows from the promotional literature that came with the safe when he bought it at the Chump end-of-season sale that it has over 10 billion (10^10) possible combinations, and that only one of these will open it. Moreover, he knows that the dial must be turned in alternating directions, not - as "Fingers" claims he did - in the same direction repeatedly. What does Spade know about this situation? Is the safe open by design, or by accident? William Dembski (1998) thinks he can answer this question definitively.
Dembski has proposed an "explanatory filter" (EF) which, he claims, enables us to distinguish reliably between events due to regularities, events due to chance, and events due to design. Such a filter is needed, he believes, to settle cases like Spade's safe, to allow the Search for Extra-Terrestrial Intelligence (SETI) project to discriminate signals due to intelligent senders from those caused by ordinary phenomena such as quasars, and, most critically, to determine whether all or some aspects of the biological world are due to accident or design. In other words, Dembski's filter is a reworking of Paley's design inference (DI) in the forensic manner of identifying the "guilty parties".
We will argue that Dembski's filter fails to achieve what it is claimed to do, and that were it adopted as a scientific heuristic it would prevent science from even addressing phenomena that are not currently explicable. Further, the filter is a counsel of epistemic despair, grounded not on the inherent intractability of some classes of phenomena, but on transient lacunae in current knowledge. Finally, we will argue that design is not the "default" explanation when all other explanations have been exhausted; rather, it is another form of causal regularity, one that may be adduced when it makes the probability of an effect high, and one that depends on a set of background theories and knowledge claims about designers.
Spade's immediate intuition is that "Fingers" has indeed burgled the safe, but Spade is no philosopher and he knows it. He has, however, read The Design Inference by the detective theoretician, Dembski, and so he applies the filter to the case in hand (literally, since he has "Fingers" by the collar as he works through the filter on the whiteboard).
The EF is represented as a decision chart (Dembski 1998, p. 37). In outline, it sorts events into three probability classes: high probability (HP), intermediate probability (IP) and small probability (SP).
HP events are explained as causal regularities: if it is very likely that an event would turn out as it did, then it is explained as a regularity. IP events occur frequently enough to fall within some deviation of a normal distribution, and are sufficiently explained by being between those extremes. A roll of "snake eyes" in a dice game is an IP event, as is the one-in-a-million lottery win. SP events come in two flavours: specified and unspecified. Unspecified events of small probability do not call for explication. An array of thrown stones will land in some pattern, but there is no need to explain exactly that pattern, unless the specifiable likelihood of a pattern is so small that its attainment calls for some account. If an array of stones spells out a pattern that welcomes travellers to Wales by British Rail, then that requires explanation; to wit, that the stones were placed there by an employee of British Rail, by design. The minuscule probability that a contextually significant message in English would occur by chance is ruled out by the specified complexity of that sentence. This Dembski calls the Law of Small Probabilities: specified events of small probability do not occur by chance.3
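Taken at face value, the filter can be rendered as a short decision procedure. The sketch below is ours, not Dembski's: the cutoffs HIGH and SMALL and the boolean "specified" flag are illustrative stand-ins, since Dembski ties the small-probability bound to the available specificational resources rather than to any fixed constant.

    # A minimal, illustrative rendering of Dembski's explanatory filter as a
    # decision procedure. The cutoffs and the "specified" flag are stand-ins:
    # Dembski gives no single numeric bound for "small" probability.
    HIGH = 0.5      # stand-in cutoff for high probability (HP) events
    SMALL = 1e-9    # stand-in cutoff for small probability (SP) events

    def explanatory_filter(probability: float, specified: bool) -> str:
        """Classify an event E given its probability on background knowledge B."""
        if probability >= HIGH:
            return "regularity"   # HP: explained by a causal regularity
        if probability > SMALL:
            return "chance"       # IP: intermediate probability
        if specified:
            return "design"       # sp/SP: the Law of Small Probabilities
        return "chance"           # unspecified SP events call for no special account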
Spade, though not given to deep reflection, nevertheless studied statistics at the Institute of Forensic Studies, and so he wishes to be thorough. He traverses the filter step-by-step.
HP? No, the door regularly remains locked without intervention, and "Fingers" did not know the combination.
IP? No, there is no significant chance that random spinning of the dial would hit on the combination. Even had "Fingers" chanced to spin the dial in the right directions - an IP event - the chance is one in ten billion (10^-10) that he would have happened on the combination. The chance is effectively zero, using the Law of Small Probabilities.
SP? Yes, the event has a very small probability.
sp/SP? Yes, the prior probabilities are exactly specified in addition to being very small.
Conclusion: "Fingers" opened the safe by design, not by accident.
"Fingers" is duly charged and arraigned for burglary. He engages the renowned deep thinking lawyer, Abby Macleal, and she defends him with skill. Before we get to the courtroom scene, however, let us go back in time, over a century, to the musings of a young naturalist.
This naturalist - call him Charles - is on a voyage of discovery. He has read his Paley; indeed, he might almost have written out Paley's Evidences with perfect correctness by memory. Although he has not heard of Dembski's filter, he knows the logic: whatever cannot be accounted for by natural law or chance must be the result of design. Young Charles encounters some pattern of the distribution and form of a class of organisms - let us suppose they are tortoises - on an isolated archipelago and the nearest large continent. Each island has a unique tortoise, most similar to the autochthon of the neighboring island, and the tortoise of the island closest to the continent is most similar to the continental species. On the basis of the biological theories then current, he knows that there is no known process that can account for this pattern. It is so marked that one can draw a tree diagram from the continental form to the islands, and it will match a diagram showing the similarity of each form to the others. What should Charles rationally infer from this? Let us assume for comparative purposes that Charles is in possession of the filter; he will therefore reason like this:
E: Species are distributed such that morphological distance closely matches geographic distance.
HP? No, there is no regularity that makes this distribution highly probable.
IP? No, the likelihood of such a distribution is extremely low.
SP? Yes, it is a very small probability (made even smaller as more variables are taken into account).
sp/SP? Yes, the problem is (more or less) specified.
Conclusion: The tortoises have the biogeographic and morphological distribution they do by design.
By Dembski's framework, Rational Charles should have ascribed the tortoises' situation to intelligent agency, and his subsequent research should have been directed to identifying that agency, perhaps by building balsa rafts to test the likelihood that continental sailors might have taken varieties now extinct on the continent and placed one on each island according to some plan. An even more parsimonious explanation, and one more agreeable to the Rev. Paley's natural theology, might be that a single agent had created them in situ, along a plan of locating similar species adjacent to each other, which has the added virtue of explaining a large number of similar distributions known throughout the world, as Alfred, a later young voyager, was to note.
Unfortunately for the progress of rational science, Actual Charles is not rational in this manner. He infers that some unknown process accounts for this distribution as a regularity, instead of inferring design. He irrationally conjectures that all the variants are modified descendants of the continental species, and that the morphological and geographical trees are evidence of a family tree of species evolution; and thus the theory of common descent is born. Charles is, rightly, castigated by his friends for irrationality and lack of scientific rigor. His leap to an unknown process is unwarranted, as is his subsequent search for a mechanism to account for it. Were his ideas to be accepted, perhaps out of fashion or irreligion, science would be set back by more than a century until Dembski came along to put it right.
Lest this seem to be a parody of Dembski's views, consider his treatment of the evolution versus creation debate and the origins of life. Dembski (wrongly) conflates the two, treating the origins of life as a test case for the validity of evolutionary theory (it isn't - even if the major groups of living organisms had separate origins, or were created by an agent, their subsequent history could and would have an explanation in terms of "undesigned" evolution). Creationists - the actual ones who do reject evolutionary theories in the way that Rational Charles should have in the 1830s - challenge what Dembski putatively does not, that species share common ancestors with their closest relatives and that natural selection accounts for adaptation. As an adjunct to their arguments, they also, along with Dembski, give credence to various authors' "calculations" of the probability that prebiotic processes would spontaneously form the building blocks of life (the LIFE event), especially genetic molecules. Dembski discusses Stuart Kauffman's (and others') blocking of the design inference (Kauffman 1993, 1995) with the following argument:
Premise 3: If LIFE is due to chance then LIFE has small probability.
Premise 4: Specified events of small probability do not occur by chance (the Law of Small Probabilities).
Premise 5: LIFE is not due to a regularity.
Premise 6: LIFE is due to regularity, chance, or design (the filter).
Conclusion: LIFE is due to design.
Of Dawkins' arguments (Dawkins 1986: 139, 145-146) that there are a great many "planetary years" available, because there is a very large number of planets in the universe on which LIFE might have occurred and a lot of time available on each, Dembski says "... because Dawkins never assigns an exact probability to LIFE, he never settles whether LIFE is a high probability event and thus could legitimately be attributed to a regularity" (p58, italics added). Therefore, he says, we may infer that Dawkins accepts Premise 5! But what Dawkins actually says is that the improbability of life occurring had better not exceed the probability that it arose by chance on any one of the available number of planets on which it might have done. This sets a minimum bound to the probability of life, and Dawkins says that on (then) current knowledge, he doesn't know how probable life is. For all he knows, life is indeed due to a regularity. Kauffman's work on the dynamics of autocatalytic polymer sets supports the notion that the upper bound to the probability of life occurring is very high indeed, and life is to be "expected" in appropriate conditions. Dembski's comment? This is a "commitment". The implication is that it is a mere belief or act of faith on Kauffman's part. In fact, it is considerably more than that, and the real problem for origins of life researchers is not to find a possible scenario, but to decide which of a growing number of them holds the most promise, or which combination. But Dembski's filter makes it unnecessary to even try.4
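Dawkins' point can be put arithmetically, though the figures below are ours and purely illustrative, since Dawkins assigns no exact probability. If the per-planet probability of an origin of life is p and there are N candidate planets, the probability of at least one origin somewhere is 1 - (1 - p)^N; even a per-planet probability as low as 1/N makes an origin somewhere more likely than not.

    import math

    # Illustrative only: Dawkins assigns no exact figures. With N candidate
    # planets and a per-planet origin probability p, the chance of at least
    # one origin of life somewhere is 1 - (1 - p)**N, about 1 - exp(-p*N).
    N = 10**18                          # stand-in number of candidate planets
    p = 1.0 / N                         # an origin probability as low as 1/N ...
    p_somewhere = 1 - math.exp(-p * N)  # ... still makes life likely somewhere
    print(round(p_somewhere, 2))        # -> 0.63 (about 1 - 1/e)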
So, let us return now to the courtroom drama in time to hear Abby Macleal rebut prosecutor Pearl E. Mason's case. Abby calls retired Chump engineer Lachlan (Locky) Smith to the stand, and elicits from him the information that the Chump safe Spade owns has an inherent design flaw: if the tumbler is spun five times or more, centrifugal force will cause the lock to open spontaneously. Spade suddenly realizes why he got it so cheap. "Fingers" is acquitted, and initiates civil action for mental anguish and loss of reputation.
Clearly, the background information has changed the probability assignments. At the time Spade found "Fingers" at the open safe, he was in possession of one set of background information, Bi. The probability of the event E requiring explanation led to a design inference. After Smith's testimony, a different set of background information, Bj, comes into play, and the filter now delivers a "regularity" assignment to E. Suppose, though, that Smith had delivered yet another background set, Bk, by testifying that the model in question only actually used two of the five cylinders in the lock. Given that there are 100 possible numbers that might match the successful open state for each cylinder, the probability of a random opening is now 10^-4, a much higher probability, given the number of Chumps of that model in use in the Naked City (particularly after Chump's massive sell-off of that model to clear the faulty stock). The same filter now delivers a chance explanation given Bk.
The point is that Dembski's filter is supposed to regulate rational explanation, especially in science, and yet it is highly sensitive to the current state of knowledge. A single difference of information can change the inference from design to regularity to chance. This goes to the claim that Dembski's explanatory filter reliably finds design. Reliability, Dembski tells us, is the property that once an event is found to have the property of "design", no further knowledge will cause the event to be assigned to "regularity" or "chance" instead. What the filter lacks, and real-world design inferences already have, is a "Don't know" decision. If we can say of a problem only that it is currently intractable, or that there is insufficient information to give a regularity or chance explanation now, then the filter tells us we must ascribe it to design if it is specifiable. But an event can be specifiable without the knowledge required to rule out regularity or chance explanations. This is clearly a god-of-the-gaps stance, and it can have only one purpose: to block further investigation into these problems.
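To make the sensitivity to background knowledge concrete using the earlier sketch: the same event E, with its specification unchanged, is classified three different ways on the three background sets. The probability assigned under Bj is a stand-in for "the design flaw makes an opening nearly certain".

    # The same event E (the safe standing open) classified under three
    # background sets, using explanatory_filter from the earlier sketch.
    backgrounds = {
        "Bi": 1e-10,  # full five-cylinder lock: 1 chance in 10^10
        "Bj": 0.99,   # known design flaw: five or more spins pop the lock
        "Bk": 1e-4,   # only two live cylinders, 100 positions each: 1 in 10^4
    }
    for name, prob in backgrounds.items():
        print(name, explanatory_filter(prob, specified=True))
    # -> Bi design, Bj regularity, Bk chance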
Supposing we do insert a "don't-know" branch: where should it go? There is an ambiguity in Dembski's treatment of his argumentative framework. The Explanatory Filter is written about as if it describes a process of analysis, but Dembski's further argumentation is cast in terms of a first-order logical calculus. In a process, we would come to a "don't-know" conclusion after some evaluation of alternatives, but in a logical framework there is no temporal dependency. We will here ignore the demands of process and concentrate on the logic. As Dembski's filter eliminates hypotheses from high probability to low probability, an inability to assign a probability in the first place clearly belongs at the first branch point. So if, on Bi, the probability of E is undecidable, that needs to be settled before anything else.
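One way of placing such a branch, again offered purely as an illustration built on the earlier sketch, is to test for decidability before anything else:

    from typing import Optional

    def filter_with_dont_know(probability: Optional[float], specified: bool) -> str:
        # New first branch: if no probability can be assigned to E on the
        # current background knowledge, the filter never reaches the
        # regularity/chance/design decisions at all.
        if probability is None:
            return "don't know"
        return explanatory_filter(probability, specified)  # as before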
Undecidable probabilities block the inference altogether. No further inferences can be drawn, and no design is required to explain any event for which there is no assignment. However, even if E is decidable on Bi, that in no way licenses the expectation that on Bj or Bk those probabilities will remain fixed. For example, when Dawkins wrote in 1986, the state of knowledge about prebiotic chemical reactions was sparse; the range of possible RNA codes and molecular alternatives was not properly understood. As knowledge has grown, our estimate of the probability that some ribonucleotides, or perhaps ribonucleoproteins, or even polyaminoacids, might enter into protobiotic autocatalytic cycles has become much higher. Some even think that in a geologically short time after the cooling of the earth's surface, given the right conditions (themselves now expected to be of reasonably high probability on earth), life is almost certain to arise. Perhaps, then, we need another branching at each decision, leading to "Don't-know-yet". As Dembski's probabilities are Bayesian assignments made on the basis of a set of prior knowledge and default hypotheses, this seems a perfectly reasonable move. However, it has one glaring problem: it blocks any inference of design, and that is too much. There are well-attested cases of design in the world: we humans do things by design all the time. So an explanatory filter had better not exclude design altogether. How can it be included here? When is a design inference legitimate?
The problem with a simple conclusion that something is designed is its lack of informativeness. If you tell me that skirnobs are designed but nothing else about them, then how much do I actually know about skirnobs? Of a single skirnob, what can I say? Unless I already know a fair bit about the aims and intentions of skirnob designers, nothing is added to my knowledge of skirnobs by saying that they are designed. I do not know if a skirnob is a good skirnob, fulfilling the design criteria for skirnobs, or not. I do not know how typical that skirnob is of skirnobs in general, or what any of the properties of skirnobs are. I may as well say that skirnobs are "gzorply muffnordled"5, for all it tells me. But if I know the nature of the designer, or of the class of things the designer is a member of, then I know something about skirnobs, and I can make some inductive generalizations to the properties of other skirnobs.
The way we find out such things about designers is to observe them, interact with them and, if we can, converse with them. In this way we can build up a model of the capacities and dispositions of designers. Experience tells me that a modernist architect will use certain materials to certain effect. Lacking any information about modernist architects, I am none the wiser for knowing that a particular architect is a modernist (as opposed to some other kind). Once we have such knowledge of designers, though, what we can say about them is that they generate regularities of outcomes. We know, for example, the function of the Antikythera Device, a clockwork bronze assembly found in an ancient Greek shipwreck, because we know the kinds of organisms that made it, we know the scientific, religious and navigational interests they had, we know about gears, and we know what they knew about the apparent motions of the heavens. Hence we can infer that the Antikythera Device is an astrolabe, used for open sea navigation by the stars, or a calendrical calculator, or both (de Solla Price 1974). But suppose it was found by interstellar visitors long after humans went extinct. What would they know about it? Unless they had interests and needs similar to our own, or were already able to reconstruct from other contexts what human needs and interests were, for all they know it might be the extrusion of some living organism (which, in a sense, it is), just like a sand dollar. It might never occur to them to compare it to the apparent motion of the heavens from earth circa 500 BCE.
So a revision to Dembski's filter is required beyond the first "Don't-know" branch. This sort of knowledge of designers is gained empirically, and is just another kind of regularity assignment. Because we know what these designers do to some degree of accuracy, we can assess the likelihood that E would occur, whether it is the creation of skirnobs or the Antikythera Device. That knowledge makes E an HP event, and so the filter short-circuits at the next branch and gives a design inference relative to a background knowledge set Bi available at time t. So now there appear to be two kinds of design - the ordinary kind, based on a knowledge of the behavior of designers, and a "rarefied" design, based on an inference from ignorance, both of the possible causes of regularities and of the nature of the designer.6
Dembski (1999) critiques "evolutionary algorithms" (an "everything and the kitchen sink" category in Dembski's idiosyncratic usage) as being incapable of yielding specified complexity, and thus actual design. The way Dembski does this bears upon our point that ordinary design is just another kind of regularity. First, recall how Dembski finds the complexity of an event: he uses the probability of that event given a chance hypothesis as an estimate of its complexity. When Dembski gives an example involving known agent causation, such as a Visa card number or a phone number, the complexity is assigned on the basis that these numbers are drawn by chance. Yet we know that if a Visa credit card application is accepted, or the phone company accepts an application for phone service, then the likelihood that one's number will be assigned is very close to 1. However, when Dembski calculates the complexity of an event where he knows that an evolutionary algorithm is the proximal cause, the use of a chance hypothesis for comparison is eschewed. Instead, Dembski tells us that the complexity in this case is simply the likelihood that the evolutionary algorithm itself would yield the event. This probability is often very close to 1, and thus Dembski introduces the phrase "probability amplifier" to apply to evolutionary algorithms. But evolutionary algorithms are not the only probability amplifiers around; known intelligent agents are just as much probability amplifiers in Dembski's sense of the phrase.
Dembski's inconsistency in handling complexity measurement in these two cases can be resolved in two ways. First, Dembski could consistently measure complexity by reference to the probability given a chance hypothesis, to be used in comparison with the suspected causal hypothesis. This would mean that evolutionary algorithms, among other things, could clearly be responsible for events that are classed as "design" in his original Explanatory Filter; "design" would then not distinguish agent causation, as Dembski has so far claimed, from causation by natural processes. The second way the inconsistency can be resolved is as we have already indicated: by recognizing a distinction between ordinary design and rarefied design. For those events where our background information includes information about how agents or processes produce events of high probability, we would assign those events to the HP category and explain them with reference to regularity. This would preserve a place for a class of rarefied design in the Explanatory Filter, but Dembski's earlier argument that design indicates agent causation, because his Explanatory Filter captures our usual means of recognizing design, would apply only to the class of ordinary design, not the desired rarefied design. It is only by inconsistently treating agent causation as a privileged hypothesis that Dembski can (erroneously) claim that ordinary design and rarefied design share a node on the Explanatory Filter.
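The point about probability amplifiers can be illustrated with a toy case. The example below is our own, a Dawkins-style cumulative-selection routine standing in for Dembski's broad "evolutionary algorithm" category. Measured against a pure-chance hypothesis, the target phrase is a specified event of minuscule probability; measured against the algorithm that actually produced it, the very same outcome is close to certain.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    # Complexity measured against a pure-chance hypothesis: one uniform draw
    # per character gives a specified event of minuscule probability.
    p_chance = (1 / len(ALPHABET)) ** len(TARGET)   # roughly 10^-40

    def cumulative_selection(generations=1000, offspring=100, rate=0.05):
        """Return True if cumulative selection hits TARGET; in practice it
        nearly always does, which is Dembski's "probability amplifier"."""
        def mutate(s):
            return "".join(random.choice(ALPHABET) if random.random() < rate
                           else c for c in s)
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(generations):
            # keep the parent in the pool so fitness never decreases
            brood = [parent] + [mutate(parent) for _ in range(offspring)]
            parent = max(brood, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
            if parent == TARGET:
                return True
        return False

    print(p_chance)                 # vanishingly small: an SP event on the chance measure
    print(cumulative_selection())   # almost always True: an HP event on the causal measure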
Where does this leave us? Dembski's filter is no longer looking so appealing. It now has many points at which the eliminative march from regularity and chance through to a rarefied design inference is blocked, whether by uncertainty or by differences in background information; in effect, every decision point now carries a "don't-know" branch of its own.
So far from leaving us satisfied with a rarefied design inference, as distinct from an ordinary design inference, in cases where we cannot give a satisfactory explanation in terms of causal regularities or chance, we now have to weigh carefully the reliability of our expected probability assignments given Bi...n, and the likelihood that any one of these will apply. Maybe Actual Charles's inferences were not so irrational after all. Perhaps Historical Charles's (Darwin's) inferences were actually a rational bet, based on a confidence that B would change. Similarly, the work of Kauffman, Szathmáry (1997), Wächtershäuser (1997), and many others working on the origins of life, which rests on the assumption of an as-yet-unknown causal mechanism or process, is a rational bet, despite the lack of specified probability assignments. Only further work and time will tell, and the matter cannot be determined a priori on the basis of Dembski's filter or anything like it.
So instead of design being the penultimate default hypothesis in the decision tree, rarefied design becomes, at best, a tenuous conclusion to draw. There is an in-principle difference between rarefied and ordinary design inferences, based on the background knowledge available about ordinary, but not rarefied, design agencies. Rarefied design inferences tell us nothing that can be inductively generalized. Consequently, analogies between artifacts of ordinary design, which are the result of causal regularities of (known) designers, and the "artifacts" of rarefied design do not hold (as Philo noted in Hume's Dialogues Concerning Natural Religion, Part V).7 Indeed, we might even conclude that the specified small probability of rarefied design is itself an artifact of our prior expectations. As our background knowledge changes and grows (due to the "irrational" inferences of people like Actual Charles), so too do the specifications, and sp/SP can become HP or IP. Why is there a rarefied design option in the filter at all? Dembski has not dealt with such Humean objections. His a priori expectation is that events of specified small probability (relative to whichever specification) do not happen by themselves through chance or regularity, and hence require some other "explanation". But if this is merely a statement about our expectations, and we already require a "don't know" or "don't know yet" option in our filter, why are we ever forced to a rarefied design conclusion? Surely we can content ourselves with regularities, chance and "don't know" explanations. Overreaching inferences such as a rarefied design inference carry a heavy metaphysical burden, and the onus is on the proponents of such an a priori assumption to justify it. Otherwise, "don't know" is adequate to the empirical task where background information is equivocal. This leads us to a truncated filter.
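Read as a procedure, in the same illustrative style as before, that truncated filter might look like the following sketch. The cutoffs are again stand-ins, and ordinary design is absorbed into the regularity branch, since we know enough about the relevant designers for their output to count as high-probability events.

    from typing import Optional

    HIGH = 0.5      # illustrative cutoff for high-probability (regularity) events
    SMALL = 1e-9    # illustrative cutoff for small-probability events

    def truncated_filter(probability: Optional[float]) -> str:
        if probability is None:
            return "don't know"        # no probability assignable on current B
        if probability >= HIGH:
            return "regularity"        # includes ordinary design by known agents
        if probability > SMALL:
            return "chance"
        return "don't know"            # no rarefied-design default at the end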
If Bi is replaced, then the explanation of E may change, and this is as it should be. Science is not a process of deduction from fixed axioms and a priori background specifications, or if it is, it is so only transiently. We may expect that some background assumptions are harder to change than others, but they are all revisable in the light of better knowledge. The need for a rarefied design inference, which offers no scope for inductive inferences and adds no predictive value beyond that already gained from B and the HP/IP assignments, is not apparent. One might cynically suppose that it is there not for heuristic and explanatory reasons but as a way to confer legitimacy upon natural theology and teleology.
If science is to be possible given a fallibilistic account of knowledge, and if the knowledge it generates depends on empirical rather than innate rational information, then no rarefied design inference is needed, and all inferences are sensitive to the current state of knowledge. A priori assignments such as those Dembski's filter requires make the human enterprise of discovery through trial and error impossible except where the metaphysical commitments of scientists and the broader society are unthreatened. On Dembski's account, any hypothesis - for example, that humans are descended from apes - that offended public metaphysics would never get a chance to be considered and developed, since it would be deemed unnecessary before it could even be considered. Naturally, once such a hypothesis had been incorporated into the Bi of the present, a DI-proponent could not reject it except on evidentiary grounds, but neither would it have been incorporated in the first place had the filter been widely accepted. In effect, the DI filter is a counsel of heuristic despair, or perhaps of hope that some matters are protected from a scientific investigation that is not dependent upon final causes. Like all such accounts from Paley onwards, Dembski's filter has all the advantages of theft over honest toil, as "Fingers" knew well when he deliberately made use of the common knowledge of all safecrackers in the Naked City that Chump safes had an inherent design flaw. It is a pity that the court didn't also have that bit of background knowledge in its own design inference.
Dawkins, Richard. The blind watchmaker. Harlow: Longman Scientific and Technical, 1986.
Dembski, William A. The design inference: eliminating chance through small probabilities. Cambridge; New York: Cambridge University Press, 1998.
Dembski, William A. Explaining specified complexity. Meta-Views #139, http://listserv.omni-list.com/scripts/wa.exe?A2=ind99&L=metaviews&D=1&O=D&F=&S=&P=14496, 1999. Accessed 24 October 2000.
Fitelson, Brandon, Christopher Stephens, and Elliott Sober. "How Not to Detect Design - Critical Notice: William A. Dembski, The Design Inference." Philosophy of Science 66, no. 3 (1999): 472-488.
Kauffman, Stuart A. The origins of order: self-organization and selection in evolution. New York: Oxford University Press, 1993.
Kauffman, Stuart A. At home in the universe: the search for laws of self-organization and complexity. New York: Oxford University Press, 1995.
Kitcher, Philip. "Function and Design." In Nature's purposes: analyses of function and design in biology, ed. Colin Allen, Marc Bekoff, and George Lauder. Cambridge, MA: MIT Press: A Bradford Book, 1998.
Pennock, Robert T. Tower of Babel: the evidence against the new creationism. Cambridge, Mass.: MIT Press, 1999.
Sober, Elliott. Philosophy of Biology. Dimensions of Philosophy Series. Boulder, Colo.: Westview Press, 1993.
de Solla Price, D. "Gears from the Greeks: The Antikythera Mechanism -- a Calendar Computer from ca. 80 BC." Transactions of the American Philosophical Society 64, no. 7 (1974): 1-70.
Szathmáry, E. "Origins of life. The first two billion years." Nature 387, no. 6634 (1997): 662-3.
Wächtershäuser, G. "The origin of life and its methodological challenge." Journal of Theoretical Biology 187, no. 4 (1997): 483-94.
1. History and Philosophy of Science, The University of Melbourne. Email: wilkins@wehi.edu.au, corresponding address: PO Box 542, Somerville 3912, Australia.
2. Dept. of Wildlife & Fisheries Sciences, Texas A&M University. Email: welsberr@inia.cls.org, corresponding address: 3027 Macaulay Street, San Diego, CA 92106, USA.
4. On pp. 105-106, Dembski discusses the problem of assigning a measure of the complexity of a problem needed to assign the probability estimate of E. He says, "I offer no algorithm for assigning complexities nor theory for how complexities change over time. For practical purposes it suffices that a community of discourse can settle on a fixed estimate of difficulty." In effect, design inferences are necessarily fixed to the community's discourse at a single time. See Fitelson et al. 1999 for a discussion of Dembski's implicit use of background information.
6. Dembski's design is not necessarily the result of intelligent agency, he says (p9), but if it is not, design is just a residue of the EF. The EF classifies events, not causes. Natural selection (NS) appears to be well-suited to produce events that land in the design bin, while causal regularities that account for design-like features (Kitcher 1998), such as self-organization or natural selection, are covered by the regularity option. This is the problem with Dembski's use of these terms: it invites a confusion of events with causes. As Dembski notes, events caused by intelligent agents can be classified in any of the three bins of his EF. We would extend this to say that events due to NS can also be found in multiple bins, including "design", yet the analogy he draws is explicitly with the output of intelligent agency. The reason we can "reverse engineer" the "design" of extinct organisms is because we know a lot about the ecology, biology and chemistry of organisms from modern examples. But we can do none of this with rarefied design.
7. Sober (1993) argues that Hume got the Teleological Argument wrong: instead of the teleological argument being an analogical inductive inference, he says, Paley's argument (which came after Hume's criticisms) is actually an abductive argument to the best explanation. But for the reasons given here, we think Sober is wrong. One cannot make a substantive case for rarefied design except through analogy, which, as Hume showed, fails; and if analogy is not used, no explanatory power inheres in rarefied design inferences. Of course, this does not preclude ordinary design inferences about the origin of life, for example (cf. Pennock 1999 on the Raelians), but the evidence is lacking for that, especially compared to regularity explanations in terms of known chemistry, physics and geology.