The Mutationism Myth (6): Back to the Future

This post wraps up a 6-part series on the Mutationism Myth (a more scholarly version of this material ended up being published in J. Hist. Biol. by Stoltzfus and Cable, 2014), and sets the stage for the future by locating the primary weakness of the 20th-century neo-Darwinian consensus in its theory of variation.

I’d like to thank Larry Moran for originally hosting this series of posts on Sandwalk in 2010.

The Mutationism Myth, part 6. Back to the Future

The Mutationism Myth, a story about how the discovery of genetics affected evolutionary thought, continues to be part of modern neo-Darwinism’s monologue with itself (e.g., Charlesworth and Charlesworth, 2009), being used even by leading thinkers calling for an “Extended” Synthesis (e.g., Pigliucci, 2010 1). Since April, we’ve been deconstructing the Mutationism Myth by exploring what the early Mendelians actually thought, and how their view was replaced by the Modern Synthesis (MS).

Today we’ll take the opportunity to review what we’ve learned and start unpacking its relevance for the future of evolutionary biology.


In the Mutationism Myth (see part 1 for examples), the founders of genetics misinterpret their discovery, concluding that evolution takes place by large mutational jumps, without selection. The false gospel of these “mutationists” brings on a dark period of confusion and error that lasts until the 1930s, when theoretical population geneticists (Fisher, Haldane and Wright) prove that Mendelian genetics is not only compatible with selection, but provides the missing link that completes Darwin’s theory and unites all the biological disciplines. The “Modern Synthesis” combining genetics and selection becomes the foundation for all subsequent evolutionary thought.

As we discovered, the Mutationism Myth isn’t very accurate. Heredity is not missing from Darwin’s theory; selection is not missing from the Mendelian view. Darwin had a theory of heredity both in the sense of a set of phenomenological laws, and in the sense of a mechanism to account for them (part 2). Both were wrong. The Mendelians synthesized genetics and selection, rejecting Darwin’s “Natural Selection” theory due to its dependence on fluctuations or “indefinite variability” (defined by Darwin as the subtle variations that arise anew each generation in response to conditions of life). As we know today, such environment-induced fluctuations are non-heritable.

In part 3, we found that the Mendelians laid the conceptual foundations of evolutionary genetics (later formalized mathematically), while part 4 addressed how their view diverged from Darwinian orthodoxy. The Mendelians assumed that new hereditary variants arise rarely and discretely, by mutations whose effects may be large or small. Each new mutation is likely to be rejected, but it may be accepted by chance, especially if it improves fitness. Because, in this view, change depends on discrete events of mutation, the Mendelians (part 4) considered the process of mutation to be a source of initiative, discontinuity, creativity and direction in evolution (Stoltzfus, 2006). This view expanded the role of variation well beyond the subordinate role of raw materials that Darwin had imagined.

The Mendelians were unable to convince naturalists (the majority of their biologist peers) to accept their new view of evolution, or even a new view of inheritance. Many naturalists remained wedded to Lamarckian and Darwinian views of “soft inheritance”.

As we found out in part 5, the “Modern Synthesis” (modern neo-Darwinism) claimed to reconcile Darwin’s own view with genetics, though it quietly ignored Darwin’s errors while depicting the Mendelians as foolish “saltationists”, dismissing their “lucky mutant” view and their ideas about the role of mutation in evolution. In the MS view, each species has a “gene pool” that automatically soaks up and “maintains” hereditary variation, providing abundant “raw materials” for adaptation. The key innovations of this view were to define “evolution” as “shifting gene frequencies” in the “gene pool”, to erase the link between “Darwinism” and Darwin’s own theory of soft inheritance, and to develop a theory of causation in terms of population-genetic “forces”, in which continuous shifts in allele frequencies are the common currency of causation. The new theory put mutation in a subordinate position of supplying infinitesimal “raw materials” for selection. As a result, the MS created a consensus where the Mendelians had failed: naturalists such as Ernst Mayr found that they could accept Mendelian genetics without giving up adaptationist preconceptions.

A bright line

The backdrop for this whole discussion (in case you missed it) is that the MS is strikingly wrong in its neo-Darwinian departures from the Mendelian view. I’ve implied this several times, and perhaps I’ve waved my hands and pointed vaguely to mountains of molecular evidence contradicting the MS, but I haven’t made this point perfectly clear.

In a moment, I will do that, but first I want to make clear what is at stake.

The Mendelians allowed that evolutionary change could be initiated by an event of mutation, and they interpreted this to mean that mutation was (to an unknown degree) a source of initiative, discontinuity, creativity and direction in evolution. The MS represents a very deliberate rejection of this view, and proposes instead that evolution is a complex sorting out of available variation to achieve a new multi-locus equilibrium, literally by “shifting gene frequencies” in the “gene pool”. The rate of evolution, in this view, does not depend on mutation, which merely supplies the “gene pool” with variation; evolution is not shaped by mutation, which is the “ultimate” source of variation, but not the proximate source.

When I made this distinction at a 2007 symposium in honor of W. Ford Doolittle, Joe Felsenstein was in the audience and pointed out that, while Fisher may have looked at things in this way, Wright’s stochastic view took into account random events, like new mutations. It’s true that Wright’s “shifting balance” model assigns a prominent role to random genetic drift, while Fisher’s view was deterministic. However, these are just two different flavors of the same “shifting gene frequencies” paradigm: neither view incorporates new mutations. The absence of new mutations from Wright’s shifting balance process is apparent from the fact that Patrick Phillips (1996) extended it to include a new starting phase (“phase 0”) of “waiting for a compensatory mutation”.

The fact that contemporary evolutionary biologists, for the most part, don’t understand this aspect of their intellectual heritage is not evidence of a cover-up. Scientists don’t get much chance to learn history. The history that they absorb is mainly from stories that appear in scientific writings, like the Mutationism Myth and the Essentialism Story, stories that represent Synthesis Historiography (Amundson, 2005), the discipline of telling history in ways that make things turn out right for the Modern Synthesis. Synthesis Historiography teaches us that “saltationism” (Mayr’s pejorative term for the Mendelian view) and other alternatives to neo-Darwinism are nonsensical, “doomed rivals”, supported only by “typologists”, creationists, vitalists and other crazies. That is, Synthesis Historiography teaches the TINA doctrine: There Is No Alternative.

As contemporary research drifts away from the “gene pool” theory and the Darwinian doctrines of the MS, each evolutionary biologist remains confident that, due to the TINA doctrine, his own view must be “neo-Darwinian”. In reality, alternatives are being explored with increasing vigor in molecular evolution, evo-devo, and evolutionary genetics.

A few folks today are in the reverse situation of being familiar with MS orthodoxy, but not with recent research. Dawkins (2007) stakes his critique of a book by “intelligent design” creationist Michael Behe entirely on his faith in the gene pool theory. Behe claims, in effect, that there was not sufficient time for all the mutations needed to account for evolution. Dawkins responds by attacking the premise that evolutionary rates depend on mutation rates:

“If correct, Behe’s calculations would at a stroke confound generations of mathematical geneticists, who have repeatedly shown that evolutionary rates are not limited by mutation. Single-handedly, Behe is taking on Ronald Fisher, Sewall Wright, J.B.S. Haldane, Theodosius Dobzhansky, Richard Lewontin, John Maynard Smith and hundreds of their talented co-workers and intellectual descendants. Notwithstanding the inconvenient existence of dogs, cabbages and pouter pigeons, the entire corpus of mathematical genetics, from 1930 to today, is flat wrong. Michael Behe, the disowned biochemist of Lehigh University, is the only one who has done his sums right. You think? The best way to find out is for Behe to submit a mathematical paper to The Journal of Theoretical Biology, say, or The American Naturalist, whose editors would send it to qualified referees.”

With his signature over-the-top rhetoric, Dawkins insists that “mathematical genetics” has proven that evolutionary rates are not limited by mutation. Allowing for some exaggeration, this is an accurate representation of MS orthodoxy ca. 1959, the approximate vintage of Dawkins’s views. If Mayr had been alive, he might have said the same thing.

Meanwhile, no one who has been active in evolutionary genetics research in the past 15 years would represent the current state of knowledge in this way. If you want to know what a contemporary researcher would say, take a look at the most recent issue of Evolution, in which an article by Douglas Futuyma (famous for his evolution textbook) gives many examples of evolutionists (including himself) repeating the doctrine that mutation does not “limit” evolution, but argues that we are no longer making this dubious assumption. Another example would be the piece by Ronny Woodruff and James Thompson (1998) that introduces their symposium volume on Mutation and Evolution.2

Yet the MS and its “gene pool” theory have left their mark on evolutionary biology, even if the MS itself has largely disappeared from the collective memory of researchers. One indelible mark is what Gillespie calls “The Great Obsession” of population genetics to understand the “maintenance of variation”, but that’s a story for another day.

Another indelible mark is the long absence of mutationist models of “adaptation”, a topic that has blossomed just in the last dozen years. Allen Orr has achieved well-deserved fame for his innovations in this area, and we’ll discuss his work briefly in the next section. For now, let us note how other researchers have pointed out the absence of such models:

“Almost every theoretical model in population genetics can be classified into one of two major types. In one type of model, mutations with stipulated selective effects are assumed to be present in the population as an initial condition . . . The second major type of models does allow mutations to occur at random intervals of time, but the mutations are assumed to be selectively neutral or nearly neutral.” (Hartl & Taubes, 1998)

“The process of adaptation occurs on two timescales. In the short term, natural selection merely sorts the variation already present in a population, whereas in the longer term genotypes quite different from any that were initially present evolve through the cumulation of new mutations. The first process is described by the mathematical theory of population genetics. However, this theory begins by defining a fixed set of genotypes and cannot provide a satisfactory analysis of the second process because it does not permit any genuinely new type to arise. ” (Yedid and Bell, 2002)

These authors are not trying to make a point about history or about the Modern Synthesis: they are simply claiming the novelty of their own models of adaptation that incorporate new mutations. And what they are saying is that the paradigm of 20th-century population genetics is “shifting gene frequencies”: overwhelmingly, it’s a body of theory about what happens to the variation that is present in a population as an initial condition, not about a larger-scale process in which there are new beneficial mutations.3

One small step for a phage, one giant leap for evolutionary biology

The actual role of mutation in evolution is not what is theorized in the MS. Many arguments could be made to support this contention, but I’m going to make just one argument drawing on one source, namely Rokyta, et al., 2005. I choose this argument because it is particularly compelling and concise. My argument addresses the lucky mutant view of initiative or (to put it another way) dynamics.

Rokyta, et al. is a study of parallel evolution in an experimental population of the bacteriophage phiX174, published in Nature Genetics. It was hailed as “the first empirical test of an evolutionary theory” (Bull & Otto, 2005), where the theory in question is Orr’s (2002) ingenious extension of Gillespie’s (1984) “mutational landscape” model to take into account predictions of extreme value theory.4

In spite of the fancy name, the “mutational landscape” model of sequence evolution is simple. Rather than considering all conceivable evolutionary changes from a starting sequence, we simplify the problem by considering only changes that occur via 1-bp mutations. That set of possibilities, by definition, is the “mutational landscape” or (my preferred term) the “evolutionary horizon”. Each change will shift the evolutionary horizon, but it’s easy to recompute the horizon, because it’s easy to enumerate (theoretically) all the alternative sequences.
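In concrete terms, the horizon of a nucleotide sequence of length L contains exactly 3L one-mutation neighbors, which is why recomputing it after each step is cheap. A minimal sketch of the enumeration (the function name and representation are my own, purely illustrative):

```python
def one_step_horizon(seq, alphabet="ACGT"):
    # Enumerate the "evolutionary horizon": every sequence reachable
    # from seq by a single point mutation.  For a nucleotide sequence
    # of length L, there are exactly 3L such neighbors.
    neighbors = []
    for i, base in enumerate(seq):
        for b in alphabet:
            if b != base:
                neighbors.append(seq[:i] + b + seq[i + 1:])
    return neighbors
```

For a 3-base sequence this yields 9 neighbors; after a substitution is accepted, the same function is simply called again on the new sequence.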

We are going to make this a model of beneficial changes (“adaptation”). A beneficial mutation is introduced into the population of N individuals at some total rate Nu, and faces acceptance with a probability of 2s, based on the classic formula p = 2s for the probability of fixation of a new beneficial mutation.5 For beneficial substitution i with selection coefficient s_i, the probability6 is just Nu*2s_i. If we divide an individual Nu*2s_i by the sum of all such values on the horizon, we get a normalized probability: the probability that the next step in our evolving system is step i. The factor Nu*2 is the same for every step, so it cancels out: only the s_i values matter. To evolve our sequence, we just sample from this probability distribution of possible steps, then recompute the new evolutionary horizon in preparation for the next step. Easy!7
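The sampling step just described can be sketched in a few lines. This is an illustration of the origin-fixation logic under the simplifying assumption of equal mutation rates (the same assumption questioned below); the function names and the injectable random source are my own:

```python
import random

def step_probabilities(s_values):
    # Normalized chance that each beneficial mutation on the horizon is
    # the next substitution.  The common factor Nu*2 cancels out, so
    # with equal mutation rates only the selection coefficients matter.
    total = sum(s_values)
    return [s / total for s in s_values]

def sample_next_step(s_values, rng=random.random):
    # Sample the index of the next step from that distribution.
    r = rng() * sum(s_values)
    acc = 0.0
    for i, s in enumerate(s_values):
        acc += s
        if r < acc:
            return i
    return len(s_values) - 1  # guard against floating-point round-off
```

A full simulation would apply the sampled step to the sequence, recompute the horizon, and repeat until no beneficial steps remain.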

From past experiments, Rokyta, et al. know which steps on the horizon are beneficial, and they even know the selection coefficients. They know that sometimes, the same evolutionary steps happen in parallel, in replicate phage populations. They can compare the observed pattern of parallel evolution with the pattern predicted from theory.

Now, the preceding description suggests something fascinating: the cutting edge of evolutionary genetics today, with papers that get published in Nature Genetics with commentaries, uses experimental systems to explore the “lucky mutant” view of parallel evolution.

But the story gets even better. Rokyta, et al. actually reject Orr’s model, in its original version. They find more parallel evolution than expected. Why? Because the model treats all mutation rates as equal. Note above that we canceled out mutation rates on the grounds that they are all the same. But that’s not realistic. Some mutations are more likely than others, and this will affect the rate at which they are introduced into the population and subjected to acceptance or rejection. The more heterogeneity in rates of mutation, the more parallel evolution. Rokyta, et al. found that if they revised the model to take into account transition:transversion bias (I think it’s about 5- or 6-fold under the experimental conditions), then the predicted amount of parallelism matched the observed amount.

Just let that soak in for a moment. We have an experimental study and a precise model. Evolution in this model is characterized by origin-fixation dynamics, dependent on the rate of mutational introduction of new alleles, and on their probability of fixation. Both factors affect the outcome of evolution; both factors affect the chance of parallelism. The experimental study eliminates (statistically) a model that lacks mutational bias in the introduction of new alleles. Thus the study clearly illustrates the dual causation of evolutionary change, in regard to its dynamics.
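To see why mutational heterogeneity raises the chance of parallelism, consider a toy version of the calculation: if each replicate population independently takes step i with probability proportional to u_i*2s_i, then the chance that two replicates take the same first step is the sum of the squared probabilities, which grows as the mutation rates become more uneven. This is only an illustration of the principle, not Rokyta, et al.’s actual statistics; the 6-fold figure echoes the rough transition:transversion bias mentioned above:

```python
def parallelism_probability(rates, s_values):
    # Each replicate takes step i with probability proportional to
    # u_i * 2 * s_i (origin-fixation dynamics); the chance that two
    # independent replicates take the same first step is sum(p_i^2).
    weights = [u * 2 * s for u, s in zip(rates, s_values)]
    total = sum(weights)
    return sum((w / total) ** 2 for w in weights)

s = [0.05, 0.05, 0.05, 0.05]                        # equal selective benefits
uniform = parallelism_probability([1, 1, 1, 1], s)  # equal mutation rates
biased = parallelism_probability([6, 6, 1, 1], s)   # 6-fold transition bias
# biased exceeds uniform: heterogeneity in mutation increases parallelism
```

With four equally beneficial steps and equal rates, the repeat probability is 1/4; with a 6-fold bias toward two of the steps it rises to about 0.38, even though selection coefficients are unchanged.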

Back to the future

The MS is wrong, and not in a small way: it’s wrong because reality just looks too much like the antithesis of the MS, i.e., the mutationist view. For instance, as we found out in part 4, Vavilov (1922; see Stoltzfus, 2006) understood the dual causation of parallel evolution, including the role of parallel variation. By contrast, Mayr famously said that the search for homologous genes or homologous mutations was foolish.

This mistaken prediction is repeated ad nauseam in the evo-devo literature. If you have been following along, you now understand why Mayr would make such a prediction. The MS makes substantive claims about evolution, among which are the claim that, while mutation is ultimately necessary to keep the “gene pool” from drying up, selection doesn’t need to wait for a new mutation, but draws together a multi-locus optimum from the abundance of raw materials in the gene pool; “evolution” is so far removed from the process of mutation, with so many complex dynamic processes interceding, that the outcome of evolution does not depend on specific events of mutation. If evolution really were like that, parallel mutation would be unimportant. That is, Mayr’s prediction accurately reflects the logic of the MS. But as the Rokyta, et al study (and many others) show, the prediction is not fulfilled.

According to Dawkins, “the entire corpus of mathematical genetics” from 1930 to “today” (i.e., about 1959, for Dawkins) would be “flat wrong” if one accepts the premise that evolution depends on new mutations, or that it is limited by the mutation rate. While this view is not often defended, that isn’t because it’s Dawkins’s own personal opinion. Dawkins is accurately characterizing a theory that makes substantive claims about the world, a theory that most of us have forgotten. One of these claims is that “evolution” can be represented mathematically as a process of shifting the frequencies of alleles already present in an initial population, without new mutation; sometimes this doctrine is invoked by saying that “macroevolution” can be extrapolated from “microevolution”.

If evolution actually worked like this, then evolutionary change would not exhibit a dependence on the rate of mutation, and Dawkins would be right in his criticism of Behe. But this is wrong. In fact, the dependence is so sensitive that mutation-rate differences of only a few-fold have noticeable effects, as the Rokyta, et al. study (and many others) shows.

I’m not going to mince words. The MS is wrong, and not in a small way: reality looks too much like the mutationist view that we (the scientific community) rejected when we bought into the MS. We need another theory1, perhaps several others.

The road less traveled

What’s wrong about the MS, and what its replacement(s) must replace, is its theory of the role of variation in evolution. In future posts on the Curious Disconnect, I intend to focus on this issue. The Mutationism Myth suggests a lesson about how to develop (or rather, how not to develop) a theory of variation.

Darwin knew that hereditary variation played a vital role in evolution. He studied the subject intensely. He found that organisms vary in many different ways, and on many scales, but the evidence on heredity was bewildering and inconclusive. Lacking the means to derive a mechanism of evolution by reasoning upward from genetics, Darwin reasoned downwards from his premises that 1) organisms are exquisitely and pervasively adapted to their niches, 2) selection must have played some role in this, and 3) Mother Nature never makes a jump. Gould argues that Darwin’s willingness to posit precise restrictions on variation was a stroke of genius.8 Darwin knew that discrete “sports” (mutants) could be heritable, but he discounted them: they could not make his theory work as desired. Instead, he staked his “natural selection” theory on the heritability of fluctuations, because they were infinitesimal, indefinite (unbiased), and “everywhere present”, being induced in abundance whenever organisms encountered altered “conditions of life”. Inferring the heritability of fluctuations completed his theory and made it work.

But it was wrong: the fluctuations that made Darwin’s theory work are non-heritable, as the Mendelians discovered.

The architects of the MS tried again, with advantages unavailable to Darwin. Not only did they know genetics, they had some mathematical tools to work out unforeseeable implications of genetic concepts. However, they didn’t have the knowledge to distinguish among different, genetically consistent modes of evolution. They had to fill in this gap somehow. Their downwards Darwinian reasoning and their upwards Mendelian reasoning met in the middle with the “gene pool”: a theory of population genetics that would supply abundant, infinitesimal, “random” variations, in order to rationalize their commitment to the same premises Darwin accepted. That was the genius of the MS.

But again, it was wrong.

If we look at Darwinism in Popperian terms, as a theory1 that takes risks and generates potentially falsifiable claims, then (counterintuitively) it is largely a theory of the role of variation in evolution. The claims that selection is “important”, and that it has some inalienable role in adaptation, carry little risk and have been widely accepted for 150 years. By contrast, the restrictions that Darwinism places on variation, in order to make it a subordinate factor that supplies “raw material” to selection, are risky and controversial, e.g., the claim that variation is random with respect to the direction of evolution, or that the rate of evolution does not depend on the rate of mutation, or the “gradualist” claim that variation is not a source of discontinuity. The architects of the MS invested the “gene pool” with nearly magical properties in order to improve the prospects for adaptation. Problematic claims about the role of variation are, and have been for 150 years, the overwhelming basis for scientific criticism of Darwinism.

And this problematic view of variation is based on reasoning from the premise that organisms are exquisitely and pervasively adapted to their niches, to the conclusion that variation must play just the right role of supplying abundant raw materials to make this possible. I believe that there is something fundamentally wrong with this mode of reasoning. Perhaps it betrays a kind of subconscious Panglossian agenda. Every time I give a lecture on mutation-biased evolution, someone suggests that perhaps the mutation biases themselves are adaptive, as though this inference could restore one’s faith that everything turns out for the best, and that “the ultimate source of explanation in biology is the principle of natural selection” (Ayala, 1970). Remarkably, the evo-devo-inspired view that seems destined for inclusion in the emerging “Extended Synthesis” is headed down much the same path, with a focus on the idea that the process of variation has been jiggered to make things turn out right for adaptation. What’s revealing about this new view is how little attention its proponents have paid to understanding precisely, in terms of population-genetic causation, how the process of variation shapes evolution, before jumping ahead to the shadowy inference that the process of variation itself was shaped by selection for this very role.

We are not going to go down that same road here on the Curious Disconnect, which should make things all the more interesting.


Ayala, F. J. 1970. Teleological Explanations in Evolutionary Biology. Philosophy of Science 37:1-15.

Bull, J. J., and S. P. Otto. 2005. The first steps in adaptive evolution. Nat Genet 37:342-343.

Charlesworth, B., and D. Charlesworth. 2009. Darwin and genetics. Genetics 183:757-766.

Dawkins, R. 2007. Review: The Edge of Evolution. International Herald Tribune, Paris, p. 2.

Gould, S. J. 2002. The Structure of Evolutionary Theory. Harvard University Press, Cambridge, Massachusetts.

Hartl, D. L., and C. H. Taubes. 1998. Towards a theory of evolutionary adaptation. Genetica 103:525-533.

Medawar, P. B. 1967. The Art of the Soluble. Methuen and Co., London.

Nei, M. 2007. The new mutation theory of phenotypic evolution. Proc Natl Acad Sci U S A 104:12235-12242.

Orr, H. A. 2002. The population genetics of adaptation: the adaptation of DNA sequences. Evolution 56:1317-1330.

Phillips, P.C. 1996. Waiting for a compensatory mutation: phase zero of the shifting-balance process. Genetical Research, Cambridge 67:271-283.

Rokyta, D. R., P. Joyce, S. B. Caudle, and H. A. Wichman. 2005. An empirical test of the mutational landscape model of adaptation using a single-stranded DNA virus. Nat Genet 37:441-444.

Woodruff, R. C., and J. D. Thompson. 1998. Preface in R. C. Woodruff, and J. D. Thompson, eds. Mutation and Evolution. Kluwer, Dordrecht, The Netherlands.

Yedid, G., and G. Bell. 2002. Macroevolution simulated with autonomously replicating computer programs. Nature 420:810-812.


1 Pigliucci, along with Gerd Muller, edited a book on “The Extended Synthesis” with papers from a select group of thinkers who were invited in July, 2008 to a special meeting in Altenberg, Austria. The book is now available in paperback.

2 from p. 1: “Although mutation is a key parameter in the genetics of populations, the role of mutation as an evolutionary factor has been debated since the time of Darwin. Early geneticists, who held to the ‘classical’ view of the genome as being homogeneous with occasional mutant alleles, saw new mutation as a major determining force in adaptive change. When the classical view was replaced with the ‘balance’ view of the genome, i.e., highly heterogeneous, pre-existing variation became more important as the resource on which selection would act. Many, therefore, began to disregard new mutation as a significant force in evolution, since the level of genetic diversity is already so high that new mutants would generally be expected to add little to that resource . . . Mechanisms responsible for maintaining levels of genetic diversity became the focus of attention, and mutation pressure is now thought by many to have only minor significance, especially when compared to selection, recombination, gene flow, and similar factors. We think this position, like the classical view, is too extreme. While there can be little doubt that mutation per se is not the principal driving force it was once believed to be for phenotypic evolution, we see growing evidence that its role is under-appreciated in important situations. The rate and pattern of mutation can be influential variables in adaptive responses, and the role of mutation in evolution deserves to be reexamined.”

3 Orr (2002) notes the absence of such models by making a far more sweeping claim: that population genetics has ignored, not just new-mutations models of adaptation, but all models of adaptation, and instead has focused on neutral and deleterious alleles. That is an odd thing to say, given that the quantitative genetics of adaptation has been a topic for a long time. In any case, here is what Orr says: “Evolutionary biologists are nearly unanimous in thinking that adaptation by natural selection explains most phenotypic evolution within species as well as most morphological, physiological, and behavioral differences between species. But until recently, the mathematical theory of population genetics has had surprisingly little to say about adaptation. Instead, population genetics has, for both historical and technical reasons, focussed on the fates of neutral and deleterious alleles. The result is a curious disconnect between the verbal theory that sits at the heart of neo-Darwinism and the mathematical content of most evolutionary genetics.”

4 Also known as the theory of records—“record” in the sense of “pinnacle of achievement”. Given a series of records, such as the world record in the long jump, what’s the interval of time to the next record, and by how much will it break the previous record? The theory of records addresses such questions. Can you see how this would be useful for making a predictive theory of adaptation?

5 Rokyta, et al. used a different formula for the probability of fixation, because the classic approximation only works for s << 1, whereas the phiX174 populations experience very large s, sometimes s > 1.

6 Formally, Nu*2s_i is not a probability but a steady-state rate (e.g., for an infinite-alleles model). If we treat it as an instantaneous rate, and then compare it to all other instantaneous rates, this makes it a relative probability of choosing step i over a short interval.

7 For our present purposes, we don’t need to explain Orr’s addition to this model, which was a theory of the distribution of the favorable s values under generalized assumptions (oddly, the commentators on Rokyta, et al. did not mention that Orr’s theory wasn’t really needed, and that the study really was a test of the mutational landscape model itself).

8 Gould (2002, p. 140) is not endorsing Darwin’s error about fluctuation. Darwin’s followers think of that mistake as a trivial detail. Instead Gould is endorsing a more general inference. Here is what he writes. “Darwin reasoned that natural selection can only play such a role [as exclusive source of creativity and direction] if evolution obeys two crucial conditions: (1) if nothing about the provision of raw materials—that is, the sources of variation—imparts direction to evolutionary change; and (2) if change occurs by a long and insensible series of intermediary steps, each superintended by natural selection—so that “creativity” or “direction” can arise by the summation of increments.

Under these provisos, variation becomes raw material only—an isotropic sphere of potential about the modal form of a species. Natural selection, by superintending the differential preservation of a biased region from this sphere in each generation, and by summing up (over countless repetitions) the tiny changes thus produced in each episode, can manufacture substantial, directional change. What else but natural selection could be called ‘creative,’ or direction-giving, in such a process? As long as variation only supplies raw material; as long as change accretes in an insensibly gradual manner; and as long as the reproductive advantages of certain individuals provide the statistical source of change; then natural selection must be construed as the directional cause of evolutionary modification.

These conditions are stringent; and they cannot be construed as vague, unconstraining, or too far in the distance to matter. In fact, I would argue that the single most brilliant (and daring) stroke in Darwin’s entire theory lay in his willingness to assert a set of precise and stringent requirements for variation—all in complete ignorance of the mechanics of heredity. Darwin understood that if any of these claims failed, natural selection could not be a creative force, and the theory of natural selection would collapse. ”

Credits: The Curious Disconnect is the blog of evolutionary biologist Arlin Stoltzfus. An updated version of this post will be maintained there (Arlin Stoltzfus, ©2010)
