The definition of “mutation bias”

Mutation bias: a systematic difference in rates of occurrence for different types of mutations, e.g., transition-transversion bias, insertion-deletion bias

Brandolini’s law: it takes 10 times the effort to debunk bullshit as to generate it

If I were to misdefine “negative selection” or “G matrix”, evolutionary biologists would go nuts because theories and results that are familiar would be messed up by a wrong definition. Likewise, a wrong definition of mutation bias is obvious to those of us who are actual experts, because it induces contradictions and errors in things we know and care about.

The actual usage of “mutation bias” by scientists is broadly consistent with a systematic difference in rates of occurrence for different types of mutations and is not consistent with a forward-reverse bias or with heterogeneity in rates of mutation for different loci or sites. To demonstrate this, here is a simple table showing which meanings fit with actual scientific usage, starting with the 3 types of mutation bias invoked most commonly in PubMed (based on my own informal analysis), and continuing with some other examples. The last two refer to the literature of quantitative genetics, which occasionally makes reference to bias in mutational effects on quantitative traits (either on total variability, or on the direction of effects).

| Effect called a “mutation bias” in the literature | Heterogeneity per locus (or site) | Forward-reverse asymmetry | Systematic diff in rates for diff types |
| --- | --- | --- | --- |
| Transition bias | No | No | Yes |
| GC/AT bias | No* | Yes | Yes |
| Male mutation bias | No | No | Yes |
| Pattern in Monroe, et al. (2022) | Yes* | No | Yes |
| Insertion or deletion bias | No | Yes | Yes |
| CpG bias | No | Possibly | Yes |
| Diffs in mutational variability of traits | Possibly | No | Yes |
| Asymmetric effect on trait value | No | Possibly | Yes |
In the first column are kinds of effects that scientists denote with the literal term “mutation bias” or variants thereof (mutational bias, bias in mutation). The remaining columns indicate whether the noted effect is covered by a definition of mutation bias that also appears in the literature. “Possibly” means that some models of the bias would fit the definition and others would not. CpG bias can’t be modeled correctly as a sitewise bias because it influences transitions and transversions quite differently. The “No” with asterisk means that you could try to model GC/AT bias as a site-wise bias, but this approach will soon break down as sequences change, because mutability is not actually an intrinsic property of a position, but of the sequence context at a position. Likewise, the “Yes” with asterisk means that, whereas Monroe, et al. usually put the focus on regional differences in mutation rate, the detailed pattern is not merely a difference in rates per site, because the underlying model of contextual effects involves things like transition bias and GC/AT bias.

How does one concept of “mutation bias” cover such heterogeneity? Every mutation has a “from” and a “to” state, i.e., a source and a destination. A variety of different genetic and phenotypic descriptors can be applied to these “from” and “to” states, which means that we can define many different categories or types of mutations. Different applications of the concept of mutation bias always refer to types whose rates differ predictably, but there are many different ways of defining types, so there are many different possible mutation biases.

[Figure: the possible point mutations of a nucleotide, with transitions in blue and transversions in red. From Wikimedia Commons.]

Let’s consider transition-transversion bias, GC vs. AT bias, and male mutation bias. The first is defined relative to the chemical categories of purine (A or G) and pyrimidine (C or T): we apply these categories to the source and destination states, and if they are in the same category, that is a transition, otherwise it is a transversion. The second example, GC/AT bias, is based on whether the shift from the “from” to the “to” increases or decreases GC content. This can be defined either as a forward-reverse asymmetry, or as a difference in mutability of the “from” state, e.g., if A and T are simply more mutable than G and C, the result is a net bias toward GC. In the case of male mutation bias, the categories of mutation are defined by whether the “from” context is male or female.

Note that transition-transversion bias is not a site-wise bias: every nucleotide site is the same in the sense of having 1 transition and 2 transversions (one blue arrow and 2 red arrows in the figure above). Also, transition bias is not a forward-reverse bias, but a difference between two types of fully reversible rates, e.g., under transition bias, the transitions A —> G and G —> A both have a higher rate than the transversions A —> T and T —> A. An insertion-deletion bias is a forward-reverse bias, but it is not a site-wise bias, in the sense that every site has the same set of possible insertions and deletions.
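To make the from-and-to logic concrete, here is a minimal sketch (the function names and encoding are mine, purely for illustration) that classifies any point mutation under two of these type schemes at once:

```python
# Classify a point mutation under two independent "type" schemes,
# each defined by descriptors applied to the "from" and "to" states.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def ts_or_tv(src, dst):
    """Transition if source and destination fall in the same chemical
    class (purine or pyrimidine); otherwise transversion."""
    same = ({src, dst} <= PURINES) or ({src, dst} <= PYRIMIDINES)
    return "transition" if same else "transversion"

def gc_effect(src, dst):
    """Does the change increase, decrease, or preserve GC content?"""
    delta = (dst in "GC") - (src in "GC")
    return {1: "GC-gaining", 0: "GC-neutral", -1: "GC-losing"}[delta]

for src, dst in [("A", "G"), ("A", "T"), ("G", "T"), ("T", "C")]:
    print(f"{src} -> {dst}: {ts_or_tv(src, dst)}, {gc_effect(src, dst)}")
```

The two schemes are orthogonal: the same mutation gets a classification under each, which is why a transition bias and a GC/AT bias can coexist in a single mutation spectrum.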

Thus, defining mutation bias as “differences between loci in mutation rates” (Svensson, 2022) is inconsistent with transition bias, GC/AT bias, and male mutation bias, the 3 most familiar and commonly invoked types of mutation bias in the scientific literature. The magnitude of this error is roughly the same as that of defining “genome” as the RNA molecules that store hereditary information. Some genomes are indeed made of RNA. We can imagine a novice RNA virus researcher, e.g., a summer student, who hears everyone in the lab talking about the “genome,” which is RNA, and who assumes on this basis that all genomes are RNA. But no experienced scientist who has worked with a variety of organisms, read widely, or attempted to teach students would make this kind of error of defining something in a way that excludes the most familiar cases.

[Figure: erroneous definitions of “mutation bias” from Svensson (2022).]

Why is this called a “bias”? “Mutation bias” (“mutational bias”, “bias in mutation”) has been a term of art in molecular evolution for over half a century, since Cox and Yanofsky (1967). The term is perfectly apt and useful. A bias is a systematic or predictable asymmetry, and the term is most congenial when this asymmetry applies to categories with some structural symmetry, e.g., insertions vs. deletions. The term is used this way in various areas of science and engineering, e.g., a biased estimator in statistics is one that yields a systematically low or high estimate.

Nonetheless, some evolutionary biologists don’t want you to have this useful term in your vocabulary. Some will object that “bias” should be avoided because it implies an effect on fitness, but that is just because some people think everything is about fitness and want to restrict your language to force you into their belief system. Salazar-Ciudad rejects the use of “bias” on the grounds that it implies a background assumption of uniformity with no mechanistic justification.

Sadly, we also expect that traditionalists will dilute the concept of mutation bias as part of a cultural appropriation strategy that has gone completely off the rails in the past 20 years (see my blog on appropriation). That is, based on what we have seen here, here and in a recent anonymous review, traditionalists will undermine the distinctive concept of mutation bias by blurring it together with chance effects or heterogeneity, because this makes it easier to broaden the issue and claim it for tradition using “we have long known” arguments, e.g., “we have long known that mutation rates are not all the same” will be used to suggest that there is nothing new here.

The problem with this line of argument is that systematic and patterned differences in properties between classes of things are not the same thing as idiosyncratic or unpatterned heterogeneity among a set of items. More importantly, what is novel is not the claim that mutation biases exist, but linking them theoretically and empirically with biases in evolutionary outcomes. More specifically, what is novel includes

  • the emerging empirical proof of large and predictable effects of mutation biases on the changes involved in adaptation
  • having a formal body of theory to leverage knowledge of mutation biases to make predictions about evolution, including adaptive evolution
  • how this formal body of theory creates an equivalence between mutational and developmental biases that was not known to exist
  • how this theory provides a previously unrecognized causal grounding for ideas that used to be considered dubious or even heretical, including biases in variation causing directional trends, taxon-specific propensities, and a tendency for evolution to prefer intrinsically likely structures.
  • the much broader theoretical recognition that including a rate-dependent process of introduction is a required post-Synthesis revision to our understanding of evolutionary dynamics

However, the traditionalists have a lot of power, which means that they can set the terms of debate and they can simply ignore arguments that work against them, or reframe things using straw-man arguments and excluded middle arguments, e.g., “we see nothing revolutionary with X” is utterly devoid of merit but has been an effective go-to argument for traditionalists in online discussions or when talking to reporters. It’s a very easy argument to make and can be applied using each of the claims of novelty above as the value of X. As a rhetorical device, it can be coupled very effectively with a misrepresentation of X that broadens it into something trivial, e.g., rather than saying

“we see nothing revolutionary with how this formal body of theory creates an equivalence between mutational and developmental biases that was not known to exist”

instead say

“we see nothing revolutionary with a theory that applies both to molecules and morphologies— we have long used such models”.

The defense of tradition often relies on fatuous arguments that broaden and trivialize new findings. Exploring them is a useful exercise to build awareness. I wish that reporters knew how to recognize this garbage.

By the way, Wikipedia gets the definition of mutation bias right. But many other sources get this wrong and say wrong things, e.g.,

  • “Mutation bias. A pattern of mutation in DNA that is disproportional between the four bases, such that there is a tendency for certain bases to accumulate.” (Encyclopedia.com)
  • “Mutation bias. Bias in the mutation frequencies of different codons, affecting the synonymous to nonsynonymous rate ratio. Mutation bias results in an accelerated rate of amino acid replacement in functionally less constrained regions.” [that statement is not true] (Oxford Reference)

And many sources simply do not define the term because it is not on the radar for most evolutionary biologists.

References

E. Cox and C. Yanofsky. Altered base ratios in the DNA of an Escherichia coli mutator strain. Proc. Natl. Acad. Sci. USA, 58:1895–1902, 1967.

A bit about non-obviousness and theories

It has been said that the reception of any successful new scientific hypothesis goes through three predictable phases before being accepted. First, it is criticized for being untrue. Secondly, after supporting evidence accumulates, it is stated that it may be true, but it is not particularly relevant. Thirdly, after it has clearly influenced the field, it is admitted to be true and relevant, but the same critics assert that the idea was not original.

Zihlman, A.L., Pygmy chimps, people, and the pundits. New Scientist 104, 39–40 (1984).

A model of inventiveness for theories

The dismissal-resistance-appropriation pattern called the “stages of truth” has been noted for well over a century (Shallit, 2005). Zihlman’s reference to “the same critics” suggests that the attempt to appropriate a new idea for tradition overlaps with, or closely follows, the stage of active resistance. This blog about appropriation uses recent examples to illustrate how pundits normalize or domesticate new results, claiming them for tradition. These examples indicate that the gatekeepers who challenge the validity and importance of a new idea are also the same people who attack its originality, both by selective quote-mining, and by mashing up new ideas with older ideas, so as to make the new ideas seem more incremental.

This presents a dual challenge to scientists attempting to advance knowledge via new theories, who face both (1) the ordinary scientific challenge of communicating the new idea and arguing for its importance and plausibility, and (2) the struggle to distinguish the new idea and defend its novelty against the work of gatekeepers. A new idea will not get the attention it deserves if gatekeepers are successful in convincing everyone that the idea is not new.

Although I knew from the beginning that we would face this dual challenge over the theory of arrival biases, the naked brutality of the appropriation process continues to stun me. The first shock was Lynch’s (2007) ridiculously bad take “The notion that mutation pressure can be a driving force in evolution is not new” (explained here) citing Charles Darwin, ourselves, and about a half-dozen others, none of whom proposed the same theory. The first major commentary on mutation-biased adaptation by someone not directly involved came out in TREE in 2019: it was a shameful hit-piece that misrepresented the theory and the empirical case, and offered multiple attempts at appropriation— the theory is merely part of the neutral theory, it comes from Haldane and Fisher, it is nothing more than contingency, it is the traditional view, etc (when I say “shameful hit piece”, I mean that TREE cooked up a highly negative piece without getting feedback from the people whose work was targeted, and when we objected to the garbage they published, we were not given space for a rebuttal).

Apparently I did not learn from these experiences, because I was once again dumbstruck when Cano, et al. (2022) came out, showing an effect of mutation-biased adaptation predicted from theory, and one press release presented this work literally as “helping to return Darwin’s second scenario to its rightful place in evolutionary theory” as if the main idea came from Darwin.

However, my focus here is not on this process of appropriation or theft, but on what makes a theory new, and how to evaluate newness. The starting point is simply that something is profoundly wrong with suggesting that a population-genetic theory of arrival biases— proposing a specific linkage between tendencies of variation and predictable tendencies of evolution— is not new, when the theory appeared only in 2001 and is still unknown to most evolutionary biologists. You won’t find it in your textbook or in the Oxford Encyclopedia of Evolution, or in the canonical texts of the Modern Synthesis, or in the archives of classical population genetics.

Clearly we need some different way to think about scientific novelty beyond “whatever I can link to tradition via vague statements of dead authorities is not new.”

Here I’m going to use the patent process as an example. Patenting an invention and proposing a theory are two different things, but the comparison is useful in my opinion. The law is often where philosophy meets practice: where abstract principles become the basis for adjudicating concrete issues between disputing parties— with life, liberty and treasure at stake.

Patent law instantiates a theory of novelty, and it hinges on non-obviousness. Under US patent law, a successful patent application shows that an invention meets the 4 criteria of eligibility (patentable subject matter), newness, usefulness, and non-obviousness. Eligibility is mostly about whether the proposed invention is the right kind of thing, e.g., a process, machine, or composition of matter, rather than something non-patentable such as an abstract idea or law of nature. I will set aside that criterion as irrelevant for our purposes. I will address usefulness in a moment. An invention is new if it is not found in prior art. In patent law, prior art is defined in a very permissive way, to include any prior representation, whether or not it was ever manufactured or made public, whereas in science, we might want to restrict the scope of prior art to widely available published knowledge.

Thus, newness is easy to understand: make one small improvement on prior art, and that is considered new under patent law. But improvement or newness is not enough. In order to be patentable, a new and useful invention must be a non-obvious improvement— non-obvious to a practitioner or knowledgeable expert. In the patent law for some other countries (e.g., Netherlands), the latter two criteria are sometimes combined by saying that the invention must be inventive, meaning both new and non-obvious. An example of an obvious improvement would be to take a welding process widely used with airframes and apply it to bicycle frames.

The most distinctive (i.e., potentially non-obvious) aspects of the theory from Yampolsky and Stoltzfus (2001) are that (1) it focuses on the introduction process (the transient of an allele frequency as it departs from zero) as a causal process that leads to distinct rules that clash with the classic conception of population-genetic causes as mass-action forces shifting frequencies of pre-existing alleles; (2) it links tendencies of evolution to tendencies of variation without requiring neutrality or high mutation rates, in conflict with the mutation-pressure theory of Haldane and Fisher; and (3) it purports to unite disparate phenomena including effects of mutation bias, developmental bias, and the findability of intrinsically likely forms.
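To see what point (1) means in practice, here is a minimal sketch in the spirit of the two-alternatives setup of Yampolsky and Stoltzfus (2001); the parameter values are arbitrary numbers of my own choosing, not values from the paper. A population at genotype ab can take one adaptive step, either to Ab or to aB:

```python
import math

def p_fix(s, N):
    """Kimura's diffusion approximation for the probability of fixation
    of a new mutant with selective advantage s (roughly 2s when 2Ns >> 1)."""
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
u1, s1 = 2e-8, 0.01   # option Ab: favored 2-fold by mutation
u2, s2 = 1e-8, 0.02   # option aB: favored 2-fold by selection

# Origin-fixation logic: the rate of each outcome is the rate of
# introduction (2N copies x mutation rate) times the fixation probability.
rate1 = 2 * N * u1 * p_fix(s1, N)
rate2 = 2 * N * u2 * p_fix(s2, N)
print(f"P(Ab wins) = {rate1 / (rate1 + rate2):.3f}")   # ~0.5
```

Both options are strongly beneficial and nothing is neutral, yet the twofold mutation bias offsets the twofold selective advantage (with fixation probability roughly 2s, the odds reduce to u1·s1 : u2·s2). A bias in introduction becomes a bias in outcomes, which is the sense in which the introduction process acts as a cause with its own rules.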

The non-obviousness of a pop-gen theory of arrival biases

Thus, as outlined above, we may consider this theory as a useful and possibly non-obvious improvement on the prior art of Haldane and Fisher.[1] Is it non-obvious relative to all prior art? Because we are asking about the prior non-obviousness, this is a somewhat hypothetical question that must be guided by our understanding of history. Would the theory have been non-obvious before 2001? What was the state of the art (in theoretical population genetics) bearing on the question of internal variational biases as possible causes of directionality or trends in evolution?

Just because something is obvious does not mean it will be proposed. An idea could be obvious but no one writes it down because no one cares enough to do so. Indeed, a big problem here is that, typically, the people with the population-genetics expertise are not the same people as the developmentalists and paleontologists searching for internal causes. Population geneticists are obsessed with selection, Fisher, selection, standing variation and selection. We could search through stacks of population genetics papers without finding out what anyone thinks about generative biases. I never tire of pointing out that Edwards (1977) is a book about theoretical population genetics, with hundreds of equations, and it does not have a mutation rate anywhere.

We could solve this problem with a time machine: go back into the 20th century, and get some theoreticians together with internalists to see if they could combine the classic idea of internal variational biases with population genetics. For instance, we could go back to the 1980s and get together some population geneticists with those early evo-devo pioneers like Pere Alberch literally calling for attention to developmental biases in the introduction of variation. Maybe we could add some philosophers of science.

In fact, we do not need a time machine, because this meeting of the minds actually happened, with results recorded in the scientific literature. In the late 1970s and early 1980s, Gould, Alberch and others began to suggest some kind of important evolutionary role for developmental “constraints” that was not included in traditional thinking. I will quote from the wikipedia page on this:

Similar thinking [about generative biases acting prior to selection] featured in the emergence of evo-devo, e.g., Alberch (1980) suggests that “in evolution, selection may decide the winner of a given game but development non-randomly defines the players” (p. 665)[23] (see also [24]). Thomson (1985), [25] reviewing multiple volumes addressing the new developmentalist thinking— a book by Raff and Kaufman (1983) [26] and conference volumes edited by Bonner (1982) [27]  and Goodwin, et al (1983) [28] — wrote that “The whole thrust of the developmentalist approach to evolution is to explore the possibility that asymmetries in the introduction of variation at the focal level of individual phenotypes, arising from the inherent properties of developing systems, constitutes a powerful source of causation in evolutionary change” (p. 222). Likewise, the paleontologists Elisabeth Vrba and Niles Eldredge summarized this new developmentalist thinking by saying that “bias in the introduction of phenotypic variation may be more important to directional phenotypic evolution than sorting by selection.” [29]

They are literally talking about biases in the introduction of variation. In 1984, a group of scientists and philosophers, all highly regarded, convened to consider how development might shape evolution. In 1985, these 9 eminent scientists and philosophers collaborated to publish “Developmental constraints and evolution”:

  • John Maynard Smith, population geneticist trained with Haldane
  • Richard Burian, philosopher of science
  • Stuart Kauffman, later wrote The Origins of Order
  • Pere Alberch, developmental biologist and evo-devo pioneer
  • John H Campbell, evolutionary theorist and philosopher of science
  • Brian Goodwin, developmental biologist, author of How the Leopard Changed Its Spots
  • Russ Lande, developer of the multivariate generalization of QG (trained with Lewontin)
  • David Raup, paleontologist
  • Lewis Wolpert, developmental biologist

The authors raised the question of what might give developmental biases in the production of variation a legitimate causal status, i.e., the ability to “cause evolutionary trends or patterns.” The only accepted theory of evolutionary causation was that evolution is caused by the forces of population genetics, i.e., mass-action pressures acting on allele frequencies. They confronted the issue thus:

[Figure: quoted passage from Maynard Smith, et al. (1985) calling for a reexamination of the causal status of developmental constraints.]

Although they clearly call for a “reexamination”, they did not provide one, other than some vague suggestions of neutral evolution. These suggestions are unsatisfying because neutrality, though consistent with Haldane-Fisher reasoning, offers no basis for claiming some special promise of evo-devo not included in standard thinking.

Another way of articulating the prior art would be to point to the verbal theories noted above, e.g., the highly developed example of Vrba and Eldredge (1984).[2] Is the population-genetic theory of Yampolsky and Stoltzfus (2001) an obvious improvement on this verbal theory? Does it merely supply the math that would be obvious from reading Vrba and Eldredge? Again, we can answer this question by reading Maynard Smith, et al (1985), because they are clearly representing the verbal theory from evo-devo, which Pere Alberch (one of the authors) had helped to promote. So, if a population-genetic theory of arrival biases was an obvious clarification of this verbal theory, then Pere Alberch could have asked John Maynard Smith and Russ Lande to translate his verbal theory so as to yield a proper population-genetic grounding for his claims. Clearly that did not happen.

I could cite some other examples, but the case could be made entirely on Maynard Smith, et al. (1985). Why does a single paper make such a strong case? First, the author list includes a set of people who are clearly experts on the relevant topics. Second, the authors were clearly focused on the right issue, and were clearly motivated to find a theory to account for the efficacy of biases in variation to influence the course of evolution, arguably the central claim of the paper. Third, this was a seminal paper that got lots of attention and became highly cited (today: 1800 citations). This means that hundreds of other experts must have read and discussed the paper: if this particular set of 9 authors had missed something, others would have pointed it out in response. Instead, years later, critics such as Reeve and Sherman (1993) complained that Maynard Smith, et al. had never provided an evolutionary mechanism.

Thus, experts in the field of evolutionary biology confronted the issue of how biases in variation could act as a population-genetic cause, and these experts cited the prior art of Haldane and Fisher and the verbal claims of the developmentalists, but they did not find a causal grounding for these internalist claims in a population-genetic theory of arrival biases.

Thus, a population-genetic theory of arrival biases was non-obvious in 1985.

Indeed we could just take this back to 1930, referring to Haldane and Fisher and the opposing-pressures argument. They confronted the issue of whether internal biases in mutation could be the cause of tendencies or trends, and concluded against this idea, on the grounds that this would require high mutation rates unopposed by selection. Loyalists are going to object to this and say that they were just addressing evolution by mutation pressure, not introduction biases, but this objection is based on hindsight. Haldane and Fisher did not say “This is a narrow argument based on mutation pressure, don’t apply it to introduction biases.” The opposing-pressures argument is important precisely because it was presented as a general and rigorous theoretical argument about the possible sources of direction in evolution. Fisher’s (1930) argument, as given, is fully general, e.g., it justifies his advice to researchers to ignore the causes of mutation on the grounds of being irrelevant to how evolution turns out:

“For any evolutionary tendency which is supposed to act by favouring mutations in one direction rather than another, and a number of such mechanisms have from time to time been imagined, will lose its force many thousand-fold, when the particulate theory of inheritance, in any form, is accepted…

The whole group of theories which ascribe to hypothetical physiological mechanisms, controlling the occurrence of mutations, a power of directing the course of evolution, must be set aside, once the blending theory of inheritance is abandoned. The sole surviving theory is that of Natural Selection, and it would appear impossible to avoid the conclusion that if any evolutionary phenomenon appears to be inexplicable on this theory, it must be accepted at present merely as one of the facts which in the present state of knowledge seems inexplicable. The investigator who faces this fact, as an unavoidable inference from what is now known of the nature of inheritance, will direct his inquiries confidently towards a study of the selective agencies at work throughout the life history of the group in their native habitats, rather than to speculations on the possible causes which influence their mutations.”

However, it is impossible to win arguments with loyalists about the holy trinity, so I’m not going to make my case with Haldane and Fisher. I’m going to make my case with Maynard Smith, et al.

More non-obviousness with Maynard Smith and Kauffman

To further explore this issue of non-obviousness, let us consider John Maynard Smith’s knowledge of King’s (1971) codon argument. As I have explained (e.g., this blog, or in my book), King’s codon argument can be read as an early intuitive appeal to an implication of arrival biases. King argued that the amino acids with more codons would be more common in proteins (as indeed they are) because they offer more options, explaining this with an example implying an effect of variation and not selection. Originally, King and Jukes (1969) proposed this as an implication of the neutral theory, but King (1971) quickly realized that this did not depend on neutrality, but would happen even if all changes were adaptive [3].

The way we would explain this today is that the genetic code is a genotype-phenotype map that assigns more codon genotypes to certain amino-acid phenotypes. Because these amino acids occupy a greater volume of the sequence-space of genotypic possibilities, they have more mutational arrows pointed at them from other parts of sequence space: this makes them more findable by an evolutionary process that explores sequence space (or genotype space) via mutations. Because of this effect of findability or “the arrival of the frequent”, the amino acids with the most codons will tend to be the most common in proteins. This argument does not require neutrality, but merely a process subject to the kinetics of introduction. The form of the argument maps to a more general argument about the findability of intrinsically likely phenotypes, which is one of the meanings of Kauffman’s (1993) concept of self-organization.
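The arrows metaphor is easy to verify with a small tally (a sketch of my own, assuming the standard genetic code and counting only single-nucleotide changes from non-stop codons of a different amino acid):

```python
from collections import Counter
from itertools import product

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AA)}

arrows_in = Counter()  # mutational arrows pointing AT each amino acid
for codon, aa in CODE.items():
    if aa == "*":
        continue
    for i in range(3):
        for b in BASES:
            if b == codon[i]:
                continue
            source = codon[:i] + b + codon[i + 1:]
            if CODE[source] not in ("*", aa):
                arrows_in[aa] += 1  # one single-step path into this codon

for aa, n in sorted(arrows_in.items(), key=lambda kv: kv[1]):
    n_codons = sum(1 for v in CODE.values() if v == aa)
    print(f"{aa}: {n_codons} codon(s), {n} incoming arrows")
```

Running this, the 1-codon amino acids (Met, Trp) collect fewer than 10 incoming arrows each, while the 6-codon blocks collect over 30: more codons means a greater volume of sequence space, and thus greater findability.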

Maynard Smith literally invented the concept of sequence space (Maynard Smith, 1970). He also knew about King’s codon argument, which he quoted in his 1975 book, in a passage contrasting the neutralist and selectionist views:

“Hence the correlation does not enable us to decide between the two.  However, it is worth remembering that if we accept the selectionist view that most substitutions are selective, we cannot at the same time assume that there is a unique deterministic course for evolution.  Instead, we must assume that there are alternative ways in which a protein can evolve, the actual path taken depending on chance events.  This seems to be the minimum concession the selectionists will have to make to the neutralists; they may have to concede much more.” 

p. 106 of Maynard Smith J. 1975 (same text in 1993 version). The Theory of Evolution. Cambridge: Cambridge University Press.

This just drives home the point about non-obviousness even further. To someone who knows the theory already, King’s argument might look familiar, but Maynard Smith does not recognize a theory connecting generative biases with evolutionary biases that would be useful for understanding evo-devo and solving the challenge of “constraints”. Instead, he refers only to a theory that allows “chance events” to affect the outcome of evolution.[4]

A few years later, Kauffman (1993) published his magnum opus, The Origins of Order, offering findability arguments under the heading of “self-organization.” In Kauffman’s thinking, selection and “self-organization” worked together to produce order. But Kauffman never offered a causal theory explaining self-organization as a population-genetic process. His simulations typically didn’t include the introduction process because they did not include any population genetics at all (in his models, evolutionary change happens when a discrete particle takes an evolutionary step in a discrete space). So, his claims were a bit of a mystery to population geneticists. But there is no need for mystery: we now know that the findability effect is a matter of biases in the introduction process. This “arrival of the frequent” effect, in regard to the findability of RNA folds, has been demonstrated clearly in work from Ard Louis’s group (see Dingle, et al., 2022). So, we can count Kauffman (1993) as another famous example illustrating non-obviousness, because clearly the theory was non-obvious to Kauffman, and his book was widely read and discussed.

This is an important lesson for traditionalists who tend to assume that theories are timeless universals per Platonic realism and that they are all obvious from assembling the parts. Maynard Smith had the parts list, in a sense, and approached the issue from multiple angles. He got useful clues from the thinking of Jack King. In the circumstances leading up to the famous 1985 paper, Maynard Smith and another brilliant population geneticist (Russ Lande) were placed (figuratively) on an island with some developmental biologists, a philosopher, a paleontologist, and 2 unclassifiable deep thinkers (Campbell and Kauffman), and they were tasked with making sense of the causal role of developmental biases. In the end, they did not articulate a theory of arrival biases as a potential solution to this problem.

This is no criticism of Maynard Smith. Maynard Smith himself understood the point that I am trying to make. I had the opportunity to meet him several times in the 1990s when I was with the Canadian Institute for Advanced Research program in evolutionary biology, and he was some kind of outside advisor who came to our meetings. Anyone who met Maynard Smith in those days knows he was a great one for sharing stories and talking science with students and post-docs over a beer. He once told a series of humorous anecdotes about scientific theories he failed to discover but almost discovered. He had run some numbers on the logistic growth equation and found some odd behavior, and then set the problem aside— only to realize much later that he had stumbled on deterministic chaos. The neutral theory was another theory that he claimed to have almost-but-not-quite discovered. I can’t remember the other ones.

Confronting attempts at appropriation and minimization

Now, with this background in mind, we can reconsider attempts to appropriate or minimize the theory of Yampolsky and Stoltzfus. Lynch (2007) and Svensson and Berger (2019) appear to be suggesting that the theory is not new, but is merely part of a tradition going back a century or more.

In science, the way to establish X in prior art is to find a published source and then cite it. Let us consider what this might look like:

“Stoltzfus and Yampolsky (2001) were not the first to propose and demonstrate a population-genetic theory for the efficacy of mutational and developmental biases in the introduction of variation: such a theory was already proposed and demonstrated by Classic Source and subsequently was cited in reviews such as Well Known Review and textbooks such as Popular Textbook”.

Of course they say nothing like that, because they can’t: no such sources exist. In appropriation arguments, this gap between the pundit’s purpose and reality is filled with misleading citations and hand-waving arguments. To go back to the analogy with patents: if this were a patent law case, every one of the arguments of Svensson and Berger (2019) would be rejected as irrelevant because they simply do not articulate a theory of biases in the introduction of variation as a cause of orientation or direction in evolution.

For instance, Svensson and Berger say that the theory is part of the neutral theory, citing no source for this absurd and fabricated claim. Kimura wrote an entire book about the neutral theory, along with hundreds of papers. Surely if Svensson and Berger were serious, they could cite a publication from Kimura where the theory is expressed. But they have not done this. None of the sources that they cite articulates a theory of arrival biases. Their insinuation that our work-product is not original relative to work cited from Kimura, Haldane, Fisher, or Dobzhansky must be rejected as, not merely false, but frivolous. Likewise, Lynch’s bad take is so bad it falls in the “not even wrong” category.

Now, having dismissed attacks on the newness of the theory, let us consider the critique of Svensson and Berger (2019) as an attack on non-obviousness, i.e., they may be insinuating that the theory, though new, fails the criterion of being non-obvious, and therefore is not a genuine invention worthy of recognition. This is one way to read their argument in Box 1, in which they derive an equation that can be used to model the theory, building on other equations from Haldane, Fisher and Kimura.

The implication is that the theory is obvious because one can put it together from readily available bits. But this just begs the question: does that really mean that it is obvious? If anyone can put the theory together from readily available bits, why didn’t Fisher and Haldane do this? Why didn’t Maynard Smith and Lande in 1985?

The attitude of Svensson and Berger (2019) seems to be a case of inception: we planted this theory in their heads, and now they see it everywhere. Yet before we proposed this theory, vastly greater minds than Svensson and Berger had access to the same Modern Synthesis canon and failed to see the theory.

Clearly mathematical complexity or cleverness is not the right standard for judging the non-obviousness of scientific theories. The equation is, at best, a model of the theory, useful if you already understand what the theory says. Writing down an equation with the form of a ratio of origin-fixation rates does not give you the theory, e.g., Lewontin has a structurally similar equation on p. 223 of his 1974 book (it is a ratio of steady-state origin-fixation rates for beneficial vs. neutral changes, which reduces to $4Ns \, u_b / u_n$). If Lewontin had understood the theory of arrival biases, he could have put it in the Spandrels paper and this would have made the arguments about the role of non-selective factors much more credible.
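For readers who want the arithmetic, here is the standard origin-fixation derivation of a ratio of that form (my notation, not Lewontin’s):

```latex
% Steady-state origin-fixation rates in a diploid population of size N
% (2N gene copies), with u_b and u_n the beneficial and neutral mutation
% rates per copy, and \pi the probability of fixation of a new mutant:
\begin{align*}
  k_b &= 2N u_b \, \pi_b \approx 2N u_b \cdot 2s
      && \text{(beneficial: } \pi_b \approx 2s \text{ when } 2Ns \gg 1\text{)} \\
  k_n &= 2N u_n \cdot \frac{1}{2N} = u_n
      && \text{(neutral: } \pi_n = 1/2N\text{)} \\
  \frac{k_b}{k_n} &\approx \frac{4Ns \, u_b}{u_n}
\end{align*}
```

The ratio is just bookkeeping; the theory is about what the introduction process does when rates of introduction differ among alternatives.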

Why was a population-genetic theory of arrival biases so non-obvious? I’ve been arguing for years that this reflects a “blind spot” in evolutionary thinking, the combined effect of ingrained habits of thought, convenient approaches to modeling, specific notions of causation, and the overwhelming influence of neo-Darwinism.

However, understanding why the theory was non-obvious is a separate issue. Even if the theory is obvious today (a dubious claim, IMHO), it clearly was not obvious in the past, and the proof of this non-obviousness is that Maynard Smith, Lande, Kauffman, Lewontin and hundreds of other well qualified evolutionary thinkers had the motive and the opportunity to propose this theory and they did not.

References

King JL, Jukes TH. 1969. Non-Darwinian Evolution. Science 164:788-797.

King JL. 1972. The Role of Mutation in Evolution. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability (eds Le Cam, Neyman, and Scott), 1971, Berkeley, California.

King JL. 1971. The Influence of the Genetic Code on Protein Evolution. In:  Schoffeniels E, editor. Biochemical Evolution and the Origin of Life. Viers: North-Holland Publishing Company. p. 3-13.

Mani GS, Clarke BC. 1990. Mutational order: a major stochastic process in evolution. Proc R Soc Lond B Biol Sci 240:29-37.

Maynard Smith J. 1970. Natural selection and the concept of a protein space. Nature 225:563-564.

Maynard Smith J. 1975. The Theory of Evolution. Cambridge: Cambridge University Press.

Maynard Smith J, Burian R, Kauffman S, Alberch P, Campbell J, Goodwin B, Lande R, Raup D, Wolpert L. 1985. Developmental Constraints and Evolution. Quart. Rev. Biol. 60:265-287.

Oster G, Alberch P. 1982. Evolution and bifurcation of developmental programs. Evolution 36:444-459.

Reeve HK, Sherman PW. 1993. Adaptation and the Goals of Evolutionary Research. Quarterly Review of Biology 68:1-32.

Shallit J. 2005. Science, Pseudoscience, and The Three Stages of Truth.

Vrba ES, Eldredge N. 1984. Individuals, hierarchies and processes: towards a more complete evolutionary theory. Paleobiology 10:146-171.

Notes

[1] The key bits of prior art, to my knowledge, are:

  • opposing pressures argument of Haldane (1927, 1932, 1933), Fisher (1930)
  • verbal arguments of King (1971, 1972). See note 3.
  • verbal theory of Vrba and Eldredge (1984). See note 2.
  • Mani and Clarke (1990), which shows that mutational order is influential but treats it as a stochastic variable, rather than proposing a theory of biases and generalizing on that

[2] The piece by Vrba and Eldredge (1984) is part of the paleo debate of the 1970s and 1980s. They use very general abstract language, following on early authors such as Oster and Alberch. They refer to biases in the introduction (or production) and sorting (reproductive sorting, to include selection or drift), making a neat dichotomy that they apply at each level of a hierarchy. So, clearly, they see this as a fundamental kind of causation that can be extrapolated to a hierarchy. The strange thing about the argument is the implicit assumption that biases in the introduction of variation are a kind of causation already recognized at the population level, which is not correct. So, the argument is not situated properly. And, because it includes no demonstration, one could not be sure (in 1984) whether the theory would actually work or not.

[3] King (1971) presents a verbal argument with a concrete example that is the closest thing I have seen to an earlier statement of the theory of Yampolsky and Stoltzfus (2001). First, he clearly means for this idea to be general. That is the significance of the diagram with the arrows coming out from a point in a blank space, with some pointing up (beneficial), some laterally (neutral), and many down (deleterious). I have used diagrams like that myself. So, he is clearly aiming for generality. But he has almost nothing to say on where the biases come from. His motivating example is the genetic code but he does not cite any other example. He does not reference prior art and does not explain how his theory is different. He does not offer proof of principle other than the verbal model. So, a reader would have to question whether this idea would be consistent with pop gen.

[4] I have seen this kind of reaction many times. Saying that chance affects evolution is a familiar thing. Evolutionary biologists are accustomed to a dichotomy of selection (or necessity) and chance, and it is familiar to invoke “chance” as if it were a cause. But it is not familiar in evolutionary biology to refer to generative processes as evolutionary causes that impose biases on the course of evolution. As of today, there is no language for this that is acceptable to traditionalists. So, when traditional thinkers are confronted with the theory of Yampolsky and Stoltzfus (2001), they often translate this into a familiar selection-vs-chance dichotomy and say that this is a theory about how “chance” affects evolution, or refer to “contingency.”

King’s amino-acid findability argument

Whereas Kimura (1968) proposed his version of the Neutral Theory of Molecular Evolution as the answer to an esoteric problem of population genetics theory, King and Jukes (1969) proposed a theory driven by the results of macromolecular sequence comparisons. Molecular evolution, in their view, demanded “new rules.” As evidence for neutrality, they pointed to a general correspondence between the frequencies of amino acids in protein sequences, and the frequencies expected from translating randomly generated sequences with the genetic code.

[Figure: observed amino-acid frequencies plotted against the number of codons per amino acid.] At the bottom left are Met and Trp with 1 codon each and then, proceeding with some variation up and to the right, we have the 2-codon blocks (Cys, His, Tyr, Phe, Gln, Asn, Asp, Lys), Ile with 3 codons, then the 4-codon blocks (Pro, Thr, Val, Ala, Gly), then the 6-codon blocks (Leu, Ser, Arg), with Arg being a somewhat extreme outlier due to the CpG effect.
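The expected frequencies in this comparison are easy to recompute (a sketch assuming equal base frequencies; King and Jukes used observed nucleotide proportions, so this uniform version is a simplification):

```python
from collections import Counter
from itertools import product

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AA)}

# With equal base frequencies, each of the 61 sense codons is equally
# likely in random coding sequence, so the expected frequency of an
# amino acid is simply its codon count divided by 61.
counts = Counter(aa for aa in CODE.values() if aa != "*")
total = sum(counts.values())  # 61
for aa, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{aa}: {n} codons -> expected frequency {n / total:.3f}")
```

This reproduces the x-axis of the comparison: Met and Trp at 1/61 ≈ 0.016, up to the 6-codon blocks at 6/61 ≈ 0.098.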

However, King rather quickly recanted this argument. It is rare for a scientist to do that, so pay attention. In the proceedings of a 1971 conference that came out 2 years later, King (1973) said that this was not evidence of neutrality, but rather evidence for some kind of indeterministic process dependent on mutation. He explains:

“If a gene is in the process of progressive, adaptive evolution, there might very likely be more than one among the thousand or so possible single-step changes that would be evolutionarily advantageous.  Then the first of these to occur by mutation would have the first chance to take over.  The conditions of selection would then be changed, and it would be too late for the other previously potential candidates.  Thus the probability of fixation [probability of origin-and-fixation] of an amino acid is a function of its frequency of arising by mutation, and this will happen more often to amino acids with more codons.  The eventual distribution of amino acid frequencies will reflect, more or less passively, the peculiarities of an arbitrary genetic code, even if most evolutionary changes are due exclusively to Darwinian adaptive evolution.” (p. 7)

King JL. 1973. The Influence of the Genetic Code on Protein Evolution. In:  Schoffeniels E, editor. Biochemical Evolution and the Origin of Life. Viers: North-Holland Publishing Company. p. 3-13

King did not generalize further on this argument. However, we can see this as an instance of a general form of argument in which phenotypes that are over-represented in genotype-space are more findable due to mutation. Amino acids with larger numbers of codons (in the genetic code) occupy a greater volume of sequence space, analogous to phenotypes with large numbers of genotypes in genotype-space, and this makes them more findable by an evolutionary process that explores sequence space (or genotype space) via mutations. So the amino acids with the most codons will tend to be the most common in proteins, and this argument does not require neutrality, but merely a process subject to biases in the introduction of amino acid phenotypes.

The form of this argument is analogous to that made by Ard Louis and colleagues in regard to RNA folds that are common in sequence space (Dingle, et al. 2022).

King’s two conference papers in this period reveal important thinking about evolution in discrete spaces. King (1973) gives this image combining states, paths, upwardness, and a fitness landscape. Note that this is not merely a passive depiction of “Maynard Smith’s” concept, but represents creative and synthetic thinking.

[Figure: King’s image of paths through a space of discrete states on a fitness landscape.]

And he emphasizes that actually, at any point, there are many upward steps and many lateral (neutral) steps, using this diagram:

[Figure: King’s diagram of many possible steps from a single point, some upward (beneficial), some lateral (neutral), many downward (deleterious).]

What happened to King’s argument? It could have, but apparently did not, stimulate a model like that of Yampolsky and Stoltzfus (2001). However, it was not completely lost to history. Leigh Van Valen repeated the argument in a 1974 paper, attributing it ambiguously either to an “oral” clarification of Lewontin or to Stebbins and Lewontin (1973):

“2. Lewontin also made more plausible another rebuttal that Stebbins and Lewontin (1973) made to a neutralist argument. The latter argument is the general similarity of the proportions of amino acids in proteins to the proportions of their respective codons among all codons, given the observed proportions of the four nucleotides. Now consider a protein sitting in the protein space. There may be several sequences (local adaptive peaks) better than the one it now has. However, these will be unequally available. The most available will be those for which the needed mutations are for amino acids with the most codons, assuming that many of the possible steps to the peak increase fitness. Once the protein has chosen its peak, the final sequence is determined only by selection. Therefore selection can give a correspondence of proportions as easily as drift can.”

The article from Stebbins and Lewontin appears in the same symposium volume as King’s paper. So, all 3 of them probably went to the same 1971 meeting and were exposed to this idea (the papers were not published until 1973). However, the version of the argument in Stebbins and Lewontin is inadequate and cannot have been the source of Van Valen’s version, which is as clear as that of King. So, either Van Valen got it from King but misattributed it to Lewontin, or Lewontin’s “oral” clarification included a greatly improved argument.

Maynard Smith (1975) also repeats King’s argument in his comments on the neutralist-selectionist controversy, concluding that:

“Hence the correlation does not enable us to decide between the two.  However, it is worth remembering that if we accept the selectionist view that most substitutions are selective, we cannot at the same time assume that there is a unique deterministic course for evolution.  Instead, we must assume that there are alternative ways in which a protein can evolve, the actual path taken depending on chance events.  This seems to be the minimum concession the selectionists will have to make to the neutralists; they may have to concede much more.” 

I love this quotation because it reveals a world that most people do not know existed in 1975, a world in which selectionists were not yet entertaining the idea of evolution as a Markov chain of origin-fixation events, but were still using the shifting-gene-frequencies view defended by Stebbins and Lewontin (1973).

References

King JL. 1973. The Role of Mutation in Evolution. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability (eds Le Cam, Neyman, and Scott) Berkeley, California.

King JL. 1971. The Influence of the Genetic Code on Protein Evolution. In:  Schoffeniels E, editor. Biochemical Evolution and the Origin of Life. Viers: North-Holland Publishing Company. p. 3-13.

Maynard Smith J. 1975. The Theory of Evolution. Cambridge: Cambridge University Press.

Stebbins GL, Lewontin RC. 1973. Comparative evolution at the levels of molecules, organisms and populations. In: Sixth Berkeley Symposium on Mathematical Statistics and Probability (eds Le Cam, Neyman, and Scott). Berkeley, California.

Van Valen L. 1974. Molecular evolution as predicted by natural selection. Journal of Molecular Evolution 3:89-101.

Normalizing and appropriating new scientific findings

This is a long but organized dump of some thoughts on a particular type of distortion that arises from an attitude of conservatism or traditionalism. It is part of a longer-term attempt to understand controversy and novelty, with the practical goal of helping scientists themselves to cut through bullshit and learn how to interpret new findings and reactions to new findings. The topics here in rough order are:

  • anachronism and the Sally-Ann test
  • the stages of truth, ending with normalization and appropriation
  • back-projection as a means of appropriation
  • ret-conning as a means of appropriation
  • some examples
    • the missing pieces theory in historiography
    • sub- and neo-functionalization in the gene duplication literature
    • “Fisher’s” geometric model
    • Haldane 1935 and male mutation bias
    • responses to Monroe, et al. (2022)
    • ret-conning the Synthesis and the new-mutations view
  • “we have long known” claims
  • Platonic realism in relation to back-projection
  • a gravity model for misattribution
  • pushing back against appropriation

Sally-Ann and the stages of truth

When a new finding appears, this changes the world of knowledge and our perspective on the state of knowledge. After a new result has appeared, reconstructing what the world looked like before the new result can be difficult.

Indeed, for young children, understanding what the world looks like from the perspective of someone with incomplete knowledge is literally impossible. This issue is probed by the “Sally-Ann” developmental test, in which a child is told a story with some pictures, and then is asked a question. The story is that Sally puts a toy in her basket, then leaves, and before she returns, Ann moves the toy to her box. The question is where will Sally look for the toy, in the basket or in the box? A developmentally typical adult will say that Sally will look in the basket where she put it. Sally’s state of knowledge is different from ours: she doesn’t know that her toy was moved to the box.

But children under 4 typically say that Sally will look in the box. They have not acquired the mental skill of modeling the perspective of another person with different information. In their only model of the world— the world as seen from their own perspective— the toy is in the box, so they assume that this is also Sally’s perspective, and on that basis, they predict that Sally will look in the box.

The most egregious kinds of scientific anachronism have the same flavor as this childish error, e.g., describing Darwin’s theory as one of random mutation and selection. It is notoriously difficult for us to forget genetics and comprehend pre-Mendelian thinking on heredity and evolution. For this reason, one often hears the notion that Mendelism supplies the “missing pieces” to Darwin’s theory of evolution, as if Darwin articulated a theory with a missing component in the precise shape of Mendelian genetics, yet did not foresee Mendelian genetics.

[Figure: If a pie has a missing piece, this is because it was baked whole and a piece was taken out. Think about it.]

Historian Peter Bowler loves to mock the missing-pieces story. Darwin did not, in fact, propose a theory with a hole in it for Mendelism: he proposed a non-Mendelian theory based on the blending of environmental fluctuations under the struggle for life, which Johannsen then refuted experimentally. Historian Jean Gayon wrote an entire book about the “crisis” precipitated by Darwin’s errant views of heredity. Decades passed before Darwin’s followers threw their support behind a superficially similar theory combining a neo-Darwinian front end with a Mendelian back end. Then they shut their eyes tightly, made a wish, and the original fluctuation-struggle-blending theory mysteriously vanished from the pages of the Origin of Species. They can’t see it. They can’t see anything non-Mendelian even if you hold the OOS right up in front of their faces and point to the very first thing Darwin says in Ch. 1. All they see is a missing piece. This act of mass self-hypnosis has endured for a century.

Normalization: the stages-of-truth meme

Anachronistic attempts to make sense of the past fit a pattern of normalization suggested by the classic “stages of truth” meme (see the QuoteInvestigator piece), in which a bold new idea is first dismissed as absurd, then challenged as unsupported, then normalized. Depictions of normalization emphasize either that (1) the new truth is declared trivial or self-evident (e.g., Schopenhauer’s trivial or selbstverständlich), or (2) its origin is pushed backwards in time and credited to predecessors, e.g., Agassiz says the final stage is “everybody knew it before” and Sims says

For it is ever so with any great truth. It must first be opposed, then ridiculed, after a while accepted, and then comes the time to prove that it is not new, and that the credit of it belongs to some one else.

This phenomenon is something that deserves a name and some careful research (such research may exist already, but I have not found it yet in the scholarly literature). The general pattern could be called normalization (making something normal, a norm) or appropriation (declaring ownership of new results on behalf of tradition). Normalization or appropriation is a general pattern or end-point for which there are multiple rhetorical strategies. I use the term “back-projection” when contemporary ideas are projected naively onto progenitors, and I sometimes use “ret-conning” when there is a more elaborate kind of story-telling that anchors new findings in tradition and links them to illustrious ancestors. Recognizing these tactics (and the overall pattern) can help us to cut through the bullshit and assess more objectively the relationship of new findings or current thinking to the past.

Back-projection examples (DDC model and Monroe, et al 2022)

The contemporary literature on gene duplication features a common 3-part formula with consistent language for what might happen when a genome contains 2 copies of a gene: neo-functionalization, sub-functionalization or loss (or pseudogenization).

This 3-part formula began to appear after the sub-functionalization model was articulated in independent papers by Force, et al. (1999) and Stoltzfus (1999). Each paper presented a theory of duplicate gene establishment via subfunctionalization, and then used a population-genetic model to demonstrate the soundness of the theory. In this model, each copy of a gene loses a sub-function, such as expression in a particular tissue, but the loss is genetically complemented by the other copy, so that the two genes together are sufficient to do what one gene did previously. Force, et al. called their model the duplication-degeneration-complementation (DDC) model; the model of Stoltzfus (1999) was presented as a case of constructive neutral evolution.

The appearance of this new and somewhat subversive theory— calling on neutral evolution to account for a pattern of apparent functional specialization— sparked a renewed interest in duplicate gene evolution that has been surprisingly durable, continuing to the present day. The article by Force, et al has been cited over 2000 times. That is a huge impact!

As noted, the emergence of this theory induced the use of a now-familiar 3-part formula. Along with this came a shift in how existing concepts were described, using the neat binary contrast of sub versus neo, i.e., “neo-functionalization” refers to the classic idea that a duplicate gene gains a new function, yet the term itself is not traditional, but spread beginning with its use by Force, et al (1999), as shown in this figure.

[Figure: usage of the terms “neofunctionalization” and “subfunctionalization” in the literature over time, rising after 1999.]

Then the back-projection began. Even though this 3-part formula emerged in 1999, references in the literature (e.g., here) began to attribute it to an earlier piece by Austin Hughes that does not propose a model for the preservation or establishment of duplicate copies by subfunctionalization. Instead, Hughes (1994) argued that new functions often emerge within one gene (“gene sharing”) before gene duplication proceeds, i.e., Hughes proposed dual-functionality as an intermediate stage in the process of neo-functionalization (see the discussion on Sandwalk):

A model for the evolution of new proteins is proposed under which a period of gene sharing ordinarily precedes the evolution of functionally distinct proteins. Gene duplication then allows each daughter gene to specialize for one of the functions of the ancestral gene. 

Hughes (1994)

Over time, the back-projection became even more extreme: some sources began to attribute aspects of this scheme to Ohno (1970), e.g., here, or when Hahn (2009) writes:

In his highly prescient book, Susumu Ohno recognized that duplicate genes are fixed and maintained within a population with 3 distinct outcomes: neofunctionalization, subfunctionalization, and conservation of function.

What, precisely, is Hahn saying here? He does not directly attribute the DDC model to Ohno. He seems to refer primarily to outcomes rather than to processes, leaving room for interpretation. Perhaps there is some subtle way in which it is legitimate to apply the word “subfunctionalization” anachronistically, but it isn’t clear what exactly Ohno said that justifies this statement. Of course, Ohno did not use the term “neo-functionalization” either, but there is no anachronism in applying it, because the term was invented specifically as the label for the old and familiar idea of gaining a new function. Again, Hahn does not say explicitly and clearly that the subfunctionalization model comes from Ohno, but this is what the reader will assume.

And this is where the ingenuity of back-projection goes wrong: the more clever you are in weaving a thread backwards from the present into the past, spinning a story that connects current thinking to older sources— older sources that actually used different language and explored different ideas— the more likely you are to mislead people.

Obviously any new theory or finding will have some aspects that are not new. A common strategy of appropriation is to point to familiar parts of a new finding, and present those as the basis to claim that the finding is not new. One version of this tactic is to focus on a phenomenon or process that features either as a cause or an effect in a new theory, and then claim that, because this part was recognized earlier, the theory is not new. For instance, Niche Construction Theory (NCT) is about the reciprocal ways in which organisms both adapt to, and modify, their environment. However, naturalists have recognized for centuries that organisms modify their environment, e.g., beavers build dams and earthworms aerate and condition the soil. Therefore, strategies of appropriation by traditionalists (e.g., Wray, et al; see Stoltzfus, 2017) focus on the way that authors such as Darwin noted how earthworms modify their environment, claiming that this undermines the novelty of NCT.

If this kind of argument were valid, it would mean that we have no need for genuine causal theories in science, e.g., theories that induce sophisticated mathematical relations between measurable quantities, because it equates the recognition of an effect with a theory for that effect. In the Origin of Species, Darwin explicitly and repeatedly invoked 3 main causes of evolutionary modification: natural selection, use and disuse, and direct effects of environment. He did not list niche construction. Saying that niche construction theory is not novel on the grounds that the phenomenology it was designed to explain was noticed earlier is like saying that Newton’s theory of gravity was not novel because humans, going back to ancient times, already knew that heavy things fall [7].

A variety of anachronisms, misapprehensions, and other pathologies of normalization were evident in responses to the recent report by Monroe, et al. (2022) of a genome-wide pattern in Arabidopsis of an anti-correlation between mutation rate and functional density [3]. One commentary was entitled “Who ever thought genetic mutations were random?”, which is outright scientific gaslighting. Another commentary stated that “Scientists have been demonstrating that mutations don’t occur randomly for nearly a century”, citing a 1935 paper from Haldane that does not explicitly invoke either random or non-random mutation, and does not report any systematic asymmetry or bias in mutation rates. I was so mystified by this citation that I read Haldane’s paper line by line about 4 times, and finding nothing, used an online service (scite) to examine the context for about 70 citations to Haldane’s 1935 paper. I found that the paper was mainly cited for the reasons one would expect (sex-linked diseases, mutation-selection balance) until about 2 decades ago, when male mutation bias began to be a hot topic, and then scientists began to cite Haldane’s paper as though it were a source of the idea. In fact, Haldane (1935) does not propose male mutation bias. The closest that he gets to this possibility is to present a mutation-selection balance model for sex-linked diseases with separate parameters for male and female mutation rates, though ultimately his actual estimate is a single rate inferred from the frequency of haemophilic males (“x” in his notation). That is, male mutation bias was back-projected to Haldane, then this pattern was twisted into an even more bizarre claim in regard to Monroe, et al (and whereas the people who propagated this myth probably never stopped to ponder what they were doing, I spent multiple hours checking my own work, illustrating Brandolini’s law: debunking bullshit takes 10 times the effort of producing it).
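For the record, here is the standard textbook version of that balance, in modern notation (my reconstruction; these are not Haldane’s symbols, apart from x). For an X-linked recessive condition with selection s against affected males, female mutation rate $u_f$ and male mutation rate $u_m$, the input of new alleles balances their removal at equilibrium:

$$\frac{2u_f + u_m}{3} \;=\; \frac{s\,x}{3} \qquad\Longrightarrow\qquad \hat{x} \;=\; \frac{2u_f + u_m}{s},$$

where x is the frequency of affected males. Collapsing to a single rate $u_f = u_m = u$ with s near 1 (the fitness of haemophilic males was then severely reduced) gives an estimate of the form $u \approx x/3$. The sexes appear as separate parameters, but nothing in the calculation asserts or estimates $u_m > u_f$, i.e., there is no male mutation bias here.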

An important lesson to draw from such examples is that when new results are injected into evolutionary discourse, this provokes new statements, even if the form of those new statements is a novel defense of orthodoxy, e.g., outrageous takes like “Who ever thought genetic mutations were random?” or “Scientists have been demonstrating that mutations don’t occur randomly for nearly a century.” That is, the publication of Monroe, et al. caused these novel sentences to come into existence, as a form of normalization.

More generally, new work may induce increased attention to older work, rightly or wrongly. The extreme case would be a jump in attention to Mendel’s work, not when it was published in 1865, but when it was “rediscovered” in 1900 [6]. The appearance of Monroe, et al. (2022) stimulated a jump in attention to earlier work from Chuang and Li (2004) and Martincorena, et al (2012). Renewed attention to relevant prior work is salutary in the sense that (1) the later work increases the posterior probability of the earlier claims, and (2) this re-evaluation rightfully draws our attention. However, this is not salutary if (1) the earlier studies failed to have an impact for whatever reason, particularly because they were less convincing, and (2) their existence is now being used retroactively to make a case against the novelty of subsequent work. The work of Martincorena, et al. stimulated a backlash at the time; Martincorena wrote a rebuttal but never published it (it’s still on bioRxiv), and then got out of evolutionary biology, escaping our toxic world for the kinder, gentler field of cancer research. But now his work (and the work of Chuang and Li) is put forth as the basis of “we have long known” claims attempting to undermine the novelty of Monroe, et al. (this and other rhetorical strategies appear in this video from a YouTube science explainer).

“Fisher’s” geometric model

As a more extended example of back-projection, consider the case of “Fisher’s geometric model.”

Given a range of effect-sizes of heritable differences from the smallest to the largest, i.e., effects that might be incorporated in evolutionary adaptation, which size is most likely to be beneficial? Fisher (1930) answered this question with his famous geometric model. The chance of a beneficial effect is a monotonically decreasing function of effect-size, so that the smallest possible effects have the greatest chance of being beneficial. Fisher concluded from this that the smallest changes are the most likely in evolution, i.e., adaptation will occur gradually, by infinitesimals. To put this in more formal terms, for any size of change d, Fisher’s model allows us to compute a chance of being beneficial b = Pr(s > 0), and he showed that b approaches a maximum, b → 0.5, as d → 0.

[Figure: Fisher’s geometric model illustrated by Pavlicev and Wagner (2012). For mutations that change the phenotype from some point P a distance d from the optimum O, the smaller the size of the mutation, the more likely the change represents an improvement (decreased distance to O, blue shaded area). ]
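Fisher’s relationship is easy to verify numerically. Here is a minimal sketch in Python (my own illustration, not Fisher’s notation), using the classic approximation for the chance that a random mutation of magnitude r is beneficial for an n-dimensional phenotype at distance z from the optimum, b(r) = 1 − Φ(r√n / 2z), where Φ is the standard normal CDF; the values of n and z are arbitrary choices for illustration:

```python
from math import erf, sqrt

def prob_beneficial(r, n=50, z=1.0):
    """Approximate chance that a random mutation of magnitude r is
    beneficial, for an n-dimensional phenotype at distance z from
    the optimum (classic approximation for large n)."""
    x = r * sqrt(n) / (2 * z)
    return 0.5 * (1 - erf(x / sqrt(2)))  # 1 - Phi(x)

for r in [0.5, 0.2, 0.1, 0.01, 0.001]:
    print(f"size {r:>6}: b = {prob_beneficial(r):.3f}")
# b rises toward its maximum of 0.5 as r -> 0: the smallest changes
# are the most likely to be beneficial (Fisher's argument).
```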

Kimura (1983) revisited this argument 50 years later, but from the neo-mutationist perspective that emerged among molecular evolutionists in the 1960s, and which gave rise to the origin-fixation formalism (McCandlish and Stoltzfus, 2014). That is, Kimura treated the inputs as new mutations subject to fixation, rather than as a shift defined phenotypically, or defined by the expected effect of allelic substitution from standing variation. Each new mutation has a probability of fixation p that depends, not merely on whether the effect is beneficial, but on how strongly beneficial it is. Mutations with bigger effects are less likely to be beneficial, but among the beneficial mutations, the ones with bigger effects have higher selection coefficients, and thus are more likely to reach fixation. Meanwhile, as d → 0, the chance of fixation simply approaches the neutral limit, i.e., the mutations with the tiniest effects behave as neutral alleles whether they are beneficial or not.

So, instead of Fisher’s argument with one monotonic relationship dictating that the chances of evolution depend on b, which decreases with size, we now have a second monotonic relationship in which the chances of evolution depend on p, which (conditional on being beneficial) increases with size. The combination of the two opposing effects results in an intermediate optimum.
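Kimura’s revision can be grafted onto the sketch above. Under two crude assumptions (mine, for illustration only), namely that the selection coefficient of a beneficial mutation scales with its size, s = c·r, and that the probability of fixation of a beneficial mutation is roughly 2s, the chance that a new mutation of size r contributes to adaptation is the product of two opposing functions:

```python
from math import erf, sqrt

def prob_beneficial(r, n=50, z=1.0):
    """Chance that a mutation of size r is beneficial (as above)."""
    x = r * sqrt(n) / (2 * z)
    return 0.5 * (1 - erf(x / sqrt(2)))

def rate_of_use(r, c=0.1, n=50, z=1.0):
    """Relative chance that a new mutation of size r is both beneficial
    and fixed: b(r) falls with size, while pfix rises with size."""
    b = prob_beneficial(r, n, z)
    pfix = 2 * c * r  # Haldane's 2s approximation, with s = c * r
    return b * pfix

for r in [0.001, 0.05, 0.1, 0.2, 0.5, 1.0]:
    print(f"size {r:>5}: relative rate = {rate_of_use(r):.5f}")
# The product vanishes as r -> 0 and as r grows large, peaking at an
# intermediate size: Kimura's intermediate optimum.
```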

Thus Kimura transformed and recontextualized Fisher’s geometric argument in a way that changes the conclusion and undermines Fisher’s original intent, which was to support infinitesimalism. This is because Kimura’s conception of evolutionary genetics was different from Fisher’s.

The radical nature of Kimura’s move is not apparent in the literature of theoretical evolutionary genetics, where “Fisher’s model” often refers to Kimura’s model (e.g., Orr 2005a, Matuszewski, et al 2014, Blanquart, et al. 2014). Some authors have been explicit in back-projecting Kimura’s mutationist thinking onto Fisher, e.g., to explain why Fisher came to a different conclusion, Orr (2005a) suggests that Fisher made a mistake in forgetting to include the probability of fixation:

“Fisher erred here and his conclusion (although not his calculation) was flawed. Unfortunately, his error was only detected half a century later, by Motoo Kimura”

Orr (2005b) states that “an adaptive substitution in Fisher’s model (as in reality) involves a 2-step process.”

But Fisher himself did not specify a 2-step process as the context for his geometric argument: he did not provide an explicit population-genetic context at all. Nor do we have any reason to imagine that Fisher was secretly a mutationist. His view of evolution as a deterministic process of selection on available variation is well known, i.e., the missing pop-gen context for Fisher’s argument would look something like this: Evolution is the process by which selection leverages available variation to respond to a change in conditions. At the start of an episode of evolution, the frequencies of alleles in the gene pool reflect historical selection under the previously prevailing environment. When the environment changes, selection starts to shift the frequencies to a new multi-locus optimum: most of them will simply shift up or down partially; any unconditionally deleterious alleles will fall to their deterministic mutation-selection balance frequencies; any unconditionally beneficial ones will go to fixation deterministically. The smallest allelic effects are the most likely to be beneficial, thus they are the most likely to contribute to adaptation.

The fixation of new mutations is not part of this process, and that, surely, is why the probability of fixation plays no part in Fisher’s original calculation. Instead, all one needs to know is the chance of being beneficial as a function of effect-size. Fisher’s argument is complete and free of errors, given the supposition that evolution can be adequately understood as a deterministic process of shifting frequencies of available variation in the gene pool.

I recently noticed that Matt Rockman’s (2012) seminal reflection on the limits of the QTN program presents a nearly identical argument in his supplementary notes (i.e., 5 years before the longer version I put in the supplement to Stoltzfus 2017):

3. Note that while Fisher was concerned with the size distribution of changes that improve the conformity of organism and environment (i.e., adaptation), Kimura (1983, section 7.1) was discussing the effect size distribution of adaptive substitutions, i.e., his is a theory of molecular evolution. Though many now describe Kimura’s work as correcting Fisher’s mistake, it is not clear that there is a mistake: Fisher was concerned not with fixation but with adaptation. Kimura for one seems not to have thought that he was correcting an error made by Fisher (Kimura 1983, p. 150-151). Though the distributions derived by Fisher and Kimura are both relevant to adaptation, Fisher’s model is compatible with adaptation via allele frequency shifts in standing variation. In Fisher’s words, “without the occurrence of further mutations all ordinary species must already possess within themselves the potentialities of the most varied evolutionary modifications. It has often been remarked, and truly, that without mutation evolutionary progress, whatever direction it may take, will ultimately come to a standstill for lack of further possible improvements. It has not so often been realized how very far most existing species must be from such a state of stagnation” (Fisher 1930, p. 96).

Relative to the case above regarding gene duplications, this case of back-projecting Kimura’s view onto Fisher results in a more pernicious mangling of history: attributing to Fisher a model based on a mutationist mode of evolution that was not formalized until 1969, after Fisher was dead, and which contradicts Fisher’s most basic beliefs about how evolution works (along with the clear intent of the Modern Synthesis architects to exclude mutationist thinking as an apostasy).

Synthesis apologetics

But these examples are mild compared to the ret-conning that has emerged in debates over the “Modern Synthesis.” In serial fiction, ret-conning means re-telling an old story to ensure retroactive continuity with new developments that the writers added to the storyline in a subsequent episode, e.g., when a character that died previously is brought back to life. The difference between a retcon and simple back-projection is perhaps a matter of degree. The retcon is a much more conscious effort to re-tell the past in order to make sense of the present. The “Synthesis” story is very deliberately ret-conned to appropriate contemporary results. In a different world, the defenders of tradition might have declared that the definitive statement is in that 1970s textbook by Dobzhansky, et al.; they might have stopped writing defenses and just posted a sign saying “See Dobzhansky, et al. for how the Synthesis answers evolutionary questions.” But instead, defenders keep writing new pieces that expand and reinterpret the Synthesis story to maintain an illusion of constancy.

Futuyma is the master of the Synthesis retcon. He has a craftsman’s respect for the older storylines, because he helped write them, so his retcons are subtle and sometimes even artful. We can appreciate the artistry with which he has subtly pulled back from the shifting gene frequencies theory and the grand claims he made originally in 1988 on behalf of the Synthesis. In the original Synthesis storyline, the MS restored neo-Darwinism, crushed all rivals (mutationism, saltationism, orthogenesis, etc), and provided a common basis for anyone in the life sciences or paleontology to think about evolution.

By contrast, the retcons from the newer traditionalists are full of bold anachronisms. Svensson (2018) mangles the Synthesis timeline by calling on Lewontin’s (1985) advocacy of reciprocal causation to appropriate niche construction, as if the year 1985 were not several decades after the architects of the Modern Synthesis declared victory at the 1959 Darwin centennial. Lewontin (1985) himself says that reciprocal causation was not part of the Darwinian received view in 1985. Welch (2017), dismayed by incessant calls for reform in evolutionary biology, suggests that this reflects intrinsic features of the problem-space: naive complaints are inevitable, he argues, because no single theory can cover such a large field with diverse phenomenology subject to contingency. That is, Welch is boldly erasing the original Synthesis story in which Mayr, et al. explicitly claimed to have unified all of biology (not just evolution, but biology!) with a single theory. Whereas the “contingency” theme emerged after the great Synthesis unified biology, Welch treats this as a timeless intrinsic feature that makes any simple unification impossible (see Platonic realism, below).

Svensson (e.g., here or here) has repeatedly turned history on its head by suggesting that evolution by new mutations is the classical view of the Modern Synthesis and that the perspective of evolutionary quantitative genetics (EQG), i.e., adaptation by polygenic shifting of many small-effect alleles, has been marginalized until recently. To anchor this bold anachronism in published sources, he calls on a marginalization trope from the recent literature on selective sweeps, in which some practitioners sympathetic to EQG looked back— on the scale of 10 or 20 years— to complain that the EQG view was neglected and that the view of hard sweeps from a rare distinctive mutation was the classic view (some of these quotations appear in The shift to mutationism is documented in our language).

Indeed, hard sweeps are easier to model and that is presumably why they came first, going way back to Maynard Smith and Haigh (1974). And the mini-renaissance of work on the genetics of adaptation from Orr and others beginning in the late 1990s— work that received a lot of attention— was based on new mutations. But that’s a very shallow way of defining what is “classic” or “traditional.” The mini-renaissance happened (after decades of inactivity) precisely because theoreticians were suddenly exploring the lucky mutant view they had been ignoring or rejecting (Orr says explicitly that the received Fisherian view stifled research by making it seem like the problem of adaptation was solved). The origin-fixation formalism only emerged in 1969, and for decades, this mutation-limited view was associated with neutrality and molecular evolution (see the figure below from McCandlish and Stoltzfus, 2014).

[Figure: history of modeling evolution using the probability of fixation. The probability of fixation is a classic result, but it was not integrated into origin-fixation models until Kimura and Maruyama (1969) and King and Jukes (1969). Because this kind of model was not explicitly recognized and named until 2014, the literature was quite disorganized, and key results were derived independently multiple times. Each of the 7 triangles on the right is a large block of contemporary literature featuring models or inference methods based on the origin-fixation formalism.]
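For readers who have not met the formalism, the basic move is simple. Here is a minimal origin-fixation sketch (my illustration, not a reproduction of Kimura and Maruyama 1969 or King and Jukes 1969), using Kimura’s diffusion result for the probability of fixation of a new mutation in a diploid population of size N:

```python
from math import exp

def prob_fixation(s, N):
    """Kimura's fixation probability for a new mutant (initial
    frequency 1/2N) with selection coefficient s."""
    if s == 0:
        return 1 / (2 * N)  # neutral limit
    return (1 - exp(-2 * s)) / (1 - exp(-4 * N * s))

def substitution_rate(mu, s, N):
    """Origin-fixation rate: K = 2N * mu * pi(s), the rate of
    introduction of new alleles times their chance of fixation."""
    return 2 * N * mu * prob_fixation(s, N)

# For neutral alleles, K = 2N * mu * 1/(2N) = mu: the substitution
# rate equals the mutation rate, independent of population size.
print(substitution_rate(mu=1e-8, s=0, N=10_000))  # -> 1e-08
```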

Rockman (2012) again gets this right, depicting the traditional view (from Fisher to Lewontin and onward) as a change in allele frequencies featuring polygenic traits with infinitesimal effects from standing variation:

“Despite the centrality of standing variation to the evolutionary synthesis and the widely recognized ubiquity of heritable variation for most traits in most populations, recent models of the genetics of adaptive evolution have tended to focus on new-mutation models, which treat evolution as a series of sequential selective sweeps dependent on the appearance of new beneficial mutations. Only in the past few years have phenotypic and molecular population genetic models begun to treat adaptation from standing variation seriously (Orr and Betancourt 2001; Innan and Kim 2004; Hermisson and Pennings 2005; Przeworski et al. 2005; Barrett and Schluter 2008; Chevin and Hospital 2008).”

To summarize, Svensson’s EQG-marginalization narrative turns history upside down in order to retcon contemporary thinking, i.e., he creates a false view of tradition in order to claim that new work with a mutationist flavor is traditional.

This anachronistic approach makes Svensson and Welch more effective in some ways than Futuyma, because they are really just focused on telling a good story, without being constrained by historical facts. But sometimes fan-service means sticking more closely to tradition. Even die-hard Synthesis fans are going to be complaining about Svensson’s fabrications, because they go beyond ret-conning into the realm of gas-lighting, undermining our shared understanding of the field, e.g., population geneticists (and most everyone else, too?) understand the “Fisherian view” of adaptation to be precisely the view that, according to Svensson, was marginalized in the Modern Synthesis. Clearly if anyone is going to take over the mantle from Futuyma and write the kind of fiction needed to keep the Synthesis brand fresh, the franchise needs a better crop of writers, or else needs to develop a fan-base that doesn’t care about consistency.

How to understand this phenomenon

Presumably if a grad student were to ask a genuine expert on gene duplication for the source of the sub-functionalization model, so as to study its assumptions and implications, they would be instructed to read the papers from 1999 or subsequent ones, and not Hughes (1994) or Ohno (1970), because the model is simply not present in these pre-1999 papers. Likewise, “Fisher’s geometric model” in Kimura’s sense is not in Fisher (1930). The theory of biases in the introduction process (or any model of this kind of effect) is absent from Kimura’s book and other works (e.g., from Dobzhansky, Haldane and Fisher) suggested as sources by Svensson and Berger (2019).

In this sense, back-projection is a mode of generating errors in the form of false attributions.

Why does this happen?

A contributing sociological factor is that, in academia, linking new ideas to prior literature, and especially to famous dead people, is a performative act that brings rewards to the speaker. Referencing older literature makes you look smart and well read, and also displays a respectful attitude toward tradition that is prized in some disciplines. And then those patterns of attribution get copied. When some authors started citing Haldane (1935) for male mutation bias, others simply copied this pattern (and resisting the pattern presumably would entail a social cost).

The extreme form of this performative act, a favorite gambit of theoreticians, is to dismiss new theoretical findings by saying “this is merely an implication of…”, citing some ancient work. Indeed, many “we have long known” arguments defend the fullness and authority of tradition, in the face of some new discovery X, by saying “we have long known A and B”, where A and B can be construed to imply or allow X. Why don’t the critics undermine the novelty of X by saying “we have long known X”? Because they can’t. If X were truly old knowledge, the critics would just cite prior statements of X, following standard scientific practice. But when X is genuinely new, the defense of the status quo resorts to the implicit assumption that a result isn’t new and significant if someone could have reasoned it out from prior results, even if they did not actually do so. This implies the outrageously wrong notion that science is a perfect prediction machine, i.e., feed it A and B, and it auto-generates all the implications that will become important in the future. Clearly this is not how reality works (but see Platonic realism below).

Professional jealousy is a contributing factor when scientists offer their opinion on new findings, especially when those new findings are generating attention. I’m not going to dwell on this but it’s obviously a real thing.

Likewise, politics come into play when pundits and opinion leaders are called on to comment on new work. In an ideal world, when a new result X appears, we would just call on the people genuinely interested in X, the ones best positioned to comment on it, and they would only accept the challenge if they have digested the new result X [5]. But if X is new, how do we know who is best qualified? If X crosses boundaries or raises new questions, how do we know who has thought deeply about it? Often reporters will rely on the same tired old commentators to explain Why Orthodoxy Is True. The ones who step willingly into this role are often the ones most deeply invested in maintaining the authority of the status quo, the brand-value of mainstream acceptable views of evolution. Genuinely new findings undermine their brand. It’s a dangerous situation today when so many evolutionists have publicly signaled a commitment to the belief that a 60-year-old conception of evolution is correct and sufficient, that this theory cannot be incorrect, only incomplete (p. 25 of Buss, 1987); indeed, some even go so far as to insist that nothing fundamentally new remains to be discovered (Charlesworth, 1996). Given this commitment to tradition, how could they possibly respond to a genuinely new idea except by (1) rejecting it or (2) shifting the goal-posts to claim it for tradition? Either way, this attitude degrades scientific discourse.

A completely different way to think about back-projection and ret-conning — independent of motivations and power struggles — is that they reflect a mistaken conception of scientific theories. Philosophers and historians typically suppose that scientific theories are constructed by humans, in a specific historic context, out of things such as words, equations, and analogies. The theory does not exist until the point in time when it is constructed, or perhaps, the point when it appears in scientific discourse. Under this kind of view, the DDC subfunctionalization model did not exist until 1999, Kimura’s mutationist revision of Fisher’s argument did not exist until 1983, and the theory of biases in the introduction process did not exist until 2001.

However, scientists themselves often speak as if theories exist independently of humans, as universals, and are merely recognized or uncovered at various points in time, e.g., note Hahn’s use of “recognized” above. In philosophy, this is called “Platonic realism.” The theory is a real thing that exists, independent of time or place. It’s hard to resist this. I do it myself instinctively. When I look back at King (1971, 1972), it feels to me like he is trying (without complete success) to state the theory we stated in 2001.

This has an important implication for understanding how scientists construct historical narratives, and how they interpret the historical canon. In the Platonic view, there is a set of universal time-invariant theories T1, T2, T3 etc, and anything written by anyone at any period in time can refer to these theories. In particular, anyone can see a theory like the DDC theory partly or incompletely, without clearly stating the theory or listing its implications. It’s like the parable of the blind men and the elephant, where each person senses and interprets one part, without construing the whole as an elephant.

By contrast, in the constructed view, if no one construes an elephant, describing the parts and how they fit together, there is no elephant, there is just a snake and a fan and a tree trunk and so on.

If we adopt the Platonic view, we will naturally tend to suppose that terms and ideas from the past may be mapped to each other and to the present, because they are all references to universal theories that have always existed. Clearly the Platonic view underlies the missing-pieces theory. Likewise, if one holds this view, one may imagine that Hughes or Ohno glimpsed the sub-functionalization theory, without fully explaining the theory or its implications. They sensed a part of the elephant. Likewise, the Platonic view is at work in Orr’s framing, which suggests that Fisher entertained a mutationist conception of evolution as a 2-step origin-fixation process, according to Kimura’s theory, but perhaps saw it imperfectly, resulting in a mistake in his calculation. Svensson and Berger (2019) likewise suggest that Dobzhansky, Fisher and Haldane understood implications of a theory of biases in the introduction process (first published in 2001), even though those authors never explicitly state the theory or its implications.

By contrast, a historian or philosopher considering the concepts implied by a historical source does not insist on mapping them to the present on an assumption of Platonic realism or continuity with current thinking. In fact, just as the practitioners of any scholarly discipline make distinctions that are invisible to novices, it is part of the craft of history to notice and articulate how extinct authors thought differently. For instance, the careful reader of Ohno (1970) surely will notice that his usage of the term “redundancy” often implies a unary concept rather than a relation. That is, Ohno often specifies that gene duplication creates a redundant copy, i.e., a copy with the property of being redundant, which makes it free to accumulate forbidden mutations, as if the original (“golden”) copy has been supplemented with a photocopy or facsimile that is a subtly different class of entity. By contrast, the logic of the DDC model is based on treating the two gene copies as equivalent. We think of redundancy today as a multidimensional genomic property that is distributed quantifiably across genes.

This is how Platonic realism encourages and facilitates back-projection, especially when combined with confirmation bias and the kind of ancestor-worship common in evolutionary biology. If theories are universal and have always existed, then it must have been the case that any theory available today also was accessible to illustrious ancestors like Darwin and Fisher. They may have recognized or seen the theory in some way, perhaps only dimly or partly; their statements and their terminology can be mapped onto the theory. So, the reader who assumes Platonic realism and is motivated by the religion of ancestor-worship can explore the works of predecessors, quote-mining them for indications that they understood the Neutral Theory, mutationism, the DDC model, and so on.

Again, a distinctive feature of the Platonic view is that it provides a much broader justification for back-projection, because it allows for a theory to be sensed without fully grasping it or getting the implications right, like sensing only part of the elephant. So the test case distinguishing theories about theories is this: if a historic source has a collection of statements S that refers to parts of theory T without fully specifying the theory, and perhaps also features inconsistent language or statements that contradict implications of T, we would say under the constructed view that S lacks T, but under the Platonic view, we might conclude that S refers to T but does so in an incomplete or inconsistent way.

A model for misattribution

When we are back-projecting contemporary ideas, drawing on Platonic realism while virtue-signaling our dedication to tradition and traditional authorities, what sources will we use? Of course, we will use the ones we have read, the ones on our bookshelf, the ones by famous authors, the ones that everyone else has on their bookshelves. We will simply draw on what is familiar and close at hand.

Thus, the practice of back-projection will make links from contemporary ideas to past ideas, and it will tend to make those links by something like what is called a “gravity model” in network modeling, where the activity or importance or capacity associated with a node is equated with mass and used to model links to other nodes. The force of back-projection from A to B, e.g., the chance that the neutral theory will be linked to Darwin, will depend on the importance of A and B, and how close they are in a space of ideas.

In more precise terms, the force of gravitational attraction between objects 1 and 2 is proportional to m_1 m_2 / d^2, i.e., the product of the masses divided by their distance squared. In a network model of epidemiology, we might treat persons and places of employment as the objects subject to attraction. Each person has a location and 1 unit of mass, and each workplace has a location and n units of mass where n is the number of employees. For a given person i, we assign a workplace j by sampling from available workplaces with a chance proportional to m_j / d_ij^2, the mass (number of employees of the workplace) divided by the squared distance from the person. Smaller workplaces tend to get only local workers, while larger ones draw from a larger area.
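A toy version of this assignment scheme, with entirely hypothetical coordinates and employee counts, looks like this:

```python
import random

workplaces = [
    {"name": "corner shop", "xy": (1.0, 1.0), "mass": 5},
    {"name": "factory",     "xy": (4.0, 0.0), "mass": 500},
    {"name": "office park", "xy": (9.0, 9.0), "mass": 200},
]

def assign_workplace(person_xy, workplaces):
    """Sample a workplace with chance proportional to m_j / d_ij^2."""
    def weight(w):
        dx = w["xy"][0] - person_xy[0]
        dy = w["xy"][1] - person_xy[1]
        return w["mass"] / (dx * dx + dy * dy)  # mass over squared distance
    weights = [weight(w) for w in workplaces]
    return random.choices(workplaces, weights=weights, k=1)[0]

# A person near the origin usually draws the nearby factory, but
# occasionally the small shop next door or the big office park far away.
print(assign_workplace((0.0, 0.0), workplaces)["name"])
```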

Imagine that concepts or theories or conjectures can be mapped to a conceptual hyperspace. This space could be defined in many ways, e.g., it could be the result of applying some machine-learning algorithm. We can take every idea from canonical sources, each i_c, and map it in this space, along with every new idea, each i_n. Any two ideas are some distance d from each other. Any new idea has some set of neighboring ideas, including other new ideas and old ideas from canonical sources. Among the ideas from canonical sources, there is some set of nearest neighbors, and each one has some distance d_nc from the target idea to be appropriated.

To complete the gravity model for appropriation, we need to assign different masses to the neighbors of a target idea, and perhaps also assign a mass to the target idea as well. Given the nature of appropriation, a suitable metric for the mass of ideas from the canon, m_c, would be the reputation or popularity of the source, e.g., an idea from Darwin would have a greater mass than one from Ford, which would have a greater mass than an idea from someone you have never heard of. If so, the force of back-projection linking a new idea to something from the canon would be proportional to m_c / d_nc^2, assuming that back-projection acts equally on all non-canonical ideas, i.e., they all have the same mass. If more important ideas stimulate a stronger force of back-projection— because traditionalists are more desperate to appropriate new ideas if they are important— then we could also assign an importance m_n to the new idea, and then the force of appropriation would be m_n m_c / d_nc^2.

Thus, the more important a new theory, the greater the pressure to back-project it to traditional sources. The more popular a historic source, the more likely scientists will attribute a new theory to it. If two different historic sources suggest ideas that are equally close to a new theory (i.e., same d_nc), the one with the higher mass m_c (e.g., popularity) is more likely to be chosen as the target of back-projection.
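The same machinery applies directly to appropriation, here with entirely made-up numbers: mass stands in for the fame of a canonical source, and d for its distance d_nc from the new idea in idea-space.

```python
canon = [
    {"source": "Darwin 1859", "mass": 100.0, "d": 5.0},
    {"source": "Fisher 1930", "mass": 40.0,  "d": 2.0},
    {"source": "Ford 1964",   "mass": 10.0,  "d": 2.0},
]

def appropriation_force(m_n, c):
    """Force of back-projection: m_n * m_c / d_nc^2."""
    return m_n * c["mass"] / (c["d"] ** 2)

m_n = 8.0  # importance of the new idea (hypothetical)
for c in sorted(canon, key=lambda c: -appropriation_force(m_n, c)):
    print(f'{c["source"]:>12}: force = {appropriation_force(m_n, c):.0f}')
# Fisher 1930: 80, Darwin 1859: 32, Ford 1964: 20. At equal distance,
# the more famous source (Fisher over Ford) attracts the attribution,
# and sheer fame lets the distant Darwin outpull the nearer Ford.
```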

If the chances of back-projection follow this kind of gravity model, then clearly back-projection is an effective system for diverting credit from contemporary scientists and lesser-known scientists of the past, to precisely those dead authorities who already receive undue attention. Under a gravity model for misattribution, Darwin is going to get credited with all sorts of ideas, because he is already famous and because his several sprawling books sit on the bookshelves of scientists, even scientists who own no other books by 19th-century authors who might have said it better than Darwin. If one has very low standards for what constitutes an earlier expression of a theory, then it is easy to find precursors.

Resisting back-projection

The two most obvious negative consequences of back-projection and ret-conning are that they (1) encourage inaccurate views of history and (2) promote unfair attribution of credit.

However, those are just the obvious and immediate consequences.

Back-projection and anachronism have contributed to a rather massive misapprehension of the population-genetic position underlying the Modern Synthesis, directly related to the “Fisher’s geometric model” story above. Beatty (2022) has written about this recently. I’ve been writing about it for years. Today, many of us think of evolution as a Markov chain of mutation-fixation events, a 2-step process in which “mutation proposes, selection disposes” (decides). If you asked us what is the unit step in evolution, we would say it is the fixation of a mutation, or at least, that fixations are the countable end-points. The latter perspective is arguably evident in some classic work, e.g., the gist of Haldane’s approach to the substitution load is to count how many allele replacements a population can bear. But more typically, this kind of thinking is not classical, but reflects the molecular view that began to emerge among biochemists in the 1960s.

The origin-fixation view of evolution from new mutations, where each atomic change is causally distinct, is certainly not what the architects of the Modern Synthesis had in mind. The MS was very much a reaction against the non-Darwinian “lucky mutant” view in which the timing and character of episodes of evolutionary change depend on the timing and character of events of mutation. Instead, in the MS view, change happens when selection brings together masses of infinitesimal effects from many loci simultaneously. In both Darwin’s theory and the Modern Synthesis, adaptation is a multi-threaded tapestry: it is woven by selection bringing together many small fibers simultaneously [4]. This is essential to the creativity claim of neo-Darwinism. If we take it away, then selection is just a filter acting on each thread separately, a theory that Darwin and the architects of the Modern Synthesis disavowed. Again, historical neo-Darwinism, in its dialectical encounter with the mutationist view, insists that adaptation is multi-threaded.

As you might guess at this point, I would argue that back-projection does not merely distort history and divert credit in an unfair way, it is part of a pernicious system of status quo propaganda that perpetually shifts the goal-posts on tradition in a way that undermines the value of truth itself, and suppresses healthy processes in which new findings are recognized and considered for their implications.

A scientific discipline is in a pathological state if a large fraction of its leaders are unable or unwilling to recognize new findings or award credit to scientists who discover them, and instead confuse the issues and redirect credit and attention to the status quo and to some extremely dead white men like Darwin and Fisher. The up-and-coming scientists in such a discipline will learn that making claims of novelty is either bad science or bad practice, and they will respond by limiting themselves to incremental science, or if they actually have a new theory or a new finding, they will seek to package it as an old idea from some dead authority. A system that ties success to mis-representing what a person cares about the most is a corrosive system.

So, we have good reasons to resist back-projection.

But how does one do it, in practice? I’m not sure, but I’m going to offer some suggestions based on my own experience with a lifetime of trying not to be full of shit. I think this is mainly a matter of being aware of our own bullshitting and demanding greater rigor from ourselves and others, and this benefits from understanding how bullshit works (e.g., the we-have-long-known fallacy, back-projection, etc) and what the costs are to the health and integrity of the scientific process.

Everyone bullshits, in the weak sense of filling in gaps in a story we are telling so that the story works out right, even if we aren’t really sure that we are filling the gaps in an accurate or grounded way. When I described the Sally-Ann test, part was factual (the test, the age-dependent result), but the part about “the mental skill of modeling the perspective of another person” is just an improvised explanation, i.e., I invented it, guided partly by my vague recollection of how the test is interpreted by the experts, but also guided by my intuition and my desire to weave this into a story about perspective and sophistication that works for my narrative purposes, sending a not-so-subtle message to the reader that scientific anachronisms are childish and naïve. To succeed at this kind of improvisation means telling a good story that covers the facts without introducing misrepresentations that are significantly misleading, given the context.

When I wrote (above) about the self-hypnosis of Darwin’s followers, this was an invention, a fiction, but not quite the same, given that it is obviously fictional to a sophisticated reader. This is important. A mature reader encountering that passage will understand immediately that I am poking fun at Darwin’s followers while at the same time suggesting something reasonable and testable: scientists who align culturally with Darwin tend to be blind to the flaws in Darwin’s position. Again, any reasonably sophisticated reader will understand that. However, it would take a high level of sophistication for a reader to surmise that I was improvising in regard to “the mental skill of modeling the perspective of another person.” Part of being a sophisticated and sympathetic reader is knowing how to divide up what the author is saying into the parts that the author wants us to take seriously, and the parts that are just there to make the story flow. Part of being a sophisticated writer is knowing how to write good stories without letting the narrative distort the topic.

So, first, always remember the difference between facts and stories, and use your reason to figure out which is which. The story of Fisher making a mistake is a constructed story, not a fact. The authors of this story are not citing a journal entry from Fisher’s diaries where he writes “I made a mistake.” Someone looked at the facts and then added the notion of “mistake” to make retrospective sense of those facts. In the sub-functionalization story, the link to Ohno and Hughes is speculative. If ideas had DNA in them, then maybe we could extract the DNA from ideas and trace their ancestry, though I doubt it would be that simple. In any case, there is no DNA trace showing that the DDC model is somehow derived from the published works of Hughes or Ohno. It’s a fact that Ohno talked about redundancy and dosage compensation and so on, and it’s a fact that Hughes proposed his gene-sharing model, but it is not a fact that these earlier ideas led to the DDC model. Someone constructed that story.

By the way, it bears emphasis that the inability to trace ideas definitively is pretty much universal even if one source cites another. How many times have you had an idea and then found a prior source for it? You thought of it, but you are going to cite the prior source. So when one paper cites another for an idea, that doesn’t mean the idea literally came from the previous paper in a causal sense. We often use this “comes from” or “derives from” or “source of” language, but it is an inference. We only know that the prior paper is a source for the idea, not the source of the idea.

Second, gauging the novelty of an idea can be genuinely hard, especially in a field like evolution full of half-baked ideas. If you find that people are normalizing a new result by making statements that were never said before (like the “for a century” claim above), that is a sign that the result is really new. And likewise, if people are reacting to a new result with the same old arguments but the old arguments don’t actually fit the new result, that indicates that new arguments are needed. In general, the strongest sign of novelty (in my opinion) is misapprehension. When a result or idea is genuinely new, you don’t have a pre-existing mental slot for it. It’s like a book in a completely new genre, so you don’t know where it goes on your bookshelf. The people who are most confident about their mastery are the most likely to shove that unprecedented book into the wrong place and confidently explain why it belongs there using weak arguments. So, when you find that experts are reacting to a new finding by saying really dubious things, this is a strong indicator of novelty.

Third, I recommend developing a more nuanced or graduated sense of novelty, by distinguishing different types of achievements. Scientific papers are full of ideas, but precisely stated models with verifiable behavior are much rarer. Certain key types of innovations are superior and difficult, and they stand above more mundane scientific accomplishments. One of them is proposing a new method or theory and showing that it works using a model system. Another one is establishing a proposition by a combination of facts and logic. Another one is synthesizing the information on a topic for the first time, i.e., defining a topic.

It should be obvious that stating a possibility X is a different thing from asserting that X is true, which is different again from demonstrating that X is true. Many many authors have suggested that internal factors might shape evolution by influencing tendencies of variation, and many have insisted that this is true, but very few if any have been able to provide convincing evidence (e.g., see Houle, et al 2017 for an attitude of skepticism). Obviously, we may feel some obligation to quote prior sources that merely suggest X as a possibility, but if a contemporary source demonstrates X conclusively, this is something to be applauded and not to be treated as a derivative result. If demonstrating the truth of ideas that already exist is treated as mundane and derivative, our field will become even more littered with half-baked ideas.

If ideas were everything, we would not need scientific methods to turn ideas into theories, develop models, derive implications, evaluate predictions and so on, we would just need people spouting ideas. In particular, authors who propose a theory and make a model [2] that illustrates the theory have done important scientific work and deserve credit for that. If an author presents us with a theory-like idea but there is no model, we often don’t know what to make of the idea.

It seems to me that, in evolutionary biology, we have a surfeit of poorly specified theories. That is, the ostensible theory, the theory-like thing, consists of some set of statements S. We may be able to read all the statements in S but still not understand what it means (e.g., Interaction-Based Evolution), and even if we have a clear sense of what it might mean, we may not be certain that this meaning actually follows from the stated theory. An example of the latter case would be directed mutation. Cairns, et al proposed some ways that this could happen, e.g., via reverse-transcription of a beneficial transcription error that helps a starving cell to survive. If this idea is viable in principle, we should be able to engineer a biological system that does it, or construct a computer model with this behavior. But no one has ever done that, to my knowledge.

This is a huge problem in evolutionary biology. Much of the ancient literature on evolution lacks models, and because of this, lacks the kind of theory that we can really sink our teeth into. Part of this is deliberate in the sense that thinkers like Darwin deliberately avoided the kind of speculative formal theorizing that, today, we consider essential to science. The literature is chock full of unfinished ideas about a broad array of topics, and every time one of those topics comes up, we have to go over all the unfinished ideas again. Poring over that literature is historically interesting but scientifically it is IMHO a huge waste of time because again, ideas are a dime a dozen. If we just threw all those old books into the sea, we would lose some facts and some great hand-drawn figures, but the ideas would all be replenished within a few months because there is a more diverse and far larger group of people at work in science today and they are better trained.

Finally, think of appropriation as a sociopolitical act, as an exercise of power. As explained, even when an idea “comes from” some source, it often doesn’t come from that source in a causal sense. That’s a story we construct. Each time normalization-appropriation stories are constructed to put the focus on tradition and illustrious ancestors, they re-direct the credit for scientific discoveries to a lineage grounded in tradition that gravitates toward the most important authorities. Telling this kind of story is a performative act and a socio-political act, and it is inherently patristic: it is about legitimizing or normalizing ideas by linking them to ancestors with identifiable reputations, as if we have to establish the pedigree of an idea— and it has to be a good pedigree, one that traces back to the right people. Think about that. Think about all those fairy-tales from the classics to Disney to Star Wars in which the worthy young hero or heroine is, secretly, the offspring of royalty, as if an ordinary person could not be worthy, as if young Alan Force and his colleagues could not have been the intellectual sources of a bold new model of duplicate gene retention, but had to inherit it from Ohno.

Part of our job as scientists operating in a community of practice is to recognize new discoveries, articulate their novelty, and defend them against the damaging effects of minimization and appropriation. The scientists who are marginalized for political or cultural reasons are the least likely to be given credit, and the same scientists may hesitate to promote the novelty of their own work due to the fear of being accused of self-promotion. In this context, it’s up to the rest of us to push back against misappropriation and back-projection and make sure that novelty is recognized appropriately, and that credit is assigned appropriately, bearing in mind that these outcomes make science healthier.

References

  • Blanquart F, Achaz G, Bataillon T, Tenaillon O. 2014. Properties of selected mutations and genotypic landscapes under Fisher’s geometric model. Evolution 68:3537-3554. doi:10.1111/evo.12545
  • Buss LW. 1987. The Evolution of Individuality. Princeton: Princeton Univ. Press.
  • Charlesworth B. 1996. The good fairy godmother of evolutionary genetics. Curr Biol 6:220.
  • Force A, Lynch M, Pickett FB, Amores A, Yan YL, Postlethwait J. 1999. Preservation of duplicate genes by complementary, degenerative mutations. Genetics 151:1531-1545.
  • Gayon J. 1998. Darwinism’s Struggle for Survival: Heredity and the Hypothesis of Natural Selection. Cambridge, UK: Cambridge University Press.
  • Haldane JBS. 1935. The rate of spontaneous mutation of a human gene. J Genet 31:317-326.
  • Hughes AL. 1994. The evolution of functionally novel proteins after gene duplication. Proc R Soc Lond B 256:119-124.
  • Kimura M. 1983. The Neutral Theory of Molecular Evolution. Cambridge: Cambridge University Press.
  • Matuszewski S, Hermisson J, Kopp M. 2014. Fisher’s geometric model with a moving optimum. Evolution 68:2571-2588. doi:10.1111/evo.12465
  • Ohno S. 1970. Evolution by Gene Duplication. New York: Springer-Verlag.
  • Orr HA. 2005a. The genetic theory of adaptation: a brief history. Nat Rev Genet 6:119-127.
  • Orr HA. 2005b. Theories of adaptation: what they do and don’t say. Genetica 123:3-13.
  • Rockman MV. 2012. The QTN program and the alleles that matter for evolution: all that’s gold does not glitter. Evolution 66:1-17.

Notes

  1. Actually, in the evolutionary literature, traditionalists do not assume that all past thinkers saw the same theories we use today, but only the past thinkers considered to be righteous by Synthesis standards, i.e., the ones from the neo-Darwinian tradition. Scientists from outside the tradition are understood to have been bonkers.
  2. In the philosophy of science, a model is a thing that makes all of the statements in a theory true. So, to borrow an example from Elisabeth Lloyd, suppose a theory says that A is touching B, and B is touching C, but A and C are not touching. We could make a model of this using 3 balls labeled A, B and C. Or we could point to 3 adjacent books on a shelf, label them A, B and C, and call that a model of the theory.
  3. I’m going to write something separate about Monroe, et al. (2022) after about 6 months have passed.
  4. In Darwin’s original theory, the fibers in that tapestry blend together, acting like fluids. The resulting tapestry cannot be resolved into threads anymore, because they lose their individuality under blending. A trait cannot be dissected in terms of particulate contributions, but may be explained only by a mass flow guided by the hand of selection. This is why Darwin says that, on his theory, natura non facit saltum must be strictly true.
  5. In some instances of reactions to Monroe, et al., 2022, this actually worked well, in the sense that people like Jianzhi Zhang and Laurence Hurst— qualified experts who were sympathetic, genuinely interested, and critical— were called on to comment.
  6. Google ngrams data shows an acceleration of references to “Mendel” after 1900; there are earlier references, but upon examination (based on looking at about 10 of them), these are references to legal suits and other mundane circumstances involving persons with a given name or surname of Mendel.
  7. Note that one often sees a complementary fallacy in which reform of evolutionary theory is demanded on the basis of the discovery and scientific recognition of some phenomenon X, usually from genetics or molecular biology. That is, the reformist fallacy is “X is a non-traditional finding in genetics therefore we have to modify evolutionary theories,” whereas the traditionalist fallacy is “we have long known about X therefore we do not have to change any evolutionary theories.” Both versions of the argument are made in regard to epigenetics. Shapiro’s (2011) entire book rests on the reformist fallacy where X = HGT, transposition, etc, and Dean’s (2012) critical review of Shapiro (2011) relies on the traditionalist fallacy of asserting that if “Darwinians” study X, it automagically becomes part of a “Darwinian” theory: “Horizontal gene transfer, symbiotic genome fusions, massive genome restructuring…, and dramatic phenotypic changes based on only a few amino acid replacements are just some of the supposedly non-Darwinian phenomena routinely studied by Darwinists”. What Dean is describing here are saltations, which are not compatible with a gradualist theory, i.e., a theory that takes natura non facit saltum as a doctrine. Whatever part of evolution is based on these saltations, that is the part that requires some evolutionary theory for where jumps come from, i.e., what is the character and frequency of variational jumps, and how are they incorporated in evolution.

The Haldane-Fisher “opposing pressures” argument

The Haldane-Fisher “opposing pressures” argument is an argument from population genetics that played an important role in establishing the Modern Synthesis orthodoxy, and which continued to guide thinking about causation throughout the 20th century. The flaw in the argument was pointed out by Yampolsky and Stoltzfus (2001) when they showed that a workable theory of variation-biased evolution emerges, not from mutation driving alleles to fixation against the opposing pressure of selection, but from biases in the introduction process. The purpose of this blog is simply to document this influential fallacy.

In his magnum opus, Gould (2002) writes as follows, citing an argument from Fisher (1930):

“Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism” (p. 510).

The conclusion of this argument is that internalist theories — the kind of theories that attempt to explain evolutionary tendencies by referring to internal variational tendencies — are incompatible with population genetics because mutation rates are too small for the pressure of mutation to be an important causal force.

Note the form of the argument: the theoretical principle in the first clause, combined with an empirical observation (low mutation rates), yields a broad conclusion. The theoretical argument assumes that the way to understand the role of mutation in evolution is to think of it as a force or pressure on allele frequencies. That is, in Modern Synthesis reasoning, evolution is reduced to shifting gene frequencies, and the causes of evolution are declared to be the forces that shift frequencies. One then inquires into the magnitude of the forces, because obviously the stronger forces are more important; strong forces deserve our attention and must be treated fully; very weak forces may be ignored. As indicated by Provine (1978), this kind of argument about the sizes of forces was a key contribution of theoretical population genetics to the Modern Synthesis, anchoring its claim to have undermined all the alternative non-Darwinian theories.

Below I will present the argument in more depth, illustrate how it is invoked in evolutionary writing, and explain why it is important today.

The argument

The mutation pressure theory, explained in much more detail in Bad Takes #2, appears most prominently as a strawman rejected by Haldane (1927, 1932, 1933) and Fisher (1930). That is, Haldane and Fisher did not advocate for the importance of evolution by mutation pressure, but presented an unworkable theory as a way to reject the idea that evolutionary tendencies may reflect internal variational tendencies, an idea that conflicts with the neo-Darwinian view that selection is the potter and variation is the clay.

Haldane and Fisher concluded that evolution by mutation pressure would be unlikely on the grounds that, because mutation rates are small, mutation is a weak pressure on allele frequencies, easily overcome by opposing selection. Haldane (1927) stated specifically that this pressure would not be important except in the case of neutral characters or abnormally high mutation rates.

[Figure legend: The conclusion of Haldane (1927).]

The argument is hard to comprehend today because most of us think like mutationists and no longer accept the shifting-gene-frequencies theory central to classical thinking.

The way to understand the argument more sympathetically is to consider how, in the neo-Darwinian tradition, the focus on natural selection shapes conceptions of evolutionary causation: selection is taken as the paradigm of a cause, so that other evolutionary factors are treated as causal only to the extent that they look (in some way) like selection. For instance, drift and selection can both cause fixations, and so (in the context of population-genetics discussions) they are often contrasted as the two alternative causes of evolutionary change.

More generally, classical population genetics tends to treat causes of evolution as mass-action pressures that shift frequencies. The mutation-pressure argument treats mutation as a pressure that might drive alleles to prominence, i.e., to high frequencies.

That is, the way to understand Haldane’s treatment is that, if mutation-biased evolution is happening, this is because mutation is driving alleles to prominence against the opposing pressure of selection, so that either the mutation rate has to be very high, or selection has to be practically absent (i.e., neutrality). Fisher’s (1930) reasoning on the issue was similar to Haldane’s. From the observed smallness of mutation rates, he drew a sweeping conclusion to the effect that internalist theories are incompatible with population genetics.

Examples

Haldane’s 1927 statement is given above. In 1933, he wrote as follows, again treating the role of mutation in the “trend” of evolution as a matter of mutation pressure (where Haldane uses k and p, we would today use something like s for selection coefficient and something like u for mutation rate).

p. 6.  “In general, mutation is a necessary but not sufficient cause of evolution.  Without mutation there would be no gene differences for natural selection to act upon.  But the actual evolutionary trend would seem usually to be determined by selection, for the following reason.  

A simple calculation shows that recurrent mutation (except of a gene so unstable as to be classifiable as multi-mutating) can not overcome selection of quite moderate intensity.  Consider two phenotypes whose relative fitnesses are in the ratios 1 and 1-k, that is to say, that on the average one leaves (1-k) times as many progeny as the other.  Then, if p is the probability that a gene mutates to a less fit allelomorph in the course of a life cycle, it has been shown (Haldane, 1932) that when k is small, the mutant gene will only spread through a small fraction of the population unless p is about as large as k or larger.  This is true whether the gene is dominant or recessive.”

Fisher used much more dramatic language.

“For mutations to dominate the trend of evolution it is thus necessary to postulate mutation rates immensely greater than those which are known to occur.”

“The whole group of theories which ascribe to hypothetical physiological mechanisms, controlling the occurrence of mutations, a power of directing the course of evolution, must be set aside, once the blending theory of inheritance is abandoned. The sole surviving theory is that of Natural Selection, and it would appear impossible to avoid the conclusion that if any evolutionary phenomenon appears to be inexplicable on this theory, it must be accepted at present merely as one of the facts which in the present state of knowledge seems inexplicable. The investigator who faces this fact, as an unavoidable inference from what is now known of the nature of inheritance, will direct his inquiries confidently towards a study of the selective agencies at work throughout the life history of the group in their native habitats, rather than to speculations on the possible causes which influence their mutations.”

Fisher (1930) The Genetical Theory of Natural Selection

Fisher’s unqualified rejection of internalist theories seems to have been more influential, which is not surprising given that it comes down like a hammer whereas Haldane’s conclusion is subtle by comparison.

“For no rate of hereditary change hitherto observed in nature would have any evolutionary effect in the teeth of even the slightest degree of adverse selection.  Either mutation-rates many times higher than any as yet detected must be sometimes operative, or else the observed results [apparent evolutionary trends] can be far better accounted for by selection.”  p. 56

“Of course, if mutation-rate were high enough to overbalance counter-selection, it would provide an orthogenetic mechanism of a kind.  However, as Fisher and others have shown, mutation rates of this intensity do not exist, or at least must be very rare.”  p. 509

Huxley (1942), Evolution: the Modern Synthesis

“if ever it could have been thought that mutation is important in the control of evolution, it is impossible to think so now; for not only do we observe it to be so rare that it cannot compete with the forces of selection but we know this must inevitably be so.” p. 391

Ford (1971), Ecological Genetics

Provine (1978) begins by stating the issue very modestly, but then concludes that the argument “discredited” alternative theories. However, note that the pressure theory was invented by Haldane and Fisher: the position of the mutationists was not a monistic theory of mutation pressure, but a dualistic theory of “mutation proposes, selection disposes (decides).”

“the mathematical evolutionists demonstrated that some paths taken by evolutionary biologists were unlikely to be fruitful. Many of the followers of Hugo de Vries, including some Mendelians like Raymond Pearl, believed that mutation pressure was the most important factor in evolutionary change. The mathematical models clearly delineated the relationships between mutation rates, selection pressure, and changes of gene frequencies in Mendelian populations. Most evolutionists believed that selection coefficients in nature were several orders of magnitude larger than mutation rates; upon this assumption, the mathematical models indicated that under most conditions likely to be found in natural populations, selection was a vastly more powerful agent of evolutionary change than mutation … These mathematical considerations … discredited macromutational theories of evolution and theories emphasizing mutation pressure as the major factor in evolution.”

Provine (1978) The role of mathematical population geneticists in the evolutionary synthesis of the 1930s and 1940s.

In the seminal paper on developmental constraints, Maynard Smith, et al. (1985) identify the Haldane-Fisher argument as an impediment to recognizing developmental biases as genuinely causal:

“Two separate issues are raised by these examples.  The first is whether biases on the production of variant phenotypes (i.e., developmental constraints) such as those just illustrated cause evolutionary trends or patterns.  Since the classic work of Fisher (1930) and Haldane (1932) established the weakness of directional mutation as compared to selection, it has been generally held that directional bias in variation will not produce evolutionary change in the face of opposing selection.  This position deserves reexamination.  For one thing, our examples (like many discussed during the last twenty years – e.g., White, 1965; Cox and Yanofsky, 1967) concern biased variation in the genetic mechanism itself.   If such directed variation accumulates– as the results regarding DNA quantity and chromosome numbers suggest– one obtains a very effective evolutionary ratchet.  For another, such directional biases may not stand in contradiction to the Fisher-Haldane point of view: within reasonable limits, neither the increase in cellular DNA content nor that in chromosome number is known to have deleterious effects at the organismic level.” (p. 282)

Maynard Smith, et al. (1985) Developmental Constraints

Below is one of several contemporary statements that seem to gesture toward the Haldane-Fisher argument, without betraying any clear link. It’s a general application of the forces theory, based on the idea that some forces are strong and others are weak, and the strong forces dominate.

“For instance, it is possible to say confidently that natural selection exerts so much stronger a force than mutation on many phenotypic characters that the direction and rate of evolution is ordinarily driven by selection even though mutation is ultimately necessary for any evolution to occur.”

Futuyma and others, 2001, in a white paper written by representatives of various professional societies

Gould was obviously sympathetic to internalist thinking but he got his ideas on this issue straight from Fisher (1930). Note that Gould is writing 75 years after Haldane.

“Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism.” (p. 510)

Gould (2002) The Structure of Evolutionary Theory

Contemporary relevance

Subsequent work has partially undermined the narrow implications of the Haldane-Fisher argument, and completely undermined its broader application as a cudgel against internalism. Mutation pressure is rarely a reasonable cause of population transformation, because it would happen so slowly and take so long that other factors such as drift would intervene, as argued by Kimura (1980).

That is, whereas Haldane’s conclusion suggests that important effects of mutation in evolution result from one of two special conditions— high rates of mutation or neutrality— this is not a safe inference, because it ignores the role of biases in origination, whose efficacy does not require high rates of mutation or neutrality.

Although the mutation pressure theory is most relevant today as a historically important fallacy, it is not entirely irrelevant to evolution in nature. Consider the loss of a complex character encoded by many genes: perhaps the total mutational target is so large that a population might reach a substantial frequency of loss of the character due to the mass effect of many mutational losses. Masel and Maughan (2007) studied exactly this kind of case, in which evolution by mutation pressure is reasonable. In particular, the authors estimate an aggregate mutation rate of 0.003 for loss of a trait (sporulation) dependent on many loci, concluding that complex traits can be lost in a reasonable period of time due primarily to mutational degradation.
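
To get a feel for the timescale implied by an aggregate rate of 0.003, here is a back-of-the-envelope sketch in Python (my own illustration of pure mutation pressure, not a reimplementation of the Masel-Maughan model; the rate u is the only number taken from the text):

```python
import math

# Pure one-way mutation pressure: with aggregate loss rate u per genome per
# generation and no selection or drift, the intact class decays geometrically,
# so the frequency of the loss class after t generations is 1 - (1 - u)^t.
u = 0.003  # aggregate rate of sporulation loss, per Masel and Maughan (2007)

def freq_lost(t):
    """Frequency of the loss class after t generations of mutation pressure."""
    return 1.0 - (1.0 - u) ** t

print(f"half the population: ~{math.log(2) / u:.0f} generations")
for t in (100, 1000, 2000):
    print(f"t = {t}: {freq_lost(t):.3f}")
```

With a rate this large, mutation pressure alone converts most of the population on the order of a thousand generations, which is why this kind of case is a reasonable candidate for evolution by mutation pressure.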

To reiterate, the main relevance of this argument today is historical and meta-scientific. First, it represents a historically influential fallacy. Recognizing that the argument cited by Gould above is a fallacy might cause us to pause and reflect on how conventional wisdom from famous thinkers citing other famous thinkers might have an improper grounding. Second, this is not just an arbitrary technical error, but reflects a substantive flaw in the Modern Synthesis view of causation and of evolutionary genetics, exposing the extent to which classic arguments about causation that established the Modern Synthesis do not follow from universal principles, but are grounded in a parochial view designed to support neo-Darwinism.

Objections to declaring the argument a fallacy

When I present this argument, I sometimes hear objections. One is that it is unfair to criticize Fisher and Haldane for not understanding transition bias, because they did not know about it. But we are not trying to be fair to persons: we are trying to be rigorous about theories and arguments. Theories and arguments are supposed to be right. If the opposing-pressures argument is a good pop-gen argument, then it will work in a world with transition bias or GC bias and so on.

For instance, the mutation-selection balance— in the simplest case, f = u / s — is a theory from Haldane and Fisher, and the theory can be right when applied to kinds of mutations that were not known in 1930. In fact, no molecular mechanisms of mutation were known in 1930: this was before the structure of DNA was known, and even before it was known that DNA is the genetic material. Haldane and Fisher knew that not all mutation rates are the same, so when they devised theories, they invoked a mutation rate as a case-specific variable. They derived a mutation-selection balance equation with a form that allows the rate to take on different values, so we are on solid ground in applying it to any deleterious mutation that can be assigned a rate, e.g., a transposon insertion.
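
As a quick numerical check, here is a minimal sketch of the deterministic haploid version of this balance (the parameter values are illustrative, not from any particular study):

```python
# Mutation-selection balance in a deterministic haploid model: wild type has
# fitness 1, the mutant has fitness 1 - s, and wild-type copies mutate to the
# mutant class at rate u per generation (back-mutation ignored).
u, s = 1e-6, 1e-2  # illustrative values with u << s

f = 0.0  # frequency of the deleterious mutant
for _ in range(10_000):
    w_bar = 1.0 - s * f        # mean fitness
    f = f * (1.0 - s) / w_bar  # frequency after selection
    f = f + u * (1.0 - f)      # frequency after mutation

print(f"equilibrium frequency: {f:.3e}")  # ~1.0e-04
print(f"u/s prediction:        {u / s:.3e}")
```

The point of running the recursion rather than quoting the formula is that nothing in it depends on the molecular nature of the mutation: any deleterious mutation with a rate u and a cost s, including a transposon insertion, settles to the same f = u / s.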

Another objection is that the opposing-pressures argument is really just an argument against evolution by mutation pressure— which we still reject generally, for reasons expressed by Haldane and Kimura— and it doesn’t rule out other forms of variation-biased evolution. The problem is that this is not how the argument was understood by generations of evolutionary biologists from Fisher and Huxley to Gould and Maynard Smith. Instead, it was understood to be a very general claim.

Think about it this way. Theoretical arguments like this often have a 3-part structure:

  • the set-up: a problem-statement or question that frames the issue and establishes context, possibly with some problematic assumptions
  • the analysis: an analytical core with some modeling or equations
  • the take-away: a conclusion that maps the analysis to the problem, answering the framing question

The analytical core is rarely the problem. If you go back over the examples above and ask how the issue is framed, it is often framed in terms of a very general question like what determines the direction or general trend of evolution (Haldane), or what is the status of internalism (Gould), or could a trend be caused by mutation instead of selection (Huxley), or what is the potential for developmental effects on the production of variation to influence evolution (Maynard Smith, et al). Fisher’s argument quoted above is explicitly general, referring to any theory that attempts to explain evolutionary tendencies by reference to “physiological mechanisms controlling the occurrence of mutations.” He is not just rejecting evolution by mutation pressure or a specific theory labeled “mutationism” or “orthogenesis.” Fisher says that researchers who understand how population genetics works will stay focused on selection and not on how the mutations happen, because that is irrelevant.

For instance, a discussion of how oxidative deamination and repair contribute to CpG bias is clearly a discussion of physiological mechanisms controlling the occurrence of mutations, and therefore is irrelevant to evolution according to Fisher’s argument. To cite a concrete example, the study by Storz, et al. (2019) of the role of CpG bias in altitude adaptation by changes in hemoglobin genes violates Fisher’s guidance because the authors directed their evolutionary inquiry toward the possible causes which influence mutations. Fisher’s argument is explicitly a general argument that applies to any considerations of what determines the occurrence of mutations, that is precisely how generations of evolutionary thinkers understood Fisher’s argument, and that is precisely the basis for concluding firmly that Fisher’s argument is mistaken.

Synopsis

The opposing pressures argument says that, because mutation rates are small, mutation is a weak pressure, and this rules out a possible role for mutational and developmental effects in determining evolutionary tendencies or directions. The argument first appeared in writings of Haldane and Fisher, and was repeated by leading thinkers throughout the 20th century, e.g., emerging in the evo-devo dispute of the 1980s.

The analytical core of the opposing pressures argument is not the problem. The analytical core says that evolution by mutation pressure would require high mutation rates unopposed by selection. The fallacy is to use this analytical core as the basis for a general conclusion about the status of internalism, the sources of direction in evolution, or the potential for variational biases to impose dispositions.

Why would generations of evolutionary thinkers assume that an argument about mutation pressure is an adequate basis for making such broad conclusions, ignoring the introduction process? That’s a story for another day, but the short answer is that, for the people thinking analytically about causation, the introduction process did not exist. For them, the Modern Synthesis had reduced evolution to quasi-deterministic shifts in frequencies of genes in the gene pool. New mutations aren’t involved. The population is a point being pushed around by forces in a space of non-zero allele frequencies. Mass-action pressures are the only effective sources of direction in this kind of system.

References

Popov I. 2009. The problem of constraints on variation, from Darwin to the present. Ludus Vitalis 17:201-220.

Ulett MA. 2014. Making the case for orthogenesis: The popularization of definitely directed evolution (1890–1926). Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 45:124-132.

Index to bad takes

Unfamiliar ideas are often mis-identified and mis-characterized. It takes time for a new idea to be sufficiently familiar that it can be debated meaningfully. We look forward to those more meaningful debates. Until then, fending off bad takes is the order of the day!

This is a series of mostly short pieces focusing on bad takes on the topic of biases in the introduction of variation, covering both the theory and the evidence.

Bad takes #7: requires sign epistasis

Unfamiliar ideas are often mis-identified and mis-characterized. It takes time for a new idea to be sufficiently familiar that it can be debated meaningfully. We look forward to those more meaningful debates. Until then, fending off bad takes is the order of the day! See the Bad Takes Index.

Svensson (here or here) has repeatedly asserted that the effect of biases in the introduction process requires reciprocal sign epistasis, with the implication that this makes the effect unlikely in nature.

Epistasis is inescapably relevant when considering extended adaptive walks on realistic fitness landscapes, but sign epistasis is certainly not a requirement for effects of biases in the introduction process. This is an invention of Svensson, not listed as a requirement in any of the published works of scientists developing theory on this topic. For instance, Rokyta, et al. (2005) have no epistasis in their model of 1-step adaptation. Gomez, et al. (2020) present a staircase model of fitness without sign epistasis.

In some cases, the fitness landscape is specified by an empirical model, e.g., the “arrival of the frequent” model of Schaper and Louis (2014) simply uses the genotype-phenotype map for RNA folds that emerges from RNA folding algorithms. Cano and Payne (2020) use empirical fitness landscapes of transcription-factor binding sites, which typically have a small number of peaks. Whatever the degree of epistasis found on these landscapes, it is the naturally occurring degree for this kind of landscape.

The superficial plausibility of Svensson’s fabrication arises from the fact that reciprocal sign epistasis is indeed a feature of the original computer simulations of Yampolsky and Stoltzfus (2001). Why is this feature present?

In the original model, the initial ab population can evolve either to Ab or to aB, but further evolution to AB does not occur because AB is less fit (i.e., in the figure, t is less than s1 or s2). This means that the change from a to A is beneficial in the b background, but deleterious in the B background. When one allelic substitution reverses the effect of another, this is called sign epistasis, and when the effect goes both ways, that is reciprocal sign epistasis.

Remember, the Yampolsky-Stoltzfus model was designed to be the simplest model to prove a point, so it is a model of one-step adaptation with two options: up and to the left, or up and to the right. Thus, we could have avoided sign epistasis by stipulating that the left and right options are mutually exclusive, e.g., we could have stipulated that the initial population has T at a specific nucleotide site, and that the left and right options are C (transition) and A (transversion). The behavior of such a model would be almost identical to the original. Or we could have stipulated (1) an infinite landscape where each derived genotype has a similar left-right choice with no epistasis, but (2) we are only going to look at the first step.
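
To make the first alternative concrete, here is a minimal Monte Carlo sketch of the epistasis-free variant in an origin-fixation regime (all parameter values, and the use of Kimura’s fixation probability, are my illustrative choices):

```python
import math, random

random.seed(1)

def p_fix(s, N=1000):
    """Kimura's fixation probability for a new haploid mutant with benefit s."""
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * N * s))

# From T, the transition (to C) is B-fold mutationally favored, while the
# transversion (to A) has a K-fold larger selection coefficient.
B, K, s = 10.0, 2.0, 0.01
wins = {"C": 0, "A": 0}
for _ in range(20_000):
    while True:  # introduce mutations until one of them fixes
        kind = "C" if random.random() < B / (B + 1.0) else "A"
        if random.random() < p_fix(s if kind == "C" else K * s):
            wins[kind] += 1
            break

print(wins, f'C:A ratio = {wins["C"] / wins["A"]:.2f} (B/K = {B / K:.1f})')
```

With no epistasis anywhere in sight, the outcome is biased toward the mutationally favored option by roughly B/K, just as in the original model.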

More generally, the way to understand this issue more deeply is to contrast two kinds of scenarios: (1) the idealistic scenario in which the evolving system proceeds to the fitness optimum at equilibrium, in infinite time, so that biases in introduction have no final effect, and (2) everything else, i.e., non-idealistic scenarios in which multiple outcomes are possible, and the choice might reflect biases in the introduction of variation.

Many conditions lead to models of the second type, i.e., realistic models (for a larger discussion, see here). For instance, ending evolution with the first beneficial change is the actual model used recently by Cano, et al. (2021), and it corresponds to the natural scenario of antibiotic resistance evolution explored empirically by Payne, et al. (2019). Resistant M. tuberculosis isolates emerge, and they are isolated and analyzed, without waiting for some long-term process of adaptive optimization to take place. They are isolated and analyzed by virtue of having evolved resistance, not by virtue of having reached a global fitness optimum for resistance.

More generally, we could stipulate that the space is large compared to the amount of time to explore it, so that kinetic biases influencing the early steps are important. In the antibiotic resistance scenario, one is literally looking at the first step in evolution, because that is the step that counts. For instance, one could simply posit an infinite space and compare the rates of two origin-fixation processes that differ due to a mutation bias, e.g., GC-increasing or GC-decreasing changes in an infinitely long genome. No sign epistasis is required, and the effect would apply even given completely additive effects.

In a more finite model in which the system has time to explore the landscape of possibilities, sign epistasis, diminishing-returns epistasis, and other kinds of frustration can have the effect of locking in consequences of initial steps that are subject to kinetic biases. Such effects are common for protein-coding regions because, from a given starting codon for 1 type of amino acid, only 4 to 7 other amino acids — not all 19 alternatives — are accessible by a single-nucleotide mutation. Thus, even when the effects of amino acid changes are all additive, the landscape is rough for a protein-coding gene evolving by single-nucleotide mutations, so that biases in the introduction process can influence the final state of the system. This effect is evident in the theoretical study by Stoltzfus (2006).
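
The 4-to-7 figure is easy to verify directly from the standard genetic code; here is a short script (mine, for illustration) that counts, for each sense codon, the distinct non-synonymous, non-stop amino acids reachable by one nucleotide change:

```python
# Standard genetic code, codons enumerated in TCAG order; * marks stop codons.
bases = "TCAG"
aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = [x + y + z for x in bases for y in bases for z in bases]
code = dict(zip(codons, aa_string))

def reachable(codon):
    """Amino acids (other than the current one, excluding stops) accessible
    from `codon` by a single nucleotide substitution."""
    found = set()
    for i in range(3):
        for b in bases:
            if b != codon[i]:
                aa = code[codon[:i] + b + codon[i + 1:]]
                if aa != "*" and aa != code[codon]:
                    found.add(aa)
    return found

counts = [len(reachable(c)) for c in codons if code[c] != "*"]
print(min(counts), max(counts))  # 4 7: each codon reaches only 4 to 7 of the 19
```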

In summary, the true relevance of sign epistasis to understanding the efficacy of biases in the introduction process is roughly as follows. When you haven’t got far from where you started, your path depends a lot on your first steps. So, kinetic biases reflecting what is mutationally likely are going to be relevant to understanding the path of an evolving system when the amount of change is small relative to the size of the space to be explored. When evolution has plenty of time to explore a small finite landscape, as in the Yampolsky-Stoltzfus model, we can still see consequences of kinetic bias in the case where epistasis has the effect of locking in these consequences.

Bad takes #6: requires “drift in small populations”

Unfamiliar ideas are often mis-identified and mis-characterized. It takes time for a new idea to be sufficiently familiar that it can be debated meaningfully. We look forward to those more meaningful debates. Until then, fending off bad takes is the order of the day! See the Bad Takes Index.

Svensson (here or here) has repeatedly asserted that the effect of biases in the introduction process requires “drift in small populations,” citing Lynch (2007) as a source.

However, like the fake requirement for reciprocal sign epistasis, this is not an actual requirement, but emerges from Svensson’s tendentious misinterpretation of sources.

For instance, consider a large population with strongly beneficial variants that we will designate as “left” and “right” introduced at rare intervals, i.e., uN is small. Assume that fitness favors going right by virtue of a K-fold higher selection coefficient, but mutation favors going left with a bias of magnitude B. This is roughly the same set-up as the Yampolsky-Stoltzfus model, i.e., 1-step adaptation with 2 beneficial options, left (more mutationally likely) and right (more strongly beneficial).

First, suppose that there is drift. Drift only affects the chance of fixation vs. loss (not the mutational dynamics), and the probability of fixation for strongly beneficial alleles is hardly affected at all by population size. Using Kimura’s formula, the chance of fixation for an allele with fitness benefit s = 0.02 for haploid populations ranging from N = 10^3 to N = 10^9 is the same to six digits, namely 0.0392106. So, if we are in the origin-fixation regime, i.e., ignoring clonal interference, the evolutionary bias toward going left is still roughly B/K as shown by Yampolsky and Stoltzfus, and population size hardly matters.
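
For anyone who wants to reproduce the six-digit claim, here is the computation, using Kimura’s fixation probability for a new haploid mutant (the function is a straightforward transcription of the formula; nothing else is assumed):

```python
import math

def p_fix(s, N):
    """Kimura's fixation probability for a new mutant with advantage s in a
    haploid population of size N (initial frequency 1/N)."""
    return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-2.0 * N * s))

s = 0.02
for exponent in range(3, 10):  # N from 10^3 to 10^9
    print(f"N = 10^{exponent}: {p_fix(s, 10 ** exponent):.7f}")
# Every line prints 0.0392106: with 2Ns >> 1 the denominator is ~1, so
# p_fix ~ 1 - exp(-2s), which does not depend on N at all.
```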

Now, let us suppose that there is no drift: the population mutates stochastically but reproduces deterministically. This means that, ignoring clonal interference, the first mutation to occur is assured of fixation regardless of the size of the fitness benefit, and it will proceed deterministically to fixation. Because the mutationally favored option (the left option) has a B-fold chance of happening first, there is a B-fold bias favoring the left option, and there is no dependence on K because (in this artificial scenario) beneficial mutations are fixed deterministically regardless of the degree of beneficiality.
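
The no-drift case is even simpler to sketch: model the two options as independent mutational processes and ask which arises first (again, B and u are illustrative values of my choosing):

```python
import random

random.seed(2)

# With deterministic reproduction, the first beneficial mutation to occur is
# the one that fixes, so the K-fold fitness advantage of the right option
# never enters. Left mutations arise at rate B*u, right mutations at rate u.
B, u, trials = 5.0, 1e-4, 100_000
left_first = 0
for _ in range(trials):
    t_left = random.expovariate(B * u)  # waiting time for first left mutation
    t_right = random.expovariate(u)     # waiting time for first right mutation
    left_first += t_left < t_right

print(f"P(left first) ~ {left_first / trials:.3f}; B/(B+1) = {B / (B + 1):.3f}")
```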

Clearly, this effect of biases in the introduction process does not depend on drift in small populations.

Apparently Svensson is confused by Lynch (2007), who presents a version of Bulmer’s mutation-selection-drift model and then uses this to make an overly broad claim about the conditions under which mutation will deflect the “direction” of evolution relative to the expectations of adaptation. The critical problem with Lynch’s Manichean view is that it imagines a universe in which evolution has only two possible directions, adaptive and non-adaptive. Thus, one must begin by recognizing that, for Lynch, the efficacy of mutation bias to influence the “direction” of evolution is a matter of finding conditions under which mutation assists in sustaining some non-adaptive state, a motivation utterly unlike that which stimulates the Yampolsky-Stoltzfus model. The dependency on small populations in Lynch’s argument arises from the fixation of a slightly deleterious allele that happens to be favored by mutation: for this reason, it is irrelevant to understanding the Yampolsky-Stoltzfus model, which has no deleterious fixations. For a lengthy explanation, see Bad Takes #2.

Grounding internalism in the introduction process: from pop-gen to evo-devo

(work in progress: emerging manuscript, currently in the form of a persuasive essay)

Abstract

(This abstract is aspirational. The actual manuscript delivers the argument in a different way.)

Historical debates on evolution featured internalist ideas that were considered together with, or in opposition to, an externalist focus on natural selection. The mid-20th-century “Modern Synthesis” put an end to this kind of talk: internalist ideas were formally rejected, either for being incompatible with Mendelian population genetics, or merely for being unnecessary. Though internalist lines of thought re-emerged in the form of neo-structuralism and evo-devo, they are widely understood to evoke or require an alternative form of causation separate from population genetics. Here I explain why this impasse exists and how to remove it. First, I argue that an impasse on causation is genuine, and that it reflects how the received view of causation emerged from the distinctive boundary conditions and change-laws of a shifting-gene-frequencies paradigm in which the alleles relevant to evolution are already present in the initial gene pool, and causation is assigned (by an analogy with statistical physics) only to mass-action forces that shift their frequencies. Second, I describe a body of post-Synthesis theory that departs from this paradigm by representing the introduction of new alleles as a change-making process that is not a mass-action force. I show how an explicit treatment of the introduction of novelty by mutation-and-altered-development leads to direct contradictions with classic reasoning, most importantly the Haldane-Fisher “opposing pressures” argument that appeared to rule out evolutionary tendencies due to internal variational biases. Mutation-biased adaptation is a distinctive prediction of this theory, confirmed by recent empirical work. Third, I explain how this seemingly small modification to our understanding of causation represents a major innovation that allows us to specify a formal population-genetic grounding for the key internalist themes of (1) the evolutionary emergence of intrinsically likely forms, (2) internal taxon-specific dispositions that contribute to recurrent evolution; and (3) directional evolutionary trends that are internal in origin. Finally, I outline some of the initial successes and open challenges of a research program focused on the evolutionary impact of mutational, developmental, and systemic biases in the introduction of variation.

Introduction

An internalist-externalist distinction appears in many contexts. In the context of evolutionary thinking, externalist approaches focus on explaining the outcomes or products of evolution by reference to external conditions, so that, for instance, explanations for changes in some feature would make a linkage with changes in external conditions. Internalist approaches focus on explaining outcomes or products of evolution by reference to internal features so that, for instance, explanations for the emergence of some novel structure would be linked to intrinsic material or dynamic propensities of the evolving system. Internalists and externalists tend to differ not just in how they explain things, but in what they hope to explain: they typically are not just offering contrasting explanations for precisely the same things.

Contemporary internalism is manifested in neo-structuralist arguments about self-organization or findability in the manner of Kauffman (1993); in approaches to molecular evolution that feature mutational explanations (Cano, et al.); in the evolvability research front (Nuño de la Rosa), with its focus on how internal developmental-genetic organization facilitates evolution; and in evo-devo generally. Some of the classic internalist themes that persist into the contemporary literature are (1) the predominance (among the products of evolution) of forms or structures that are intrinsically or structurally likely, (2) taxon-specific evolutionary propensities or dispositions that contribute to recurrent evolution; (3) directional trends that are internal in origin.

For the present purposes, I will simplify this issue of internalism in the following way. Assume that evolution is fundamentally and essentially the result of combining a process of varigenesis, i.e., the generation of variation, and a process of reproductive sorting (resulting in selection and drift), and we are concerned only with the immediate or first-order implications of this combination. The internal factor in evolution is then primarily a matter of the genesis of variation, and the issue of internalism is primarily to understand the role of varigenesis in evolution, with a focus on how to combine variation and reproductive sorting. A key issue is whether varigenesis has a dispositional role, predictably favoring some types of changes or directions, and if so, how this dispositional influence operates, and how strongly it shapes evolution.

Before proceeding further, I want to note that this is a non-trivial simplification both conceptually and historically. For instance, the classic advocates of orthogenesis put their focus on how the genesis of variation influences evolution, but they were not strict internalists, e.g., some considered the environment as an important influence on the genesis of variation (see Ulett, 2014). Likewise, to equate internalism with the first-order effect of variation is to exclude higher-order ideas that figure importantly in internalist discussions, e.g., concepts of feedback through which evolvability evolves. My aim here is not to reduce all of evolution to one theory, nor even to cover all of the themes of internalism, but rather to provide a rigorous grounding for certain classic themes. To get to this point, I need to put some limits on the topic. Stated differently, this is an attempt at reduction or mechanistic justification— specifically an attempt to map certain pre-existing high-level themes to implications of a causal theory— and such reductive arguments typically shift the boundaries of things rather than perfectly mapping old themes to new mechanisms.

For my purposes here, “neo-Darwinism” refers to a view that posits specific and contrasting roles for selection and variation: selection is the potter and variation is the clay. In Darwin’s original theory, the process of indefinite variability (the noise-like environmental variation that Darwin relied on, later called fluctuation) merely supplies the raw materials that selection shapes into adaptations. Darwin said that variation follows immutable laws, but that these laws “bear no relation” to the structures built by natural selection. In this conception, variation is not dispositional. Selection is creative and imposes shape and direction, while variation merely supplies raw materials. In this view, “the ultimate source of explanation in biology is the principle of natural selection” (Ayala, 1970).

Although the neo-Darwinian combination of variation and selection— an unequal marriage in which one partner makes all the decisions and gets all the credit— has dominated evolutionary thinking, alternative views posit other roles for the process of variation, assigning it some leverage in influencing the course of evolution. A concise list of notional theories of the role of variation that have been most important in evolutionary discourse would look something like this, in chronological order:

  1. Variations emerge adaptively by effort, and are preserved, as per Lamarck
  2. Variation supplies in each generation the indefinite raw materials that selection shapes into adaptations, as per Darwin.
  3. The constrained generation of variation sets limits on the choices available to selection, as per Eimer (1898) or Oster and Alberch (1982)
  4. Mutation pressure drives alleles to prominence (against the opposing pressure of selection) under neutrality or high mutation rates, per Haldane (1927)
  5. New quantitative variation (M) contributes to standing variation (G) which, together with selection differentials (β), jointly determines (as Δz̄ = Gβ) the short-term rate and direction of multivariate change in quantitative characters (Lande and Arnold, 1983; see note 5)

The first 3 theories are essentially folk theories, whereas the latter 2 are formalized. Theories of “orthogenesis” (perpetually mischaracterized by Darwin’s followers) from Eimer, Cope and others fall into the 3rd category, focusing on how the generation of variation influences evolution, and advocates of this kind of thinking considered both internal and external influences on the origin of variation (see Ulett, 2014).

The mutation pressure theory is explained below.

The formalization of evolutionary quantitative genetics (EQG) per Lande and Arnold (1983) is clearly an outgrowth of neo-Darwinian thinking, but the behavior of this formalism diverges significantly from selection and variation as the potter and the clay. The meaning of the master equation Δz̄ = Gβ is that the trajectories of change for all variable traits are linked together (in a somewhat springy way) by their variational correlation structure, where G is the structured factor that represents the correlations in standing variation. However, G is standing variation, not varigenesis (M). Therefore the relation of selection and varigenesis in this theory is complex and indirect. See note 5 for more explanation.
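
A tiny numerical example (hypothetical values of my choosing) shows the springy linkage: selection acts on trait 1 only, yet trait 2 responds because of the covariance in G.

```python
import numpy as np

G = np.array([[1.0, 0.8],
              [0.8, 1.0]])   # additive genetic variances and covariance
beta = np.array([0.5, 0.0])  # selection gradient: directional selection on trait 1 only

dz = G @ beta                # predicted per-generation response, dz = G * beta
print(dz)                    # [0.5 0.4]: trait 2 is dragged along by trait 1
```

Note that the response Δz̄ here is shaped by standing variation (G), not by the generation of new variation (M), which is exactly the sense in which the role of varigenesis in this theory is indirect.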

Given how I have defined the problem of internalism in terms of the role of varigenesis, the issue of grounding internalism in causal theories is a matter of having a complete causal theory that specifies the kind of dispositional role of variation that makes sense of internalist themes.

In contemporary evolutionary discourse, there is clearly a discordance between the kinds of claims that internalist thinkers would like to support, and the kinds of claims with a vera causa status recognized by evolutionary geneticists. This discordance is reflected in (1) skepticism and dismissal from evolutionary geneticists, e.g., the way that Lynch (2007) dismisses basically all of evo-devo and the evolvability research front as speculation and loose talk, or the way that Wray, et al. (2014) refer to a “lack of evidence” for developmental bias, and likewise the way that Houle, et al. (2017) take an attitude of extreme skepticism, denying that an influence of mutational variability on adaptation has been demonstrated in a way that fits with any known causal theory; and (2) the longstanding complaint from developmentalists about being left out of the “Synthesis,” which leads to the causal completeness argument (Amundson, 2005) and lineage explanation (Calcott, 2009), and to various calls for reform that emphasize the supposed limitations of population genetics.

Here I argue that we can specify a much broader and more complete grounding for internalism by adding (to the list above) a 6th theory about variation that is new and has not yet played a meaningful role in evolutionary discourse:

  • Mutational and developmental biases in the introduction of variation impose kinetic biases on evolution by a first come, first served logic, without requiring neutrality or high mutation rates (Yampolsky and Stoltzfus, 2001)

To understand how to use this theory to specify a broad causal grounding for internalist concerns, we must remove a series of obstacles, beginning with a major historic error in the use of population-genetic reasoning — an error whose effects reverberate today — concerning a possible causal link between internal tendencies of variation and tendencies of evolution.

The Haldane-Fisher argument and the SGFT

Futuyma (1988) attributes 3 remarkable accomplishments to an “Evolutionary Synthesis” of the 20th century: re-establishing neo-Darwinism on a Mendelian basis, sweeping away all rival theories, and providing a common framework for scientists in various disciplines to address evolution.

How were rival theories rejected? In an argument repeatedly cited by leading thinkers, Haldane (1927) and later Fisher (1930) concluded that mutation is a weak force unable to overcome the opposing pressure of selection, important only when selection is absent or when mutation rates are abnormally high (for more detail, see here). The argument was understood to mean that, given the observed smallness of mutation rates (and the equally well recognized pervasiveness of selection on visible traits), internalist theories relating tendencies of evolution to tendencies of variation are incompatible with population genetics, e.g., Gould (2002) writes as follows, citing Fisher (1930):

“Since orthogenesis can only operate when mutation pressure becomes high enough to act as an agent of evolutionary change, empirical data on low mutation rates sound the death-knell of internalism.” (p. 510)

Gould (2002)

This is a formal argument with a clearly recognizable logic, the effect of which is to reject an entire class of internalist theories, with no need for difficult experiments or time-consuming analyses of data! Accordingly, Provine (1978), in “The role of mathematical population geneticists in the evolutionary synthesis of the 1930s and 1940s,” identifies this argument as a key theoretical claim (see also Stoltzfus, 2017, 2019).

“For mutations to dominate the trend of evolution it is thus necessary to postulate mutation rates immensely greater than those which are known to occur.” “The whole group of theories which ascribe to hypothetical physiological mechanisms, controlling the occurrence of mutations, a power of directing the course of evolution, must be set aside, once the blending theory of inheritance is abandoned. The sole surviving theory is that of Natural Selection” (Fisher, 1930)

“For no rate of hereditary change hitherto observed in nature would have any evolutionary effect in the teeth of even the slightest degree of adverse selection. Either mutation-rates many times higher than any as yet detected must be sometimes operative, or else the observed results can be far better accounted for by selection.” (p. 56 of Huxley, 1942)

“If ever it could have been thought that mutation is important in the control of evolution, it is impossible to think so now, for not only do we observe it to be so rare that it cannot compete with the forces of selection but we know this must inevitably be so.” (p. 361 of Ford, 1971)

[Figure legend: Some leading thinkers who invoked the Haldane-Fisher opposing pressures argument (clockwise from left: Haldane, Fisher, Huxley, Mayr, Simpson, Ford, Wright). ]

But the argument is wrong.

An evolutionary process that depends on events of mutation that introduce new alleles is subject to biases in mutational introduction, by a first come, first served dynamic that Haldane and Fisher did not address in their arguments about mutation pressure (see note 8).

The flaw in the Haldane-Fisher argument arises from the assumption that evolution can be treated as a process of shifting the frequencies of alleles in an initial “gene pool,” without events of mutation that introduce new alleles. New mutations have to be involved somewhere in evolution, of course, but they don’t have to be directly involved: if all the relevant mutations happened in the past, and the corresponding variant alleles are present in the gene pool at frequencies resistant to random loss, then we don’t need to address new mutations to understand evolutionary dynamics, which would follow merely from shifting gene frequencies.

In this way, the shifting-gene-frequencies theory (SGFT) posits that evolution can be understood as a shift from an initial multi-locus distribution of allele frequencies, to a final distribution of frequencies for the same alleles. This is what “evolution is shifting gene frequencies” meant for modeling, in practice.

Note that this is a Newtonian framework for theorizing about dynamics. There is an initial state (a set of alleles with their initial frequencies), a set of boundary conditions (the frequencies may range between 0 and 1), and a set of change-laws that govern the dynamics, specifying how frequencies are shifted by mutation, selection, drift, migration and (in the multilocus context) recombination. Although the change-laws were often treated deterministically, stochastic versions also emerged.

Were the classic works of theoretical population genetics really built on this narrow foundation? Why isn’t this problem discussed more broadly? I’m not sure why this issue is not a primary focus of reformists, but certainly the issue has been noticed and remarked upon, and not just by non-conformists like myself or Nei (2014). Below are 3 independent sources authored by eminent evolutionary geneticists that note precisely this same restriction in classical theoretical population genetics:

“The process of adaptation occurs on two timescales. In the short term, natural selection merely sorts the variation already present in a population, whereas in the longer term genotypes quite different from any that were initially present evolve through the cumulation of new mutations. The first process is described by the mathematical theory of population genetics. However, this theory begins by defining a fixed set of genotypes and cannot provide a satisfactory analysis of the second process because it does not permit any genuinely new type to arise. ” (Yedid and Bell, 2002)

“Almost every theoretical model in population genetics can be classified into one of two major types.  In one type of model, mutations with stipulated selective effects are assumed to be present in the population as an initial condition . . . The second major type of models [the origin-fixation type] does allow mutations to occur at random intervals of time, but the mutations are assumed to be selectively neutral or nearly neutral.” (Hartl and Taubes, 1998)

“We call short-term evolution the process by which natural selection, combined with reproduction . . ., changes the relative frequencies among a fixed set of genotypes, resulting in a stable equilibrium, a cycle, or even chaotic behavior. Long-term evolution is the process of trial and error whereby the mutations that occur are tested, and if successful, invade the population, renewing the process of short-term evolution toward a new stable equilibrium, cycle, or state of chaos.” (p. 182). “Since the time of Fisher, an implicit working assumption in the quantitative study of evolutionary dynamics is that qualitative laws governing long-term evolution can be extrapolated from results obtained for the short-term process. We maintain that this extrapolation is not accurate.  The two processes are qualitatively different from each other.” (Eshel and Feldman, 2001, p. 163)

All three quotations suggest the same two things: (1) the SGFT was a limiting paradigm in mainstream 20th-century theoretical population-genetics, and (2) as the 20th century closed, this limiting paradigm was breaking down.

The breakdown started, perhaps, with origin-fixation models in 1969 (see McCandlish and Stoltzfus, 2014). For many years, these models were used primarily with neutral or slightly deleterious mutations: this explains the distinctive second category of Hartl and Taubes above. Eventually, SSWM models of adaptation (which overlap in meaning with origin-fixation models) emerged from Gillespie, and became the basis of the minor renaissance in modeling adaptation by Orr and others in the 1990s. That is, theoreticians have moved beyond the SGFT and embraced models of what is sometimes called the “lucky mutant” view or “mutation-driven” evolution, to such a degree that these models now represent a major branch of theory with diverse applications (see McCandlish and Stoltzfus, 2014; Tenaillon, 2014) (see note 9).

But the SGFT was influential in the past, and remains cryptically influential today. Michod (1981) identifies a shifting-gene-frequencies paradigm as the “hard core” of a research program per Lakatos:

(a) The Hard Core

The basic elements of Lakatos’s model are all clearly identifiable within the population genetics research programme. For the population geneticist, the common denominator of all evolutionary forces is their effects on gene frequencies. In other words, gene frequency changes are evolution. This proposition, the hard core of population genetics, is best summarised by Sewall Wright in the conclusion to volume II of his treatise (Wright [1969], p. 472): 

“. . the species is thought of as located at a point in gene frequency space. Evolution consists of movement in this space.”

 This point of view is the basis of the population genetics approach to evolution. This is as true today as it was during the synthesis of the 1920s and 30s.

Michod is correct to identify this as a paradigm, because of the way it defines a broad and powerful perspective on how to think about the problems of evolution, answering basic questions that otherwise might be very hard to answer, and which might be answered quite differently by scientists working on evolution from different perspectives:

  • What is evolution? How do I know if it has happened?
  • Where does evolution take place? What is the causal locale?
  • How do I model evolution? What is the field or state-space?
  • What are the causes of evolution? How do I quantify them and weigh their importance in evolution?
  • How do I study evolutionary causes?

Certainly there were no agreed-upon answers to these questions prior to the Synthesis era. The distinctive answers suggested by the shifting-gene-frequencies paradigm shaped the Synthesis movement:

What is evolution? How do I know if it has happened? Evolution is shifting gene frequencies. Evolution has happened if there has been a shift in gene frequencies at the population level. A single event of birth or death is not evolution, and likewise, an event of mutation or recombination is not evolution. Instead, evolution has happened if there has been some significant shift in allele frequencies. We can argue ad nauseam about what “significant” means in this context, but this “how much X is enough?” question is trivial compared to the primary decision that X— the thing whose size we are going to argue about— is a shift in frequencies and not something else.

Where does evolution take place? Evolution takes place in populations because populations are cohesive entities with allele frequencies. Individuals do not have allele frequencies. Species have allele frequencies, but only because they exist as populations (one or more) with allele frequencies.

How do I model evolution? What is the field or state-space? As Wright (above) suggests, “the species is thought of as located at a point in gene frequency space. Evolution consists of movement in this space.” A model of evolution represents the evolving thing, the population, as a point moving in its state-space of allele frequencies under the action of the forces (see note 10).

What are the causes of evolution? How do I quantify them and weigh their importance? The causes of evolution are the processes that cause shifts in allele frequencies, in units of frequency change over time. The forces that cause larger shifts are, by definition, stronger forces.

How do I study evolutionary causes? The only direct way to study evolutionary causes is to adopt the approach of population genetics, i.e., focusing on populations undergoing changes in allele frequencies, to assess what is causing those changes.

Some of the guidance provided by this paradigm turned into explicit dogma (e.g., the causes of evolution are forces that shift frequencies), and some of it was established more in the form of hidden assumptions or soft prejudices.

Considered more as a falsifiable claim than as a paradigm, the shifting-gene-frequencies theory asserts that we can understand evolution in nature adequately as a shift from one frequency distribution to another, so that any time-course of evolution can be represented as a trajectory in a continuous allele-frequency space.

[Figure legend: The shifting gene frequencies theory (SGFT). In the SGFT, adaptation — understood as a smooth shift in trait distributions (left) — is attributed to simultaneous shifts in the frequencies of alleles at multiple loci (middle), each with a small phenotypic effect. Formally, the population is a point in the topological interior of an allele-frequency space, i.e., the space of non-zero frequencies, and evolution is movement in this interior space (right). Thus, the forces of evolution are processes that can shift the population in this interior space. By contrast, the introduction process jumps a population from a surface into the interior where reproductive sorting processes (selection and drift) operate.]

Within the SGFT, the forces of evolution are the biological processes that move the system in its state-space, i.e., the processes that shift frequencies. The ability to shift frequencies is obviously the measure of strength for a force: a biological process that causes larger shifts is necessarily a stronger force. Selection, being the strongest force, tends to dominate the process of shifting gene frequencies, i.e., it dominates the course of evolution.

What is the role of mutation in this theory?

In the process of shifting from an old to a new multi-locus frequency distribution, mutation pressure merely shifts the relative frequencies of pre-existing alleles. Because mutation rates are so small, these shifts are tiny in comparison to effects of selection and (typically) drift. Thus, the argument of Haldane and Fisher makes perfect sense within the SGFT.
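
The disparity is easy to quantify with illustrative numbers (mine, not from any source):

```python
# Per-generation shift in the frequency of an allele at p = 0.5, comparing the
# pressure of recurrent mutation with the pressure of selection (haploid case).
p, u, s = 0.5, 1e-6, 0.01

dp_mutation = u * (1.0 - p)       # recurrent mutation toward the allele
dp_selection = s * p * (1.0 - p)  # selection favoring the allele (approx.)

print(f"mutation:  {dp_mutation:.1e}")   # 5.0e-07
print(f"selection: {dp_selection:.1e}")  # 2.5e-03, several thousand-fold larger
```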

That is, the Haldane-Fisher argument is both a fallacy (in a broader context) and, at the same time, the correctly derived implication of the SGFT: if evolution can be adequately understood merely as shifting gene frequencies, then mutation is indeed a weak force, unimportant unless selection is absent (neutral characters) or the rate of mutation is unusually large.

[Figure legend: The conclusion of Haldane (1927). ]

That is, mutation is a “weak force” in classical population-genetic thinking because the SGFT does not cover the novelty-introducing aspect of mutation. In effect, this aspect of mutation is treated as a background condition, rather than as a change-making causal process with explicit dynamics (see note 1). When a “gene pool” with pre-existing variation is assumed, the novelty-introducing role of mutation is absorbed into this assumption as a background condition: the introduction process is literally not part of “evolution” (shifting gene frequencies), but happens implicitly, before “evolution” gets started.

This theory makes mutation pressure largely irrelevant to modeling evolutionary change. This is why Lewontin (1974) says “There is virtually no qualitative or gross quantitative conclusion about the genetic structure of populations in deterministic theory that is sensitive to small values of migration, or any that depends on mutation rates.” The treatment of theoretical population genetics by Edwards (1977), shown in the image below, has hundreds of equations, but no terms for mutation. The word “mutation” appears only once in the entire book, on page 3, where the author says “All genes will be assumed stable, and mutation will not be taken into account.”

Note that the SGFT does not imply or suggest that new mutations never happen. Haldane, Dobzhansky and others stated explicitly that evolution ultimately would grind to a halt without new mutations. Instead, the verbal theory of the SGFT says that, even though mutations are ultimately necessary, they are not immediately necessary, i.e., they are not directly involved, because the “gene pool” acts as a dynamic buffer, maintaining variation so that there is always abundant material for selection to respond to a change in conditions.

The popularity of the SGFT was driven partly by the sense that adaptation would be too slow if it involved waiting for the right mutation, instead of beginning with an abundant gene pool (e.g., this is particularly emphasized in Wright’s 1932 paper). Before about 1940, when the age of the earth was established at 4000 MY instead of 20 MY or 200 MY, evolutionary biologists were particularly motivated by the need to establish that the process of evolutionary adaptation was fast enough to explain observed levels of adaptation and diversity. We don’t think like this anymore, but a century ago, a theory of evolution that made adaptation fast was considered, for that reason alone, to be a better theory.

In addition, the SGFT was regarded as experimentally validated, a known mechanism. The experimental touchstone for the SGFT was Castle’s famous experiment with hooded rats (see Provine, 1971). Johannsen had already proven that selection is effective in sorting out true-breeding Mendelian types, but Castle and his colleagues showed something quite different. They started with a population of mottled black-and-white rats, and bred nearly-all-white and nearly-all-black populations by selection in just 20 generations, not enough time for new mutations to play any appreciable role. This proved that selection could create “new types” (Provine) or “wholly new grades” (Castle) without the involvement of mutation, simply by shifting gene frequencies.

Finally, the SGFT provided a rhetorical foundation for Darwin’s followers to reject mutationism in the sense of “mutation proposes, selection disposes” (decides), a non-Darwinian theory distinct from their gradualist conception of evolution by the shifting and blending of abundant infinitesimals. The mutationist conception of evolution as a 2-step mutation-fixation process — the “lucky mutant” theory formalized in 1969 in origin-fixation models — is common today (see The shift to mutationism is documented in our language). However, the architects of the Modern Synthesis called on the SGFT to argue against the lucky mutant view (for more detail, see When Darwinian Adaptation is neither). That is, even though the SGFT was a speculative theory of unknown realism, the architects of the Modern Synthesis convinced themselves that the theory was firmly established, and they conveyed this attitude of certainty to their readers, e.g.,

 “Novelty does not arise because of unique mutations or other genetic changes that appear spontaneously and randomly in populations, regardless of their environment. Selection pressure for it is generated by the appearance of novel challenges presented by the environment and by the ability of certain populations to meet such challenges.” (Stebbins, 1982, p. 160)

“It is most important to clear up first some misconceptions still held by a few, not familiar with modern genetics:  (1) Evolution is not primarily a genetic event.  Mutation merely supplies the gene  pool with genetic variation; it is selection that induces evolutionary change.” (p. 613 of Mayr, 1963)

This commitment continued to echo for decades in the notion that evolution does not depend on new mutations, a doctrine repeated in textbooks, e.g.,

“In practically all populations, however, the role of new mutations is not of immediate significance” (p. 464)

Strickberger MW. 1990. Evolution. Boston: Jones and Bartlett Publishers.

Thus, the SGFT was not merely a modeling convention — it was not just a technique used by mathematicians to make the equations easy to solve. Instead, the formal models and the conception of forces as mass-action pressures came together with a verbal theory about how evolution actually works in nature, and this integrated theory provided a basis to reject, not just orthogenesis, but mutationism in the sense of evolution via new mutations, i.e., mutation proposes, selection disposes (decides).

Even more broadly, the SGFT underlies the grand Synthesis claims noted earlier (Futuyma, 1988): restoring neo-Darwinism, sweeping away all rivals, and providing a unified framework for scientists in various disciplines to address evolution.

Yet, evolution in nature does not have to follow the SGFT. As stated earlier, an evolutionary process that depends on events of introduction — events of mutation that introduce a new allele, or events of mutation-and-altered-development that introduce a new phenotype — is subject to biases in the introduction process, by a simple “first come, first served” logic.

The logic of this theory was demonstrated by Yampolsky and Stoltzfus (2001) using a population-genetic model with 2 loci and 2 alleles. From the starting ab population, mutations with rates u1 and u2 introduce the beneficial genotypes Ab or aB, with a mutation bias favoring aB with magnitude B = u2 / u1 and with a greater fitness advantage (here, 2-fold) favoring Ab. The lines in the plot below all go up from left to right, indicating that the bias in outcomes (frequency of evolving aB relative to Ab) increases with the bias in mutation. The smaller populations show the degree of bias expected under origin-fixation dynamics (dashed line).
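The sketch below is a minimal Wright-Fisher implementation of this logic (not the published code; population size, rates, and fitness values are illustrative). From an ab population, the beneficial genotypes compete: Ab has the 2-fold greater fitness advantage, aB the B-fold greater mutation rate, and the frequency with which the mutationally favored genotype prevails rises with B.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve_once(N=500, u1=1e-4, B=1.0, s_Ab=0.02, s_aB=0.01):
    """Wright-Fisher dynamics; returns which beneficial genotype fixes first."""
    u2 = B * u1                                   # mutation bias B = u2/u1 favors aB
    counts = {"ab": N, "Ab": 0, "aB": 0}
    w = {"ab": 1.0, "Ab": 1.0 + s_Ab, "aB": 1.0 + s_aB}   # selection favors Ab
    while counts["Ab"] < N and counts["aB"] < N:
        # mutation: ab individuals convert to Ab (rate u1) or aB (rate u2)
        new_Ab = rng.binomial(counts["ab"], u1)
        new_aB = rng.binomial(counts["ab"] - new_Ab, u2)
        counts["ab"] -= new_Ab + new_aB
        counts["Ab"] += new_Ab
        counts["aB"] += new_aB
        # selection + drift: multinomial resampling weighted by fitness
        types = ["ab", "Ab", "aB"]
        weights = np.array([counts[t] * w[t] for t in types], dtype=float)
        counts = dict(zip(types, rng.multinomial(N, weights / weights.sum())))
    return "Ab" if counts["Ab"] == N else "aB"

for B in (1, 4, 16, 64):
    wins = sum(evolve_once(B=B) == "aB" for _ in range(100))
    print(f"B = {B:2d}: aB prevailed in {wins}/100 replicate populations")
```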

A distinctive prediction of this theory is that the influence of mutation biases does not require neutrality or high mutation rates (contra Haldane 1927), but will emerge (under the right conditions) from biases in ordinary types of nucleotide mutations, e.g., transition-transversion bias. This effect has been demonstrated conclusively in the past few years, both in laboratory adaptation and in cases of natural adaptation (in diverse taxa) traced to the molecular level (for review, see Gomez, et al. 2020 or Stoltzfus, 2019).

[Figure legend: The observed transition-transversion ratio among parallel adaptive changes is significantly higher than the null 1:2 ratio (Stoltzfus and McCandlish, 2017). This pattern is consistent with the theory of biases in the introduction process, but not with the mutation pressure theory of Haldane and Fisher.]

Thus, a causal link between tendencies of variation and tendencies of evolution is theoretically possible and is actually observed. This result refutes a key argument from the mid-20th-century orthodoxy: internalist theories that attempt to link evolutionary tendencies to internal tendencies of variation are not inherently incompatible with Mendelian population genetics, but only with the SGFT (see note 2).

Repercussions

So far, we have established that the Haldane-Fisher argument is unsound theoretically, and that its conclusion is contradicted empirically. Haldane’s (1927) conclusion, even when considered narrowly, does not provide correct guidance for reasoning about evolution, e.g., when we see mutational patterns in molecular evolution, we cannot assume that this must reflect high mutation rates or neutral evolution. And the broad application of the Haldane-Fisher argument as a cudgel against internalism is crazy wrong.

Yet, in regard to the structure of evolutionary thought, much intellectual work will be required to reverse the damage done by this influential fallacy. Evolutionary discourse has proceeded through a century of theory development and exploratory thinking subject to the constraints that (1) a workable theory of biases in the introduction process was unknown to its major participants, and (2) the Haldane-Fisher argument placed a large “Do Not Enter” sign on the door leading to internalist thinking. This is a disturbing thought.


This limitation was not known, for instance, in the 1980s, when the Modern Synthesis was being challenged on various fronts (molecular evolution, macroevolution, evo-devo), and reformers were exploring new ways of thinking. Gould and Lewontin did not know it in 1979, when they wrote their famous critique of adaptationist thinking. Maynard Smith, et al. did not know it in 1985 when they wrote about “developmental constraints.” Kauffman did not know it in 1993 when, in The Origins of Order, he invoked “self-organization” to explain the findability of structures that are common in genetic state-spaces.

Yet all 3 sources are widely cited and have been influential — evidence of widespread hunger for internalist or structuralist alternatives to neo-Darwinism.

What happened, and what didn’t happen, because of this “do not enter” sign?

In the “spandrels” paper, Gould and Lewontin (1979) eviscerated the adaptationist research program, but their arguments for alternatives to natural selection were unconvincing. Twenty years later, at the close of his career, Gould (2002) cited the Haldane-Fisher argument and wrote (as quoted above) that “empirical data on low mutation rates sound the death-knell of internalism” (p. 510).  What if Gould had known all along that this argument is mistaken?

Maynard Smith, et al. (1985), in their seminal piece on “developmental constraints,” noted explicitly that the Haldane-Fisher argument posed a barrier to the proposed efficacy of developmental biases in variation. If a theory of biases in the introduction process had existed in 1985, Maynard Smith, et al. could have used it to refute the Haldane-Fisher argument and to justify their claims regarding developmental effects. Instead, their foundational statement offers no general answer to the crucial problem of lacking a valid population-genetic basis (in the relevant passage, they go on to suggest neutral evolution, obviously not an adequate foundation to address evo-devo concerns).

Accordingly, Reeve and Sherman (1993), in their subsequent defense of the adaptationist program, cited Gould and Lewontin as well as Maynard Smith, et al. (1985) and complained that the advocates of developmental constraint had offered no evolutionary mechanism. They called on the logic of the Haldane-Fisher argument when they asked, rhetorically, “why couldn’t selection suppress an ‘easily generated physicochemical process’ if the latter were disfavored?” Decades later, the notion of developmental constraint remains a flexible explanatory concept not tied to a specific evolutionary mechanism (see Green and Jones, 2016).

In the discourse of developmentalists, the lack of a population-genetic mechanism for this effect has led to an exploration of alternative views of causation. That is, developmentalist-structuralist thinkers ignored the “do not enter” sign and continued to assume that internal factors actually matter in evolution. Yet, because classical population genetics did not seem to provide a causal basis for this intuition, they concluded that population genetics has some kind of metaphysical limitation that makes it inadequate as the basis for complete causal theories.

That is, due to the influence of the SGF paradigm, population genetics is widely accepted as the language of causation in evolution, e.g., Dobzhansky (1937) declared that “Since evolution is a change in the genetic composition of populations, the mechanisms of evolution constitute problems of population genetics.” Yet, by the Haldane-Fisher argument, population genetics rules out a dispositional role for internal variational factors, and this is precisely what has led internalist thinkers to suspect that the framework is inadequate for constructing complete accounts of evolutionary change.

“intellectually respectable evolutionary theorizing must be based on population genetics theory, which forms the substantive core of the relevant evolutionary theory.”

Sarkar (2014)

The causal completeness argument (Amundson, 2001, 2005) is a formalization of this complaint against population genetics. Because phenotypes exist and they are the stuff of evolution, an account of evolutionary causation that refers only to population genetics cannot be complete: development must fit in, somewhere, in a causal role. One way to integrate this role is to suggest that a full account of evolution must combine (1) the usual dry population-genetic account of causation by forces with (2) an alternative narrative of wet biological changes in development (e.g., Wilkins, 1998). This completes the causal account of evolution by supplementing standard forces with a kind of “lineage explanation” per Calcott (2009). In lineage explanation, the focus is on constructing a developmental-genetically plausible narrative for changes in a lineage over evolutionary time, as opposed to a focus on individual development over a lifetime, or on population genetics over evolutionary time.

The problem of a missing causal foundation manifests differently in the (completely separate) literature of molecular evolvability or self-organization following on Kauffman (1993). Kauffman sought to explain why certain features or forms emerge commonly by evolutionary processes, even without being selected.

A possible causal explanation emerges from the fact that the structures that are more common in genetic state-space, e.g., RNA folds that have more possible sequences, necessarily have more mutational arrows pointed at them, including from other parts of state-space.

We might be tempted to suggest that this fact alone explains the findability of common structures, but this only tells us that a mutational bias exists — how such a bias influences evolution is a separate issue that requires a population-genetic theory linking tendencies of mutation to tendencies of evolution.

To grasp this point more clearly, think of Sober’s (1984) distinction between the “source laws” and “consequence laws” of selection. Population genetics tells us how to compute what will happen in a population if A and B differ in fitness by some amount such as 2%, given some background conditions including a scheme of heredity. That is, population genetics covers the consequence laws of selection. But it doesn’t tell us where the differences in fitness come from, i.e., how they emerge biologically. For that, we need the source laws of selection, which come from physiology and ecology and so on.
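As a concrete illustration of a consequence law, the minimal sketch below (deterministic haploid selection, with the 2% fitness difference simply given as an input) computes the fate of allele A while saying nothing about why A is fitter:

```python
def select(p, s=0.02, generations=100):
    """Deterministic haploid selection: allele A has fitness 1 + s relative to B."""
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)   # standard recursion for the frequency of A
    return p

for t in (0, 100, 250, 500):
    print(f"t = {t:3d} generations: freq(A) = {select(0.01, generations=t):.3f}")
```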

Likewise, a complete causal theory for a variational influence would require both source laws that address how the variational tendencies emerge, and consequence laws that address their impact on evolution. As noted above, Maynard Smith, et al. (1985) drew attention to the source laws for developmental tendencies of variation, but failed to supply a consequence law linking those to measurable evolutionary effects.

More generally, in the evo-devo literature, the focus is on developmental source laws, and the issue of consequence laws is often not even identifiable (e.g., Salazar-Ciudad, 2021), so that the assumption that tendencies of variation must somehow cause evolutionary tendencies remains wholly implicit.

By contrast, for more traditionally minded evolutionary geneticists, the issue raised by evo-devo is precisely this alleged causal link between developmental biases and evolutionary ones, a link that is considered problematic and unlikely. For instance, in “Mutation predicts 40 million years of fly wing evolution,” Houle, et al. (2017) have done perhaps the finest and most rigorous work to date showing a detailed quantitative correlation between (1) measured tendencies of varigenesis, i.e., new phenotypic variation M, and (2) measured patterns of evolutionary divergence R. This seemed to resolve 40 years of debate over the evo-devo claim that developmental biases influence evolution, an argument that was always based far too much on developmental models of variation, instead of actual measurements of mutational variability. But the authors themselves take an attitude of utmost skepticism and deny that their results demonstrate a causal link from M to R.

When we are considering discrete traits, the Haldane-Fisher argument provides the consequence laws for biases in variation under the SGFT. Mutation is a weak force because mutation rates are small. Therefore, tendencies of mutation cannot be difference-makers in evolution, except in the case of neutral characters or unusually high mutation rates (Haldane, 1927).

Today, however, we can reject the SGFT and the Haldane-Fisher argument, and instead invoke the mutationist dynamics of origin-fixation models (for instance) to propose that the joint probability of origin-and-fixation of common structures (i.e., common in abstract genotype-spaces) is higher because their probability of mutational origin is higher. In this way, we can specify a complete chain of causation linking (1) a source law specifying that common structures have more mutational arrows pointed at them, with (2) a consequence law specifying that biases in the mutational introduction of alternative structures impose a bias on evolution (dependent on population-genetic conditions). Note that the consequence law comes from population genetics but the source law does not: it comes from a model for how RNAs develop a phenotype (i.e., how they fold into a shape), and then mapping the phenotypes (shapes) to genotype space.
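A minimal sketch of this chain of causation uses the standard origin-fixation rate, i.e., the rate of mutational introduction times the probability of fixation (here, Kimura’s haploid approximation), with illustrative numbers: a structure with 5-fold more mutational arrows pointed at it evolves at a 5-fold higher rate, even when fitness effects are identical.

```python
import math

def p_fix(s, N):
    """Kimura's fixation probability for a new mutant in a haploid population."""
    return 1.0 / N if s == 0 else (1 - math.exp(-2 * s)) / (1 - math.exp(-2 * N * s))

N = 1000
u_common, u_rare = 5e-7, 1e-7   # source law: the common structure has 5x more arrows
s = 0.01                        # identical fitness advantage for both structures
rate_common = N * u_common * p_fix(s, N)   # consequence law: rate = N * u * p_fix
rate_rare = N * u_rare * p_fix(s, N)
print(f"rate ratio = {rate_common / rate_rare:.1f}")   # 5.0: the bias passes through
```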

But this kind of reasoning did not exist in 1993, and few scientists know about it today. Thus, proponents of findability effects describe them in other ways, e.g., Kauffman invoked “self-organization.” The general response of evolutionary geneticists to Kauffman’s work was that he clearly had some fascinating results, but it was not clear how relevant they were (given the abstractness of the models), or what they said about evolutionary causes. Kauffman repeatedly said that selection and “self-organization” worked together, in a partnership. But Kauffman was not calling on the usual list of evolutionary forces that shift allele frequencies to give a mechanistic account of self-organization, so we had no way to evaluate the causal status of this partnership. One reviewer called his references to self-organization “almost magical” (Fox, 1993).

In other parts of the contemporary literature on molecular evolvability, the effects of proximity and cardinality (of connected phenotype networks in genotype-space), which in the interpretation above are mediated by biases in the introduction process, are described as background conditions, as “constraints” emerging from properties of fitness landscapes, rather than in terms of causal forces.

[Figure legend: Frequency vs. rank for the most common types of RNA folds of 100-nt sequences (from Dingle, et al, 2021). The circled folds are the ones found in nature. Thus, natural evolutionary processes discover the folds that are most common in sequence space. ]

In this way, the findability phenomenon is presented as something related to the complexity of the space in which evolution happens, i.e., patterns emerge, not due to any particular evolutionary force, but due to the unavoidable geography of the state-space for evolution. Yet findability depends on the way that this state-space is sampled by mutation, leading to biases in the introduction process (Stoltzfus, 2012). This dependence is shown by Schaper and Louis (2014), who refer to the effect as the “arrival of the frequent,” and by Dingle, et al (2021), who call it “phenotype bias” and show that the effect disappears when sampling compensates for the differing cardinality of structures in sequence space.

A causal grounding for internalism

“What the world most needs, then, is not a good five-cent cigar, but a workable — and correct — theory of orthogenesis.”

(Shull, 1935)

Thus, the central barrier to establishing a causal grounding for internalist thinking in evolutionary biology is that the prevailing theory of causal forces is grounded specifically in a limited conception of evolutionary genetics (the SGF paradigm), rather than in a more general conception that implicates events of introduction. A more general conception of causation would include causes that are not formally population-genetic forces.

What are the classical forces? In the conception of causation grounded in the SGFT, a causal force is a mass-action pressure modeled after the pressures of statistical physics: a pressure on allele frequencies that results from aggregating over the effects of innumerable events among the individual member organisms of a population (see Sober, 1984). Selection and drift result from the aggregate effects of innumerable births and deaths. The force of mutation is mutation pressure, the aggregate effect of innumerable events of mutational conversion in different individuals.

Note that, just as statistical physics is not a reductionist theory, the SGFT is simply not a reductionist theory, in the sense of pushing the fundamental basis of reality or causation down to the lowest possible level. The SGFT clearly posits an emergent population “level”, and the architects of the Modern Synthesis argued explicitly that the forces of evolution are emergent at the population level, and do not exist at the more reduced level of individual organisms. For instance, in their textbook, Dobzhansky, et al (1977) write

Each unitary random variation is therefore of little consequence, and may be compared to random movements of molecules within a gas or liquid.  Directional movements of air or water can be produced only by forces that act at a much broader level than the movements of individual molecules, e.g., differences in air pressure, which produce wind, or differences in slope, which produce stream currents.  In an analogous fashion, the directional force of evolution, natural selection, acts on the basis of conditions existing at the broad level of the environment as it affects populations. (p. 6)

The correct association of the concept of reduction, in regard to the role of population genetics in the Modern Synthesis, has to do with theory reduction. The Modern Synthesis clearly takes a set of recognized high-level phenomena of evolution, primarily the phenomenon of adaptation, and attributes them to the consequences of a set of underlying causal processes. The phenomenology is reduced to the operation of causes in this way.

The particular conception of an emergent population force (in the SGFT) means that an individual event of mutation that introduces a new allele does not satisfy the definition of an evolutionary cause. It is a proximate cause, in the language of Mayr. Likewise, the development of an individual is a proximate cause.

An informal way to state the consequences of this limitation is that the prevailing theory of causal forces used in evolutionary reasoning works well for causes of fixation but not for causes of origination, yet a full account of evolutionary causation requires that both origination and fixation are treated as change-making causal processes with explicit dynamics.

Thus, the flaw in the forces theory is exactly the same thing as the flaw in the SGFT. The sufficiency of the SGFT depends on evolution remaining in the topological interior of the relevant allele-frequency space, where all frequencies are non-zero. In this topological interior, all of the classical forces are identical in the sense that each force can change a frequency f to f + δ, where δ is an infinitesimal. A change from f = 0.5000 to 0.5001 can happen by any force, although a shift of 1 part in 5,000 is a large shift for mutation because mutation rates are so small. A process that causes larger shifts is a stronger force. Mutation is a very weak force.
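A numeric sketch (with illustrative rates) of why mutation pressure is weak in the interior: compare the one-generation shift produced by mutation pressure with the shift produced by modest selection, starting from f = 0.5.

```python
u = 1e-5   # mutation rate from A to a (a typical per-locus rate)
s = 0.02   # selective advantage of A
p = 0.5    # current frequency of A

dp_mutation = -u * p                           # mutation pressure: loss of A copies
dp_selection = p * (1 + s) / (1 + s * p) - p   # deterministic haploid selection
print(f"mutation:  dp = {dp_mutation:+.2e}")   # about -5e-06
print(f"selection: dp = {dp_selection:+.2e}")  # about +5e-03, roughly 1000x larger
```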

In this way, the scheme of “forces” achieves generality by a common currency of causation, infinitesimal mass-action shifts in frequency. In the interior of the state-space for evolution (in the SGFT), any infinitesimal change, anywhere in the space, can happen by any force. This means that we can chain together any series of infinitesimal changes into a trajectory, and this trajectory can (in principle) be caused by any force, or by any combination of forces.

But the logic of forces falls apart if we consider movement from the surface (edge) of an allele-frequency space into the interior. In the left figure below, we have a small shift from the center of a 2-dimensional allele-frequency space (0.5000, 0.5000) to (0.5002, 0.5004). This could be caused by selection, drift or mutation, or by any combination of them, although again, a shift this large is enormous for mutation alone, and would not normally happen in one or a few generations. In the right figure, this same change in frequencies is moved down to the horizontal axis, i.e., the shift is now from (0.5000, 0) to (0.5002, 0.0004). This is the same shift mathematically, but not evolutionarily, because mutation is absolutely required to cause a shift upward from 0.

The logic of forces breaks down because the impact of the biological process of mutation is weaker than every other force in the interior of an allele-frequency space, but infinitely stronger than every other force at the surfaces, where it acts by discrete events, not continuous shifts. This is qualitatively different causal behavior. When an evolutionary process includes discrete events of introduction that jump an evolving system off of the surface of an allele-frequency space into the interior (where the forces of selection and drift operate), mass-action pressures are not a sufficient guide to causation.

This argument might sound very abstract, and thus irrelevant to practical evolutionary reasoning. Yet, anyone who saw how the evo-devo challenge played out in the 1980s and 1990s knows that abstract arguments about what qualifies as a true evolutionary cause (i.e., not development) have been deployed with great effect against claims of novelty from evo-devo. For instance, Wallace (1986) asks whether embryologists can contribute to understanding evolutionary mechanisms, and then answers negatively, arguing that “problems concerned with the orderly development of the individual are unrelated to those of the evolution of organisms through time.”

“If we are to understand evolution, we must remember that it is a process which occurs in populations, not in individuals.  Individual animals may dig, swim, climb or gallop, and they also develop, but they do not evolve.  To attempt an explanation of evolution in terms of the development of individuals is to commit precisely that error of misplaced reductionism of which geneticists are sometimes accused” (Maynard Smith, 1983, p. 45).

“I must have read in the last two years, four or five papers and one book on development and evolution.  Now development, the decoding of the genetic program, is clearly a matter of proximate causations.  Evolution, equally clearly, is a matter of evolutionary causations.  And yet, in all these papers and that book, the two kinds of causations were hopelessly mixed up.” (Mayr, 1994) 

“No principle of population genetics has been overturned by an observation in molecular, cellular, or developmental biology, nor has any novel mechanism of evolution been revealed by such fields.” (Lynch, 2007)

To escape this kind of smack-down from powerful scientists whose judgments continue to guide the field, presumptive causal arguments from evo-devo, or from any other sub-field in evolutionary biology, must refer to the forces of population genetics, because the statistical forces theory is the only accepted theory for what constitutes a genuine evolutionary cause.

But if events of introduction are evolutionary causes, and biases in the introduction process are causes of evolutionary bias, then

  • events of mutation can be evolutionary causes,
  • events of mutation-and-altered-development that introduce new phenotypes can be evolutionary causes, and
  • mutational and developmental biases in the generation of variation can be evolutionary causes.

In particular, when the introduction process is recognized as causal, this provides a formal locale of causation in which the plausibility arguments of lineage explanation can be recast as arguments about the developmental factors that induce biases in the introduction process.

That is, this kind of causal grounding for internalist thinking does not repudiate the familiar elements of population genetics or supplement them with a parallel plane of developmental causation, but instead is based on (1) pointing to the part of mathematical theory covering the introduction process, which already exists and is currently in active development, (2) taking into account what we now know about the powerful influence of this qualitatively distinct process on the observed course of evolution, and the theoretically expected course, and finally (3) insisting that we must locate a cause in this part of population genetics, i.e., we must declare that the introduction (origination) process is a genuine cause, a change-making causal process with propensities that must be treated explicitly (again, see note 2).

To be perfectly clear, the necessity of doing this, justifying the use of “must” in the previous paragraph, is that the evidence (see Payne, et al 2022, Gomez, et al. 2020 or Stoltzfus, 2019) compels us to recognize that the dynamics of mutational introduction are profoundly consequential, and our theorizing suggests that a far broader role is inevitable. The notion that we can treat evolution as shifting gene frequencies, without directly involving the dynamics of introduction, is untenable, and the required correction to our conception of causation is to recognize the introduction process (mutational or otherwise) as something that must be treated explicitly as a cause in order to get evolution right.

To the extent that the term “population genetics” is associated with the SGF paradigm of the population as a cohesive emergent entity subject to causal displacement only via the classical mass-action forces, this expanded framework for evolutionary change is not population genetics, because it breaks the SGF paradigm. That is, we could choose to avoid the term “population genetics” and use a broader term like “evolutionary genetics.” However, this is just a question of labels. What is important to understand is that, regardless of the labels, we are breaking historical precedent and departing from the SGF paradigm in a way that induces new rules: integrating the introduction process induces qualitatively different behavior that explicitly contradicts the implications of population-genetic causation as it was understood (for instance) by Haldane and Fisher.

[The last 3 paragraphs are worth re-reading. ]

From the beginning, critics of Darwin’s thinking have objected that selection does not create anything new, and that the theory is therefore missing something fundamental. Darwin’s followers developed several well known responses to this objection, justifying the creativity of selection (see Ch. 6 of Stoltzfus, 2021). One of them is essentially that there is infinitesimal variation in every trait, and selection can leverage that diversity to create novelty solely by quantitative shifts. In a world consisting of a fixed set of continuous quantities (e.g., quantitative genetics), this is abstractly true. Another response is that selection is creative in the sense of bringing together rare combinations out of the diversity of the gene pool. This is also true, in a sense that depends on presuming mechanisms of recombination. Another argument is that selection can accrue effects in a particular direction, consistently, over long periods of time. This, too, is clearly true.

And yet, these hand-waving arguments that focus on justifying the creativity of selection do not suffice to address the issue of initiative or dynamics that arises if we attempt to give a dynamically sufficient account of evolution, i.e., if we address evolution explicitly as a process of change. The fundamental problem is that we cannot get the dynamics of evolution right without representing discrete events of the introduction of novelty by mutation-and-altered-development. We must recognize the introduction process as a genuine evolutionary cause.

Once we have added this vital piece of conceptual infrastructure, it then becomes possible to build a larger framework for causal theories. By appealing directly to the introduction process as an alternative type of causation, we can specify complete chains of causation from internal features that determine mutational and developmental propensities of variation, to quantifiable evolutionary behavior, via the population-genetic consequences of biases in the introduction process.

Let us briefly consider how to utilize the concept of biases in the introduction process (as a genuine cause of evolutionary orientation or direction) to specify a causal grounding for 3 historic themes of internalist thinking:

  • Taxon-specific propensities
  • Intrinsically likely forms
  • Directional trends

The first step is to make the transition from mutation biases and nucleotide-level effects to phenotypes. Let varigenesis cover all of the processes involved in the generation of new variation, from mutation to altered phenotypic development, subject to any applicable conditions. In quantitative genetics, varigenesis is represented by the M matrix of variances and covariances for new phenotypic variation (see note 5). For a discrete phenotype-space, we may consider a vector of rates U, with one rate for each alternative phenotype.

In the original Yampolsky-Stoltzfus model, the mutation bias B is a ratio of two mutation rates u1 and u2, and these are specific mutation rates from one genotype to another.

But when we turn our focus to phenotypes, we can simply redefine u1 and u2 in terms of alternative phenotypes. For instance, consider an example in the figure below, based on the genetic code, which is the genotype-phenotype (GP) map relating codon genotypes to amino acid phenotypes. The rate u1 represents Asp-to-Val and the rate u2 represents Asp-to-Glu, which implicates 2 different mutational paths. Therefore, even if all mutation rates are the same (no mutation bias), the GP map induces a 2-fold phenotypic bias favoring Asp-to-Glu.

To the extent that fitnesses depend only on the phenotype, the two mutational paths from Asp to Glu are identical and will behave as if this were 1 path with a 2-fold higher rate. In this way, all the same conclusions that apply to a B-fold bias in the Yampolsky-Stoltzfus model will also apply to a B-fold phenotypic bias in varigenesis. That is, a GP map such as the genetic code induces asymmetries in the introduction of alternative phenotypes.
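This 2-fold asymmetry can be checked directly by enumerating single-nucleotide neighbors under the standard genetic code, as in the sketch below (using the DNA codon GAT for Asp):

```python
from collections import Counter

bases = "TCAG"
aa = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {b1 + b2 + b3: aa[16 * i + 4 * j + k]
               for i, b1 in enumerate(bases)
               for j, b2 in enumerate(bases)
               for k, b3 in enumerate(bases)}

def neighbors(codon):
    """The 9 single-nucleotide mutational neighbors of a codon."""
    for pos in range(3):
        for b in bases:
            if b != codon[pos]:
                yield codon[:pos] + b + codon[pos + 1:]

counts = Counter(codon_table[n] for n in neighbors("GAT"))   # GAT encodes Asp
print(counts["E"], counts["V"])   # 2 paths to Glu (E) but only 1 to Val (V)
```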

The figure above right represents a precisely analogous idea that is common in the evo-devo literature, which is that some alternative phenotypes may be more likely (in varigenesis) because they implicate a larger number of mutationally accessible genotypes, i.e., genotypic neighbors in the GP map. Here, the 1-mutant neighborhood of a genotype encoding phenotype P0 includes 5 genotypes with phenotype P2, and only 1 with phenotype P1. In this case, without any mutation bias per se, there is still a 5-fold phenotypic bias in varigenesis toward P2.

In general, the form of the above argument is to use the neighboring phenotypes implicated by a GP map to define equivalence classes of genotypes, so that we can aggregate mutation rates by equivalence class, with the result that the differential mutational accessibility of alternative phenotypes will emerge due to asymmetries in the GP map, even if all mutation rates are the same. That is, for a given starting genotype, a mutation spectrum at the genotypic level, together with a GP map, induces a description of potentialities or dispositions of phenotypic change in a developmental-genetic system, i.e., a description of varigenesis.
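The sketch below captures this recipe in a generic form (the function names and the toy 6-neighbor example are mine, chosen to match the P0/P1/P2 figure): aggregate the rates of a genotype’s mutational neighbors by phenotype to obtain the vector U of phenotypic introduction propensities.

```python
from collections import defaultdict

def phenotype_spectrum(genotype, mutation_rate, gp_map, neighbors):
    """Aggregate mutation rates over mutational neighbors by their phenotype."""
    U = defaultdict(float)
    for g2 in neighbors(genotype):
        U[gp_map(g2)] += mutation_rate(genotype, g2)
    return dict(U)

# toy case matching the figure: the neighborhood of g0 holds 5 genotypes with
# phenotype P2 and 1 with phenotype P1; with a uniform rate u, U = {P1: u, P2: 5u}
u = 1e-6
pheno = {"g1": "P1", "g2": "P2", "g3": "P2", "g4": "P2", "g5": "P2", "g6": "P2"}
U = phenotype_spectrum("g0",
                       mutation_rate=lambda a, b: u,   # no mutation bias at all
                       gp_map=lambda g: pheno[g],
                       neighbors=lambda g: list(pheno))
print(U)   # about {'P1': 1e-06, 'P2': 5e-06}: a 5-fold bias toward P2
```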

This provides a rigorous justification for the notion that each taxon, having a distinctive genotype and GP map, has an intrinsic evolutionary potential or inherited predisposition, due to propensities of varigenesis.

[Note: reactions at a phil-bio-circle presentation of this piece convince me that the above argument, in order to capture an essential aspect of evo-devo, needs to distinguish arbitrary encodings from other sources of asymmetry in the accessibility of neighboring phenotypes. The example based on the genetic code (above, left) illustrates asymmetries in accessibility that are induced by an abstract and arbitrary digital encoding of biology in the genetics of sequences. That is, the genetic code is a largely arbitrary mapping of triplet genotypes to amino acid phenotypes, and it induces a set of neighbor relationships that are arbitrarily different in degree of mutational connectivity. Why should Asp-to-Val be less connected than Asp-to-Glu? There is no direct biological explanation, e.g., this relationship does not emerge due to Asp and Glu sharing biosynthesis pathways. I am well aware of the hypothesis that the genetic code is adaptively organized (see Stoltzfus and Yampolsky, 2009), but this is an indirect effect, calling on an evolutionary process of code changes over vast scales of time, and the effect-size is very small.

This asymmetry due to an arbitrary encoding does not smell right as a rationalization of evo-devo arguments, e.g., in structuralist evo-devo arguments per Newman, the propensities that are attributed to developmental systems reflect the emergent dynamic properties of materials such as cell layers, and not merely the details of an arbitrary encoding.

Nevertheless, the differences in these dynamic properties of materials induced by a change in genotype can be mapped to the discrete space of genotypes, and this mapping will induce asymmetries in accessibility that are subject to the same consequence laws as the asymmetries induced by an arbitrary encoding. Thus the example on the right above, with P1 and P2, is a better match to evo-devo if the relatively higher accessibility of P2 is an effect of development, e.g., the phenocopy effect, and not an effect of arbitrary genetic encoding. From the way the example is given above, we can’t really tell. However, the problem is resolved in the following section, to the extent that the RNA folding example below is precisely the right kind of example: the structure of a GP map, and the propinquity of phenotypes in genotype-space, reflects the self-organizing properties of RNAs, i.e., the folding propensities that arise as emergent properties of each specific RNA sequence. ]

Next, let’s consider the tradition of structuralist arguments to the effect that certain familiar structures or features commonly emerge in nature because they are, in some sense, intrinsically likely, i.e., because they are the most natural or easily generated states of the materials in question.

We already addressed the contemporary form of this argument per Kauffman, which has been made in regard to RNA folds, protein folds, regulatory network structures, and some features of tissue layers: the forms that are intrinsically likely are understood to be the forms that are common in genetic possibility-spaces, and the question of evolutionary causation is what evolutionary cause makes intrinsically likely forms evolutionarily likely.

In regard to RNA folds, the folds with the most sequences occupy the greatest volume in genotype space, thus they have the largest number of mutational arrows pointed at them, including the arrows pointed at them from other regions of genotype space (which is actually a function of surface area rather than volume or cardinality). This means they are more likely to be proposed, thus more likely to be proposed-and-accepted, by an evolutionary process that explores sequence space via mutation.

[Figure legend: Two effects of mutational phenotype accessibility emerge from the way that phenotype networks map to genetic state-spaces (in the figure, mutation only samples adjacent vertices and each network represents genotypes with the same phenotype or fold). The shorter-term effect of mutational accessibility is that, from P0, P2 is more accessible than P1. The longer-term effect is that P0, with more genotypes, has a larger contour length (or more generally, surface area), thus it has more mutational arrows pointed at it from other regions of state-space.]
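The toy sketch below illustrates the longer-term effect in the legend above, using a random GP map over binary genotypes with deliberately unequal phenotype frequencies (all sizes and frequencies are illustrative): phenotypes with more genotypes collect more mutational arrows from genotypes with other phenotypes.

```python
import itertools, random
random.seed(0)

L = 10
genotypes = ["".join(g) for g in itertools.product("01", repeat=L)]
# assign phenotypes at random with very unequal frequencies (P0 is 8x P3)
pheno = dict(zip(genotypes, random.choices(["P0", "P1", "P2", "P3"],
                                           weights=[8, 4, 2, 1], k=len(genotypes))))

def neighbors(g):
    """All one-mutant neighbors: flip each position once."""
    return [g[:i] + ("1" if g[i] == "0" else "0") + g[i + 1:] for i in range(L)]

arrows = {p: 0 for p in ("P0", "P1", "P2", "P3")}
for g in genotypes:
    for n in neighbors(g):
        if pheno[n] != pheno[g]:
            arrows[pheno[n]] += 1   # a mutational arrow into pheno[n] from outside
print(arrows)   # more common phenotypes receive many more incoming arrows
```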

Thus, it is possible to specify a rigorous causal grounding for the kind of structuralist argument that explains what is evolutionarily likely by referring to what is common in abstract possibility-spaces.

Next, consider the idea of long-term directional trends due to internal biases in variation. Classical thinking says that such trends are impossible, and that (except in the case of neutral evolution) selection is the sole source of direction in evolution. However, models of adaptive walks with protein-coding genes subject to GC or AT biases in mutation show that compositional trends are the predictable result of biases in the introduction process.

[Figure legend: Mean trajectories of simulated adaptive walks on a protein NK landscape (Stoltzfus 2006), where evolution is subject to AT:GC mutation bias of 1:10 (blue), 1:3 (brown), 1:1 (green), 3:1 (orange) or 10:1 (red). Proteins adapting under AT bias become enriched for amino acids with AT rich codons (FYMINK), and those adapting under GC bias become enriched for amino acids with GC-rich codons (GARP).]
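The sketch below is in the spirit of the simulations behind this figure, though much simplified: a random additive landscape stands in for the NK landscape, and each origin-fixation step fixes a beneficial mutant with probability taken as proportional to mutation rate times fitness gain. All parameter values are illustrative. Walks under GC-biased mutation end at more GC-rich local optima than walks under AT bias.

```python
import random
random.seed(1)

bases = "TCAG"
aa = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {b1 + b2 + b3: aa[16 * i + 4 * j + k]
               for i, b1 in enumerate(bases)
               for j, b2 in enumerate(bases)
               for k, b3 in enumerate(bases)}
sense_codons = [c for c in codon_table if codon_table[c] != "*"]
L = 30   # codons per simulated gene

def adaptive_walk(gc_bias, max_steps=300):
    """One origin-fixation adaptive walk; returns the final GC content."""
    # random additive landscape: each (site, amino acid) pair gets a fitness term
    effect = {(i, x): random.gauss(0, 1) for i in range(L) for x in set(aa)}
    seq = [random.choice(sense_codons) for _ in range(L)]
    for _ in range(max_steps):
        options = []
        for i, codon in enumerate(seq):
            for pos in range(3):
                for b in bases:
                    if b == codon[pos]:
                        continue
                    c2 = codon[:pos] + b + codon[pos + 1:]
                    if codon_table[c2] == "*":
                        continue
                    gain = effect[(i, codon_table[c2])] - effect[(i, codon_table[codon])]
                    if gain > 0:   # consider only beneficial mutations
                        u = gc_bias if b in "GC" else 1.0   # GC:AT mutation bias
                        options.append((u * gain, i, c2))   # weight = rate x gain
        if not options:
            break   # local optimum reached
        _, i, c2 = random.choices(options, weights=[o[0] for o in options])[0]
        seq[i] = c2
    return sum(c.count("G") + c.count("C") for c in seq) / (3 * L)

for gc_bias in (0.1, 1.0, 10.0):
    mean_gc = sum(adaptive_walk(gc_bias) for _ in range(20)) / 20
    print(f"GC:AT mutation bias {gc_bias:4.1f} -> mean final GC content {mean_gc:.2f}")
```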

Note that when we collapse evolutionary change down to 1 dimension, the result is that internal and external factors, if they do not coincide in direction, must clash in direction. This way of combining the two types of causes leads to a consideration of which force is stronger, and (given the “pressure” conception of forces) selection is assumed to be the winner of this zero-sum game. But for an evolutionary process operating in a high-dimensional space such as a protein fitness landscape, there are typically many ways to go up, i.e., many directions toward increased fitness, some of them more favored by mutation, and some less favored. As a result, the trajectory of adaptive evolution in a high-dimensional space may have components of direction that are due to fitness effects, and other components of direction that are due to internal variational biases.

Thus, it is possible to specify a rigorous causal grounding for the notion of trends due to internal biases, and we can use this theoretical foundation to rebut the false intuition, widespread in the literature, that strong selection must necessarily suppress the effect of internal tendencies of variation. The source of this intuition is unclear. Does it arise from conceptualizing selection as a governing agent? Is it based on treating evolution as some kind of zero-sum game in which any deviation from a selective ideal is considered a loss? Modeling tells us that, regardless of the source of this intuition, it is mistaken: the same mutation bias will make adaptation easier in some cases and harder in others, depending on circumstances, a point that is illustrated by Cano and Payne (2020) using empirical fitness landscapes.

To summarize, the paragraphs above outline a causal grounding for the classic internalist-structuralist themes of (1) taxon-specific propensities, (2) intrinsically likely forms and (3) directional trends. This is a broad argument but it is not infinitely broad, e.g., it does not propose a new “Synthesis” or try to capture every complaint of every reformer. To suggest this causal grounding does not mean that all past internalist statements are true or even that they are all theoretically possible. Many of these past claims could be stupid. What it means is that, for each of the three classic types of claims identified above, we can map the form of the claim onto a causal model that validates its logic. If we can map a specific internalist claim to a causal model of this form, then it becomes a substantive falsifiable hypothesis about internal causes that can be tested using whatever tools are available to test hypotheses.

“Adaptation has a known mechanism: natural selection acting on the genetics of populations … Thus we have a choice between a concrete factor with a known mechanism and the vagueness of inherent tendencies, vital urges, or cosmic goals, without known mechanism.” (Simpson, 1967, p. 159)

This means that we are in a different place than in 1967 when Simpson wrote the above passage dismissing the notion of internal trends. Simpson’s argument is invalid, and this is not because we have discovered vital urges or cosmic goals, but because we have reconsidered evolutionary genetics both in theory and in fact, and we have concluded that internal biases are a real possibility grounded in the theoretically and empirically demonstrated effects of biases in the introduction process. The influence of such biases is now in the “known mechanism” category, available to be applied in all areas of evolutionary research.

Distinguishing other theories and paradigms

Which theories are (or were) actually used in reasoning about the role of variation in evolution? What roles for variation have been considered explicitly in accounts of evolution? What kinds of reasoning do these theories support, as documented by recurrent and explicit claims in the evolutionary literature? Here are some:

  1. Variations emerged adaptively by effort, and were preserved, as per Lamarck
  2. Variation supplied indefinite raw materials that selection shaped into adaptations, as per Darwin.
  3. The mechanisms of development (and in some versions, the influence of conditions) imposed constraints on variation, setting limits on what is possible, as per Eimer (1898) or Oster and Alberch (1982)
  4. Mutation pressure drove allele frequencies under neutrality or high mutation rates, per Haldane (1927)
  5. New quantitative variation (M) contributed to standing variation (G) which, together with selection differentials (β), jointly determined (as Δz̄ = Gβ) the short-term rate and direction of multivariate change in quantitative characters (Lande and Arnold, 1983; see note 5).

Relative to these ideas, the theory of the efficacy of biases in the introduction process (as a cause of orientation or direction) is distinctive, i.e., it represents a 6th theory with testable implications. The logic of the theory generates various outputs that are otherwise not known to be part of evolutionary reasoning. Indeed, one way to explore this distinctiveness is the rhetorical approach of crafting statements that the theory uniquely enables. Such statements can refer, not only to expected evolutionary behavior, but also to other theories, and to informal claims in the literature that may be supported or contradicted, like these:

  • Biases in the introduction of variation can impose biases on the course of evolution without requiring neutrality, high mutation rates, or absolute constraints
    • Thus, variational biases on the course of adaptation are possible.
    • The common assumption in the molecular evolution literature that mutational effects on evolution require or imply neutrality is mistaken
    • The Haldane-Fisher argument as expressed by Haldane (1927) or Fisher (1930), and as employed by authors such as Huxley, Ford, Gould, Maynard Smith, et al, does not provide correct reasoning about the potential impact of biases in varigenesis because it fails to cover biases in origination.
    • The joint dependence (that emerges under some conditions) of adaptive changes on fixation probability and chance of mutational introduction invites previously unimagined considerations of Berkson’s paradox
  • For moderate values of B, there are conditions (e.g., in the origin-fixation regime) under which a B-fold bias in the introduction of variants results in a B-fold bias in evolutionary change
  • Biases in mutational accessibility of alternative phenotypes represent a kind of developmental bias, and conditions exist under which this kind of developmental bias may influence evolution in the same way, i.e., by the same kind of population-genetic mechanism, as a mutational bias of the same magnitude
    • This result invalidates the historically important argument (by Mayr, Wallace and others) attempting to undermine causal claims of evo-devo on the grounds that development cannot be construed as an evolutionary cause.
  • Adaptive traverses of high-dimensional spaces can exhibit, simultaneously, components of direction that reflect fitness effects mediated by selection, and components that reflect biases in varigenesis mediated by the introduction process
    • This result invalidates a kind of informal logic (of selection as a governing force) suggesting that internal biases must come at an adaptive cost or that they somehow work against or impede selection
  • Systematic biases in the mutational introduction of phenotypic forms (due to their differing surface area in genotype-space) provide a possible population-genetic mechanism for the findability aspect of “self-organization” reported by Kauffman (1993) or the “phenotype bias” reported by Dingle, et al (2021).

The grounding for internalism that emerges from this theory does not map in a simple way to the current reformist literature in evolutionary biology, with its complaints about reductionism, calls for the “return of the organism,” and exploration of the diffuse EES-SET axis of dispute. I don’t see this as a problem: I see it as inevitable. Einstein said that “We can’t solve problems by using the same kind of thinking we used when we created them.” If we accept this logic, then it would be very unlikely for the solution to a long-standing conundrum to map in a neat and clear way to the terms and concepts people have been using all along. The argument here evokes a conflict with a specific aspect of classic thinking, a specific conception of causal forces as mass-action pressures that interferes with productive thinking about the role of generative processes in evolution.

Relative to the classic conception of evolutionary causes as population-level pressures on allele frequencies, the introduction process conflicts with the statistical pressure criterion, but not necessarily the population-level emergence criterion. The introduction process is arguably emergent at the population level: if a specific individual in state A1 mutates to state A2, this is clearly an event of mutation, but we cannot determine whether it is an event of introduction without examining the population of which the individual is a member, i.e., we can’t diagnose an introduction event except at the population level. Again, mutational introduction and mutational conversion are distinct: introduction is emergent at the population level (see note 7).

However, the introduction process is different from classical forces because it is not a deterministic mass-action pressure aggregating over the behavior of countless individual members of a population (see note 3). To characterize the introduction process as a cause is to put the focus on a probability distribution for events that reflect generative processes acting inside organisms. These processes are studied by mutation researchers and developmental biologists.

The main distinction from the “constraints” literature of evo-devo is the concern to specify complete chains of causation from internal features that determine propensities of variation, to quantifiable evolutionary behavior, via population genetics. As explained above, the typical approach in the evo-devo literature, following Maynard Smith, et al (1985), leaves a gap in this chain of causation, where the missing theory would explain how developmental propensities of varigenesis become evolutionary propensities. [Note that Maynard Smith et al did not ignore this issue, but they published a review without filling this gap, and then this review was cited by thousands of other sources.] In the evo-devo literature, efforts to reform thinking about causation typically focus on supplementing population-genetic causation with lineage explanation (Calcott, 2009), rather than rethinking population-genetic causation.

A crucial distinction from the “evolvability” literature and the more recent literature of “developmental bias” in the EES context is that the causal grounding for internalist thinking offered here does not, in any way whatsoever, presume or imply that variation is facilitated, contrary to the fatuous treatment by Svensson and Berger (2019). The literature has been (to my way of thinking) relentlessly confusing on the extent to which the distinctiveness of evo-devo, or the distinctiveness of evolvability claims, is presumed to rest on facilitated variation.

By contrast, the focus here is on consequence laws — consequence laws that apply whether or not any source laws exist that specify facilitated variation. For instance, it is not necessary to assume that the molecular bias for transition over transversion mutations is in some way beneficial (see Stoltzfus and Norris, 2015). The theory predicts an influence of transition bias on adaptation even when the mutation bias is perfectly orthogonal to fitness effects. The issue of whether varigenesis is dispositional in its effect on evolution can be adjudicated entirely separately from whether varigenesis is facilitated or whether organisms are surprisingly evolvable.

Of course, non-orthogonality is inevitable in a high-dimensional world. In the case of any real-world landscape, a mutational bias toward transitions (1) will tend to align the overall process of evolutionary exploration better (quantitatively) with beneficial trajectories, or (2) will tend to align it worse. The modeling study by Cano and Payne (2020) demonstrates this point using empirical fitness landscapes for binding sites.

Perhaps this will seem disappointing for those familiar with the literature of evolvability or the EES Front, where the idea that variation is facilitated, and that living systems are surprisingly “innovable,” is deeply entrenched. Where is the mojo of internalism if internal variational propensities are merely arbitrary and not an expression of the superior evolvability of naturally evolved systems?

Relative to this way of thinking, the proposal here is about learning to walk before trying to run: it emphasizes an issue that is logically prior. Perhaps variation is facilitated, but a dispositional evolutionary role for varigenesis is both the premise and the promise of this claim. It is the zero-order effect that necessarily underlies all possible higher-order effects pertaining to intrinsic variability.

In fact, the notion of taxon-specific propensities of variation that are merely dispositional without being facilitated is historically part of evo-devo, e.g., it was the explicit position of Maynard Smith, et al (1985) that developmental biases are arbitrary. Similarly, the primary argument of Alberch and Gale (1985) is that two different taxa (salamanders and frogs) tend to lose digits differently in evolution (pre- or post-axially) for internal reasons, i.e., they tend to be lost differently when development is perturbed: Alberch and Gale were not arguing that each taxon evolved a better way to lose digits, but merely a different way.

Synopsis

The notion that evolution may have tendencies that reflect internal tendencies of variation is an old idea (see note 4). Yet, evolutionary discourse has proceeded without any rigorous grounding for internalist thinking. In particular, the Haldane-Fisher argument appeared to undermine this kind of theory.

Nevertheless, a rigorous grounding for internalist thinking is possible, based on the theory of biases in the introduction process. The logic of this theory has been validated by mathematical and computer modeling. Empirical studies have shown a strong effect of ordinary mutation biases on the changes involved in adaptation, an effect that is expected under this theory but is not possible under the mutation pressure theory of Haldane and Fisher.

The kinds of variational tendencies covered by the theory include (1) mutation biases such as transition-transversion bias, (2) local asymmetries induced by the properties of genotype-phenotype (GP) maps, which reflect both arbitrary encoding and the propensities of developmental systems, and (3) differences in findability and connectivity of phenotypic forms induced by broad features of the architecture of genetic spaces.

When this previously missing causal link is considered in the broader context of contemporary internalist arguments, it provides a way to specify complete chains of causation from internal tendencies of variation to quantifiable tendencies of evolution, integrated with the evolutionary genetics of populations, rationalizing key themes of internalist thinking: taxon-specific dispositions, directional trends, and intrinsically likely structures.

This argument is not infinitely broad. It does not purport to cover everything. However, it offers a broad alternative to neo-Darwinism, one that can be used to generate hypotheses providing a causal grounding for common internalist themes— a complete grounding that extends from internal properties to quantifiable evolutionary tendencies, by way of the evolutionary genetics of populations. Because the theory is quantitative, it will ultimately be possible to make statements that compare the relative importance of internal factors acting through varigenesis with the importance of selection.

Acknowledgements

I thank Tobias Uller and members of the phil-bio-circle discussion group (particularly Stuart Newman, Alan Love and Sahotra Sarkar) for comments. NIST disclaimer. Charles R. Darwin, who is often credited with ghost-writing contemporary evolutionary work, did not plan, write, review, or contribute in any meaningful way to this manuscript.

References

  • Alberch P, Gale EA. 1985. A developmental analysis of an evolutionary trend: digital reduction in amphibians. Evolution 39:8-23.
  • Amundson R. 2001. Adaptation, Development, and the Quest for Common Ground. In:  Orzack SH, Sober E, editors. Adaptationism and Optimality. Cambridge: Cambridge University Press. p. 303-334.
  • Arthur W. 2004. Biased Embryos and Evolution. Cambridge: Cambridge University Press.
  • Calcott B. 2009. Lineage Explanations: Explaining How Biological Mechanisms Change. The British Journal for the Philosophy of Science 60:51-78. http://www.jstor.org/stable/25591988
  • Eimer T. 1898. On Orthogenesis; and The Impotence of Natural Selection in Species-Formation. Chicago: Open Court Publishing Co.
  • Eshel I, Feldman MW. 2001. Optimality and Evolutionary Stability under Short-term and Long-term Selection. In:  Orzack SH, Sober E, editors. Adaptationism and Optimality. Cambridge: Cambridge University Press. p. 161-190.
  • Fox RF. 1993. Review of Stuart Kauffman, The Origins of Order: Self-Organization and Selection in Evolution. Biophysical Journal 65:2698-2699.
  • Gould SJ, Lewontin RC. 1979. The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist program. Proc. Royal Soc. London B 205:581-598.
  • Hartl DL, Taubes CH. 1998. Towards a theory of evolutionary adaptation. Genetica 103:525-533.
  • Lynch M. 2007. The frailty of adaptive hypotheses for the origins of organismal complexity. Proc Natl Acad Sci U S A 104 Suppl 1:8597-8604.
  • Maynard Smith J. 1983. Evolution and Development. In:  Goodwin BC, Holder N, Wylie CC, editors. Development and Evolution. New York: Cambridge University Press. p. 33-46.
  • Mayr E. 1994. Response to John Beatty. Biology and Philosophy 9:357-358.
  • McCandlish DM, Stoltzfus A. 2014. Modeling evolution using the probability of fixation: history and implications. Quarterly Review of Biology 89:225-252.
  • Michod RE. 1981. Positive Heuristics in Evolutionary Biology. The British Journal for the Philosophy of Science 32:1-36.
  • Mitchell P. 1961. Coupling of Phosphorylation to Electron and Hydrogen Transfer by a Chemi-Osmotic type of Mechanism. Nature 191:144-148.
  • Morgan TH. 1910. Chance or Purpose in the Origin and Evolution of Adaptation. Science 31:201-210.
  • Popov I. 2009. The problem of constraints on variation, from Darwin to the present. Ludus Vitalis 17:201-220.
  • Provine WB. 1971. The Origins of Theoretical Population Genetics. Chicago: University of Chicago Press.
  • Provine WB. 1978. The role of mathematical population geneticists in the evolutionary synthesis of the 1930s and 1940s. Stud Hist Biol 2:167-192.
  • Shull AF. 1935. Weismann and Haeckel: One Hundred Years. Science 81:443-451.
  • Sober E. 1984. The Nature of Selection: Evolutionary Theory in Philosophical Focus. Cambridge, Mass.: MIT Press.
  • Stoltzfus A. 2006. Mutation-Biased Adaptation in a Protein NK Model. Mol Biol Evol 23:1852-1862.
  • Stoltzfus A. 2017. Why we don’t want another “Synthesis”. Biol Direct 12:23.
  • Stoltzfus A. 2019. Understanding bias in the introduction of variation as an evolutionary cause. In:  Uller T, Laland KN, editors. Evolutionary Causation: Biological and Philosophical Reflections. Cambridge, MA: MIT Press.
  • Tenaillon O. 2014. The Utility of Fisher’s Geometric Model in Evolutionary Genetics. Annu Rev Ecol Evol Syst 45:179-201.
  • Ulett MA. 2014. Making the case for orthogenesis: The popularization of definitely directed evolution (1890–1926). Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 45:124-132.
  • Xue JZ, Costopoulos A, Guichard F. 2015. A Trait-based framework for mutation bias as a driver of long-term evolutionary trends. Complexity 21:331-345.
  • Yampolsky LY, Stoltzfus A. 2001. Bias in the introduction of variation as an orienting factor in evolution. Evol Dev 3:73-83.
  • Yedid G, Bell G. 2002. Macroevolution simulated with autonomously replicating computer programs. Nature 420:810-812.

Notes

1. Classical population genetics does not ignore mutation or treat it solely as a background condition. In particular, classic work pays loads of attention to deleterious mutation pressure. That is, in the case of deleterious mutation pressure, mutation is often treated as a change-making causal process characterized by explicit dynamics. But we are not concerned here with deleterious mutation pressure. We are concerned with the novelty-introducing role of mutation, in situations where this role may lead to changes that are actually incorporated in long-term evolution.

2. We should not be surprised by the lack of generality of the Modern Synthesis, which was never intended to be a general framework, but was constructed deliberately to exclude certain broad classes of ideas. That is, the SGFT and the ideas of causation that emerged mid-century were constructed deliberately to rationalize a neo-Darwinian view and to make alternatives appear unreasonable or impossible, particularly the “mutationist” views of the early geneticists. To some extent, the position developed by Mayr and his cohort of influencers was less like an ordinary scientific theory — driven by the challenge of accounting for empirical patterns — and more like a rhetorical battleship with its guns aimed squarely at alternatives to neo-Darwinism.

In the contemporary literature, the “Synthesis” remains more of a rhetorical strategy than a scientific theory, but now the focus is purely defensive, aimed at developing a flexible rhetorical strategy to fight off calls for reform by shifting the goal-posts, rather than a strategy to vanquish rivals and establish pre-eminence. That is, contemporary defenses of the Synthesis represent a rhetorical posture of defending the fullness and authority of tradition against the claims of reformers, by anchoring all new developments in tradition. The common theme is still the TINA doctrine (There Is No Alternative) yet, whereas this originally meant that one closed and restrictive theory claimed victory and excluded all the others, now it means that one open and flexible tradition appropriates all valuable ideas and claims ownership of them. Unlike the case for an ordinary theory, no risk is allowed in the Synthesis portfolio, i.e., it is alleged to include only well-founded claims and positions that are immune to falsification.

By contrast, Provine, in his 2001 re-issue of The Origins of Theoretical Population Genetics, said that the Modern Synthesis “came unraveled” in the 1980s. Because Provine construed the Synthesis orthodoxy mainly as a position on population genetics, he was most concerned with the breakdown of the SGFT and the rise of neutralism. That is, Provine did not associate the demise of the Synthesis with the paleontology challenge (1970s to 1980s) or the evo-devo challenge (1980s onward), but instead with a changing understanding of population genetics, due primarily to the challenge from molecular evolution.

3. Note that the conception and use of forces are typically deterministic, e.g., Sober (1984) says that “In evolutionary theory, mutation and selection are treated as deterministic forces of evolution” whereas drift is treated stochastically. Importantly, one may aggregate the effects of the introduction process over arbitrarily many loci, e.g., all the sites in a genome, and this makes it possible to speak in a technically correct way about a mass-action pressure, but it is a pressure of introduction. Many, many claims in the molecular evolution literature refer to “mutation pressure” (e.g., Lynch 2007) but do not make any sense unless we reinterpret them in terms of introduction pressure. But introduction pressure is a different kind of pressure from the classical forces, operating in a different field: it aggregates over sites or loci in a genome rather than over member organisms in a population. Because it is an entirely different kind of pressure, it has different implications, as sketched below. It would be possible to articulate an alternative theory of forces for evolutionary behavior in a discrete space where the steps are origin-fixation steps (Pablo Razeto-Barry has an unpublished manuscript on this).
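For concreteness, here is one hedged way to write down the contrast, in my own notation rather than that of any particular source:

```latex
% Classical mutation pressure acts on the frequency p of an allele at
% one locus, within a population:
\[
  \Delta p \;=\; u\,(1 - p),
\]
% whereas an aggregate pressure of introduction counts new variants of
% a given type i across the L sites of a genome, in a population of N
% individuals (u_{ij} is the rate of type-i mutations at site j):
\[
  R_i \;=\; N \sum_{j=1}^{L} u_{ij} \;\approx\; N L\, \bar{u}_i .
\]
% The first aggregates over member organisms at a single locus; the
% second aggregates over sites, which is why the two "pressures" have
% different dynamics and different implications.
```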

4. This old idea is sometimes called “orthogenesis,” but the relentless caricatures by traditional authorities (noted by Ulett, 2014) have given “orthogenesis” such a pejorative connotation that the term is no longer useful. For a review of historic ideas about constraints or channeling of variation relatively untainted by Darwinian prejudices, see Popov (2009) or Ulett (2014).

5. I’m giving myself a pass to put evolutionary quantitative genetics (EQG) in the background here because foregrounding it would be confusing. The theory is fundamentally phenomenological rather than causal, in the sense that it was not built in a bottom-up way from mechanisms or causes, but specified in a top-down way by the constraint of flexibly capturing the measurable relations of certain important quantities. So, it is always difficult to situate EQG in a discussion of causes, though clearly the original motivation was tied to a neo-Darwinian view of variation as raw materials, i.e., variation (passive object) as a material cause, not varigenesis (active process) as an agent with dispositional effects. Because raw materials are just raw materials, providing substance only and not form, the only meaningful question to ask about them is some version of “how much do I have?” But if variation is seen as a dynamic process operating in a multidimensional space, then we have lots of questions to ask, or (stated differently) lots of ways to parameterize it.

Apropos, EQG following the multivariate generalization of Lande and Arnold (1983) is no longer strictly aligned with neo-Darwinism, but became a formalism with the (initially cryptic) potential to support more causally oriented theorizing with varigenesis as a dispositional factor in evolution via M, although quantitative geneticists themselves have no love for this idea, and it has only a limited scope because the entire framework has a limited scope. The framework, by original conception, applies only to quantitative traits with abundant infinitesimal variation, and the typical implementations treat dimensional heterogeneity but not directional bias. That is, varigenesis (M) represents a process that generates different amounts of abundant infinitesimal raw material in different multivariate dimensions. It can generate more variation along some dimension, but biases in one direction (along a dimension) are usually not considered, and when they are considered, they are not found to be important (although Xue et al. (2015) find support for directional trends in a quantitative character, albeit with a non-standard approach).
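The distinction between dimensional heterogeneity and directional bias is easy to state concretely. Below is a minimal numerical sketch in Python with numpy; the matrix and mean vector are invented for illustration, not drawn from any empirical M. Heterogeneity lives in the covariance of mutational effects, whereas a directional bias requires a nonzero mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-trait mutational covariance matrix M: more mutational
# variance along trait 1 than trait 2 (dimensional heterogeneity).
M = np.array([[4.0, 0.0],
              [0.0, 1.0]])

# Standard EQG treatment: mutational effects with mean zero.
unbiased = rng.multivariate_normal(mean=[0.0, 0.0], cov=M, size=10000)

# Directional bias: the same covariance, but a nonzero mean effect on
# trait 1, so mutation pushes the trait in a particular direction.
biased = rng.multivariate_normal(mean=[0.5, 0.0], cov=M, size=10000)

print("unbiased mean effects:", unbiased.mean(axis=0))  # ~ (0, 0)
print("biased mean effects:  ", biased.mean(axis=0))    # ~ (0.5, 0)
```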

The ultimate point here is that EQG also provides a specific and rigorous theoretical grounding for internalist thinking, one that is taken very seriously by some leading thinkers (e.g., Thomas Hansen and Günter Wagner), but this grounding is of such limited utility that I find it convenient to set it aside for my purposes here, e.g., it doesn’t provide a way to rebut the Haldane-Fisher argument, to account for directional trends or mutation-biased adaptation, or to justify findability claims. However, I am open to being convinced, by someone who knows better, that EQG provides a broader causal grounding for internalism than I have suggested.

6. A folk theory of biases in the introduction process emerged in an odd place: the macroevolution debate of the 1970s and 1980s. The participants in this debate quickly reached a consensus that no new fundamental mechanisms were needed to account for macroevolution, only a hierarchical expansion of existing mechanisms, i.e., an expansion from the traditional level of a population of individuals, to all the levels in a hierarchy of populations (cells, individuals, species, higher taxa).

However, in the process of this expansion, some of the participants creatively misinterpreted traditional thinking. In particular, Vrba and Eldredge (1984) depicted evolution as a dual process of the introduction and reproductive sorting (by selection and drift) of variants, and they emphasized that evolutionary biases could emerge from either introduction or sorting. Based on this formula, they helpfully reinterpreted evo-devo statements (Rachootin and Thompson; Oster and Alberch) to mean that “bias in the introduction of phenotypic variation may be more important to directional phenotypic evolution than sorting by selection.” That is, their elegantly stated verbal theory (1) recognizes, as distinct phases of the evolutionary process, the production or introduction of variation, and the reproductive sorting of variation (by selection and drift), (2) in parallel, distinguishes biases in introduction from biases in sorting as alternative causes of evolutionary bias, and (3) generalizes this theory of dual causation to multiple levels of a hierarchy.

In this way, Vrba and Eldredge (1984) proposed a novel quasi-mutationist theory of evolution as a process in which mutation proposes and sorting disposes. However, the theory lacked any model or formalization, so it was not possible to generate quantitative expectations or offer any proofs. Furthermore, participants in the macroevolution debate did not treat this as a radical proposal demanding validation, because it was presented (mistakenly) as merely a restatement of orthodoxy. That is, whereas Maynard Smith et al. (1985) recognized the conflict between implicit evo-devo theories and classical population genetics (presumably because the authors included Maynard Smith, Lande, and Kauffman, individuals with expertise in formal theory), Vrba and Eldredge did not. Their language continues to reverberate in the paleontology literature, but the issues of causation have never been clarified, to my knowledge.

7. Here and elsewhere I refer to the introduction process in a general way, because it is a generally useful idea that goes beyond mutational origination. Various processes that we do not normally consider as mutation can introduce discrete genetic novelties, e.g., events of lateral transfer, inter-compartmental transfer, recombination, and endosymbiogenesis. Further, large classes of evolutionary processes feature dynamic dependence on events of introduction. In island biogeography, for instance, we can conceptualize a dual process of introduction (a gravid fly is blown to an island) and establishment (an immigrant fly gives rise to a persistent lineage on the island), such that biases in either stage would be effectual. Adaptive dynamics could be seen as a dual proposal-acceptance (introduction-invasion) process, subject to biases in introduction. Cases of cultural evolution such as the evolution of ideas or of language likewise could be treated with origin-fixation dynamics. Per Vrba and Eldredge (1984), the birth of a species is an event of introduction in a hierarchy of levels. I suspect that the origin-fixation formalism typically is not applied in these fields, although I have seen a case of its application to neologisms in language evolution.

In a discrete world, there is always an event that introduces something novel, i.e., an event that makes the step from a frequency of 0 to a frequency of 1/N. However, in the world of modeling, or perhaps in the physical world, there may be cases in which it is useful to define the introduction process as the transient of a continuous value as it departs from 0. Certainly one may foresee, for the case of evolutionary dynamics, conditions under which the dynamics of this departure from 0 are dominated by the contribution of mutation from other alleles even when other processes are operating simultaneously (indeed, there was such a deterministic treatment in Yampolsky and Stoltzfus, 2001).
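A minimal deterministic sketch of this last point (my simplification, not the exact equations of Yampolsky and Stoltzfus, 2001):

```latex
% For a favored allele at frequency p, introduced by mutation at rate u
% and favored by selection with coefficient s,
\[
  \frac{dp}{dt} \;=\; u\,(1-p) \;+\; s\,p\,(1-p).
\]
% As p -> 0, the selection term s p (1-p) vanishes while the mutation
% term approaches u, so the transient of departure from 0 is dominated
% by introduction even though both processes operate simultaneously.
```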

8. Traditionalists will certainly respond to this kind of claim by quote-mining the canon to find scraps that show Fisher and Haldane paying attention to some aspect of mutation or mutation rates, and objecting on this basis that of course Fisher and Haldane recognized the importance of new mutations or the rates of occurrence of beneficial mutations. However, our focus here is on scientific theories, not on people or vague suggestions, and particularly on the theories that have shaped evolutionary discourse by being written down, formalized, shared, taught, and applied. Theories and theory-based arguments matter because they drive research, they are used in explanations, and they are used in arguments. For example, certain Synthesis positions on causation (levels, types, forces) were used in the 1980s and 1990s to make evo-devo reformists sit down and shut up; these ideas of causation performed real work in evolutionary discourse, making them important. If Haldane secretly recanted his mutation pressure argument of 1927, 1932, and 1933, or if he offered an alternative mutationist theory in a later piece that had no influence, this is irrelevant both to science and (as a first approximation) to scientific history. Perhaps Haldane was secretly a mutationist. Perhaps he also was secretly a Christian and a capitalist. Who cares? Meanwhile, the Haldane-Fisher argument is a genuine argument used repeatedly in evolutionary discourse. It is not in any sense a straw-man argument: it is an argument that matters in a way that can be documented. So long as the canon defining the historical discourse on evolution includes sources like Fisher (1930), Huxley (1942) or Haldane (1932), the Haldane-Fisher argument is part of evolutionary thought, a documented and readily recognizable thread woven into the 20th-century discourse on evolution.

A great deal of damage has been done in recent evolutionary discourse by the conflation of arguments about the novel scientific significance of an idea with the primarily cultural arguments that attempt to anchor the idea in tradition. A measure of the genuine scientific novelty of idea X in disciplinary matrix Y is the extent to which practitioners in Y would benefit from integrating X, or the extent to which they are currently reasoning incorrectly due to a lack of awareness of X. This has almost nothing to do with the ability to anchor X in the relevant historical canon, particularly if those who are practicing the art of back-projection have very low standards and are subject to confirmation bias, as is the case in evolutionary discourse. For instance, a press release regarding the demonstration of mutation-biased adaptation by Cano et al. (2022) makes this about “helping to return Darwin’s second scenario [of evolution by new mutations] to its rightful place in evolutionary theory.”

9. Apropos of note 8, the process of historical distortion through back-projection — the projection of contemporary views backwards onto intellectual progenitors — is evident in regard to Fisher’s geometric model, e.g., in Tenaillon (2014). The supplement to Stoltzfus (2017) explains this transmogrification, and Rockman (2012) makes essentially the same point in a footnote. Fisher’s original argument suited a deterministic world in which an allele is chosen by selection if it is beneficial, regardless of the degree of beneficiality, so that the problem of the size distribution of changes in evolution is solved completely by solving for the chance of beneficiality as a function of effect-size. A fully explicit version of Fisher’s argument would go like this:

  1. the population has a set X of alleles with some distribution of effect-sizes, whose presence is logically prior to selection, so that it implicitly reflects a generative process (in Fisher’s version, a random sample of mutation vectors in the geometric space),
  2. within X there is a subset X’ with s > 0,
  3. selection will choose every member of X’ deterministically,
  4. finally, the geometric model gives the chance that an allele is beneficial, i.e., is a member of X’, as a function of effect-size.

That is, the geometric model yields the chance that an allele is a member of the set that is chosen deterministically by selection. Kimura took the explicit part of this argument, the geometric model, and embedded it within a stochastic origin-fixation conception of evolution, so that the effect-size of the benefit becomes important. That is, Kimura took proposition (3) and replaced it with “selection will choose from X’ in proportion to the chance of fixation” (e.g., 2s). Not only is this mutationist innovation contrary to Fisher’s thinking, it utterly changes the conclusion of the argument to favor intermediate-sized changes instead of infinitesimal ones. Yet, contemporary authors use “Fisher’s model” to describe Kimura’s model, and imply that Fisher shared Kimura’s mutationist conception of evolution but made a mistake or was confused about how to calculate the result. Again, see Stoltzfus (2017) or Rockman (2012).
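The contrast can be reproduced numerically. Below is a crude caricature in Python; the dimensionality, the distance to the optimum, and the proxy of taking s proportional to effect-size r are all illustrative assumptions, not Kimura’s exact calculation. Fisher’s probability of beneficiality declines monotonically with r, while weighting by a fixation probability of order 2s yields a peak at intermediate sizes.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_beneficial(r, n=50, d=1.0):
    """Fisher's geometric-model result: the chance that a random
    mutation of magnitude r is beneficial, 1 - Phi(r*sqrt(n)/(2d)),
    for n trait dimensions at distance d from the optimum."""
    return 1 - phi(r * math.sqrt(n) / (2 * d))

for r in [0.01, 0.05, 0.1, 0.2, 0.4]:
    pb = p_beneficial(r)
    # Fisher: deterministic choice -- every beneficial allele counts
    # equally, so smaller is always better (pb -> 1/2 as r -> 0).
    # Kimura-style caricature: weight by a fixation probability ~ 2s,
    # taking s proportional to r as a crude proxy for effect-size.
    print(f"r={r:.2f}  P(beneficial)={pb:.3f}  "
          f"origin-fixation weight={pb * 2 * r:.4f}")
```

With these made-up parameters, P(beneficial) falls steadily while the origin-fixation weight rises and then falls, peaking at an intermediate r, which is the substance of Kimura’s reversal of Fisher’s conclusion.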

10. Wright, in addition to invoking a continuous space of allele frequencies, also depicted a discrete space, a connected network of genotypic nodes. He claimed that the two representations were equivalent, which shows that he did not think this through very carefully. If we imagine a network of the 8 genotypes that form from combinations of alleles at 3 loci (each with 2 alleles), and we imagine the focal system being placed on one of these nodes in the network, we will get a completely different set of expectations about how evolution works than if we imagine the focal system as a point in the 3-dimensional space of allele frequencies. To make evolution in the discrete space act even remotely like evolution in the continuous one, we must instead distribute the probability density of the focal system over the entire discrete network.
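The difference is easy to see in a sketch. The snippet below (Python, purely illustrative) builds the 8-node network for 3 biallelic loci: a monomorphic population occupies one node and moves only when a neighboring variant is introduced and fixed, whereas the continuous representation places the system at an interior point of the allele-frequency cube.

```python
from itertools import product

# The 8 genotypes formed by combinations of two alleles (0/1) at 3 loci,
# as a connected network whose edges are single-locus changes.
genotypes = list(product([0, 1], repeat=3))

def neighbors(g):
    """Genotypes reachable by changing the allele at one locus."""
    return [tuple(1 - a if j == i else a for j, a in enumerate(g))
            for i in range(len(g))]

for g in genotypes:
    print(g, "->", neighbors(g))

# A population fixed for (0,0,0) sits on one node with 3 accessible
# neighbors. The continuous representation instead describes a point
# in [0,1]^3 of allele frequencies; making the discrete picture mimic
# the continuous one requires distributing probability density over
# the whole network, not placing the system on a single node.
```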

NRC Research Associateship: mutation and evolution

The US National Research Council (NRC) offers competitive Research Associateships for post-doctoral and senior scientists to conduct research in participating federal labs. The awards include a generous stipend as well as benefits (health insurance, travel, relocation), as explained on the program web site.

To apply, you must write a brief research proposal that reflects a plan of your own, or a plan that we develop together, involving some computational approach to molecular evolution. Especially welcome are proposals for empirical or theoretical work on biases in the introduction of variation as a dispositional factor in evolution, building on work such as Yampolsky and Stoltzfus (2001), Stoltzfus and McCandlish (2017) or Stoltzfus and Norris (2016).

The upcoming deadline for proposals is February 1, 2021 (there is another deadline August 1). If you are interested, contact me with a brief introduction, and we’ll go from there.

Arlin Stoltzfus (arlin@umd.edu)

Research Biologist, NIST (Data Scientist, Office of Data & Informatics)
Fellow, IBBR; Adj. Assoc. Prof., UMCP;
IBBR, 9600 Gudelsky Drive, Rockville, MD, 20850

Sources

Stoltzfus A, McCandlish DM. 2017. Mutational biases influence parallel adaptation. Mol Biol Evol 34:2163-2172.

Stoltzfus A, Norris RW. 2016. On the Causes of Evolutionary Transition:Transversion Bias. Mol Biol Evol 33:595-602.

Yampolsky LY, Stoltzfus A. 2001. Bias in the introduction of variation as an orienting factor in evolution. Evol Dev 3:73-83.