Steve Vigdor, April 5, 2019
In Part I of this series, we provided a brief history of the development of the concepts central to the theory of evolution. We also surveyed the evolution of the opposition to the theory, from its fundamentalist religious beginnings, through creation science, to Intelligent Design (ID). In Parts II and III, we reviewed the extensive evidence supporting the theory of evolution, from laboratory observations to fossil and biogeographical data and to evidence from the modern mapping of genomes for multiple species. In this part, we return to the opposition in order to expose at least some of the flaws in objections raised by ID advocates.
ID has a narrow, perhaps vanishingly narrow, needle to thread in its ambition to be treated as science and taught in public schools as an alternative theory to evolution. The historical record shows very clearly that ID emerged as an immediate replacement for creation science when the latter ran afoul, in U.S. courts, of the Establishment Clause’s constitutional prohibition of State-promoted religion. In an attempt to avoid the same fate, ID advocates systematically avoid reference to the Bible, and are thus reduced to claiming that at least some aspects of life development on Earth represent the work of an intelligent designer whose identity and purpose they pretend not to know.
This strategy has not worked. In his ruling in the 2005 case Kitzmiller v. Dover Area School District, Judge John E. Jones III saw through the pretense to decide that ID was simply a form of creationism, and that a school board decision in York County, Pennsylvania to require its teaching alongside the theory of evolution still violated the Establishment Clause. In particular, Jones rejected the identification of ID as science:
“We find that ID fails on three different levels, any one of which is sufficient to preclude a determination that ID is science. They are: (1) ID violates the centuries-old ground rules of science by invoking and permitting supernatural causation; (2) the argument of irreducible complexity, central to ID, employs the same flawed and illogical contrived dualism that doomed creation science in the 1980s; and (3) ID’s negative attacks on evolution have been refuted by the scientific community. … Expert testimony reveals that since the scientific revolution of the 16th and 17th centuries, science has been limited to the search for natural causes to explain natural phenomena.”
ID is incapable of providing a scientific alternative to the theory of evolution. Its proponents can claim, and have claimed, that any of the extensive evidence for evolution summarized in Parts II and III of this blog series is really evidence for intelligent design, and such a claim cannot be refuted since it is fundamentally supernatural. The ID approach in general cannot make falsifiable predictions of future developments. Thus, the main current threads of ID focus instead on attempts to poke holes in the theory of evolution. They persistently argue against Darwinian theory, ignoring most features that have been added in the modern synthesis to flesh out contemporary evolutionary theory. We discuss the flaws in these attempts below, emphasizing the failures of the intellectual centerpiece of modern ID, the concept of irreducible complexity.
15) Proposed Examples of Irreducible Complexity
The concept of irreducible complexity has been around in one form or other far longer than the concepts of natural selection, common descent and genetic mutation. For example, both Jean-Jacques Rousseau in the mid-18th century and the English clergyman William Paley, in his 1802 book Natural Theology or Evidences of the Existence and Attributes of the Deity, compared the intricacies of the universe, the solar system and the anatomies and adaptations of living creatures to those of a watch. They argued that just as those intricacies in the latter case point definitively to the work of a watchmaker, the complexities in nature point definitively to God’s intelligent design. It was the scientific rejection of this theistic watchmaker analogy that led Richard Dawkins to title his 1986 book on evolutionary biology The Blind Watchmaker.
More recently, the concept of irreducible complexity has made a comeback in biochemical clothing in the work especially of Michael Behe. In his 1996 book Darwin’s Black Box: The Biochemical Challenge to Evolution, Behe tries to give ID a more scientific-sounding basis, defining an irreducibly complex system as “a single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.” Behe adds: “An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.” He argues that it is overwhelmingly improbable that sequential mutation could have followed such irreducibly complex evolutionary pathways, because each incrementally added part of the eventual system would have conferred, on its own prior to system completion, no functional advantage to be naturally selected for the fitness of the species.
As we will show, this argument is based upon flawed reasoning, ignoring the plethora of neutral mutations, the enormous redundancy of protein sequences capable of performing the same function, and the adaptations of “old” genes to support new functions. The argument furthermore manifests a lack of imagination about how many and varied pathways can be sampled over millions or billions of years of natural evolution and how many alternative pathways can reach the same functionality. We will show that evolutionary pathways leading to Behe’s examples of irreducibly complex biosystems have been proposed or, in some cases, even observed in the laboratory.
Behe chose to illustrate his concept with a particularly inapt non-biological example: a mousetrap. He pointed out that the device could not trap mice unless all five of its interacting pieces – the base, the catch, the spring, the hammer and the hold-down bar – were functioning together. But skeptics quickly pointed out perfectly acceptable alternative functions that could be served by the device with one or more of these components removed. Biologist Kenneth Miller even noted that a high school classmate of his had, in fact, constructed an excellent spitball launcher by removing the hold-down bar and the catch from a working mousetrap. This illustrates the danger of neglecting developments from systems that originally served different functions. In addition, both the mousetrap example and the watchmaker analogy suffer from the inaccessibility of biological reproduction and genetic heritability in inanimate objects.
The biochemical systems Behe suggested as irreducibly complex included the bacterial flagellum (a tail-like rotating propeller used as a molecular motor) of E. coli, the blood-clotting cascade, the adaptive immune system, and cilia (hairlike protuberances from some cells). Another example long promoted by creationists and ID advocates is the eye of sighted animals. Other ID proponents have added the defense mechanism of the bombardier beetle, which directs a spray of hot fluid at potential predators. Evolutionary pathways have been suggested for each of these. We will deal below in detail with the evolution of the eye and of the flagellum, paying particular attention to a falsifiability challenge that Behe suggested to his critics: “To falsify such a claim, a scientist could go into the laboratory, place a bacterial species lacking a flagellum under some selective pressure (for mobility, say), grow it for ten thousand generations, and see if a flagellum — or any equally complex system — was produced. If that happened, my claims would be neatly disproven.” The arguments supporting the roles of mutation and natural selection in these two cases are very similar to those for the other allegedly irreducibly complex systems as well.
16) Flagellum Regained
We begin with the bacterial flagellum because it is the example most frequently used by ID proponents as the “poster child” for irreducible complexity. For example, the word “flagellum” and its cognates appear 385 times in the transcript of the Kitzmiller v. Dover Area School District trial, and one of the defense attorneys referred to the case as the “Bacterial Flagellum Trial.” Figure 16.1 shows a scanning electron microscope image of bacteria possessing a flagellum, together with a schematic diagram of the cellular structures needed to allow for rotation of the flagellum to propel bacterial motion. As many as 40 different proteins have to work in concert to produce a functioning flagellum. The assumption implicit in Behe’s identification of the flagellum as an irreducibly complex system is that the first 39 of these proteins would have served no useful purpose favored by natural selection, until the 40th was added. Thus, he argues that the system could not possibly have been produced by a sequence of “numerous, successive, slight modifications of a precursor system.”
Behe’s claims are by now thoroughly debunked. First of all, the composition of flagella is quite different for distinct bacterial species, and often involves distinct proteins. Many of the proteins involved can be deleted or mutated and the flagellum will still function. There appear to be thousands, and perhaps even millions, of different bacterial flagellar systems, manifesting a diversity that suggests different evolutionary lineages from a common ancestor, rather than thousands or millions of distinct intelligent designs. A significant fraction of the proteins involved in flagellum function in disease-causing bacteria are quite similar in amino acid sequence to the proteins needed for these bacteria to secrete toxins through a needle-like structure (a so-called Type III Secretion System or T3SS, see Fig. 16.1), to penetrate the boundaries of host cells the bacteria are attacking. Other aspects of the flagellar machinery bear strong resemblance to recycled parts that occur elsewhere in nature. Thus, it is quite clear that a subset of flagellum proteins can have served a distinct useful (and naturally selected) function, discrediting the entire postulate of irreducible complexity.
Michael Behe remained unconvinced by such evidence and issued the falsification challenge above in a 2016 response to his critics. He seemed convinced that no experimenters would take him up on the challenge. In responding specifically to Brown University molecular biologist Kenneth Miller, Behe claimed “…why doesn’t he just take an appropriate bacterial species, knock out the genes for its flagellum, place the bacterium under selective pressure (for mobility, say), and experimentally produce a flagellum – or any equally complex system – in the laboratory? … If he did that my claims would be utterly falsified. But he won’t even try it because he is grossly exaggerating the prospects of success.” But in point of fact, the experiment had already been done and reported in the scientific literature in 2015, in a paper entitled “Evolutionary Resurrection of Flagellar Motility via Rewiring of the Nitrogen Regulation System.”
At a laboratory in the University of Reading in the U.K., Taylor and collaborators studied soil-based bacteria from which they engineered what should have been a catastrophic deletion of a gene that plays a master role in the regulatory network that governs flagellum synthesis. Elimination of the protein expressed by the deleted gene was used to produce two independent strains of bacteria without flagella, and therefore unable to move through agar-based petri dishes. The resulting point-like bacterial colony and an electron microscope image of a single flagellum-less bacterium are shown in the left-hand frames of Fig. 16.2. When the local nutrients at the site of the starting colony were depleted, starvation imposed a strong selection on mutations that would allow the bacteria to re-evolve motility, i.e., the ability to move and consume nutrients during motion, even if those mutations would cause some deterioration in other biological functions. Over the ensuing 96 hours, the researchers observed, for each of the two independent strains of modified bacteria, a regeneration of flagellar motility in two mutation steps, depicted by the other images in Fig. 16.2.
Sequencing of the genome of the bacteria at each stage in the evolution revealed a pathway reached via two successive point mutations to remaining genes that initially served different purposes. The first mutation led to increased production of a protein normally associated with regulation of nitrogen uptake and assimilation, but now with a single amino acid substitution. The modified protein – a distant cousin of the one that had originally been eliminated by gene deletion – began to exert some control over the regulatory network for flagellum synthesis, at the cost of disruption to the nitrogen uptake. This mutant strain, represented in the middle frames of Fig. 16.2, regained some motility though a flagellum could not yet be discerned in electron microscope images. The second point mutation altered a different amino acid, with the effect that the modified protein was now switched from nitrogen regulation to flagellum regulation. The bacteria with both mutations regained flagella (see right-hand frames in Fig. 16.2) and full motility, at the expense of nitrogen assimilation.
The above experiment clearly demonstrates that the flagellum system is not irreducibly complex; parts can be removed and naturally replaced by different parts and the functionality can be restored. It further clearly demonstrates the flexibility of mutation plus natural selection in adapting genes developed for one purpose to serve a different purpose that becomes more urgent due to a change in the environment or to a different change in an organism’s genome. This flexibility is an example of exaptation, the ubiquitous process by which nature retools structures for new functions, a process that Behe and other ID advocates persistently ignore in claims of irreducible complexity. For example, in his ruling in Kitzmiller v. Dover Area School District, Judge Jones commented on the cross-examination of Behe: “However, Professor Behe excludes, by definition, the possibility that a precursor to the bacterial flagellum functioned not as a rotary motor, but in some other way, for example as a secretory system.”
Michael Behe has written: “Natural selection can only choose among systems that are already working, so the existence in nature of irreducibly complex biological systems poses a powerful challenge to Darwinian theory.” The challenge is nowhere near as great as he and other ID advocates estimate. In the above experiment natural selection did indeed choose systems already working, but working for a different purpose in the organism. In any case, the concept of “working” needs to be expansive. Some evolutionary adaptations occur by horizontal gene transfer (not allowed in the above experiment), in which case the “working” systems may initially reside in a different organism or a different species. Some use genes that are currently serving no function because they are duplicates or because they lack other genes in a regulatory network to express them.
An apparently irreducibly complex function – the secretion of antifreeze proteins into the blood of certain Arctic codfish to keep them from freezing – has recently been shown to result as the final step in a natural evolutionary pathway that cobbles together the needed genes from sections of non-functional non-coding DNA, supplemented by DNA sequence duplications and relocations and point mutations. Some mutations (certainly neutral ones, but even some marginally harmful ones) may be passed along to later generations, to be adapted eventually as parts of a complex system, not because they themselves are naturally selected, but rather as “freeloaders” that happen to be accompanied by different genomic mutations that do improve fitness. The bottom line message of contemporary evolutionary biology is that there are myriad paths by which the genome can evolve toward innovative functions, such as the flagellum. ID proponents ignore the vast majority of these evolutionary pathways.
I haven’t seen a Behe response to the flagellar resurrection experiment, but I imagine he would object that the researchers had not deleted all the genes associated with the flagellum, but rather had just removed the gene that allowed all the other relevant genes to be expressed. He claims to be open-minded and willing to accept evidence that contradicts his assertions. Jerry Coyne of the University of Chicago has offered criticism to the effect that ID proponents, if faced with evidence that one of their irreducibly complex systems could indeed evolve by successive mutation and natural selection, would simply claim yet another system as evidence of ID. But Behe responds: “If Coyne demonstrated that the flagellum (which requires approximately forty gene products) could be produced by selection, I would be rather foolish to then assert that the blood clotting system (which consists of about twenty proteins) required intelligent design.”
But I suspect that Behe is, like most other ID advocates, too dug in to accept the evidence. Quoting again from Judge Jones’ ruling in the Kitzmiller case: “In fact, on cross-examination, Professor Behe was questioned concerning his 1996 claim that science would never find an evolutionary explanation for the immune system. He was presented with fifty-eight peer-reviewed publications, nine books, and several immunology textbook chapters about the evolution of the immune system; however, he simply insisted that this was still not sufficient evidence of evolution, and that it was not ‘good enough.’” As another example of his continuing denial, Behe has just come out with a new book, Darwin Devolves: The New Science About DNA that Challenges Evolution, in which he continues to avoid all mention of the possibility of exaptation as a natural and common aspect of mutation plus natural selection.
17) Evolution of the Eye
Before Behe recast the concept of irreducible complexity at the cellular level, the favorite system creationists and ID advocates used to illustrate such complexity was the eye. William Paley, the originator of the watchmaker analogy, called the eye “a miracle of design.” Charles Darwin admitted in Origin of Species that the eye’s evolution by natural selection would seem, at first glance, “absurd in the highest possible degree.” But he made this admission just before proceeding to outline possible steps in that evolution by natural selection, stages that reasonably parallel those revealed by later research in evolutionary biology. In Darwin’s Black Box, Behe acknowledged that the evolution of the anatomical features of the eye has been well explained (see Fig. 17.1), but still presented the complexity of the biochemical reactions underlying light sensitivity as a challenge for evolution.
In fact, a quite plausible evolutionary pathway for eyes can be constructed from examination of light sensitivity across a wide range of living organisms. Light sensitivity is important even to single-celled organisms, where it would have been naturally selected during very early stages of the tree of life, in order to support the use of photosynthesis in the organisms’ metabolic processes. The primary providers of light sensitivity in all species studied are a class of photoreceptor proteins called opsins. These produce electrochemical signals when photons carrying the energy of light waves interact with electrons in the protein’s atoms. Many bacterial species contain “eyespots” comprising groups of such photoreceptor proteins, as part of their flagellar systems. The electrochemical signals generated are used to propel the organisms toward the light, though they do not yet provide vision.
The fossil record suggests that the development of complex, image-forming eyes from these primitive photoreceptor systems first occurred roughly 540 million years ago, during a span of several million years near the start of the Cambrian explosion. Indeed, light sensitivity and vision were probably at least partly responsible for that explosion in biodiversity: the oxygen produced in the atmosphere from photosynthesis among pre-Cambrian single-celled organisms would have fueled the metabolism and development of land-based species; and the development of eyes may have sparked predator-prey relationships and a biological “arms race” that accelerated evolutionary adaptation for survival. In the higher animals, the photoreceptors are connected to nerve fibers, and generate impulses on those nerves. The nerve impulses can be transmitted directly to muscles, as they are in some jellyfish, or to the brain for more sophisticated processing. Brains probably evolved after eyes, when they were first needed for such information processing.
Figure 17.1 illustrates the likely stages in the evolution of the anatomy of the eye among vertebrates. It begins with a region of photosensitive cells connected to nerve fibers, which provide light sensitivity but no directional or image-forming sensitivity. Natural selection then would have favored spreading those photosensitive cells around the walls at the back of a cavity and reducing the size of the aperture through which light passed to those cells. This would have provided much improved directional sensitivity and the formation of crude images, as in a pinhole camera. Indeed, a modern example of such pinhole eye sockets is found today in the nautilus (see Fig. 17.2).
Image resolution would then have improved in subsequent stages, first with the addition of a cavity-filling humor to optimize the index of refraction and the bending of incident light rays, while also blocking ultraviolet radiation from reaching the retina. The subsequent addition of a lens to focus images onto the photoreceptor-filled retina would have produced the next imaging improvement. The development of the cornea, and later of the aqueous humor, would have protected the eye from external contamination, while also improving the focusing of light onto the retina. At some point along the evolutionary path, the photoreceptor cells would have added to the opsin a different type of molecule called a chromophore, a pigment that allows the organism to distinguish different colors of incident light via selective absorption of certain wavelengths. One rough numerical estimate of the time required for reasonably effective vertebrate eyes to have evolved from the original photoreceptor patches is a few hundred thousand generations, based on estimated rates of mutation and the strength of natural selection.
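The compounding arithmetic behind such estimates can be sketched in a few lines, in the spirit of Nilsson and Pelger’s well-known 1994 calculation; the overall improvement factor, the 1% step size, and the pace of selection used below are illustrative assumptions, not the published model’s exact parameters:

```python
import math

# Toy version of a Nilsson-and-Pelger-style estimate. All three numbers
# below are illustrative assumptions chosen for this sketch:
improvement_factor = 8.0e7  # assumed overall optical improvement, patch -> camera eye
step = 1.01                 # each naturally selected modification is a 1% change
gens_per_step = 200         # assumed generations for selection to fix each step

steps = math.log(improvement_factor) / math.log(step)
generations = steps * gens_per_step
print(round(steps))         # roughly 1800 one-percent steps
print(round(generations))   # a few hundred thousand generations
```

The striking feature of the arithmetic is how quickly 1% steps compound: even an eight-order-of-magnitude improvement requires only a couple of thousand selected steps.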
There are differences among species in the detailed biochemistry by which opsins generate nerve impulses, and these suggest that multiple distinct lineages may have evolved from the original primitive eyespots at an early stage in the Cambrian explosion. On the other hand, there is a surprising genetic similarity among all animals in the proteins controlling the positioning of eyes. The relevant protein is expressed by the gene known as PAX6 in species as diverse as octopuses, mice and fruit flies, as well as in humans. This commonality suggests that this gene may have existed before the development of the structures it now controls, and been co-opted for eye development from a different original purpose, in yet another example of nature’s exaptation.
The bottom line is that the evolution of the eye remains an area of active research, but there is no evidence to suggest that the mechanisms normally at play in natural evolution are insufficient to produce modern vision.
18) Digital Life Confronts ID Pseudoscience
Some ID proponents have attempted to bolster their scientific credentials by promulgating restrictive, self-serving definitions or “laws” of nature that require a strong role for intelligent design. These have attracted little support outside the devoted ID community because they simply do not accommodate established scientific results. As we will see below, their flaws are further exposed by extensive computer simulations of evolution among “digital” or “artificial” life forms.
One example is Michael Behe’s latest book Darwin Devolves, which is centered on what Behe calls “The First Rule of Adaptive Evolution: Break or blunt any gene whose loss would increase the number of offspring.” He has amplified this statement in a dismissive response to a negative review of his book, written by Nathan Lents, Joshua Swamidass and Richard Lenski. In his response, Behe writes: “The rule summarizes the fact that the overwhelming tendency of random mutation is to degrade genes, and that very often is helpful. Thus natural selection itself acts as a powerful de-volutionary force, increasing helpful broken and degraded genes in the population.” Behe argues that mutation and natural selection can increase diversification at the species and genus levels only via the degradation of genes, leading species to eventual dead ends. But “purposeful design” by an “intelligent agent” is necessary for meaningful innovation that produces complex new functions and new species.
As is normal for the ID crowd, Behe accepts only evidence that supports his predetermined conclusion, ignoring or rejecting all evidence to the contrary. Thus, he spends a good deal of time in his book analyzing results from Lenski’s famous long-term evolutionary study (described in Part II of this blog series) of more than 65,000 E. coli generations placed in a limited-nutrient environment. As Lents, Swamidass and Lenski point out in their review, Behe emphasizes “the many mutations that arose that degraded function—an expected mode of adaptation to a simple laboratory environment, by the way—while dismissing improved functions and deriding one new one as a ‘sideshow’.” Behe continues to ignore the experimentally established important role of exaptation – seen, for example, in the experiment behind Fig. 16.2 – in repurposing genes originally developed for one function to play innovative roles in complex new functions.
By ignoring contrary evidence, Behe grossly overstates his case. Random mutation does often, but certainly not “overwhelmingly,” result in degraded genes. Behe appears to neglect the sizable frequency of neutral or only slightly deleterious mutations, which very often serve as stepping stones in a sequence of mutations that leads eventually, over one or another among myriad evolutionary paths, to innovative functionality. That frequency of near-neutral mutations is a central point of Andreas Wagner’s book Arrival of the Fittest, and it is furthermore illustrated pointedly by the artificial life simulations discussed below.
Mutations that break or degrade genes do sometimes, but hardly “always,” lead to short-term adaptive improvements that make species vulnerable to environmental changes over the longer term. After all, it is estimated that 99% of all species that ever existed on Earth are now extinct. But many species have continued to evolve and to thrive, and many new species have been produced. The evidence reviewed in Parts II and III of this blog series and in studies referenced above is completely consistent with the claim that such evolution results from natural processes involving common descent fueled by occasionally innovative sequences of mutation and natural selection. Behe’s “first rule” argues for intelligent design by simply postulating that the natural alternative doesn’t exist.
An analogous attempt to promote ID by pseudoscience arises in William Dembski’s definition of “specified complexity.” In his 1998 book The Design Inference: Eliminating Chance Through Small Probabilities, Dembski purports to invoke information theory to show that many specified patterns – i.e., ones that admit short descriptions – in living species are too complex to have possibly arisen by chance during the history of the universe, and therefore must have resulted from intelligent design. He proposes 10^150 as an upper limit on “the total number of [possible] specified events throughout cosmic history,” so that any pattern that Dembski calculates to have a random probability of occurrence below the universal probability bound of 1 part in 10^150 becomes, by definition, a result of ID. He furthermore ignores in his probability estimates the crucial role played by naturally selected performance of simpler functions as steps along the way to more complex functionality.
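For context, the 10^150 figure is not mysterious: as I understand Dembski’s derivation, it is simply the product of three rough cosmic upper bounds, which a few lines of Python confirm multiply out as claimed:

```python
# Dembski's universal probability bound as the product of three rough
# cosmic upper estimates (the factorization is his; the check is trivial):
particles = 10**80  # elementary particles in the observable universe
rate = 10**45       # maximum state transitions per second (inverse Planck time)
seconds = 10**25    # generous upper bound on the lifetime of the universe

max_events = particles * rate * seconds
print(max_events == 10**150)  # True: hence the 1-in-10^150 threshold
```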
Dembski’s universal probability bound is quite arbitrary, his probability estimates are highly suspect, and his logic is circular: if he tells us an event or structure is too improbable to result from natural causes, then we all must conclude either that it results from purposeful design or otherwise does not conform to Dembski’s concept of “specified” information. He attempts to sound more scientific by introducing a “Law of Conservation of Information” which claims “that natural causes can only transmit [pre-existing] complex specified information, but never originate it.” This “law” obviously serves Dembski’s purpose in arguing for ID, but has no known basis in information theory or mathematics, and is inconsistent with laboratory observations of new functionality gained via random mutations. Nonetheless, variants of Dembski’s “law” are prominent in many ID dismissals of evolution by mutation and natural selection.
Dembski appears to confuse low predictability with impossibility of occurrence. There are many everyday occurrences in which randomness plays a significant role that lead to one outcome among far more than 10^150 possible outcomes. As just one example, consider a Major League Baseball season, in which 30 teams pair up to play 162 games apiece. Each of the total of 2430 games in a season has a binary outcome, a win for either the home or the road team. There are thus 2^2430, or approximately 10^732, possible patterns of 2430 game results. If you tried to predict the exact pattern before the season begins, and did so completely randomly, the probability of predicting the actual outcome would be 10^-732. If you took into account that some teams are more fit than others to compete, and that home teams have an advantage, on average, you might get to a 60% success rate in predicting the results of individual games, but would still have a probability of only 10^-539 of guessing the actual outcome pattern, still far below Dembski’s universal probability bound. The detailed game result pattern is therefore essentially unpredictable, but nonetheless one specified outcome always appears by the end of each season.
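These numbers are easy to verify with a short Python check (the 60% per-game success rate is the illustrative assumption from the example above):

```python
import math

games = 2430  # 30 teams x 162 games each, two teams per game

# Number of possible win/loss patterns for a full season
patterns_log10 = games * math.log10(2)
print(round(patterns_log10))  # 732 -> about 10^732 patterns

# Chance of guessing the whole pattern with coin flips (50% per game)
random_log10 = games * math.log10(0.5)
print(round(random_log10))    # -732

# Chance with an assumed 60% per-game success rate
skilled_log10 = games * math.log10(0.6)
print(round(skilled_log10))   # -539, still dwarfed by a 1-in-10^150 bound
```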
Nobody actually tries to predict the pattern of individual game results for a baseball season; it’s hard enough to get right for the March Madness NCAA basketball tournament, which involves many fewer games. But there are a number of simulation software groups that do project beforehand the final standings – i.e., the number of wins each team will have at the end – of a baseball season. Just as there is enormous redundancy in amino acid sequences that can produce proteins serving the same basic function, there is enormous redundancy in the individual game result patterns that can produce the same final standings. For example, a team that wins 90 games has about 10^47 distinct ways to choose those 90 games among the 162 games they play. That redundancy gives baseball predictors an outside chance to project the final standings correctly, taking into account the strong selection pressure that leads some teams to invest strongly to make the playoffs. In the biological case, that redundancy gives nature an outside chance to “stumble into” innovative functionality that will then be naturally selected because it improves a species’ adaptation to its environment.
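The redundancy figure can be checked directly with Python’s exact binomial coefficient:

```python
import math

# Exact count of the ways a 90-win team can distribute its wins
# across a 162-game schedule
ways = math.comb(162, 90)
print(math.floor(math.log10(ways)))  # 47 -> about 10^47 win patterns
```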
Because it is so difficult to monitor macroevolution of living species as it occurs, and to record the mutations that drive it at each successive stage, computer simulations are beginning to play an important role in debunking the pseudoscientific claims of Behe, Dembski and their ID cohort, and in providing new insights into nature’s paths toward innovative adaptation. A particularly interesting example has been provided by Richard Lenski and his collaborators in a 2003 article in the journal Nature, entitled The Evolutionary Origin of Complex Features. They explore that origin among “digital organisms – computer programs that self-replicate, mutate, compete and evolve.” These digital organisms differ from computer viruses in the sense that their mutation and evolution do not require direct intervention, but rather occur “naturally” via imperfect copying and competition for the “energy” needed to execute their instructions.
The basic structure of a digital organism is illustrated in Fig. 18.1. Each individual organism studied by Lenski et al. has a “genome” comprising a loop of simple executable instructions chosen from a library of 26 possible instructions. The team studied populations comprising 3600 individual organisms, each of which began with an identical hand-written ancestral genome 50 instructions long. Fifteen of those primordial 50 instructions were needed to allow genome self-replication by a sequence of copy commands, while the remaining 35 were null operations that had no executable outcome. The different organisms competed with one another, initially on an equal basis, for “energy” needed to permit the execution of one instruction at a time. But the competition could become unequal as a result of mutations allowed to occur, at low probability, each time a copy command to replicate an instruction in the genome was executed. The copying errors included point mutations in which an instruction was replaced by another, chosen at random from the 26-instruction library, as well as single-instruction insertions into or deletions from the genome. On average, 0.225 mutations occurred per genome replication. Occasionally, a mutation would cause the asymmetrical division of a copied genome, leading to the deletion or duplication of multiple instructions.
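The copying-with-errors step can be sketched in a few lines. This is my own toy illustration, not the authors' Avida code; the per-instruction and per-genome rates below are assumptions chosen so that a 50-instruction genome accumulates the stated average of 0.225 mutations per replication (50 × 0.0025 + 0.05 + 0.05 = 0.225):

```python
import random
import string

LIBRARY = list(string.ascii_lowercase)  # stand-in for the 26-instruction library

POINT_RATE = 0.0025  # assumed point-mutation probability per instruction copied
INDEL_RATE = 0.05    # assumed insertion (and, separately, deletion) probability
                     # per genome copied

def replicate(genome):
    """Copy a genome with occasional point mutations, insertions and deletions."""
    child = [random.choice(LIBRARY) if random.random() < POINT_RATE else inst
             for inst in genome]
    if random.random() < INDEL_RATE:                     # single-instruction insertion
        child.insert(random.randrange(len(child) + 1), random.choice(LIBRARY))
    if random.random() < INDEL_RATE and len(child) > 1:  # single-instruction deletion
        del child[random.randrange(len(child))]
    return child

# Toy 50-instruction ancestor: 'c' stands for copy commands, 'n' for null ops.
ancestor = ["c"] * 15 + ["n"] * 35
offspring = replicate(ancestor)
```

Run over thousands of generations, such imperfect copying is the only source of novelty in the simulation; everything else is selection.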
Natural selection of mutations occurred because the relative rate at which execution energy was delivered to each mutated genome depended on the product of the genome’s length and its “computational merit.” Merit was awarded when a mutated genome developed the capability to execute various logic functions that were completely missing from the ancestral genome. The instruction library included only a single logic function, NAND, which operates on the data in two registers and produces output in which each bit is set to zero when the corresponding bits of the two inputs are both equal to one, and to one otherwise. But more complex logic functions could be constructed from instruction strings that included multiple instances of the NAND function. When n instances of NAND were required, a computational merit of 2^n was awarded.
The most complex logic operation considered was EQU, which compares two input data strings bit by bit and sets each output bit to one only when the corresponding input bits are equal to each other. The EQU function requires a minimum of five NAND occurrences, and mutated genomes that attain the ability to execute EQU thus get their computational merit increased by 32. Attainment of various less complex logic function capabilities earns merit increases of 2, 4, 8 or 16. If the ability to carry out a logic function disappears due to mutation, the organism’s merit is decreased by the same amount. As a genome’s length and computational merit, and hence its relative energy delivery rate, increase, it is able to self-replicate faster and increase its abundance in the evolving population.
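EQU (bitwise equality, i.e. XNOR) can indeed be composed from exactly five NANDs, as this toy sketch of my own shows (it is not the organisms' instruction code, and the 32-bit register width is an assumption):

```python
MASK = 0xFFFFFFFF  # assume 32-bit registers

def nand(x, y):
    return ~(x & y) & MASK

def equ(a, b):
    """EQU built from the minimum of five NAND operations."""
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    x = nand(t2, t3)   # four NANDs give XOR
    return nand(x, x)  # a fifth NAND inverts it: output bit is 1 where a == b

print(bin(equ(0b1100, 0b1010) & 0b1111))  # 0b1001: bits 3 and 0 agree
```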
This digital exercise clearly is not the same as evolution of living species, but it incorporates essential features of evolutionary theory: common descent from a self-replicating ancestor, competition for limited resources, mutation and natural selection. And the results obtained by Lenski and collaborators reveal the essential roles of vast redundancy in ways to achieve a complex function, the resulting diversity of ways in which it is achieved, and the crucial dependence on a sequence of mutations that may have been naturally selected to achieve different functions in past generations. The results demonstrate clearly that allegedly irreducibly complex functions – ones whose execution cannot proceed unless many distinct parts of a genome work together – do not at all require the imposition of intelligent design. Specified complex information is generated by a sequence of naturally selected random mutations. Complex function capability is achieved, and comes over many generations to dominate the resulting populations, even though the majority of mutations sampled are either deleterious or neutral.
In particular, Lenski and collaborators analyzed 50 independent populations, each comprising 3600 individuals allowed to evolve from the same ancestral genome. In 23 of those populations, descendants eventually achieved the EQU function. The number of generations it took for EQU to first appear (counting only generations in which an offspring differed from its parent by one or more mutations) varied among those 23 successful populations from 51 to 721. The length of the pivotal descendant genome varied from 49 to 356 instructions. The minimum number of mutations to the ancestor that could possibly have produced an EQU-performing descendant was 16. So the longer and highly variable actual paths to get there indicate the circuitousness and unpredictability of evolution leading to this complex feature.
This unpredictability is further evidenced by analysis of the last mutations that led to the first appearance of the EQU capability. These included single mutations, double mutations, point mutations spanning 10 of the 26 instructions in the library, instruction insertions, deletions and duplications. The mutations that finally led to the EQU capability sometimes lost the capability to carry out simpler logic functions that were enabled in earlier generations. The mutations in generations that preceded the first EQU appearance included beneficial, deleterious and neutral ones. Some of those deleterious mutations had survived natural selection because they were accompanied by other beneficial mutations in the same generation, while in other cases the deleterious mutations turned into beneficial ones via addition of a new mutation in the succeeding generation.
Lenski et al. determined how many instructions in the pivotal genomes were essential to performance of the EQU function by systematically replacing each instruction, one by one, with a null operation. In one typical population where the pivotal genome comprised a total of 60 instructions, EQU ceased to function when any one of 35 of those instructions was nullified. So even though the function first appeared via a single point mutation to the immediate parent, the performance of the function was clearly complex. Twenty-seven of those 35 essential instructions had been added to the ancestor genome by random mutations over many generations; they clearly were not implanted by intelligent design in the ancestor. Twenty-two of the essential instructions were needed as well for the performance of five simpler logic functions by the pivotal descendant. That these earlier mutations survived to eventually participate in EQU is an example of exaptation: parts of the genome that were previously selected to perform less complex functions became repurposed in the service of a new, innovative function that brought even greater adaptive benefit to the species. The benefit of the EQU function was sufficiently great that it survived many more generations of mutations and natural selection, despite relying on the simultaneous performance of so many instructions.
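The knockout procedure is simple to sketch. In this illustration of my own, `performs_function` is a hypothetical stand-in for running a genome and checking whether its output matches EQU:

```python
NULL_OP = "nop"  # a null instruction with no executable effect

def essential_sites(genome, performs_function):
    """Return the indices whose replacement by a null op destroys the function."""
    essential = []
    for i in range(len(genome)):
        knockout = genome[:i] + [NULL_OP] + genome[i + 1:]
        if not performs_function(knockout):
            essential.append(i)
    return essential

# Toy example: a "function" that needs at least two 'x' instructions to work.
needs_two_x = lambda g: g.count("x") >= 2
print(essential_sites(["x", "x", "y"], needs_two_x))  # [0, 1]
```

An instruction is counted as essential only if the function fails without it, which is exactly the operational meaning of "many distinct parts working together."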
This digital exercise reveals the essential flaws in ID arguments that ignore the roles of redundancy and exaptation in arriving at complex biological systems. It also reveals one of the hidden benefits of random sampling. Before monitoring the simulations, the “intelligent designers” of the ancestor genome tried to construct by hand the shortest sequence of instructions they could think of to carry out the EQU function. They predicted that shortest sequence would require 19 essential instructions. But random sampling found an even more efficient alternative: one evolved population produced a descendant that used only 17 essential instructions to execute EQU. Since there was no merit awarded for brevity, in other successful populations the number of essential instructions ranged up to 43. Random sampling of mutations, when combined with redundancy, exaptation and natural selection forcing, is in fact a very effective way to find a path, among many, to successful adaptation over many generations. But the path found in any population is essentially unpredictable.
ID proponents have, predictably, rejected the relevance of such computer simulations. For example, in response to the article described above, William Dembski claims: “The simulation by Lenski et al. assumes that all functioning biological systems are evolutionary kludges of subsystems that presently have function or previously had function. But there’s no evidence that real-life irreducibly complex biological machines, for instance, can be decomposed in this way. If there were, the Lenski et al. computer simulation would be unnecessary. Without it, their demonstration is an exercise in irrelevance.” The experiment described earlier on the resurrection of flagellar motility does, indeed, provide evidence of exaptation in Dembski’s favorite example of a “real-life irreducibly complex biological machine.” In that experiment, natural evolution used random mutations and natural selection to repurpose genes originally selected for nitrogen uptake to address the more urgent and more complex task of generating motility. It is precisely in the context of such laboratory evidence that the computer simulations provide further insight into the wide variety and essential unpredictability of evolutionary pathways to innovative functionality.
A succinct comment made by another rejectionist, Casey Luskin, in response to the Lenski et al. paper unwittingly reveals the fundamental tautology at the heart of ID: “In a true irreducibly complex system, there will be no selective advantage along an evolutionary pathway.” In other words, if research reveals such a pathway marked by stepwise naturally selected advantages, then the system evolved is not a “true irreducibly complex system.” Until evolutionary biologists reveal such pathways for every candidate system ID proponents can come up with, Luskin and his cohort will insist on the critical role of intelligent design. They will continue to reject contrary scientific evidence by defining it to be irrelevant. This is a standard element of the science denier’s toolbox.
As is the case with other science topics covered on this site, evolutionary biology remains a field of very active research. The science itself has evolved enormously since Charles Darwin launched the field in the mid-19th century, with especially strong boosts from the related fields of genetics, microbiology, molecular biology, paleontology, and even computer science. There remain many detailed unanswered questions being addressed by ongoing research. But the basic evolutionary concepts of common descent, random mutation, natural selection and genetic drift are supported by a very broad and deep database of evidence. Some of that evidence, from laboratory observations of microevolution, the fossil record, biogeographical observations, genomic mapping, and computer simulations, has been discussed in this blog series. More of the evidence is presented in many of the references we have included in these entries. As detailed on the talkorigins site, evolutionary theory makes many falsifiable predictions, and none have yet been falsified by contrary evidence.
In contrast, Intelligent Design, while it may be attractive in some faith-based communities, is pseudoscience. It is incapable of providing a coherent, falsifiable alternative to the theory of evolution. Its proponents aim not to provide such an alternative, but rather to poke holes in the theory of evolution by posing challenges based on the perceived complexity of many biological systems. Posing such challenges is part of the normal scientific method. But ID advocates show little inclination to accept the scientific evidence that provides answers to their challenges, because doing so would jeopardize their fixed, predetermined conclusion about how living species have evolved.
Stephen Meyer, A Scientific History and Philosophical Defense of the Theory of Intelligent Design, https://www.discovery.org/a/7471/
Michael Behe, Darwin’s Black Box (Free Press, 1996) (https://en.wikipedia.org/wiki/Darwin%27s_Black_Box)
K.R. Miller, Only a Theory: Evolution and the Battle for America’s Soul (Penguin Books, 2008) (https://www.amazon.com/Only-Theory-Evolution-Battle-Americas-ebook/dp/B0015DWKXG/ref=dp_kinw_strp_1)
K.R. Miller, The Flagellum Unspun (http://www.millerandlevine.com/km/evol/design2/article.html)
T.B. Taylor, et al., Evolution. Evolutionary Resurrection of Flagellar Motility via Rewiring of the Nitrogen Regulation System, Science 347, 1014 (2015) (http://science.sciencemag.org/content/347/6225/1014)
M.J. Pallen and N.J. Matzke, From the Origin of Species to the Origin of Bacterial Flagella, Nature Reviews. Microbiology 4, 784 (2006) (http://mcb.berkeley.edu/courses/mcb140/urnov/pallen.pdf)
S.V. Rajagopala, et al., The Protein Network of Bacterial Motility, Molecular Systems Biology 3, 128 (2007) (http://msb.embopress.org/content/3/1/128)
Michael Behe, Philosophical Objections to Intelligent Design: A Response to Critics, https://evolutionnews.org/2016/10/philosophical_o/
X. Zhuang, C. Yang, K.R. Murphy and C.-H. Cheng, Molecular Mechanism and History of Non-Sense to Sense Evolution of Antifreeze Glycoprotein Gene in Northern Gadids, Proceedings of the National Academy of Sciences 116, 4400 (2019) (https://www.pnas.org/content/116/10/4400 and https://whyevolutionistrue.wordpress.com/2019/03/14/the-evolution-of-irreducibly-complex-antifreeze-proteins-in-a-polar-fish-and-a-fish-slap-at-behe/)
Andreas Wagner, Arrival of the Fittest: Solving Evolution’s Greatest Puzzle (Current Publishing, 2014)
M.F. Land, The Evolution of Eyes, Annual Review of Neuroscience 15, 1 (1992) (https://www.annualreviews.org/doi/10.1146/annurev.ne.15.030192.000245)
D.-E. Nilsson and S. Pelger, A Pessimistic Estimate of the Time Required for an Eye to Evolve, Proceedings of the Royal Society B 256, 53 (1994) (https://royalsocietypublishing.org/doi/10.1098/rspb.1994.0048)
G.C. Finnigan, et al., Evolution of Increased Complexity in a Molecular Machine, Nature 481, 360 (2012) (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3979732/)
W.A. Dembski, The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge University Press, 1998) (https://en.wikipedia.org/wiki/The_Design_Inference)
T.D. Schneider, Evolution of Biological Information, Nucleic Acids Research 28, 2794 (2000) (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC102656/)
M. Behe, Darwin Devolves: The New Science About DNA that Challenges Evolution (HarperCollins, New York, 2019)
N.H. Lents, S.J. Swamidass and R.E. Lenski, The End of Evolution?, Science 363, 590 (2019) (http://science.sciencemag.org/content/363/6427/590)
R.E. Lenski, C. Ofria, R.T. Pennock and C. Adami, The Evolutionary Origin of Complex Features, Nature 423, 139 (2003) (http://myxo.css.msu.edu/papers/nature2003/Nature03_Complex.pdf)
C. Luskin, Evolution by Intelligent Design: A Response to Lenski et al., http://www.ideacenter.org/contentmgr/showdetails.php/id/1319