Wolfram’s (2002) “A New Kind of Science” (NKS) uses an approach to solving scientific problems which differs radically from those of traditional science (although there have been forerunners). It has been made possible by the availability of high-speed computers. The idea behind NKS is to run a large number of computer programs based on a variety of “rules” and look at the results. Particularly useful has been the application of such rules to cellular automata (Wolfram 1986, 2002). A cellular automaton is composed of rows of cells, each cell characterised by a particular state (such as black or white, or red or green). A rule specifies how an automaton develops from one computational state to the next, based on its previous state and the states of its neighbours. Rohde (2005a, b) has discussed the application of NKS in ecology and evolution.
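As a concrete sketch, the update scheme just described can be written in a few lines of code. The following minimal simulator is an illustration only, not Wolfram’s own implementation; it assumes his standard numbering for two-state, nearest-neighbour (“elementary”) automata, in which bit i of the rule number gives the new cell state for neighbourhood value i.

```python
def ca_run(rule, width, steps):
    """Run an elementary cellular automaton from a single black (1) cell.

    `rule` is a number 0-255 in Wolfram's encoding: bit i gives the new
    state of a cell whose (left, self, right) neighbourhood has value i.
    Returns the list of rows, one row per computational step.
    """
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1                      # single black cell in the middle
    history = [row]
    for _ in range(steps):
        new = [0] * width
        for x in range(width):
            left = row[x - 1] if x > 0 else 0
            right = row[x + 1] if x < width - 1 else 0
            new[x] = table[(left << 2) | (row[x] << 1) | right]
        row = new
        history.append(row)
    return history

# Rule 90 (each cell becomes the XOR of its two neighbours) yields a
# nested, Sierpinski-like pattern of the kind shown in Figure 2.
rows = ca_run(90, 63, 16)
```

Printing the rows with, say, “#” for black cells makes the nested triangles visible directly in a terminal.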
What is complexity?
The main aim of NKS is to investigate the origins of complexity. But how is complexity defined? Lloyd (2006) refers to 32 definitions of complexity. Generally, one says that a system’s complexity increases with the number of its parts which are arranged in some intricate way. In information theory, the ‘complexity’ of a pattern is defined as its information content, which corresponds to the smallest algorithm that can describe the pattern: low complexity or information content is described by a short algorithm, higher complexity by a longer one. Weaver (1948) distinguishes disorganized and organized complexity. In the former, a system consists of a very large number of components which interact largely in a random manner, but the system as a whole has properties that can be characterised with probabilistic and statistical methods. If a system shows organised complexity, on the other hand, the parts composing it are “correlated”, i.e., they interact in a non-random fashion, and the system has properties not “dictated” by the parts. Wolfram (2002, pp. 552-559) discusses the concepts of randomness and complexity in the context of NKS, stressing the importance of human perception (Wolfram uses visual perception for his computer-generated patterns, but there is no reason not to use acoustic perception for detecting patterns in music, for example) and of (mathematical and statistical) analysis. He concludes that randomness should be assumed whenever no simple program can be found that detects regularities in a pattern, even though such a pattern may itself have been generated by a simple program. Another way of saying this is that no short description can be found from which the pattern can be reproduced. Concerning complexity, Wolfram points out, again using visual perception and mathematical-statistical analysis, that the shorter the possible descriptions of a pattern, the lower its complexity.
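The “shortest description” idea cannot be computed exactly (algorithmic complexity is uncomputable), but a crude and common stand-in is the length of a string after compression, which gives a rough upper bound on the length of its shortest description. A small illustration, with the two pattern strings chosen arbitrarily here:

```python
import random
import zlib

# Compressed length as a crude upper bound on description length:
# a repetitive pattern has a short description, an irregular one does not.
repetitive = b"01" * 500                                       # highly ordered, 1000 bytes
random.seed(0)
irregular = bytes(random.getrandbits(8) for _ in range(1000))  # 1000 irregular bytes

len_rep = len(zlib.compress(repetitive))
len_irr = len(zlib.compress(irregular))
# The ordered pattern compresses to a small fraction of its size;
# the irregular one hardly compresses at all.
```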
He says (p. 559): “So in practice what we most often mean when we say that something seems complex is that the” … human visual perception has “failed to extract a short description”. Expressed differently, complexity (in the sense of NKS) can be looked at as an indication of the degree of organisation (pattern formation) within a system: complexity would be lowest in a system in which all elements are correlated, forming one distinct pattern.
Important general results
Extensive studies have shown that very simple rules (as measured by the number of instructions in a rule) lead to simple repetitive (Figure 1) or nested (Figure 2) patterns, but that a slight increase in a rule’s complexity (i.e., a slight increase in the number of instructions) may lead to very complex, or even apparently random (better: pseudo-random) patterns (Figure 3). Increasing the complexity of rules above a certain threshold does not lead to a further increase in the complexity of patterns (defined as the degree of “organisation” in the system, see above). In other words: the complexity of the result does not correspond to the complexity of the rule.
Figure 1. Cellular automaton with a repetitive pattern. The rule used to generate this pattern says: make a cell black if either of its neighbours was black in the previous step, but make it white if both its neighbours were white. Note: as in Figures 2 and 3, the cellular automaton starts with a single black cell.
Figure 2. Cellular automaton with a nested (fractal) pattern. Note that the pattern consists of many nested triangles of exactly the same form but of different size. The rule used to generate this pattern says: make a cell black when one or the other, but not both, of its neighbours was black in the previous step.
Figure 3. Cellular automaton generated with Wolfram’s rule 30 (i.e., if a cell and its right-hand neighbour were white in the previous step, make the new colour of the cell the previous colour of its left-hand neighbour; otherwise, make the new colour the opposite of that). Left and right truncated. Note some regularities on the left (diagonal bands) and, scattered throughout, some triangles and other small structures; overall, however, the irregular appearance is overwhelming. The sequence below the initial black cell in the middle has, over 1 million steps, no repeats whatsoever, as revealed by various highly sophisticated statistical and mathematical tests, i.e. it is apparently random.
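The verbal rule in the caption is equivalent to the bitwise formula new = left XOR (self OR right). The sketch below (an illustration, not Wolfram’s code) extracts the centre column that serves as the pseudo-random sequence:

```python
def rule30_center(steps, width):
    """Return the centre-column bits of rule 30, started from a single 1.

    Rule 30 in formula form: new cell = left XOR (self OR right), which
    matches the verbal rule in the caption of Figure 3.
    """
    row = [0] * width
    row[width // 2] = 1
    bits = [1]                               # the initial black cell
    for _ in range(steps):
        new = [0] * width
        for x in range(width):
            left = row[x - 1] if x > 0 else 0
            right = row[x + 1] if x < width - 1 else 0
            new[x] = left ^ (row[x] | right)
        row = new
        bits.append(row[width // 2])
    return bits

# With width > 2 * steps, the fixed boundaries never influence the centre.
bits = rule30_center(200, 501)
```

The sequence begins 1, 1, 0, 1, 1, 1, 0, 0, … and shows no simple regularity, although the generating program is only a few lines long.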
Other systems besides cellular automata, such as mobile automata, tag systems, cyclic tag systems, Turing machines, substitution systems, sequential substitution systems, register machines and symbolic systems such as Mathematica (see Wolfram 2002 for definitions and details), follow the same principle: complex patterns result from rules whose complexity lies between a lower and an upper threshold; a further increase in their complexity does not lead to more complex patterns.

The principle of computational equivalence

According to this principle, in all but the most simple systems a pattern corresponds to a computation of equivalent complexity. In other words, the computations necessary for predicting the development of any complex system require a minimum of as many steps as are contained in the system itself: it is impossible to predict by some mathematical equation (a predictive law that shortcuts the actual computational process) the pattern expected some steps down. Since traditional science is based on the formulation of precise predictive laws, there is a fundamental limit to it.
A scientific revolution?
Wolfram claims that NKS and modelling using cellular automata have led to a scientific revolution. Not all scientists agree (e.g., Cybenko 2002), but an increasing number of studies have used cellular automata successfully, for example in ecology (examples in Molofsky and Bever 2004). Boguta (2003) believes that NKS is a fundamentally new science that will crucially feed ideas into the other sciences. It does not aim simply to “model and simulate”, but suggests that scientists simplify their systems and use systematic methods, so that their problems can be solved in unexpected ways.
Wolfram’s application of NKS to evolution
Here I outline some applications of NKS to evolution as discussed in Wolfram (2002), supplemented by other studies. Evolution can be interpreted as a random search for programs that maximise fitness. For simple rules, iterative (repeated) random searches may find optimal solutions fairly fast, but if rules become slightly more complex (complexity defined here as the number of instructions in a rule), the best solution can be approached only after a huge number of steps. As a result, most species (“programs”) are unlikely to be optimally adapted to their niche; they are trapped in relatively easy-to-find suboptimal niches. In evolutionary history, which has lasted for billions of years, the number of mutations has been enormous, and because relatively simple rules may result in morphological/physiological complexity (defined here as the number of traits of an organism), some mutations must have led to complex patterns early in evolutionary history.

And this is indeed supported by paleontological findings and by studies of the phylogeny (using DNA, among other methods) and morphology of extant organisms. Many complex organisms are very ancient and have changed little over evolutionary time. An example is the xiphosuran Limulus, the horseshoe crab, whose morphology has hardly changed since it first turned up in the fossil record 445 million years ago, and which is therefore, like the crossopterygian fishes, included among the “living fossils”. In many groups even a reduction in complexity through evolutionary history is evident: among the vertebrates, the most ancient group still in existence (beside the sharks) is the bony fishes; they have a much larger number of bones in the head skeleton than the much more recently evolved mammals. For example, the crossopterygian fishes appeared at the beginning of the Devonian (about 416 million years ago). They had about 150 head bones; humans, who are only a few million years old, have 28 (Rensch 1954).
This is indeed a general trend, referred to as Williston’s rule: a consistent decrease in the number of head bones from the most ancient to the most recent vertebrate classes. Russian authors have introduced the term “oligomerisation” for the general trend, found in many animal groups, of a reduction in the number of similar components of animal species and organs in the course of evolution; Rensch uses the term “rationalisation” to describe the same trend. An example of an early origin of complex organisms, many of them with many similar segments, is the Burgess Shale fauna of the Cambrian (over 500 million years ago), well documented by numerous fossils. A well-examined example among extant organisms, the parasitic flatworms (Platyhelminthes), is the Aspidogastrea, a group of flukes likely to have arisen over 400 million years ago, as suggested by DNA studies and by studies of comparative anatomy using cladistic analyses (Littlewood et al. 1999). Their complexity is astonishing, as indicated by the number of receptor types, the number of nerves in the nervous system, and the number of adhesive components forming the adhesive system (Rohde 1971; see the knols on the Aspidogastrea I and II and Figures 4 to 6). Is it possible that a single mutation or a few mutations, corresponding to the simple programs leading to repetitive patterns in cellular automata (Figure 1), have led to the repetitive structure of the adhesive disc and its nerves, and to the large number of anterior nerves? A reduction in complexity, again perhaps by few mutations, could have led to the much simpler structure of other flukes (Digenea). Alternatively, one or a few mutations could have led to the transformation of the simple sucker still found in the Digenea into the adhesive disc of the Aspidogastrea. Simulations using cellular automata can, of course, only suggest such scenarios, which would have to be verified by genetic, developmental and comparative anatomical studies.
In view of the possibility of sequencing entire genomes relatively cheaply, such studies seem feasible in the foreseeable future.
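The idea above, that random mutation-and-selection searches become trapped in suboptimal solutions once the search space is rugged, can be illustrated with a toy model (a hypothetical random fitness landscape, not Wolfram’s actual experiments): hill climbing by single mutations from random starting “genomes” usually stops at a local rather than the global optimum.

```python
import random

random.seed(1)
N = 12                                               # genome length in bits
fitness = [random.random() for _ in range(2 ** N)]   # hypothetical rugged landscape
best = max(fitness)

def hill_climb(genome):
    """Accept single-bit mutations only when they improve fitness."""
    improved = True
    while improved:
        improved = False
        for bit in range(N):
            mutant = genome ^ (1 << bit)             # single-point mutation
            if fitness[mutant] > fitness[genome]:
                genome, improved = mutant, True
    return fitness[genome]                           # local optimum reached

results = [hill_climb(random.randrange(2 ** N)) for _ in range(50)]
trapped = sum(1 for f in results if f < best)        # runs stuck below the global optimum
```

On a fully random landscape of this size there are hundreds of local optima, so most climbs end at one of them rather than at the single global peak.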
Figure 4. Part of Macraspis, an aspidogastrean from elasmobranchs. Note the large number of alveoli, separated by transverse septa, forming the ventral adhesive disc, which resembles a computer-generated repetitive pattern (Figure 1). © Klaus Rohde
Figure 5. Reconstruction of the nervous system in the middle part of the aspidogastrean Multicotyle purvisi from turtles, based on serial transverse sections. Note the large number of nerves innervating the adhesive disc, and the repetitive pattern of the adhesive disc. Original Klaus Rohde. © Klaus Rohde
Figure 6. Anterior part of the nervous system of the aspidogastrean Multicotyle purvisi near the brain, reconstructed from serial sections. Note: the basic pattern of the nervous system consists of longitudinal nerves (= connectives) and circular (transverse) nerves (= commissures). In most flatworms (Platyhelminthes) there are three pairs of connectives (ventral, lateral and dorsal nerves in Figure 5) connected by external commissural rings. In this species, however, there are nine anterior pairs of connectives connected by a number of internal and external rings. The dorsal part of one of the interior rings is enlarged to form a brain (cerebral commissure). Redrawn and strongly modified from . Original Klaus Rohde. © Klaus Rohde
That few mutations are sufficient to cause complex patterns is also suggested by the great variation in the complexity of similar structures in related species. For example, the pigmentation pattern of cone shells varies enormously between species (suggesting that single or few mutations are responsible for the differences, which is possible only in relatively short programs). The patterns are very similar to patterns generated by randomly selected cellular automata with simple rules (which also suggests that the genetic programs for the patterns are short). Since the pigmented shells of some species are covered and at least partly obscured by living tissue, it is not likely that the patterns have arisen by natural selection. Simple organisms are very frequent, and even in large complex animals organs have some fairly simple components. The reason may be that selection can effectively optimise only relatively simple features, which have therefore survived. In general, all this means that natural selection tends to avoid complexity rather than increase it. That complex organisms exist is a consequence of the random addition of “programs” (species), many of which happen to have complex features.

Support from other approaches?

That perfect optimisation is impossible and that complexity is not necessarily a result of natural selection was also suggested by some authors using completely different approaches. For example, Hengeveld and Walter (1999, further references therein) concluded that optimisation of ecological traits can rarely be achieved because of great temporal and spatial environmental variability. However, Wolfram goes further and suggests that optimisation is impossible even when environmental conditions are constant. Kauffman (1993) also put the overriding importance of natural selection in doubt, claiming that many traits of organisms have evolved not because of but in spite of natural selection.
He mentions similarities with the ideas of the rational morphologists Goethe, Cuvier and St. Hilaire, who tried to discover some logic or laws which “explained similar organisms as variations on some simple mechanisms that generate living forms”. Kauffman concluded that species are trapped in local optima, and that global optima can seldom (if ever) be reached, i.e., species are not optimally adapted.

The principle of computational equivalence in ecology

Lawton (1999) has pointed out that repeated patterns in ecological communities and populations are widespread, but that the discovery of universally valid ecological “laws” is highly unlikely. The principle of computational equivalence supplies the theoretical justification for this statement: predictive laws are impossible for complex systems (see discussion in ).
In spite of the criticisms by various authors, Wolfram’s approach may lead to exciting new findings in ecology and evolutionary biology, by systematically exploring various computational possibilities and by suggesting genetic, developmental, comparative anatomical and other studies. The approach is also useful for interpreting evolutionary/phylogenetic/paleontological patterns.
References

Boguta, K. (2003). Comments on a review of NKS. http://forum.wolframscience.com/showthread.php?threadid=271

Cybenko, G. (2002). Review of NKS. Computing in Science & Engineering.

Hengeveld, R. and Walter, G.H. (1999). The two coexisting ecological paradigms. Acta Biotheoretica 47, 141-170.

Kauffman, S.A. (1993). The origins of order. Self-organization and selection in evolution. Oxford University Press, New York/Oxford.

Lawton, J.H. (1999). Are there general laws in ecology? Oikos 84, 177-192.

Littlewood, D.T.J., Rohde, K., Bray, R.A. and Herniou, E.A. (1999). Phylogeny of the Platyhelminthes and the evolution of parasitism. Biological Journal of the Linnean Society 68, 257-287.

Lloyd, S. (2006). Programming the Universe. Knopf.

Molofsky, J. and Bever, J.D. (2004). A new kind of ecology? BioScience 54, 440-446.

Rensch, B. (1954). Neuere Probleme der Abstammungslehre. Ferdinand Enke Verlag, Stuttgart. English translation 1959: Evolution above the species level. Columbia University Press, New York.

Rohde, K. (1971). Untersuchungen an Multicotyle purvisi Dawes, 1941 (Trematoda, Aspidogastrea). III. Licht- und elektronenmikroskopischer Bau des Nervensystems. Zoologische Jahrbücher, Abteilung für Anatomie 88, 320-363.

Rohde, K. (2005a). Cellular automata and ecology. Oikos 110, 203-207.

Rohde, K. (2005b). Nonequilibrium ecology. Cambridge University Press, Cambridge.

Weaver, W. (1948). Science and complexity. American Scientist 36, 536.

Wolfram, S. (1986). Theory and applications of cellular automata. Advanced Series on Complex Systems. World Scientific Publishing, Singapore.

Wolfram, S. (2002). A new kind of science. Wolfram Media, Champaign, IL.
Other relevant knols