The major claim of the book is that a certain class of search strategies, characterized by combining blocks of genetic material that have been pre-adapted in different environments, is inherently different from standard random-mutation gradualist evolution, and is capable of discovering optima in complex fitness landscapes that gradualism cannot. It does this by moving through the landscape in steps under which the local optima that trap gradualism are no longer traps at all.
Compositional Evolution doesn't circumvent the No-Free-Lunch theorems, of course, so this family of search strategies (like all others) is only efficient on a search space (i.e. cost function) with certain properties. The main requirement is that it be hierarchically decomposable – that is, it must contain groups ("modules") of variables ("genes") which encapsulate internal complexity. The modules are still allowed to have strong dependencies on other modules – i.e. two modules may be interdependent in such a way that a certain configuration of one module confers high fitness only if a certain configuration of another module is also present. This modular structure must be recursive, so that the modules contain modules which contain modules, all the way down.
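To make this concrete, here is a minimal sketch (my own illustration, not Watson's code) of a recursively modular fitness function in the spirit of the hierarchical test problems the book uses: each block of bits earns a bonus only when all of its bits agree, and its two halves are themselves scored the same way, all the way down.

```python
def hiff(bits):
    """Score a bit string whose length is a power of two.

    A block earns a bonus equal to its length if all of its bits agree;
    credit earned inside each half is kept regardless, so a solved
    sub-module is rewarded even when the block containing it is not.
    """
    n = len(bits)
    if n == 1:
        return 1
    half = n // 2
    bonus = n if len(set(bits)) == 1 else 0
    return hiff(bits[:half]) + hiff(bits[half:]) + bonus


print(hiff([0, 0, 0, 0, 1, 1, 1, 1]))  # 24: both size-4 modules solved, top block unsolved
print(hiff([0, 0, 0, 0, 0, 0, 0, 0]))  # 32: a global optimum at this length
```

Note that a module's internal credit is retained even when the larger block containing it is not yet solved, which is exactly the property that makes pre-adapted modules worth combining.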
The basic idea is that these modules can be swapped around until a good combination is found. This module of modules then acts as a new, larger unit of selection and is itself combined randomly with other higher-order modules. In order for evolution to make use of compositional operators, life would need to have (roughly) a hierarchical modular structure. This is a testable claim, and given how much we know about the genetics of E. coli and yeast, it shouldn't be too impractical to look for such modules. Watson himself suggests several areas for research. Of course, this raises questions about what exactly modularity would prove. Is modularity a sign of intelligent engineering, or a way to explain blind evolution? Is a lack of modularity the sign of an obviously undesigned kludge, or proof that natural selection couldn't overcome the many dependencies needed to create higher life forms?
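A toy illustration of why module swapping matters, using the hiff sketch above (again my own illustration, with made-up names): a genome made of two internally solved but mutually inconsistent modules is a local optimum under single-bit mutation, yet a single compositional move that swaps in a whole module from another lineage reaches the global optimum.

```python
import random

def hill_climb(bits, steps=2000):
    """Gradualism as single-bit-flip hill climbing: keep a flip only if it helps."""
    best = hiff(bits)
    for _ in range(steps):
        i = random.randrange(len(bits))
        bits[i] ^= 1
        if hiff(bits) > best:
            best = hiff(bits)
        else:
            bits[i] ^= 1  # revert the unhelpful flip
    return best

stuck = [0, 0, 0, 0, 1, 1, 1, 1]   # two solved modules that disagree with each other
print(hill_climb(stuck[:]))        # stays at 24: no single flip improves it

donor = [1, 1, 1, 1, 0, 0, 0, 0]   # a different local optimum from another lineage
composed = donor[:4] + stuck[4:]   # swap in the donor's left-hand module wholesale
print(hiff(composed))              # 32: the global optimum in one compositional move
```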
The two models of compositional evolution examined in the book are sexual recombination (the random combination of genes from both parents, achieved by breaking each genome at certain points and assembling the pieces into one new genome) and symbiogenesis (the fusing of two symbiotic organisms into one), which Watson describes as the two extremes of compositional evolution (symbiogenesis being at the high end because it incorporates all the genetic material from both "parents"). The main issue for all compositional mechanisms is discovering the modules. This is easily done in top-down, designed systems... but can it be done bottom-up, with no prior knowledge of the modules?
With respect to sexual recombination, the answer is "not really". In order for sexual recombination to act in a compositional manner, several requirements must hold. First, the parents must come from different niches, or at least have evolved in somewhat different environments; otherwise the population converges on its most fit individual without ever discovering regions of higher fitness. Second, the number of breakpoints must be constant (that is, not a function of genome size); otherwise the overactive recombination is too likely to break up existing good modules from the parents. And third, interdependent ("linked") modules must be adjacent on the chromosome; if they are not, they are unlikely to travel together during recombination.
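For concreteness, here is a rough sketch (names and details mine, not Watson's operator) of crossover with a fixed number of breakpoints. Because k does not grow with genome length, a tightly linked module sitting on adjacent loci has a good chance of being inherited intact, whereas uniform crossover would almost certainly split it.

```python
import random

def k_point_crossover(parent_a, parent_b, k=2):
    """Recombine two equal-length genomes at k breakpoints (k fixed,
    independent of genome length), alternating segments between parents."""
    n = len(parent_a)
    cuts = sorted(random.sample(range(1, n), k)) + [n]
    child, use_a, prev = [], True, 0
    for cut in cuts:
        child.extend((parent_a if use_a else parent_b)[prev:cut])
        use_a, prev = not use_a, cut
    return child

# Parents pre-adapted in different niches; with only two cuts, each
# size-4 module usually survives as a unit in the offspring.
print(k_point_crossover([0, 0, 0, 0, 1, 1, 1, 1],
                        [1, 1, 1, 1, 0, 0, 0, 0]))
```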
Of these, the second and the third are not very biologically plausible. Related genes are often collocated in bacteria but not, ironically, in organisms that reproduce sexually. Watson acknowledges this and postulates that linkage may act compositionally with lateral gene transfer in bacteria, and that the important compositional unit in higher organisms is the gene, not groups of genes. The gene-as-compositional-unit comment, however, ignores the fact that his models are built upon hierarchical modularity; it is unclear what benefit a single level of modularity provides over traditional gradualism. Indeed, Watson seems disappointed with the restrictions on recombination and is most proud of his model of symbiogenesis.
Watson's model of symbiogenesis begins with many organisms, each containing only one of the lowest-level modules. Pairs of organisms are then randomly tested for the stability (fitness increase) of a join between them. If the join is more fit than both of its "parents" across a variety of contexts (assembled from fully-specified genomes), it is kept; otherwise it is discarded. The unit of selection is entire organisms, and as the algorithm progresses, the correct modules are discovered.
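As I understand it, the join test looks roughly like the following sketch (my own reconstruction with made-up names, not Watson's actual algorithm): a partial genome is scored by embedding it in randomly chosen fully specified genomes, and a join is kept only if its average score beats both of its parents scored the same way.

```python
import random

def context_score(partial, population, fitness, trials=10):
    """Score a partial genome (dict: locus -> value) by completing it with
    randomly drawn fully specified genomes and averaging the results."""
    total = 0.0
    for _ in range(trials):
        context = dict(random.choice(population))  # a full genome as background
        context.update(partial)                    # the partial spec overrides it
        total += fitness(context)
    return total / trials

def join_is_stable(mod_a, mod_b, population, fitness):
    """Keep the symbiotic join only if it beats both 'parents' across contexts."""
    joined = {**mod_a, **mod_b}
    return (context_score(joined, population, fitness)
            > max(context_score(mod_a, population, fitness),
                  context_score(mod_b, population, fitness)))

# Example with the hiff sketch above, loci 0..7:
pop = [{i: random.randint(0, 1) for i in range(8)} for _ in range(20)]
left = {i: 0 for i in range(4)}        # one pre-adapted module
right = {i: 0 for i in range(4, 8)}    # a compatible module from another organism
as_list = lambda g: [g[i] for i in range(8)]
print(join_is_stable(left, right, pop, lambda g: hiff(as_list(g))))  # typically True
```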
Aside from the fact that symbiotic encapsulation is at best rare and can't account for most of the genetic data, the thing that sounds most fishy to me is that join stability is tested with fully-specified individuals (created by concatenating existing individuals until a full specification is formed). This seems to me like look-ahead not available to evolutionary methods. In fact, Watson himself refers to the process as "stochastic lookahead" and justifies it this way:
We want to select for good modules, but what makes a "good module"? A good module is precisely the thing that, when used in assemblies in the future, will produce good assemblies. So, how do we find such an entity? By trying it in a number of assemblies and seeing how it does. (p. 261)

The idea, I think, is that joins which are rejected are evolutionary dead ends. Looking ahead to see this allows us to conveniently disregard them before they are actually shown to be unfit. But suppose an ultimately unstable join is so much better in the short run that it eliminates the possibility of an optimal join? It seems to me that a more biologically accurate model would keep joins that are optimal at the time, rather than those which are optimal in some future context.