The topmost part of the figure shows a GP system that is fed a fitness function that evaluates the quality of crossover code, for example by computing the fitnesses of children generated from a few thousand parents selected from GP runs on about a dozen standard GP examples. Thus, the output of this GP system is code for crossover operators, typically thousands of different variants of such operators.
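The meta-level fitness evaluation described above can be sketched as follows. This is a minimal illustration, not the actual ADATE implementation: the function name `crossover_fitness`, the parent pools, the per-problem fitness functions, and the random pairing scheme are all assumptions introduced here for clarity.

```python
import random

# Hypothetical sketch: score a candidate crossover operator by the average
# fitness of the children it produces from parents drawn from ordinary GP
# runs on a collection of standard GP example problems.
def crossover_fitness(crossover, parent_pools, problem_fitnesses, n_pairs=1000):
    """Average child fitness over random parent pairs from each problem.

    crossover        : function (parent1, parent2) -> (child1, child2)
    parent_pools     : one list of parent programs per example problem
    problem_fitnesses: one fitness function per example problem
    """
    total, count = 0.0, 0
    for parents, fitness in zip(parent_pools, problem_fitnesses):
        for _ in range(n_pairs):
            p1 = random.choice(parents)
            p2 = random.choice(parents)
            c1, c2 = crossover(p1, p2)
            total += fitness(c1) + fitness(c2)
            count += 2
    return total / count
```

A GP system searching over crossover operators would call such a function on each synthesized variant and keep the highest-scoring ones.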
Crossover is very simple to implement. My standard GP system, which is quite rudimentary and not particularly useful, implements crossover as follows in Standard ML.
fun crossExps( E1 : exp, E2 : exp ) : exp * exp =
  let
    val Pos1 = rand_choice( all_poses E1 )
    val Pos2 = rand_choice( all_poses E2 )
    val Sub1 = pos_to_sub( E1, Pos1 )
    val Sub2 = pos_to_sub( E2, Pos2 )
  in
    ( pos_replace( E1, Pos1, Sub2 ), pos_replace( E2, Pos2, Sub1 ) )
  end

The general auxiliary functions employed above (all_poses, rand_choice, pos_to_sub and pos_replace) have even simpler code. This code complexity is low, for example when compared with a propositional resolution theorem prover synthesized by ADATE. Thus, automatically synthesizing crossover code from first principles is a relatively easy task. The drawing above shows two different parts, crossover code and mutation code, being improved in parallel. Note that thousands of different crossover and mutation codes are produced in a single run (one iteration in the figure) and that there are millions of different (crossover, mutation) code pairs that can be tried. Self-improvement takes place when following a black arrow in the drawing and replacing a part of a system with synthesized code.
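To make the simplicity of these auxiliary functions concrete, here is a hedged Python sketch of the same subtree-crossover scheme. The representation of expressions as nested tuples and the names of the helpers are assumptions mirroring the Standard ML identifiers above, not the actual ADATE code.

```python
import random

# Hypothetical Python analogue of crossExps above. An expression is either
# a leaf (e.g. a string) or a tuple ("op", child1, child2, ...); a position
# is a path of child indices from the root.

def all_poses(e):
    """Enumerate all positions (index paths) in expression e."""
    poses = [()]  # the root itself
    if isinstance(e, tuple):
        for i, child in enumerate(e[1:]):
            poses += [(i,) + p for p in all_poses(child)]
    return poses

def pos_to_sub(e, pos):
    """Return the subexpression of e at position pos."""
    for i in pos:
        e = e[i + 1]  # skip the operator at index 0
    return e

def pos_replace(e, pos, sub):
    """Return e with the subexpression at pos replaced by sub."""
    if not pos:
        return sub
    i = pos[0]
    return e[:i + 1] + (pos_replace(e[i + 1], pos[1:], sub),) + e[i + 2:]

def cross_exps(e1, e2):
    """Swap a randomly chosen subtree of e1 with one of e2."""
    pos1 = random.choice(all_poses(e1))
    pos2 = random.choice(all_poses(e2))
    sub1 = pos_to_sub(e1, pos1)
    sub2 = pos_to_sub(e2, pos2)
    return pos_replace(e1, pos1, sub2), pos_replace(e2, pos2, sub1)
```

Each helper is a few lines of straightforward tree traversal, which is the point of the complexity comparison above.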