The Quest for the Holy Grail

Last week’s seminar focused on Genetic Algorithms with varying population size. Like other GA parameters, population size was the subject of several studies in the 1990s that explored the established notion that different stages of the search require populations of different sizes. In general, larger populations are needed at the beginning of the search, and as the search proceeds towards the end, the GA is able to work with fewer individuals. One of the main objectives of the research on dynamic population GAs was to eliminate the population size parameter and design a GA that could adapt its size independently of the number of individuals in the first generation. That way, we could not only speed up the search process but also reduce the tuning effort. (On the other hand, improvements in scalability are, in my opinion, impossible to achieve with just a plain dynamic scheme.)

Some schemes only aim at overcoming the hard task of determining an optimal population size, and they do so by mimicking the way a GA expert tunes the population size (see also K. Sastry’s bisection method). The parameter-less GA (Lobo et al.) works that way, evolving simultaneous and increasingly larger populations until the convergence criterion is met. This method does not speed up a standard GA; it may even slow it down, but the worst-case slowdown is small enough that it is well worth using the parameter-less GA to avoid tuning the initial population size. Other techniques try to speed up the algorithms by dynamically changing the population size during the run, in a deterministic or adaptive manner.
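To make the idea concrete, here is a minimal sketch of the doubling schedule behind the parameter-less GA, on a toy OneMax problem. It is deliberately simplified: instead of Lobo’s interleaved counter scheme that runs several populations simultaneously, this version runs each candidate size to completion before doubling, and all GA operators (tournament selection, uniform crossover, bit-flip mutation, one elite) are generic choices of mine, not the original paper’s exact configuration.

```python
import random

def run_ga(pop_size, n_bits=20, max_gens=60, rng=None):
    """One generational GA run on OneMax: tournament selection, uniform
    crossover, bit-flip mutation, one elite. Returns the best fitness."""
    rng = rng or random.Random()
    fit = lambda g: sum(g)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(max_gens):
        elite = max(pop, key=fit)
        if fit(elite) == n_bits:          # optimum reached, stop early
            return n_bits
        new_pop = [elite]                 # elitism: keep the best individual
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 2), key=fit)   # binary tournaments
            p2 = max(rng.sample(pop, 2), key=fit)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in child]
            new_pop.append(child)
        pop = new_pop
    return fit(max(pop, key=fit))

def parameter_less(n_bits=20, max_doublings=8, seed=1):
    """Doubling schedule: try population sizes 4, 8, 16, ... until a run
    solves the problem, relieving the user of choosing the size upfront."""
    rng = random.Random(seed)
    size = 4
    for _ in range(max_doublings):
        if run_ga(size, n_bits, rng=rng) == n_bits:
            return size
        size *= 2
    return size
```

The point of the sketch is the wrapper, not the inner GA: the cost of the failed small runs is bounded by a constant factor of the successful one, which is why the worst-case penalty of the real parameter-less GA is considered acceptable.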

Back in 1994, the authors (Arabas et al.) of the Genetic Algorithm with Varying Population Size (GAVaPS) claimed that the algorithm was able to adapt its population size during the run, according to the average fitness of the population and to the quality of the best or worst solutions. However, further studies could not replicate the described behaviour. Instead, the GAVaPS population easily evolved towards extinction, or experienced demographic bursts that increased its size and the computational effort. The Adaptive Population Size GA (APGA) (Eiben et al.) tries to overcome this problem by adopting traditional steady-state reproduction, that is, the population only generates two offspring in each generation. Although the authors presented a study where APGA outperforms several dynamic population sizing methods, F. Lobo and C. Lima later demonstrated that the algorithm does not really adapt its population size; instead, the size remains below a specific value after a certain number of generations. (In addition, they showed that the test set where APGA apparently excels is probably not suited to evaluating GAs with dynamic populations.)
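Lobo and Lima’s point can be reproduced with a small experiment. The sketch below implements the lifetime mechanism in APGA style (a steady-state loop inserting exactly two offspring per generation), using GAVaPS’s proportional lifetime rule rather than the exact APGA formula, on a toy OneMax problem; the operators and parameter values are illustrative choices of mine. Since each offspring lives at most max_lt generations and only two are born per generation, the population size is bounded by 2 × max_lt once the initial individuals die out, regardless of the problem — which is exactly why the “adaptation” is illusory.

```python
import random

def apga_sketch(init_size=50, max_lt=7, min_lt=1, n_bits=20, gens=60, seed=3):
    """Steady-state GA with fitness-based lifetimes (APGA-style sketch).
    Each generation every individual's remaining lifetime drops by one,
    dead individuals are removed, and exactly two offspring are inserted.
    Returns the population-size trace, one entry per generation."""
    rng = random.Random(seed)
    fit = lambda g: sum(g)                      # OneMax fitness

    def lifetime(f, fits):
        """Proportional allocation: fitter individuals live longer,
        clamped to [min_lt, max_lt]."""
        best, worst = max(fits), min(fits)
        if best == worst:
            return (max_lt + min_lt) // 2
        raw = min_lt + round((max_lt - min_lt) * (f - worst) / (best - worst))
        return max(min_lt, min(max_lt, raw))

    def mutate(g):
        return [b ^ (rng.random() < 1.0 / n_bits) for b in g]

    pop = [{"geno": [rng.randint(0, 1) for _ in range(n_bits)], "life": max_lt}
           for _ in range(init_size)]
    sizes = []
    for _ in range(gens):
        # steady-state step: breed exactly two offspring per generation
        fits = [fit(ind["geno"]) for ind in pop]
        offspring = []
        for _ in range(2):
            p1 = max(rng.sample(pop, 2), key=lambda i: fit(i["geno"]))
            p2 = max(rng.sample(pop, 2), key=lambda i: fit(i["geno"]))
            cut = rng.randrange(1, n_bits)
            child = mutate(p1["geno"][:cut] + p2["geno"][cut:])
            offspring.append({"geno": child,
                              "life": lifetime(fit(child), fits)})
        # age everyone, remove the dead, then insert the offspring
        for ind in pop:
            ind["life"] -= 1
        pop = [ind for ind in pop if ind["life"] > 0] + offspring
        sizes.append(len(pop))
    return sizes
```

Running it shows the trace collapsing from the initial 50 individuals to a plateau no higher than 2 × max_lt = 14, whatever the fitness landscape: the size is dictated by the lifetime cap, not adapted to the problem.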

PRoFIGA (Valkó) is an example of deterministic control of the population size (GAVaPS and APGA are adaptive). Unfortunately, it is also an example of an extremely complex algorithm when it comes to tuning. PRoFIGA removes the population size from the usual set of parameters, but it adds six (!) parameters of its own. One of the main guidelines for parameter control is: keep it simple. If we replace one parameter with another (or several!), then the first objective – simplifying the tuning for a non-expert user – is not met.

In 2001 (at GECCO), De Jong identified some of the open problems in Evolutionary Computation. Dynamic populations was one of them. Since then, not much has been done that we could consider a major breakthrough. In fact, varying population size is a serious candidate for the title of Evolutionary Computation’s Holy Grail! Take, for instance, the self-adaptation of the optimal population size. If the algorithm starts with a smaller population than required, it has to grow, but it also has to add genetic diversity along the way, because the initial genetic pool will not be able to converge to a solution (if it could, then the optimal population size would be the smaller one, not the expected one). Is it possible to design an effective dynamic scheme (that is, a scheme that could start with a population of random size and then adapt to the proper size within the first generations)? Or should we start considering other applications for dynamic population GAs, and abandon the quest for a GA that is independent of the initial population size?

This entry was posted in EA, Friday_Paper_Seminar by cfernandes81.

About cfernandes81

Carlos M. Fernandes was born in Luanda in 1973 and lives between Lisbon, Portugal, and Granada, Spain. He graduated in Electrotechnical Engineering (Technical University of Lisbon, 1998) and has held a master’s degree in the same field since 2002 (Technical University of Lisbon). He is currently pursuing a Ph.D. in Bio-inspired Computing. From 2001 to 2005 he was an assistant at Instituto Politécnico de Setúbal. (He is also a photographer and photography teacher.) Bio-inspired Computing is his major field of research: Genetic Algorithms, Estimation of Distribution Algorithms, Ant Colony Optimization, Particle Swarm Optimization and other metaheuristics. He is particularly interested in the hybridization of Bio-inspired Computing techniques with Self-Organization, Self-Organized Criticality models and diversity maintenance strategies. At present, Dynamic Optimization Problems are his main target for applying such techniques.
