Study of a Mutation Operator for Genetic Algorithms Based on the Theory of Self-Organized Criticality

We have extended our studies on the sandpile mutation operator; the new work was presented at MAEB2012 and published in the proceedings (in Spanish). The abstract, translated:

The sandpile mutation is an operator for Genetic Algorithms based on a Self-Organized Criticality model of the same name. The operator was designed with the aim of solving problems with time-varying fitness functions. This paper proposes a study of the operator and the optimization of its performance, experimenting with different strategies for coupling the self-organized model and the Genetic Algorithm. The algorithm is tested on a large set of dynamic problems, designed with a problem generator built from non-dynamic base functions. The best configurations of the algorithm are compared with two recently proposed GAs for dynamic optimization. We show that a GA with the sandpile mutation operator is efficient on the proposed test set.
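The generator mentioned in the abstract builds changing problems out of stationary base functions. Here is a minimal sketch of that general idea (the names and parameters are illustrative, not the generator's actual code): each environmental change XORs candidate solutions with a bit mask, moving the optimum without altering the structure of the landscape.

```python
import random

def onemax(bits):
    """Stationary base function: count of ones."""
    return sum(bits)

def make_dynamic(base_f, length, severity):
    """Wrap a stationary function so its optimum moves when shift() is called.

    Each shift flips a fraction `severity` of the bits in a mask; candidate
    solutions are XORed with the mask before evaluation, which relocates the
    optimum while preserving the landscape's structure.
    """
    mask = [0] * length

    def f(bits):
        return base_f([b ^ m for b, m in zip(bits, mask)])

    def shift():
        for i in random.sample(range(length), int(severity * length)):
            mask[i] ^= 1

    return f, shift

f, shift = make_dynamic(onemax, length=32, severity=0.1)
x = [random.randint(0, 1) for _ in range(32)]
print(f(x))   # fitness before the change
shift()       # environment changes: ~10% of the mask bits flip
print(f(x))   # same genotype, new fitness
```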

New journal paper in press

C.M. Fernandes, J.J. Merelo, A.C. Rosa, A comparative study on the performance of dissortative mating and immigrants-based strategies for evolutionary dynamic optimization, Information Sciences, in press.

Abstract – Traditional Genetic Algorithms' (GAs) mating schemes select individuals for crossover independently of their genotypic or phenotypic similarities. In Nature, this behavior is known as random mating. However, non-random protocols, in which individuals mate according to their kinship or likeness, are more common in natural species. Previous studies indicate that when applied to GAs, dissortative mating – a type of non-random mating in which individuals are chosen according to their similarities – may improve their performance (in both speed and reliability). Dissortative mating maintains genetic diversity at a higher level during the run, a fact that is frequently pointed to as a possible cause of dissortative GAs' ability to escape local optima. Dynamic optimization demands special attention when designing and tuning a GA, since diversity plays an even more crucial role there than it does in static problems. This paper investigates the behavior of the Adaptive Dissortative Mating GA (ADMGA) in dynamic problems and compares it to GAs based on random immigrants. ADMGA selects parents according to their Hamming distance, via a self-adjustable threshold value. The method, by keeping population diversity during the run, provides an effective means to deal with dynamic problems. Tests conducted with dynamic trap functions and dynamic versions of Royal Road and knapsack problems indicate that ADMGA is able to outperform other GAs on a wide range of tests, being particularly effective when the frequency of changes is low. Specifically, ADMGA outperforms two state-of-the-art algorithms on many dynamic scenarios. In addition, and unlike preceding dissortative mating GAs and other evolutionary techniques for dynamic optimization, ADMGA self-regulates the intensity of the mating restrictions and does not increase the set of parameters in GAs, thus being easier to tune.
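As a rough illustration of the mating rule described in the abstract, here is a minimal sketch of dissortative parent selection with a self-adjusting Hamming-distance threshold. The function names and the exact threshold update are assumptions for illustration, not ADMGA's actual implementation:

```python
import random

def hamming(a, b):
    """Number of positions at which two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, threshold, max_tries=20):
    """Pick two parents whose Hamming distance is at least `threshold`.

    Also returns a hint for the self-adjustment step: if no sufficiently
    dissimilar partner is found, the threshold should relax (decrease);
    if pairing succeeds, it can tighten (increase).
    """
    first = random.choice(population)
    for _ in range(max_tries):
        second = random.choice(population)
        if hamming(first, second) >= threshold:
            return first, second, +1   # success: tighten the restriction
    return first, second, -1           # failure: relax the restriction

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
threshold = 5
p1, p2, delta = dissortative_pair(pop, threshold)
threshold = max(0, threshold + delta)  # self-adjustment step
print(hamming(p1, p2), threshold)
```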

The Sandpile Mutation Operator at Torino, Italy

Last Thursday morning, at the Evo* conference, I presented our study on the mutation rates evolved by the Sandpile Mutation Operator, an alternative mutation scheme for GAs, specifically designed for Dynamic Optimization Problems and based on Self-Organized Criticality theory and the Bak-Tang-Wiesenfeld sandpile model.

Environment-driven evolution in robot swarms

In last Friday's paper seminar we discussed a paper on environment-driven evolution in robot swarms, which was presented at PPSN XI, which we attended, as we mentioned before.

The authors present a nice piece of work on swarm robotics in which they try to evolve robot controllers using a fixed-size population of autonomous robots. Evolution takes place in a decentralized fashion, with no information provided about a possibly changing environment. In that context, evolution is challenged to react to changes online and self-adapt to the environment without the global knowledge of the problem that a fitness function would provide. In this setting, “fitness” is implicit in the environment, and the success criterion of a given strategy is defined as follows: a strategy is successful if it manages to spread over the population.

To that aim, the authors propose mEDEA (minimal Environment-driven Distributed Evolutionary Adaptation), an intuitive algorithm that tackles the problem and tries to evolve controllers following a simple but elegant rule: robot controllers that maximize the number of matings while avoiding running out of energy will succeed in spreading their genomes.
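A heavily abstracted sketch of one such environment-driven generation, with random encounters standing in for spatial proximity and the energy mechanics omitted (all names and parameters are illustrative assumptions, not the authors' implementation):

```python
import random

GENOME_LEN, N_ROBOTS, SIGMA = 8, 20, 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Gaussian variation, common in evolutionary robotics
    return [g + random.gauss(0, SIGMA) for g in genome]

def medea_generation(genomes, meet_prob=0.3):
    """One abstracted generation: every robot broadcasts its genome to the
    robots it happens to meet; afterwards each robot discards its own genome
    and adopts a mutated copy of one randomly chosen received genome. Robots
    that met nobody end up empty, so genomes achieving many matings spread."""
    inboxes = [[] for _ in genomes]
    for i, g in enumerate(genomes):
        for j in range(len(genomes)):
            if i != j and random.random() < meet_prob:
                inboxes[j].append(g)
    return [mutate(random.choice(box)) if box else None for box in inboxes]

genomes = [random_genome() for _ in range(N_ROBOTS)]
genomes = medea_generation(genomes)
print(sum(g is not None for g in genomes), "robots still carry a genome")
```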

Workshops at PPSN XI

Last week, in conjunction with PPSN XI, the Self* 2010 and PARCO 2010 workshops were held in Kraków (Poland), where we presented some of our most recent work. You can find the respective presentations below.

  • A Self-Organized Critically Online Adjustment of Genetic Algorithms’ Mutation Rate

This paper describes an alternative mutation control scheme for Genetic Algorithms (GAs) inspired by the Self-Organized Criticality (SOC) theory. The strategy, which mimics a SOC system known as the sandpile, is able to generate mutation rates that, unlike those generated by other methods of adaptive parameter control, oscillate between very low values and cataclysmic mutations. In order to attain the desired behaviour, the sandpile is not just attached to a GA; it is also modified so that its behaviour reflects the stage of the search, i.e., the fitness distribution of the population. Due to these characteristics, the sandpile mutation arises as a promising candidate for an efficient and yet simple and context-independent approach to dynamic optimization. An experimental study confirms this assumption: a GA with sandpile mutation outperforms a recently proposed SOC-based GA for dynamic optimization. Furthermore, the proposed method does not increase traditional GAs' parameter set.
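To make the connection concrete, below is a minimal Bak-Tang-Wiesenfeld sandpile whose avalanche sizes could drive mutation rates. The coupling with the population's fitness distribution described in the paper is omitted, and the mapping from avalanche size to mutation rate is an assumption for illustration only:

```python
import random

SIZE = 10  # lattice side; avalanche sizes follow a power law on large lattices

def drop_grain(grid):
    """Drop one grain at a random site and relax the lattice (BTW rule:
    a site holding 4 or more grains topples, sending one grain to each
    neighbour; grains leaving the boundary are lost). Returns the
    avalanche size, i.e. the number of topplings."""
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    avalanche = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        avalanche += 1
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return avalanche

grid = [[random.randint(0, 3) for _ in range(SIZE)] for _ in range(SIZE)]
rates = []
for _ in range(1000):
    size = drop_grain(grid)
    # Illustrative mapping from avalanche size to a per-bit mutation rate:
    # mostly near zero, occasionally cataclysmic
    rates.append(min(1.0, size / (SIZE * SIZE)))
print(max(rates), sum(rates) / len(rates))
```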

  • Influence of the Population Structure on the Performance of an Agent-based Evolutionary Algorithm

The Evolvable Agent model is a Peer-to-Peer Evolutionary Algorithm which focuses on distributed optimization over Peer-to-Peer infrastructures. The key idea of the model is that every agent-individual is designated as a peer and adopts a decentralised population structure defined by the Peer-to-Peer protocol newscast. In that context, this work aims to compare the performance of the approach when using two additional population structures other than newscast: a ring and a Watts-Strogatz topology.
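For a feel of the structures being compared, the two static topologies can be generated with networkx (assuming it is available); newscast itself is a dynamic gossip protocol, so no static graph is an exact stand-in for it:

```python
import networkx as nx

N = 100  # population size: one node per evolvable agent

# Ring: each agent interacts only with its two neighbours
# (long paths, slow mixing of genetic material)
ring = nx.cycle_graph(N)

# Watts-Strogatz: a rewired ring, i.e. a small-world graph with short paths
# and high clustering; newscast, in contrast, induces a dynamic,
# random-like communication graph
ws = nx.connected_watts_strogatz_graph(N, k=4, p=0.1)

for name, g in (("ring", ring), ("watts-strogatz", ws)):
    print(name, "average path length:",
          round(nx.average_shortest_path_length(g), 2))
```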

Using Dissortative Mating Genetic Algorithms to Track the Extrema of Dynamic Deceptive Functions

Traditional Genetic Algorithms' (GAs) mating schemes select individuals for crossover independently of their genotypic or phenotypic similarities. In Nature, this behaviour is known as random mating. However, non-random schemes – in which individuals mate according to their kinship or likeness – are more common in natural systems. Previous studies indicate that, when applied to GAs, negative assortative mating (a specific type of non-random mating, also known as dissortative mating) may improve their performance (in both speed and reliability) in a wide range of problems. Dissortative mating maintains genetic diversity at a higher level during the run, a fact that is frequently pointed to as an explanation for dissortative GAs' ability to escape local optima traps. Dynamic problems, due to their specificities, demand special care when tuning a GA, because diversity plays an even more crucial role than it does when tackling static ones. This paper investigates the behaviour of dissortative mating GAs, namely the recently proposed Adaptive Dissortative Mating GA (ADMGA), on dynamic trap functions. ADMGA selects parents according to their Hamming distance, via a self-adjustable threshold value. The method, by keeping population diversity during the run, provides an effective means to deal with dynamic problems. Tests conducted with deceptive and nearly deceptive trap functions indicate that ADMGA is able to outperform other GAs, some specifically designed for tracking moving extrema, on a wide range of tests, being particularly effective when the speed of change is not very fast. When comparing the algorithm to a previously proposed dissortative GA, results show that performance is equivalent on the majority of the experiments, but ADMGA performs better when solving the hardest instances of the test set.
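For reference, trap functions of the kind used in this test bed are easy to write down. The following is a minimal sketch with one common parameterization (the exact values used in the paper may differ):

```python
def trap(bits, high=5.0, low=4.0):
    """Order-k fully deceptive trap: the global optimum is all ones, but
    every point with fewer ones slopes towards the all-zeros attractor."""
    k, u = len(bits), sum(bits)
    return high if u == k else low * (k - 1 - u) / (k - 1)

def concatenated_traps(chromosome, k=5):
    """Fitness of a chromosome built from consecutive k-bit trap blocks."""
    return sum(trap(chromosome[i:i + k]) for i in range(0, len(chromosome), k))

print(concatenated_traps([1] * 20))  # global optimum: 4 blocks x 5.0 = 20.0
print(concatenated_traps([0] * 20))  # deceptive attractor: 4 x 4.0 = 16.0
```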

Full paper here.


Immigrants do all the work

Genetic Algorithms with Memory- and Elitism-based Immigrants in Dynamic Environments was the paper selected for presentation and discussion in last Friday's seminar. The article, authored by Shengxiang Yang, was published in the last Fall issue of Evolutionary Computation and addresses evolutionary optimization in dynamic environments.

Yang proposes two Genetic Algorithms (GAs) for dynamic optimization based on Grefenstette's classical Random Immigrants GA (RIGA). RIGA tackles (or tries to tackle) changing optima by inserting, in each and every generation, a certain number of randomly generated individuals that replace the worst individuals in the population (or randomly selected individuals, in another version). This way, genetic novelty is constantly being introduced, and the population is expected to retain enough diversity to react to changes in the environment. However, RIGA suffers from a major “weakness”: the raw building blocks introduced by the randomly generated individuals are quickly removed from the population because their fitness is usually below average. RIGA is frequently chosen as a peer algorithm for comparison purposes in evolutionary dynamic optimization studies, but, due to this “weak spot”, one may ask whether a Standard GA would not be better suited to assess the efficiency of a new method (in fact, studies currently being developed in our lab reinforce this hypothesis). In order to improve RIGA's performance, several alternative RIGA-based methods have been proposed in the past few years.
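The random-immigrants step itself is simple enough to show as a sketch; this is a minimal illustration of the replace-worst variant, not Grefenstette's code:

```python
import random

def insert_random_immigrants(population, fitness, ratio=0.2):
    """Replace the worst `ratio` of the population with randomly generated
    individuals, as in the replace-worst variant of RIGA."""
    length = len(population[0])
    population.sort(key=fitness)  # ascending, so the worst come first
    for i in range(int(ratio * len(population))):
        population[i] = [random.randint(0, 1) for _ in range(length)]
    return population

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
pop = insert_random_immigrants(pop, fitness=sum)  # OneMax as a toy fitness
```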

The two GAs described in Yang's paper try to overcome the problem with random immigrants by inserting into the population mutated copies of the elite (the Elitism-based Immigrants Genetic Algorithm, EIGA) or mutated copies of the chromosomes kept in a memory (the Memory-based Immigrants Genetic Algorithm, MIGA). Memory-based approaches for dynamic optimization use a memory to store good solutions, which are retrieved later, either periodically or when the environment changes. Memory GAs are known to improve traditional GAs' performance when the dynamics of the changes are cyclic, that is, when the fitness function returns to previous “shapes” from time to time; on the other hand, memory schemes are not as effective when the changes are not periodic. Therefore, and as expected, MIGA outperforms other GAs when the changes are cyclic. EIGA is better when the changes in the environment are not severe. This behaviour is explained by the fact that introducing mutated copies of the best individual into the population gives the GA the means to tackle small changes: the algorithm maintains a kind of sub-population around the optimal solution, and small shifts in the environment are easily tracked by those near-optimal individuals.
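The difference between the two schemes lies only in where the immigrants come from; here is a compact sketch of both steps (the function names and the pm and ratio values are assumptions for illustration):

```python
import random

def mutated_copy(genome, pm):
    """Flip each bit independently with probability pm."""
    return [b ^ (random.random() < pm) for b in genome]

def elitism_immigrants(population, fitness, pm=0.01, ratio=0.2):
    """EIGA-style step: immigrants are mutated copies of the current elite,
    forming a loose sub-population around the best-known solution."""
    elite = max(population, key=fitness)
    return [mutated_copy(elite, pm) for _ in range(int(ratio * len(population)))]

def memory_immigrants(memory, fitness, pm=0.01, n=6):
    """MIGA-style step: immigrants are mutated copies of the best stored
    solution, re-evaluated against the current (possibly changed) environment."""
    best = max(memory, key=fitness)
    return [mutated_copy(best, pm) for _ in range(n)]
```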

Summarizing, the study shows that MIGA and EIGA are able to outperform other GAs under the conditions of the test set. However, one question remains unanswered: what happens when the parameter values change? For instance, diversity-maintenance schemes for dynamic optimization deal with non-stationary environments by keeping diversity at a higher level, which means that the optimal mutation probability of these algorithms may differ from that of Standard GAs. Shouldn't a proper comparison between the algorithms consider a range of possible mutation probabilities (Yang's studies used the traditional pm = 1/l, where l is the chromosome length)? And what about population size? Isn't population size the major bottleneck for GAs' performance in stationary environments? Is it possible that varying the population size of a GA for dynamic optimization would lead to different conclusions? Studies currently under way will try to answer some of these questions.