“Genetic Algorithms with Memory- and Elitism-based Immigrants in Dynamic Environments” was the paper selected for presentation and discussion in last Friday’s seminar. The article, authored by Shengxiang Yang, was published in the last Fall edition of Evolutionary Computation and addresses evolutionary optimization in dynamic environments.
Yang proposes two Genetic Algorithms (GAs) for dynamic optimization based on Grefenstette’s classical Random Immigrants GA (RIGA). RIGA tackles (or tries to tackle) changing optima by inserting, in every generation, a certain number of randomly generated individuals that replace the worst individuals in the population (or randomly selected individuals, in another version). This way, genetic novelty is constantly being introduced, and the population is expected to retain enough diversity to react to changes in the environment. However, RIGA suffers from a major “weakness”: the raw building-blocks introduced by the randomly generated individuals are quickly removed from the population because their fitness is usually below average. RIGA is frequently chosen as a peer-algorithm for comparison purposes in evolutionary dynamic optimization studies, but, due to this “weak spot”, it may be questioned whether a Standard GA would not be better suited to assess the efficiency of a new method (in fact, studies currently being developed in our lab reinforce this hypothesis). In order to improve RIGA’s performance, several alternative RIGA-based methods have been proposed in the past few years.
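The random-immigrants insertion step is simple enough to sketch. The snippet below is a minimal illustration, not code from the paper: the `fitness` function (a toy OneMax) and the parameter names are my own assumptions, and only the replace-the-worst variant is shown.

```python
import random

def fitness(ind):
    # Toy stand-in for the (possibly changing) objective:
    # OneMax, the number of 1-bits in the chromosome.
    return sum(ind)

def random_immigrants_step(pop, ratio=0.2, length=20):
    """One RIGA insertion step: replace the worst `ratio` fraction
    of the population with randomly generated individuals."""
    n_imm = max(1, int(len(pop) * ratio))
    # Sort ascending by fitness so the worst individuals come first.
    pop = sorted(pop, key=fitness)
    for i in range(n_imm):
        pop[i] = [random.randint(0, 1) for _ in range(length)]
    return pop

random.seed(1)
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
population = random_immigrants_step(population)
```

Because the immigrants are random bit-strings, their expected fitness is that of a random point in the search space, which is exactly why selection tends to discard them quickly, as discussed above.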
The two GAs described in Yang’s paper try to overcome the problem with random immigrants by inserting into the population mutated copies of the elite — Elitism-based Immigrants Genetic Algorithm (EIGA) — or mutated copies of the chromosomes kept in a memory — Memory-based Immigrants Genetic Algorithm (MIGA). Memory-based approaches for dynamic optimization use a memory to store good solutions, which are retrieved later, periodically, or when the environment changes. Memory GAs are known to improve the performance of traditional GAs when the dynamics of change are cyclic, that is, when the fitness function returns to previous “shapes” from time to time; on the other hand, memory schemes are not that effective when the changes are not periodic. Therefore, and as expected, MIGA outperforms other GAs when the changes are cyclic. EIGA is better when the changes in the environment are not severe. This behaviour is explained by the fact that introducing mutated copies of the best individual provides the GA with means to tackle small changes: the algorithm maintains a kind of sub-population around the optimal solution, and small shifts in the environment are easily tracked by those near-optimal individuals.
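The immigrant-generation step shared by EIGA and MIGA can be sketched in a few lines. Again this is only an illustration under my own assumptions (parameter names, mutation rate); in EIGA the parent is the current elite, in MIGA it is the memory point closest to the current environment, and in both cases the generated immigrants replace the worst individuals in the population.

```python
import random

def mutate_copy(parent, pm):
    """Return a bit-flip-mutated copy of `parent`, flipping each
    gene independently with probability pm."""
    return [1 - g if random.random() < pm else g for g in parent]

def guided_immigrants(parent, n_imm, pm=0.05):
    # EIGA/MIGA-style step: generate n_imm mutated copies of a good
    # individual (elite or memory solution) instead of random ones.
    # The replacement of the worst individuals is omitted for brevity.
    return [mutate_copy(parent, pm) for _ in range(n_imm)]

random.seed(0)
elite = [1] * 20  # hypothetical current best individual
immigrants = guided_immigrants(elite, n_imm=4, pm=0.1)
```

With a small `pm`, the immigrants form a cloud of near-copies around the parent, which is the sub-population effect described above: after a mild environmental shift, one of these neighbours is likely to sit close to the new optimum.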
Summarizing, the study shows that MIGA and EIGA are able to outperform other GAs under the conditions of the test set. However, one question remains unanswered: what happens when the parameter values change? For instance, diversity maintenance schemes for dynamic optimization deal with non-stationary environments by keeping diversity at a higher level, which means that the optimal mutation probability of these algorithms may differ from that of Standard GAs. Shouldn’t a proper comparison between the algorithms consider a range of possible mutation probabilities (Yang’s studies used the traditional pm = 1/l, where l is the chromosome length)? And what about population size? Isn’t population size the major bottleneck for GAs’ performance in stationary environments? Is it possible that varying the population size of a GA for dynamic optimization leads the research to different conclusions? Studies currently under way will try to answer some of these questions.