Dynamic Control in Evolutionary Algorithms

Last Friday, in our weekly meeting, the paper by Di Tollo et al., “From Adaptive to More Dynamic Control in Evolutionary Algorithms”, was presented and discussed. This work is centered on adapting the application rates of different types of crossover. A performance function is defined that takes into account the quality and the diversity of the solutions generated by each crossover. By varying a user-defined parameter (theta), the relative importance of each factor can be regulated in order to set the desired compromise between quality and diversity (which suggests the idea of applying a multi-objective approach here). Then, after credit assignment for each crossover, an operator is selected by a Probability Matching (PM) or Multi-Armed Bandit (MAB) strategy.
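To make the mechanism concrete, here is a minimal Python sketch of theta-weighted credit assignment followed by Probability Matching selection; the names, the form of the reward, and the constants are illustrative assumptions, not the authors’ actual implementation.

```python
import random

# Minimal sketch of theta-weighted credit assignment plus Probability
# Matching (PM) selection. All names and constants below are
# illustrative assumptions, not the paper's implementation.

THETA = 0.5   # user-defined weight: 1.0 -> pure quality, 0.0 -> pure diversity
P_MIN = 0.05  # floor probability so no operator is ever starved
ALPHA = 0.3   # smoothing rate for the running credit estimate

def reward(quality_gain, diversity_gain, theta=THETA):
    """Aggregate performance of one crossover application."""
    return theta * quality_gain + (1.0 - theta) * diversity_gain

def update_credit(credit, op, r, alpha=ALPHA):
    """Exponentially smoothed credit for the operator just applied."""
    credit[op] = (1.0 - alpha) * credit[op] + alpha * r

def pm_probabilities(credit, p_min=P_MIN):
    """Probability Matching: selection probabilities proportional to
    credit, with a guaranteed minimum p_min per operator."""
    k = len(credit)
    total = sum(credit.values())
    if total == 0.0:
        return {op: 1.0 / k for op in credit}
    return {op: p_min + (1.0 - k * p_min) * c / total
            for op, c in credit.items()}

def select_operator(credit):
    """Draw one crossover operator according to the PM distribution."""
    probs = pm_probabilities(credit)
    ops, weights = zip(*probs.items())
    return random.choices(ops, weights=weights)[0]
```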

For testing the proposed scheme, the authors define a framework with 20 different crossover operators whose main characteristics are known (i.e., whether they favor intensification/exploitation or diversification/exploration). The system is applied to SAT problems. Several conclusions are drawn from these simple experiments. First, the type of SAT problem greatly influences the behavior of the system, as do the criteria used to compute the performance of the operator and the selection strategy (PM or MAB). That is, setting theta to a hypothetical compromise value between intensification and diversification leads to a variety of different behaviors, and not necessarily to the expected compromise.

The second part of the experiments focuses on the dynamic variation of theta. The authors conclude that the variation strategy influences the behavior of the algorithm and the progress of the search. They also conclude that it is, in fact, possible to favor diversity-oriented or quality-oriented crossovers by tuning theta; that is, the desired features can be controlled by changing the theta value during the search.
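As an illustration of what such dynamic control could look like, here are two hypothetical theta schedules; the variation strategies actually tested in the paper may differ.

```python
import math

def linear_theta(gen, max_gen):
    """Hypothetical schedule: drift from pure diversity (theta = 0)
    toward pure quality (theta = 1) as the search progresses."""
    return gen / max_gen

def oscillating_theta(gen, period=50):
    """Hypothetical schedule: alternate between diversification and
    intensification phases every `period` generations."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * gen / period))
```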

Going a bit farther (and a bit faster): solving MasterMind using evolutionary algorithms

We left MasterMind last year in a good state using estimation of distribution algorithms; however, if we want to find solutions for higher dimensions (more colors, more pegs), we have to reduce the number of evaluations. In this case we use something we call endgames: just as chess-playing programs use a database of endgames to finish a game in a straightforward way, in MasterMind we can recognize a few situations in which the search space shrinks drastically and it is better to either change the strategy or change the search space. When we know the colors (that is, when we obtain as many whites plus blacks as the length of the combination), the best option is to revert to exhaustive search over the reduced combination space; when the answer is 0 whites/blacks, we can exclude those colors from the search space and restart, possibly with a smaller population.
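Here is a minimal sketch of that endgame detection, assuming a standard blacks/whites scoring function; the helper names are ours, not the paper’s.

```python
from itertools import permutations, product

def score(secret, guess):
    """Standard MasterMind feedback: (blacks, whites)."""
    blacks = sum(s == g for s, g in zip(secret, guess))
    # Colors in common (counting multiplicity), minus exact matches.
    common = sum(min(secret.count(c), guess.count(c)) for c in set(guess))
    return blacks, common - blacks

def endgame_candidates(guess, blacks, whites, colors, length):
    """Shrink the search space when an endgame is detected.
    Illustrative helper, not the paper's exact code."""
    if blacks + whites == length:
        # All colors found: only rearrangements of the guess remain,
        # so exhaustive search over them is cheap.
        return set(permutations(guess))
    if blacks + whites == 0:
        # None of the guessed colors appear in the secret: drop them.
        remaining = [c for c in colors if c not in guess]
        return set(product(remaining, repeat=length))
    return None  # no endgame: keep the evolutionary search running
```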
This is what we do in the paper Improving and Scaling Evolutionary Approaches to the MasterMind Problem, which was presented a short time ago at the EvoGames workshop in Torino.
During the presentation, Carlos Cotta and Carlos Fernandes played the game shown above.
Here’s the presentation, which you can download at your leisure. Picture credits are included in the notes.

Parameter Tuning Again

Tomorrow’s seminar is again about parameter tuning. This time, we’ll discuss the paper “From Adaptive to More Dynamic Control in Evolutionary Algorithms” by Di Tollo et al., presented last week at the Evo* congress in Torino. Here’s the abstract:

Adaptive evolutionary algorithms have been widely developed to improve the management of the balance between intensification and diversification during the search. Nevertheless, this balance may need to be dynamically adjusted over time. Based on previous works on adaptive operator selection, we investigate in this paper how an adaptive controller can be used to achieve more dynamic search scenarios and what is the real impact of possible combinations of control components. This study may be helpful for the development of more autonomous and efficient evolutionary algorithms.

The Sandpile Mutation Operator at Torino, Italy

Last Thursday morning, at the Evo* congress, I presented our study of the mutation rates evolved by the Sandpile Mutation Operator, an alternative mutation scheme for GAs specifically designed for Dynamic Optimization Problems, based on Self-Organized Criticality theory and the Bak-Tang-Wiesenfeld sandpile model.
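For readers unfamiliar with the underlying model, here is a minimal Python sketch of the Bak-Tang-Wiesenfeld sandpile; how its avalanches are coupled to mutation rates is the subject of the paper itself, so the `drop_grain` helper below only gestures at that link and is an assumption of ours.

```python
import random

def topple(lattice, critical=4):
    """One relaxation of the Bak-Tang-Wiesenfeld sandpile on a 2-D
    lattice: every site at or above the critical height sheds one grain
    to each of its four neighbours. Returns the avalanche size."""
    n = len(lattice)
    size = 0
    unstable = [(i, j) for i in range(n) for j in range(n)
                if lattice[i][j] >= critical]
    while unstable:
        i, j = unstable.pop()
        if lattice[i][j] < critical:
            continue
        lattice[i][j] -= critical
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:  # grains at the edge fall off
                lattice[ni][nj] += 1
                if lattice[ni][nj] >= critical:
                    unstable.append((ni, nj))
    return size

def drop_grain(lattice):
    """Add a grain at a random site and relax the pile; the resulting
    avalanche size could then drive the mutation intensity applied to
    the GA population (the coupling used in the paper may differ)."""
    n = len(lattice)
    i, j = random.randrange(n), random.randrange(n)
    lattice[i][j] += 1
    return topple(lattice)
```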