So you want a summer internship at the University of Granada, Spain

The GeNeura team is an international group of doctors and graduate students, mainly focused on bioinspired algorithms such as ant colony optimization (also in multiobjective versions) and distributed and sequential EAs applied to neural net training, along with other applications.
So, if you have knowledge of

  • Evolutionary algorithms, or
  • neural nets, or
  • P2P or parallel or distributed computing and
  • Java, Perl, Ruby, Javascript or Python, and
  • your own source of funding, such as Erasmus, a Foreign Ministry scholarship, or anything like that

please send your resumé to JJ Merelo, specifying what you request from us and what support you will have, and we’ll get back to you ASAP.

KohonAnts Slides (ALIFE XI)

Hello again to everyone!

These are the slides of the presentation of the KohonAnts algorithm at the ALIFE XI conference. ;)

It is a hybrid Ant Colony / Self-Organizing Map algorithm for clustering and pattern classification.
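
To give a flavour of the idea, here is a toy sketch (with made-up parameters, not the actual KohonAnts implementation): each cell of a toroidal grid holds a prototype vector, as in a SOM; an ant carrying a data sample nudges the prototype of its current cell towards that sample and then moves to the most similar neighbouring cell.

```python
import random

GRID = 8      # side of the (toroidal) grid
DIM = 4       # feature dimension, e.g. the four Iris attributes
ALPHA = 0.1   # made-up learning rate for the SOM-style update

# Each grid cell holds a prototype vector, as in a self-organizing map.
grid = [[[random.random() for _ in range(DIM)] for _ in range(GRID)]
        for _ in range(GRID)]

def dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def step(pos, sample):
    """One ant step: nudge the current cell's prototype towards the
    ant's data sample (SOM-style), then move the ant to the most
    similar of the 8 neighbouring cells (toroidal wrap-around)."""
    i, j = pos
    cell = grid[i][j]
    for k in range(DIM):
        cell[k] += ALPHA * (sample[k] - cell[k])
    neighbours = [((i + di) % GRID, (j + dj) % GRID)
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
    return min(neighbours, key=lambda p: dist(grid[p[0]][p[1]], sample))
```

After enough steps, ants carrying similar samples end up wandering in the same region of the grid, which is what produces the clusters.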

A bit late, but I had some trouble with SlideShare…

In any case… enjoy it! ;) :D

Here you can see an example of the evolution of the ants in the grid for the IRIS dataset:

Ants movement in the toroidal grid

The green class is quite similar to the other two, so it is difficult to get a clean cluster for it.

Thanks to Dave Oranchak. ;)

Decentralized Genetic Algorithms and Multiobjective Optimization

Last Friday we discussed the issues of decentralization and multiobjective optimization in Genetic Algorithms in more detail:

  • Decentralizing GAs is a radical way of parallelizing them in which the parallel infrastructure is devoid of any central server. Hence, individuals have no global information, only the local information provided by their neighbourhood.
  • Multiobjective GAs are GAs that optimize several conflicting objectives. The output is therefore not a single solution but a set of solutions (usually called the Pareto front), built according to some criterion of dominance.
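
The dominance criterion can be made concrete with a minimal sketch (minimization is assumed; the function names are mine):

```python
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b` (minimization):
    a is no worse on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions (a simple O(n^2) filter)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

For instance, among the objective vectors (1, 3), (2, 2), (3, 1) and (3, 3), only the last one is dominated; the other three form the front.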

Matching both issues is not straightforward, since inserting a new solution into the set of solutions requires a global comparison across the Pareto front; in a decentralized GA, the Pareto front itself would have to be decentralized first. As a consequence, few works tackle this challenge. One of them is the paper A Cellular Genetic Algorithm for Multiobjective Optimization by Nebro et al., which can be considered a step towards, although not a definitive solution to, decentralized multiobjective optimization. The authors propose:

  • A decentralized GA: the Cellular Genetic Algorithm, in which every individual is a single process.
  • An external (and centralized) archive to store the non-dominated solutions.
  • A simulation of the parallel environment.

In spite of the drawback of a centralized archive, which prevents a truly parallel/decentralized execution, the results are algorithmically competitive with the state-of-the-art algorithms NSGA-II and SPEA2. Therefore, one possible reading of the paper is that decentralized GAs are promising, although there is no purely decentralized GA for multiobjective optimization yet.
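
To illustrate the kind of bookkeeping such an external archive involves, here is a hedged sketch of non-dominated insertion (minimization assumed; this is not Nebro et al.'s actual code, and a real archive would use a density measure such as crowding distance when full):

```python
def dominates(a, b):
    """True if `a` Pareto-dominates `b` (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_insert(archive, candidate, max_size=100):
    """Try to insert `candidate` into a non-dominated archive
    (minimization). Returns True if it was inserted. When the archive
    is full, a real implementation would drop a solution from a crowded
    region; this sketch simply rejects the newcomer."""
    if any(dominates(a, candidate) for a in archive):
        return False                  # candidate is dominated: reject it
    archive[:] = [a for a in archive if not dominates(candidate, a)]
    if len(archive) >= max_size:
        return False
    archive.append(candidate)
    return True
```

Note that every call scans the whole archive: this global comparison is precisely what makes the archive hard to decentralize.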

The Quest for the Holy Grail

Last week’s seminar focused on Genetic Algorithms with varying population size. Like other GA parameters, population size was the subject of several studies in the 1990s that explored the established notion that different stages of the search require populations of different sizes. In general, larger populations are needed at the beginning of the search, and as the search proceeds towards the end, the GA is able to work with fewer individuals. One of the main objectives of research on dynamic population GAs was to eliminate the population size parameter and design a GA that could adapt the size independently of the number of individuals in the first generation. That way, we could not only speed up the search process but also reduce the tuning effort. (On the other hand, improvements in scalability are, in my opinion, impossible to achieve with just a plain dynamic scheme.)
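
As a trivial illustration of the "large at the start, small at the end" intuition, a purely deterministic schedule might look like this (a toy example with made-up defaults, not any published control scheme):

```python
def population_size(gen, max_gens, n_start=200, n_end=20):
    """Linearly shrink the population from n_start individuals in the
    first generation to n_end in the last one. A toy deterministic
    schedule: real proposals adapt the size from feedback instead."""
    frac = gen / max_gens
    return round(n_start + (n_end - n_start) * frac)
```

Of course, a fixed schedule like this just moves the problem: n_start, n_end and the decay shape still need tuning, which is exactly what adaptive schemes try to avoid.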

Some schemes only aim at overcoming the hard task of determining an optimal population size, and they do so by mimicking the way a GA expert tunes the population size (see also the bisection method by K. Sastry). The parameter-less GA (Lobo et al.) works that way, evolving simultaneous, increasingly larger populations until the convergence criterion is met. This method does not speed up a standard GA; it may even slow it down, but the worst-case slowdown is of such a modest magnitude that it is well worth using the parameter-less GA to avoid tuning the initial population size. Other techniques try to speed up the algorithm by dynamically changing the population size during the run, in a deterministic or adaptive manner.
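
The interleaving of populations in the parameter-less GA can be sketched with a counter: population i (larger than population i-1) advances roughly one generation for every four generations of population i-1. The sketch below only computes which population runs at each tick, using the number of trailing zero digits of a base-4 counter; the real algorithm's bookkeeping (spawning larger populations, discarding converged ones) is omitted, and details may differ from the published method:

```python
def parameterless_order(steps, base=4):
    """Which population index runs at each tick of a parameter-less-GA
    style schedule. The index is the number of trailing zero digits of
    the tick counter in the given base, so each population runs
    (asymptotically) `base` times as often as the next larger one."""
    order = []
    for tick in range(1, steps + 1):
        idx, t = 0, tick
        while t % base == 0:
            t //= base
            idx += 1
        order.append(idx)
    return order
```

Over 16 ticks, population 0 runs 12 times, population 1 three times and population 2 once, so smaller populations get most of the budget while larger ones still make steady progress.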

Back in 1994, the authors (Arabas et al.) of the Genetic Algorithm with Varying Population Size (GAVaPS) claimed that the algorithm was able to adapt its population size during the run, according to the average fitness of the population and to the quality of the best or worst solutions. However, further studies could not replicate the described behaviour. Instead, the GAVaPS population easily evolved towards extinction or experienced demographic bursts that increased its size and computational effort. The Adaptive Population Size GA (APGA) (Eiben et al.) tries to overcome this problem with traditional steady-state reproduction, that is, the population only generates two offspring in each generation. Although the authors presented a study where APGA outperforms several dynamic population sizing methods, F. Lobo and C. Lima later demonstrated that the algorithm does not really adapt its population size; instead, the size remains below a specific value after a certain number of generations. (In addition, they showed that the test set where APGA apparently excels is probably not suited to evaluating GAs with dynamic populations.)
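
The core mechanism behind GAVaPS-style sizing, fitness-dependent lifetimes, can be sketched as follows (a simplified stand-in: the paper's actual lifetime allocation strategies and parameter values differ, and the dict layout is hypothetical):

```python
def lifetime(fit, f_min, f_max, min_lt=1, max_lt=7):
    """Fitness-proportional lifetime, a simplified stand-in for the
    paper's allocation strategies: fitter individuals live longer.
    Assumes maximization, with fitnesses in [f_min, f_max]."""
    if f_max == f_min:
        return (min_lt + max_lt) // 2
    frac = (fit - f_min) / (f_max - f_min)
    return min_lt + round(frac * (max_lt - min_lt))

def age_step(population):
    """One aging step: every individual grows one generation older, and
    whoever exceeds its allotted lifetime is removed, so the population
    size emerges from the lifetimes instead of being a parameter.
    `population` is a list of dicts with 'age' and 'lifetime' keys."""
    for ind in population:
        ind['age'] += 1
    return [ind for ind in population if ind['age'] <= ind['lifetime']]
```

The failure modes reported above are easy to see in this scheme: if reproduction outpaces deaths the population bursts, and if lifetimes run out faster than offspring arrive it goes extinct.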

PRoFIGA (Valkó) is an example of deterministic control of the population size (GAVaPS and APGA are adaptive). Unfortunately, it is also an example of an extremely complex algorithm when it comes to tuning. PRoFIGA removes the population size from the usual set of parameters, but it adds six (!) parameters of its own. One of the main guidelines of parameter control is: keep it simple. If we replace a parameter by another one (or more!), then the first objective, simplifying the tuning for a non-expert user, is not met.

In 2001 (at GECCO), De Jong identified some of the open problems in Evolutionary Computation; dynamic populations was one of them. Since then, not much has been done that we could consider a major breakthrough. In fact, varying population size is a serious candidate for the title of Evolutionary Computation Holy Grail! Take, for instance, the self-adaptation of the optimal population size. If the algorithm starts with a smaller population than required, it has to grow, but it also has to add genetic diversity along the way, because the initial genetic pool will not be able to converge to a solution (if it could, then the smaller population would be the optimal size, of course). Is it possible to design an effective dynamic scheme, that is, a scheme that could start with a population of random size and then adapt to the proper size within the first generations? Or should we start considering other applications for dynamic population GAs, and abandon the quest for a GA which is independent of the initial population size?