Service Oriented Architecture for Research (an example in Evolutionary Computation)

Last week I presented my research line to other young researchers at CITIC-UGR during one of the CITICoffee meetings (a Science Coffee where we discuss our work, without bosses or pressure, but with coffee and pastries!).

Although the slides are in Spanish, there are also diagrams with text in English, so they are not difficult to follow. They also include a Jackie Chan meme!

If you are interested in this kind of research (further results are currently under review), check this preliminary paper: the draft or the published version on SpringerLink.

Dealing with Noisy Fitness in the Design of a RTS Game Bot

This paper is part of my Final Degree Project and is the result of our participation in the Google AI Contest of 2010. It is also my first presentation at a conference, and the first one in English. In this paper we talk about the design of a bot that can play (and win) the game Planet Wars. The rules of the contest and the game can be read in this post.

In this paper we study the impact of noisy fitness on the design of the bot, because choosing a bad fitness function can render the genetic algorithm useless.
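To illustrate the problem, here is a minimal Python sketch (not the code used in the paper; the match simulation and the bot parameters are made up) showing how a noisy, single-match fitness can be stabilised by averaging over several matches, which is one of the usual ways of coping with this kind of noise:

    import random
    import statistics

    def play_match(bot_params, seed):
        """Hypothetical stand-in for a full Planet Wars match. The same bot can
        win or lose depending on the map and the opponent, so the score is noisy."""
        rng = random.Random(seed)
        skill = sum(bot_params) / len(bot_params)
        return skill + rng.gauss(0, 0.5)   # noise models map/opponent variability

    def averaged_fitness(bot_params, n_matches=5):
        """Re-evaluate the bot over several matches and average the scores,
        reducing the variance of the estimate at the cost of n_matches times
        more evaluations."""
        return statistics.mean(play_match(bot_params, seed=s) for s in range(n_matches))

    bot = [0.4, 0.7, 0.2]                       # made-up behaviour parameters evolved by the GA
    print(play_match(bot, seed=1))              # single match: unreliable
    print(averaged_fitness(bot, n_matches=30))  # averaged: a much more stable ranking signal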

The presentation can be found here:
http://www.slideshare.net/antaress/dealing-with-noisy-fitness-in-a-rts-game-bot-design

The paper was accepted at EvoGames and was nominated for best paper.

Pool-based evolutionary algorithm presented at EvoStar 2012

This is the first internationally published paper (it was previously presented at a Spanish conference) of a series that deals with a system, intended for volunteer computing, that uses a pool for implementing distributed evolutionary algorithms. The basic idea is that the population resides in a pool (implemented using CouchDB), with clients pulling individuals from the pool, doing stuff on them, and putting them back in the pool. The algorithm uses, as much as possible, CouchDB features (such as revisions and views) to achieve good performance. All the code (for this and, right now, for the next papers) is available as open-source code.
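As an illustration of that pull/evaluate/push cycle, here is a minimal sketch of a pool client written against CouchDB's HTTP API with the Python requests library. It is not the published code linked above: the database name, the view and the document fields are assumptions.

    import requests

    DB = "http://localhost:5984/ea-pool"   # assumed CouchDB database holding the population

    def pull_unevaluated(n=10):
        """Pull a block of individuals from the pool; an (assumed) design-document
        view indexes individuals by their state."""
        r = requests.get(DB + "/_design/pool/_view/unevaluated",
                         params={"limit": n, "include_docs": "true"})
        r.raise_for_status()
        return [row["doc"] for row in r.json()["rows"]]

    def evaluate(individual):
        """Made-up fitness: OneMax over a bitstring chromosome."""
        individual["fitness"] = sum(individual["chromosome"])
        individual["state"] = "evaluated"
        return individual

    def push_back(individual):
        """Write the individual back. CouchDB's revision mechanism (_rev) rejects
        the write with 409 Conflict if another client updated the document first,
        so no explicit locking is needed."""
        r = requests.put(DB + "/" + individual["_id"], json=individual)
        if r.status_code == 409:
            return False      # lost the race: another client got there first
        r.raise_for_status()
        return True

    # The client loop: pull individuals, do stuff on them, put them back.
    for ind in pull_unevaluated():
        push_back(evaluate(ind))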
The paper was accepted as a poster at the first edition of EvoPar; it mainly concentrates on studying the effect of different parameters on scaling and performance, and on comparing the system with a canonical GA. Here’s the poster:
Pool-based distributed evolutionary algorithms using an object database
The paper Pool-Based Distributed Evolutionary Algorithms Using an Object Database is available from SpringerLink if your university subscribes to it. If it does not, please send us an email.

Testing different diversity-enhancing migration and replacement policies in dynamic environments (an evolutionary robotics case)

The paper «Testing Diversity-Enhancing Migration Policies for Hybrid On-Line Evolution of Robot Controllers» has been published at EvoStar 2012. This work was developed during my research stay at the Vrije Universiteit Amsterdam with Doctor A.E. Eiben. Apart from having a great time in Amsterdam, I did experiments, and science and stuff.

In this work we present the results of comparing several migration policies that try to optimize in a noisy fitness environment: the on-line, on-board, hybrid evolutionary robotics problem. Three different migration policies have been studied (most different migrant, random migrant and best migrant) together with two replacement mechanisms: the migrant replaces the worst individual, or the migrant replaces the worst individual only if, after being evaluated, it is better. Experiments with 4, 16 and 36 robots were conducted, with two different topologies (ring and panmictic), and a comparison with other evolutionary robotics algorithms was also performed. Results show that the replacement mechanism has more influence than the migration policy or the topology, and that it also affects the tuning of the algorithm parameters. We asked ourselves the following questions:

  • Using the hybrid approach (island model), what is the best combination of migration policy, admission policy, and island topology?
  • Is this combination better than the encapsulated and distributed alternatives?
  • Does the number of robots affect the result and if so, how?

Conclusions, graphs and stuff are in the paper, but to summarize: the multikulti technique (receiving from the other islands the individual most different from my own population), combined with accepting it into my population only after its evaluation, performs better than the other alternatives, even with a lower migration rate.
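As a rough sketch of what that means in code, assuming bitstring genotypes and Hamming distance as the difference measure (the actual controller representation and distance used in the paper may differ), the multikulti selection plus the evaluate-then-accept replacement could look like this:

    import random

    def hamming(a, b):
        """Genotypic difference between two bitstring controllers."""
        return sum(x != y for x, y in zip(a, b))

    def multikulti_migrant(neighbour_population, my_population):
        """The neighbour island sends the individual that is most different from
        the receiving population (maximum mean Hamming distance to my individuals)."""
        def mean_distance(candidate):
            return sum(hamming(candidate["genome"], mine["genome"])
                       for mine in my_population) / len(my_population)
        return max(neighbour_population, key=mean_distance)

    def accept_if_better(my_population, migrant, evaluate):
        """Replacement used by the winning setup: re-evaluate the migrant locally
        and let it replace the worst resident only if it turns out to be better."""
        migrant = dict(migrant, fitness=evaluate(migrant["genome"]))
        worst = min(range(len(my_population)),
                    key=lambda i: my_population[i]["fitness"])
        if migrant["fitness"] > my_population[worst]["fitness"]:
            my_population[worst] = migrant
            return True
        return False

    # Tiny usage example with a made-up noisy evaluation function.
    def noisy_eval(genome):
        return sum(genome) + random.gauss(0, 0.5)

    def random_island(size=5, length=10):
        return [{"genome": [random.randint(0, 1) for _ in range(length)],
                 "fitness": 0.0} for _ in range(size)]

    island_a, island_b = random_island(), random_island()
    for ind in island_a + island_b:
        ind["fitness"] = noisy_eval(ind["genome"])

    migrant = multikulti_migrant(island_b, island_a)
    accept_if_better(island_a, migrant, noisy_eval)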

You can also check the poster here.

The Springer link to the paper is Testing Diversity-Enhancing Migration Policies for Hybrid On-Line Evolution of Robot Controllers, but you can also download the draft.

The abstract:

We investigate on-line on-board evolution of robot controllers based on the so-called hybrid approach (island-based). Inherently to this approach each robot hosts a population (island) of evolving controllers and exchanges controllers with other robots at certain times. We compare different exchange (migration) policies in order to optimize this evolutionary system and compare the best hybrid setup with the encapsulated and distributed alternatives. We conclude that adding a difference-based migrant selection scheme increases the performance.

[EVO* 12] Validating a Peer-to-Peer Evolutionary Algorithm

Tomorrow we will be presenting the work «Validating a Peer-to-Peer Evolutionary Algorithm» at Evo* 2012, held in Malaga, Spain. You can find the abstract and the presentation of the work below.

This paper proposes a simple experiment for validating a Peer-to-Peer Evolutionary Algorithm in a real computing infrastructure in order to verify that results meet those obtained by simulations. The validation method consists of conducting a well-characterized experiment in a large computer cluster of up to a number of processors equal to the population estimated by the simulator. We argue that the validation stage is usually missing in the design of large-scale distributed meta-heuristics given the difficulty of harnessing a large number of computing resources. That way, most of the approaches in the literature focus on studying the model viability throughout a simulation-driven experimentation. However, simulations assume idealistic conditions that can influence the algorithmic performance and bias results when conducted in a real platform. Therefore, we aim at validating simulations by running a real version of the algorithm. Results show that the algorithmic performance is rather accurate to the predicted one whilst times-to-solutions can be drastically decreased when compared to the estimation of a sequential run.