Volunteer-based evolutionary algorithms al dente

Planning to cook a time-consuming optimization problem? Have you considered letting a crowd of volunteers help you in this endeavor? In a volunteer-based system, volunteers provide you with free ingredients (CPU cycles, memory, internet connection, ...) to be seasoned with only a pinch of peer-to-peer or desktop-grid technology.

If you are looking for a delicious serving of a volunteer-based evolutionary algorithm, you can find our recipe in this paper, published in Genetic Programming and Evolvable Machines (pre-print version available here).

Title: “Designing robust volunteer-based evolutionary algorithms”

Abstract: This paper tackles the design of scalable and fault-tolerant evolutionary algorithms computed on volunteer platforms. These platforms aggregate computational resources from contributors all around the world. Given that resources may join the system only for a limited period of time, the challenge for a volunteer-based evolutionary algorithm is to take advantage of a large amount of computational power that is, in turn, volatile. The paper first analyzes the speed of convergence of massively parallel evolutionary algorithms. Then, it provides some guidance on how to design efficient policies to overcome the algorithmic loss of quality when the system undergoes high rates of transient failures, i.e. computers fail only for a limited period of time and then become available again. In order to provide empirical evidence, experiments were conducted for two well-known problems which require large population sizes to be solved, the first based on a genetic algorithm and the second on genetic programming. Results show that, in general, evolutionary algorithms undergo a graceful degradation under the stress of losing computing nodes. Additionally, newly available nodes can also contribute to improving the search process. Despite losing up to 90% of the initial computing resources, volunteer-based evolutionary algorithms can find the same solutions in a failure-prone run as in a failure-free one.
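To see why such graceful degradation is plausible, here is a minimal sketch (a toy illustration of the general idea, not the algorithm or parameters from the paper): a generational GA on OneMax whose offspring evaluations are farmed out to volunteer nodes, each of which may be unavailable in any given generation. Even under heavy churn, the elitist population keeps improving, only more slowly.

```python
import random

# Toy sketch, NOT the paper's setup: a generational GA on OneMax whose
# evaluations are farmed out to volunteer nodes. Each generation, every
# node is unavailable with probability CHURN; offspring assigned to a
# failed node are simply not evaluated, shrinking the offspring pool.
GENOME, POP, NODES, CHURN = 64, 200, 20, 0.5

def fitness(ind):
    return sum(ind)  # OneMax: count the 1-bits

def evolve(generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
    for _ in range(generations):
        alive = [n for n in range(NODES) if random.random() > CHURN]
        if not alive:              # every volunteer left this round
            continue
        offspring = []
        for i in range(POP):
            if i % NODES in alive:  # only surviving nodes produce/evaluate
                a, b = random.sample(pop, 2)
                cut = random.randrange(GENOME)
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:            # bit-flip mutation
                    child[random.randrange(GENOME)] ^= 1
                offspring.append(child)
        # Elitist replacement: keep the best POP of parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP]
    return fitness(pop[0])

print(evolve())  # fitness stays high despite 50% node churn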


Classifier Systems

Classifier systems are a fusion of evolutionary algorithms, reinforcement learning, and supervised learning. They are known as Learning Classifier Systems. Last Friday I took advantage of the group meeting to present a brief historical review and give details about perhaps the most important algorithm introduced in this field, Wilson's eXtended Classifier System, or XCS.

Basically, the algorithm uses genetic evolution and learning to search for a set of rules that model the solution to a problem where a reward exists. Rules consist of a condition and an action. For any given condition, the population of rules represents which action will be the best. This is achieved by associating with the input space a prediction of the best future reward obtained for each possible action.

Then, given a state representing the environment, the rules whose condition matches are gathered, and from them the action offering the best future reward is taken.
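As a rough illustration of this matching-and-selection step, here is a minimal sketch (assuming ternary conditions over bit-string states; the classifiers and numbers are invented, and a full XCS also updates predictions, maintains accuracy-based fitness, covers unmatched states, and runs a GA inside the action sets):

```python
# A classifier: ternary condition ('0', '1', '#'), an action,
# a payoff prediction, and a fitness (accuracy-based weight).
class Classifier:
    def __init__(self, condition, action, prediction, fitness):
        self.condition = condition
        self.action = action
        self.prediction = prediction
        self.fitness = fitness

    def matches(self, state):
        # '#' is the "don't care" symbol that enables generalization.
        return all(c == '#' or c == s for c, s in zip(self.condition, state))

def select_action(population, state):
    # [M]: the match set of classifiers whose condition fits the state.
    match_set = [cl for cl in population if cl.matches(state)]
    # Prediction array: fitness-weighted mean payoff per action.
    predictions = {}
    for action in {cl.action for cl in match_set}:
        subset = [cl for cl in match_set if cl.action == action]
        total_fitness = sum(cl.fitness for cl in subset)
        predictions[action] = sum(cl.prediction * cl.fitness
                                  for cl in subset) / total_fitness
    # Exploit: pick the action with the best predicted future reward.
    return max(predictions, key=predictions.get)

# Hypothetical toy population over 3-bit states.
population = [
    Classifier('1#0', 'left', 900.0, 0.8),
    Classifier('1##', 'right', 300.0, 0.9),
    Classifier('##0', 'left', 700.0, 0.5),
]
print(select_action(population, '110'))  # -> 'left'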

The task is not easy: formal reinforcement learning algorithms need a priori deterministic knowledge of the possible inputs and of the transitions resulting from the actions, leaving little or no room for searching for, and applying, generalization.

XCS solves this problem by introducing some adjustments to the genetic component. The general idea is basically to distribute the resources (rules) so that they cover the whole space with as much accuracy and generality as possible. Since this is not something that can be summarized in a few lines, here is the presentation:

“The L-Co-R co-evolutionary algorithm: a comparative analysis in medium-term time-series forecasting problems” at ECTA

Our paper “The L-Co-R co-evolutionary algorithm: a comparative analysis in medium-term time-series forecasting problems” was accepted for oral presentation at the latest ECTA conference, part of IJCCI.

The abstract:
This paper presents an experimental study in which the effectiveness of the L-Co-R method is tested. L-Co-R is a co-evolutionary algorithm for time-series forecasting that evolves, on the one hand, RBFNs, building an appropriate network architecture, and on the other hand, sets of time lags that represent the time series, in order to perform the forecasting while using, at the same time, its own forecasted values. This co-evolutionary approach makes it possible to divide the main problem into two subproblems, where every individual of one population cooperates with the individuals of the other. The goal of this work is to analyze the results obtained by L-Co-R in comparison with other methods from the time-series forecasting field. To that end, 20 time series and 5 different methods from the literature have been selected, and 3 distinct quality measures have been used to present the results. Finally, a statistical study confirms the good results of L-Co-R in most cases.
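For readers unfamiliar with cooperative co-evolution, here is a hedged sketch of the two-population scheme the abstract describes. The operators, parameters, and the stand-in forecaster are purely illustrative; L-Co-R itself trains real RBF networks rather than the naive lag-average used here:

```python
import random

# Toy data standing in for a real time series.
series = [0.1 * t + random.gauss(0, 0.5) for t in range(200)]

def forecast_error(n_neurons, lags, series):
    # Stand-in for training an RBFN with `n_neurons` units on the inputs
    # selected by `lags`: a naive lag-average forecaster plus a small
    # complexity penalty in place of a real network fit.
    errors = []
    for t in range(max(lags), len(series)):
        pred = sum(series[t - l] for l in lags) / len(lags)
        errors.append((series[t] - pred) ** 2)
    return sum(errors) / len(errors) + 0.001 * n_neurons

nets = [random.randint(2, 20) for _ in range(10)]       # population 1: net sizes
lagsets = [sorted(random.sample(range(1, 13), 3))       # population 2: lag sets
           for _ in range(10)]

for generation in range(30):
    # Cooperative evaluation: each individual is scored together with a
    # collaborator (here, the best found so far) from the other population.
    best_net = min(nets, key=lambda n: forecast_error(n, lagsets[0], series))
    best_lags = min(lagsets, key=lambda l: forecast_error(best_net, l, series))
    # Crude truncation-plus-mutation step for both populations.
    nets = [max(2, best_net + random.randint(-2, 2)) for _ in nets]
    lagsets = [sorted({*best_lags[:2], random.randint(1, 12)}) for _ in lagsets]

print(best_net, best_lags, forecast_error(best_net, best_lags, series))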

Paper “Testing the Differences of Using RGB and HSV Histograms During Evolution in Evolutionary Art” at ECTA

This week we are presenting the paper “Testing the Differences of Using RGB and HSV Histograms During Evolution in Evolutionary Art” at the Evolutionary Computation Theory and Applications (ECTA) conference.

This is work we developed at the Hackathon of the Spanish Free Software Contest at the University of Granada, with the help of several students from our university (who are also authors!).

In this work we added Processing to our OSGiLiath framework (a service-oriented architecture for evolutionary algorithms) to generate images from individual representations and work with generative art. The fitness is the similarity to a predefined target image, measured over RGB histograms, HSV histograms, or a combination of both.
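For illustration, here is a minimal sketch of the three fitness variants (the 16-bin resolution, the binning scheme, and the absolute-difference distance are assumptions for this sketch, not necessarily the paper's exact measures):

```python
import colorsys

# Images are modelled as flat lists of (r, g, b) tuples in [0, 255].
BINS = 16

def histogram(pixels, to_hsv=False):
    # One BINS-bucket histogram per channel, concatenated and normalized.
    hist = [0] * (BINS * 3)
    for r, g, b in pixels:
        c = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) if to_hsv \
            else (r / 255, g / 255, b / 255)
        for ch, v in enumerate(c):
            hist[ch * BINS + min(int(v * BINS), BINS - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def fitness(candidate, target, mode='rgb'):
    # Lower is better: summed absolute difference between histograms.
    def dist(hsv):
        a, b = histogram(candidate, hsv), histogram(target, hsv)
        return sum(abs(x - y) for x, y in zip(a, b))
    if mode == 'rgb':
        return dist(False)
    if mode == 'hsv':
        return dist(True)
    return (dist(False) + dist(True)) / 2   # average of both histograms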

The abstract:

This paper compares the use of RGB and HSV histograms during the execution of an Evolutionary Algorithm. This algorithm generates abstract images that try to match the histograms of a target image. Three different fitness functions have been used for comparison: the difference between the individual and the RGB histogram of the target image, the difference with its HSV histogram, and an average of the two histograms at the same time. Results show that the HSV-based fitness also increases the RGB similarity (and therefore the average) more than the other two measures.

And here is the poster:


OSGiLiath at #GECCO2013


OSGiLiath Evolutionary Framework

This week several members of the Geneura group and the ANYSELF project are attending the GECCO 2013 conference in Amsterdam. I have presented two papers related to OSGiLiath:

The first one, entitled “Developing Services in a Service Oriented Architecture for Evolutionary Algorithms”, was presented at the EvoSoft workshop. It is a more technical continuation of the work “Service Oriented Evolutionary Algorithms”. Here is the abstract:

This paper shows the design and implementation of services for Evolutionary Computation under the Service Oriented Architecture paradigm. This paradigm provides independence in language and distribution, but development requires managing a number of technological and design issues, such as abstract design or unordered execution. To solve them, OSGiLiath, an implementation of an abstract Service Oriented Architecture for Evolutionary Algorithms, is used to develop new interoperable services that take these restrictions into account.
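OSGiLiath itself is built on Java and OSGi; purely as a language-neutral illustration of the abstract-design idea (an EA core that depends only on service interfaces, with concrete operators registered and looked up at run time), here is a minimal sketch. The registry and names below are invented for this sketch, not OSGiLiath's API:

```python
from abc import ABC, abstractmethod

# The EA core codes against this interface, never a concrete class.
class Recombinator(ABC):
    @abstractmethod
    def recombine(self, parent_a, parent_b):
        ...

# One concrete service among potentially many interchangeable ones.
class OnePointCrossover(Recombinator):
    def recombine(self, parent_a, parent_b):
        cut = len(parent_a) // 2
        return parent_a[:cut] + parent_b[cut:]

registry = {}   # stand-in for the OSGi service registry

def register(interface, implementation):
    registry[interface] = implementation

def lookup(interface):
    return registry[interface]

# Services can be swapped without touching the EA core.
register(Recombinator, OnePointCrossover())
child = lookup(Recombinator).recombine([0, 0, 1, 1], [1, 1, 0, 0])
print(child)  # [0, 0, 0, 0]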

And here is the presentation:

I have also presented the work “A Service Oriented…

View the original post: 123 more words

Unreal Expert Bots at IWANN 2013

Last week IWANN 2013 was held in Tenerife, an international conference mainly devoted to research within the scope of neural networks. There, Antonio Fernández Leiva, Raúl Lara and I organized the Special Session on Artificial Intelligence and Games.

There were five works in the session, one of them being “Designing and Evolving an Unreal Tournament 2004 Expert Bot”.

It describes the design and improvement, through off-line (i.e. not during the game) evolution, of an autonomous agent (or bot) for playing the game Unreal Tournament 2004. The bot was created by means of a finite state machine that models the expert behaviour of a human player in 1 vs 1 deathmatch mode, following the rules of the international competition.

Then, the bot was improved by means of a Genetic Algorithm, yielding an agent that is, in turn, a very hard opponent for a medium-level human player and one which can (easily) beat the default bots in the game, even at the maximum difficulty level.
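To give a flavour of this off-line improvement step, here is a hypothetical sketch: a GA tunes a vector of FSM parameters scored by simulated match outcomes. The parameter encoding, the match simulator, and the GA settings are all illustrative; none of them are taken from the paper or the released code:

```python
import random

N_PARAMS, POP, GENS = 8, 30, 50   # FSM parameters, population, generations

def play_matches(params):
    # Stand-in for launching UT2004 1-vs-1 matches with a bot configured
    # by `params` (e.g. flee thresholds, weapon preferences) and
    # returning its average score; here, a noisy synthetic landscape.
    return -sum((p - 0.6) ** 2 for p in params) + random.gauss(0, 0.01)

def evolve():
    pop = [[random.random() for _ in range(N_PARAMS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=play_matches, reverse=True)   # rank by match results
        elite = pop[: POP // 3]
        # Refill the population with mutated copies of the elite.
        pop = elite + [
            [min(1, max(0, g + random.gauss(0, 0.1)))
             for g in random.choice(elite)]
            for _ in range(POP - len(elite))
        ]
    return pop[0]

print(evolve())  # best parameter vector found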

The presentation can be seen at:

Moreover, you can watch one example of the evolution in the following video:

Finally, the source code of both the Unreal Expert bot and the Genetic bot is available at https://github.com/franaisa/ExpertAgent

Enjoy them. ;)

More results of the Hispano-Mexican collaboration

As a result of a collaboration with Mario García Valdez, Leonardo Trujillo and Francisco Fernández (the latter from Spain), we have published two papers based on the EvoSpace framework, a pool-based evolutionary architecture for interactive and straight evolutionary computation. The first paper describes EvoSpace-i, the interactive part, which is well described by Paco Fernández in our group blog, and the