Self-Organized Criticality and Genetic Algorithms

Our journal paper on the Sandpile Mutation operator for Genetic Algorithms is now available online: C.M. Fernandes, J.L.J. Laredo, A.C. Rosa, J.J. Merelo, “The sandpile mutation Genetic Algorithm: an investigation on the working mechanisms of a diversity-oriented and self-organized mutation operator for non-stationary functions”, Applied Intelligence, February 2013.

Abstract: This paper reports an investigation of the sandpile mutation, an unconventional mutation control scheme for binary Genetic Algorithms (GAs) inspired by Self-Organized Criticality (SOC) theory. The operator, which is based on a SOC system known as the sandpile, is able to generate mutation rates that, unlike those given by other methods of parameter control, oscillate between low values and very intense mutation events. The distribution of the mutation rates suggests that the algorithm can be an efficient, yet simple and context-independent, approach to the optimization of non-stationary fitness functions. This paper studies the mutation scheme of the algorithm and proposes a new strategy that optimizes its performance. The results also demonstrate the advantages of using the fitness distribution of the population for controlling the mutation. An extensive experimental setup compares the sandpile mutation GA (GGASM) with two state-of-the-art evolutionary approaches to non-stationary optimization and with the Hypermutation GA, a classical approach to dynamic problems. The results demonstrate that GGASM is able to outperform the other algorithms in several dynamic environments. Furthermore, the proposed method does not increase the parameter set of traditional GAs. A study of the distribution of the mutation rates shows that the distribution depends on the type of problem and its dynamics, meaning that the algorithm is able to self-regulate the mutation. The effects of the operator on the diversity of the population during the run are also investigated. Finally, a study of the effects of the topology of the sandpile mutation on its performance demonstrates that an alternative topology has only minor effects.
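For readers unfamiliar with the model behind the operator, here is a minimal sketch of a Bak–Tang–Wiesenfeld-style Abelian sandpile, not the paper's exact operator: grains dropped on a lattice trigger topplings, and avalanche sizes follow a heavy-tailed distribution. This is the property the mutation scheme exploits to mix long stretches of low mutation with occasional very intense mutation events. Grid size and number of drops are illustrative choices.

```python
import random

def drop_grain(grid, size):
    """Drop one grain on a random cell of an Abelian sandpile and
    topple until stable; return the avalanche size (number of topplings)."""
    i, j = random.randrange(size), random.randrange(size)
    grid[i][j] += 1
    avalanche = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4          # topple: send one grain to each neighbour
        avalanche += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size:   # grains fall off the edges
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return avalanche

size = 10
grid = [[random.randrange(4) for _ in range(size)] for _ in range(size)]
avalanches = [drop_grain(grid, size) for _ in range(2000)]
# Most avalanches are tiny, a few are huge: mapped to mutation rates,
# this yields mostly mild mutation punctuated by intense mutation events.
```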

SIPEsCa Project

Title:
Low-cost, autonomous Information and Prediction System for knowing the State of the Roads in real time through distributed devices (SIPEsCa, G-GI3000/IDIF)

Public call:
R&D&I research projects within the remit of the Consejería de Obras Públicas y Vivienda for the years 2011 to 2013 (G-GI3000/IDI0)

Rationale:
With an increasingly informed population, and with ubiquitous communication devices owned and routinely used by practically 90% of that population, obtaining information on traffic conditions at any moment, on any of the almost 20,000 kilometres of the national road network, would make it possible to optimally manage a communications network that is vital for a large share of users.

A system that reports the state of traffic and predicts road-network usage by vehicles is therefore key in the current context.

Description:
The project builds on preliminary work that the GeNeura group has been carrying out with Ciudad 2020 S.L., a company based in Córdoba that has created the City Analytics system. This system estimates, with an 8.5% error, the number of people passing through a given area, the average time they stay in a zone, their origins and destinations, and the typology of the people present in the studied area.

The main goal is to build an information system that is autonomous (from an energy standpoint), low-cost, quick to deploy and highly reliable, that reports traffic conditions in real time, not only to the institutions and bodies in charge of traffic regulation and control, but also and specifically to individual users (through mobile alerts, via the web, etc.), and that, thanks to the collection and processing of data, makes it possible to predict future behaviour.

These goals pursue the following results on the hardware side:

• Integration of several autonomous devices in charge of collecting the information and sending it to the servers.
• Deployment of several autonomous devices at the main exits of the city to collect information on mobility in the metropolitan area.

And on the side of web services and information tools:

• A website showing the main mobility data in real time, built from the collected information.
• An automated, real-time alert service triggered when a given set of conditions is met.
• A web service (through APIs) to serve the information.
• A prediction service for movement flows.

Development of the project will start shortly and will run until December 2013.

We will keep you posted on the progress!

Island Model for Multi-Objective Ant Colony Optimization Algorithms in Soft Computing

We are excited because our work «Pareto-based multi-colony multi-objective ant colony optimization algorithms: an island model proposal» has been published in the Soft Computing journal this month, and is now online at http://link.springer.com/article/10.1007%2Fs00500-013-0993-y

Yes, the title is a representation of how long the work is :D

The abstract is:

Multi-objective algorithms aim to obtain a set of solutions, called the Pareto set, covering the whole Pareto front (PF), i.e. the representation of the optimal set of solutions. To this end, the algorithms should yield a large number of near-optimal solutions with good diversity or spread along this front. This work presents a study of different coarse-grained distribution schemes for Multi-Objective Ant Colony Optimization Algorithms (MOACOs). Two of them are variations of independent multi-colony structures, respectively having a fixed number of ants in every subset or distributing the whole set of ants into small sub-colonies. We introduce in this paper a third method: an island-based model where the colonies communicate by migrating ants, following a neighbourhood topology fitted to the search space. All the methods aim to cover the whole PF, so each sub-colony or island tries to search for solutions in a limited area, complemented by the rest of the colonies, in order to obtain a more diverse, high-quality set of solutions. The models have been tested by implementing them with three different MOACOs: two well-known ones and CHAC, an algorithm previously proposed by the authors. Three different instances of the bi-criteria travelling salesman problem have been considered. The experiments have been performed in a parallel environment (a cluster platform) in order to obtain a time improvement. Moreover, the scaling of the system with respect to the number of processors is also analysed. The results show that the proposed Pareto-island model and its novel neighbourhood topology perform better than the other models, yielding a more diverse and better-optimized set of solutions. Moreover, from the algorithmic point of view, the authors' algorithm, CHAC, yields the best results on average.
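As an illustration of the coarse-grained idea only (not the paper's MOACO implementation), here is a hypothetical ring-topology migration step among islands, where each island's list of fitness values stands in for its colony's solutions and lower fitness is assumed better:

```python
import random

def migrate(islands, k=1):
    """Ring-topology migration: each island sends copies of its k best
    individuals to its neighbour, which replaces its k worst with them."""
    emigrants = [sorted(isl)[:k] for isl in islands]  # lower fitness is better
    for i in range(len(islands)):
        dest = islands[(i + 1) % len(islands)]
        dest.sort()
        dest[-k:] = emigrants[i]  # overwrite the worst k with neighbour's best
    return islands

# Four toy islands of five fitness values each.
random.seed(1)
islands = [[random.uniform(0, 10) for _ in range(5)] for _ in range(4)]
best_before = min(min(isl) for isl in islands)
migrate(islands)
best_after = min(min(isl) for isl in islands)
# Migration spreads good solutions around the ring without ever
# losing the globally best one.
```

In the paper's setting each island would additionally be biased towards a different region of the Pareto front, so the migrants help neighbours without collapsing the diversity of the whole set.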

The scheme of the proposed model can be seen in the next figure:

Pareto-based multi-colony island model

Enjoy (and CITE) it! :D

Super Mario autonomous agents at LION 2013

At the recent LION 7 (2013) conference, in the Special Session on Games and Computational Intelligence, the paper «FSM-Based Agents for Playing Super Mario Game» was presented.

It describes the implementation and testing of an autonomous agent that can play the Super Mario game better than an expert human player can (on some trained levels).
It is built starting from a Finite State Machine and refined by applying an Evolutionary Algorithm.
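The general idea can be sketched as follows; this is a generic toy FSM agent with hypothetical observations and actions, not the agent from the paper. The genome is a transition table mapping (state, observation) to (next state, action), which an evolutionary algorithm can mutate and select on:

```python
import random

# Hypothetical observations and actions for a Mario-like platformer.
OBS = ["clear", "enemy", "gap"]
ACTIONS = ["run", "jump", "stomp"]
N_STATES = 3

def random_fsm():
    """Genome: one (next_state, action) pair per (state, observation)."""
    return {(s, o): (random.randrange(N_STATES), random.choice(ACTIONS))
            for s in range(N_STATES) for o in OBS}

def run_agent(fsm, observations):
    """Drive the FSM over a sequence of observations, collecting actions."""
    state, actions = 0, []
    for obs in observations:
        state, action = fsm[(state, obs)]
        actions.append(action)
    return actions

def mutate(fsm, rate=0.1):
    """EA mutation: rewrite a few random transition entries."""
    child = dict(fsm)
    for key in child:
        if random.random() < rate:
            child[key] = (random.randrange(N_STATES), random.choice(ACTIONS))
    return child

agent = random_fsm()
actions = run_agent(agent, ["clear", "enemy", "gap", "clear"])
```

Fitness would then come from running each candidate FSM in the game simulator (e.g. distance covered before dying), closing the evolutionary loop.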

The presentation is:

You can watch one example of the obtained agent playing a game here:

Enjoy it. ;)

Finding an evolutionary solution to the game of Mastermind with good scaling behavior

Finding a solution to the game of MasterMind that beats everyone else's is just as important as finding one that scales to a wide range of sizes. In this paper we get rid of a parameter: the size limit of the consistent set used for scoring every combination. This yields a faster algorithm whose results are not always worse than those obtained with the optimal consistent-set size.
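To make the notion of a consistent set concrete, here is a minimal sketch using the standard Mastermind definitions (this is not the paper's evolutionary algorithm): a combination is consistent if, taken as the secret, it would have produced exactly the responses already observed.

```python
from itertools import product

def feedback(secret, guess):
    """Standard Mastermind response: (black, white) pegs."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in set(guess))
    return black, common - black

def consistent_set(history, colours="ABCDEF", pegs=4):
    """Combinations whose responses match every (guess, response) played."""
    return [c for c in product(colours, repeat=pegs)
            if all(feedback(c, g) == r for g, r in history)]

secret = tuple("ABCD")
guess = tuple("AABB")
history = [(guess, feedback(secret, guess))]
candidates = consistent_set(history)
# The true secret is always a member of the consistent set, which
# shrinks with every guess played.
```

Scoring every combination against this set is what gets expensive as the problem grows, which is why capping (or, in this paper, removing the cap on) its size matters for scaling.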

This was the paper presented at LION by Antonio Fernández, using this presentation.