Particle Swarm Optimization in PPSN 2012

“Controlling the Parameters of the Particle Swarm Optimization with a Self-Organized Criticality Model” (Fernandes, Merelo, Rosa) is the title of the paper we presented last week at PPSN. The key idea of the project is to use a Self-Organized Criticality system, the Bak-Sneppen model of co-evolving species, to control the parameters (inertia weight and acceleration coefficients) of the PSO, as well as a perturbation factor of the particles’ positions. At this stage of the research, the model is used as a black box that, in each iteration, feeds each particle with specific parameter values related to the model’s dynamics. The evolution of the species seemed to fit the control requirements of the PSO parameters and, in fact, the proposed scheme attained very interesting results when compared to other control strategies. Furthermore, neither the model nor the PSO requires fine-tuning: the swarm is entirely controlled by the Bak-Sneppen model. The paper can be found here. The abstract:

This paper investigates a Particle Swarm Optimization (PSO) with a Self-Organized Criticality (SOC) strategy that controls the parameter values and perturbs the position of the particles. The algorithm uses a SOC system known as Bak-Sneppen for establishing the inertia weight and acceleration coefficients for each particle in each time-step. Besides adjusting the parameters, the SOC model may also be used to perturb the particles’ positions, thus increasing exploration and preventing premature convergence. The implementation of both schemes is straightforward and does not require hand-tuning. An empirical study compares the Bak-Sneppen PSO (BS-PSO) with other PSOs, including a state-of-the-art algorithm with dynamic variation of the weight and perturbation of the particles. The results demonstrate the validity of the algorithm.
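The overall mechanism can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: the Bak-Sneppen update (replace the least-fit species and its ring neighbours with random values) is the standard one, but the linear mappings from a species' fitness to the inertia weight and acceleration coefficients are assumptions made here for the sketch, and the position-perturbation scheme is omitted.

```python
import random

def bak_sneppen_step(fitness):
    # Classic Bak-Sneppen update: the least-fit species and its two
    # ring neighbours get fresh random fitness values.
    i = min(range(len(fitness)), key=fitness.__getitem__)
    n = len(fitness)
    for j in ((i - 1) % n, i, (i + 1) % n):
        fitness[j] = random.random()

def sphere(x):
    return sum(xi * xi for xi in x)

def bs_pso(dim=5, n_particles=10, iters=200, seed=1):
    """Minimise the sphere function with a PSO whose per-particle
    parameters are driven by a Bak-Sneppen model (one species per
    particle)."""
    random.seed(seed)
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_val = [sphere(x) for x in xs]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    bs = [random.random() for _ in range(n_particles)]  # one species per particle
    for _ in range(iters):
        bak_sneppen_step(bs)
        for k in range(n_particles):
            # Hypothetical linear mappings from species fitness to the
            # PSO parameters; the paper derives them from the model's dynamics.
            w = 0.4 + 0.5 * bs[k]       # inertia weight in [0.4, 0.9]
            c1 = c2 = 1.5 + bs[k]       # acceleration coefficients in [1.5, 2.5]
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[k][d] = (w * vs[k][d]
                            + c1 * r1 * (pbest[k][d] - xs[k][d])
                            + c2 * r2 * (gbest[d] - xs[k][d]))
                xs[k][d] += vs[k][d]
            val = sphere(xs[k])
            if val < pbest_val[k]:
                pbest[k], pbest_val[k] = xs[k][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[k][:], val
    return gbest_val
```

The point of the coupling is that the SOC model's avalanches give bursts of parameter variation without any schedule or hand-tuning on the PSO side.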


Pool-based distributed evolutionary algorithms: a minimalistic survey

During the PPSN 2012 conference, we participated in a workshop on parallel techniques in search, optimization and learning. The format of the workshop made it possible to present a survey paper that describes the different ways a pool (a set of tuples with create/read/update/delete functions) can be used profitably for evolutionary computation, and the tradeoffs each of those ways involves.
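The pool abstraction can be sketched with a toy in-memory version; the names below are illustrative, and a plain dictionary stands in for the remote store (a database, a web service, a shared folder) that the surveyed systems actually use.

```python
import random

class Pool:
    """A minimal pool of (genome, fitness) tuples exposing the
    create/read/update/delete interface; a dict stands in for the
    remote store."""
    def __init__(self):
        self._store, self._next = {}, 0
    def create(self, genome, fitness):
        key, self._next = self._next, self._next + 1
        self._store[key] = (genome, fitness)
        return key
    def read(self, n):
        # Return up to n randomly sampled (key, genome, fitness) tuples.
        keys = random.sample(sorted(self._store), min(n, len(self._store)))
        return [(k,) + self._store[k] for k in keys]
    def update(self, key, genome, fitness):
        self._store[key] = (genome, fitness)
    def delete(self, key):
        self._store.pop(key, None)

def onemax(genome):
    return sum(genome)

def worker_step(pool, length):
    # One asynchronous worker step on OneMax: read two random
    # individuals, recombine and mutate, then overwrite the worse
    # parent with the child. Many such workers can share one pool.
    (k1, g1, f1), (k2, g2, f2) = pool.read(2)
    cut = random.randrange(1, length)
    child = g1[:cut] + g2[cut:]
    child[random.randrange(length)] ^= 1  # bit-flip mutation
    target = k1 if f1 <= f2 else k2
    pool.update(target, child, onemax(child))

def run(length=16, pool_size=20, steps=500, seed=2):
    random.seed(seed)
    pool = Pool()
    for _ in range(pool_size):
        g = [random.randint(0, 1) for _ in range(length)]
        pool.create(g, onemax(g))
    for _ in range(steps):
        worker_step(pool, length)
    return max(f for _, _, f in pool.read(pool_size))
```

Because each worker only ever touches the pool through these four operations, the same loop runs unchanged whether the store is local, a database, or a shared folder; the tradeoffs the survey discusses come from how each backend handles concurrent reads and updates.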
You can access the paper Using Pool-based Evolutionary Algorithms for Scalable and Asynchronous Distributed computing at the workshop site. Here’s the presentation, which I can’t embed.
After the presentation there was a lively discussion on the scalability we should expect and the types of interaction that would work best, as well as on security issues and how to check that a client is not cheating.
This survey builds on our work on pool-based evolutionary algorithms, which we presented at EvoStar, and relates it to work done on the free Dropbox platform.