Workshop on Spatially Structured Metaheuristics

We cordially invite you to attend the following two presentations on Spatially Structured Metaheuristics. This mini-workshop will be held at 11.30 a.m. in the CITIC-UGR building (June 26th, 2014).

Spatially Structured Metaheuristics: Principles and Practical Applications
by Juan Luis Jiménez Laredo (University of Luxembourg)

A significant number of metaheuristics are population-based. Although conventions establish different names, individuals in evolutionary algorithms, ants in ant colony optimization and particles in particle swarm optimization are all the same kind of atomic element of the population (a.k.a. building blocks). In this context, spatially structured metaheuristics investigate how to improve the performance of metaheuristics by confining these elements to neighborhoods. This talk presents the working principles of spatially structured metaheuristics and practical applications that enhance diversity, scalability and robustness.
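
As a rough flavour of what "confining elements to neighborhoods" means in practice, here is a toy Python sketch (our own illustration, not material from the talk) of a ring-structured GA where selection only looks at a local neighborhood; every name and parameter value is an assumption made for the example.

```python
import random

# Toy spatially structured GA: individuals live on a ring and may only
# recombine with their immediate neighbors, unlike a panmictic population.
POP_SIZE, GENOME_LEN = 20, 16

def fitness(genome):
    # OneMax toy problem: count the number of 1s
    return sum(genome)

def neighbors(i):
    # Ring topology: the individual itself plus its left and right neighbors
    return [(i - 1) % POP_SIZE, i, (i + 1) % POP_SIZE]

def step(pop):
    new_pop = []
    for i in range(POP_SIZE):
        # Selection is confined to the local neighborhood
        hood = sorted(neighbors(i), key=lambda j: fitness(pop[j]), reverse=True)
        cut = random.randrange(GENOME_LEN)
        child = pop[hood[0]][:cut] + pop[hood[1]][cut:]
        # Local replacement: keep the child only if it is at least as good
        new_pop.append(child if fitness(child) >= fitness(pop[i]) else pop[i])
    return new_pop

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(50):
    pop = step(pop)
print(max(map(fitness, pop)))
```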


Spatially Structured Metaheuristics: Dynamic and Self-organized Topologies
by Carlos M. Fernandes (University of Lisbon)

Population-based metaheuristics are computational search or optimization methods that use a population of candidate solutions to a problem. These solutions are able to communicate, interact and/or evolve. Two strategies for structuring the population are possible. In panmictic populations, every individual is allowed to interact with every other individual. In non-panmictic metaheuristics, also called spatially structured population-based metaheuristics, interaction is restricted to a pre-defined or evolving structure (network). Traditional spatially structured metaheuristics are built on pre-defined static networks of acquaintances over which individuals can interact. However, alternative strategies can overcome some of the difficulties and limitations of static networks (extra design and tuning effort, ad hoc decision policies, rigid connectivity, and lack of feedback from the problem structure and search process). This talk discusses dynamic topologies for spatially structured metaheuristics and describes a new model for structuring populations into partially connected and self-organized networks. Recent applications of the model to Evolutionary Algorithms and Particle Swarm Optimization are presented and discussed.

Volunteer-based evolutionary algorithms al dente

Planning to cook a time-consuming optimization problem? Have you considered letting a crowd of volunteers help you in this endeavor? In a volunteer-based system, volunteers provide you with free ingredients (CPU cycles, memory, internet connection, …) to be seasoned with only a pinch of peer-to-peer or desktop-grid technology.

If you are looking for a delicious recipe for a volunteer-based evolutionary algorithm, you can find ours in this paper published in Genetic Programming and Evolvable Machines (pre-print version available here).

Title: «Designing robust volunteer-based evolutionary algorithms»

Abstract: This paper tackles the design of scalable and fault-tolerant evolutionary algorithms computed on volunteer platforms. These platforms aggregate computational resources from contributors all around the world. Given that resources may join the system only for a limited period of time, the challenge of a volunteer-based evolutionary algorithm is to take advantage of a large amount of computational power that is, in turn, volatile. The paper first analyzes the speed of convergence of massively parallel evolutionary algorithms. Then, it provides some guidance on how to design efficient policies to overcome the algorithmic loss of quality when the system undergoes high rates of transient failures, i.e. computers fail only for a limited period of time and then become available again. In order to provide empirical evidence, experiments were conducted on two well-known problems that require large population sizes to be solved, the first based on a genetic algorithm and the second on genetic programming. Results show that, in general, evolutionary algorithms undergo a graceful degradation under the stress of losing computing nodes. Additionally, newly available nodes can also contribute to improving the search process. Despite losing up to 90% of the initial computing resources, volunteer-based evolutionary algorithms can find the same solutions in a failure-prone run as in a failure-free one.

[EVO* 12] Validating a Peer-to-Peer Evolutionary Algorithm

Tomorrow we will be presenting the work «Validating a Peer-to-Peer Evolutionary Algorithm» at Evo* 2012, held in Malaga, Spain. You can find below the abstract and presentation of the work.

This paper proposes a simple experiment for validating a Peer-to-Peer Evolutionary Algorithm on a real computing infrastructure in order to verify that results meet those obtained by simulations. The validation method consists of conducting a well-characterized experiment on a large computer cluster with up to as many processors as the population size estimated by the simulator. We argue that the validation stage is usually missing in the design of large-scale distributed metaheuristics, given the difficulty of harnessing a large number of computing resources. As a consequence, most approaches in the literature focus on studying the model's viability through simulation-driven experimentation. However, simulations assume idealistic conditions that can influence the algorithmic performance and bias the results when the algorithm is run on a real platform. Therefore, we aim at validating simulations by running a real version of the algorithm. Results show that the algorithmic performance closely matches the predicted one, whilst times to solution can be drastically decreased compared to the estimate for a sequential run.

[EuroGP’2011] A Peer-to-Peer Approach to Genetic Programming

Today, April 28th, we presented the paper «A Peer-to-Peer Approach to Genetic Programming» at EuroGP 2011, held in Torino, Italy within the Evo* conference series. You can find below the abstract and the presentation of the work.

This paper proposes a fine-grained parallelization of the Genetic Programming paradigm (GP) using the Evolvable Agent model (EvAg). The algorithm is decentralized in order to take full advantage of a massively parallel Peer-to-Peer infrastructure. In this context, GP is particularly demanding due to its high requirements of computational power. To assess the viability of the approach, the EvAg model has been empirically analyzed in a simulated Peer-to-Peer environment where experiments were conducted on two well-known GP problems. Results show that the spatially structured nature of the algorithm is able to yield good solution quality. Additionally, parallelization improves times to solution by several orders of magnitude.


Things I’ve learned reading just ONE paper

One of the most important parts of the research job is reading papers. At the early stages of a new research line you are going to dig into, the first months of reading may seem fruitless, because you may feel you are not doing anything: not developing new experiments or anything of the sort. But reading a paper by one of my friends (and fellow GeNeura member), I realized how much information you can learn from just one paper. As an example, I am going to write here all the concepts I've learned reading his paper «EvAg: a scalable peer-to-peer evolutionary algorithm» (you can download it from here).

It is a paper about P2P populations where each individual of a GA lives on a different node of the network (fine-grained parallelization), and it explains how information is exchanged among the nodes. Many experiments are performed and all that stuff, but I am going to concentrate on some of the concepts rather than on the paper's results, because Juanlu has explained those in previous posts of this blog.

Panmictic population: each member of the population has the same probability of mating with any other. Expressed as a graph, it is a fully connected graph.

Selectorecombinative GA: a mutation-free GA. It is used in population studies because the only source of diversity is the initial population.

Trap functions: scalable and decomposable functions (divided into building blocks, BBs). Varying the size k of each block also varies the difficulty of the problem. The fitness of a block depends on the number of 1s it contains. For example, with k=4: zero 1s score 3, one 1 scores 2, two 1s score 1, three 1s score 0, and four 1s score 4. The function can be deceptive: one slope leads to a sub-optimum and the other to the global optimum.
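
As an illustration (my own toy code, not taken from the paper), a k=4 trap could be written like this, with the genome split into independent blocks:

```python
def trap_block(block, k=4):
    # Deceptive trap: fitness decreases with the number of 1s,
    # except for the all-ones block, which is the global optimum.
    u = sum(block)               # number of 1s in the block
    return k if u == k else k - 1 - u

def trap_fitness(genome, k=4):
    # The genome decomposes into independent building blocks of size k
    return sum(trap_block(genome[i:i + k], k) for i in range(0, len(genome), k))

# k = 4: zero 1s -> 3, one 1 -> 2, two 1s -> 1, three 1s -> 0, four 1s -> 4
print([trap_block([1] * u + [0] * (4 - u)) for u in range(5)])   # [3, 2, 1, 0, 4]
```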

Watts-Strogatz network: a small-world graph where nodes form clusters, much like the real Internet. It is very easy to create a WS network from a ring topology: just rewire each edge to a randomly chosen node with probability p, so some of the original connections are replaced. In these networks, selection pressure is equivalent to that of panmictic populations, but they are more scalable because there are far fewer edges.
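
The rewiring idea can be sketched in a few lines of Python (again my own illustration; the network size and probability are arbitrary assumptions):

```python
import random

def watts_strogatz(n=32, k=4, p=0.1):
    # Start from a ring lattice: each node is linked to its k nearest neighbours
    ring = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = []
    for (a, b) in ring:
        if random.random() < p:
            # Rewire the edge: keep endpoint a, pick a new random endpoint
            b = random.choice([v for v in range(n) if v != a])
        edges.append((a, b))
    return edges

graph = watts_strogatz()
print(len(graph), "edges")   # 64 edges for n=32, k=4: far fewer than a fully connected graph (496)
```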

Takeover time: the time it takes for the best individual to take over the entire population.

Metrics for experiments:

  • Success Rate: the number of runs in which the algorithm finds the optimum.
  • Average Evaluations to find Solution (AES)
  • Best fitness convergence
  • Genotypic entropy: diversity of the population, based on the genotypic distance to the optimum genome (using Hamming distance, for example). A Wilcoxon test can be used to compare results. See the sketch after this list.
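
Here is a rough sketch of such a Hamming-distance-based measure (my own illustration, not the exact definition used in the paper):

```python
def hamming(a, b):
    # Number of positions at which two genomes differ
    return sum(x != y for x, y in zip(a, b))

def mean_distance_to_optimum(population, optimum):
    # Simple genotypic diversity proxy: average Hamming distance
    # from each individual to the known optimum genome
    return sum(hamming(ind, optimum) for ind in population) / len(population)

pop = [[0, 1, 1, 0], [1, 1, 1, 1], [0, 0, 0, 1]]
print(mean_distance_to_optimum(pop, optimum=[1, 1, 1, 1]))  # ~1.67
```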

As you can see, no information about the results of the paper is presented in this post; I've just written about the new things I didn't know.

Tip: I am using Evernote to take notes on every paper I read. Evernote is like Dropbox, but for notes. I jot down shopping, projects, to-do lists… and I get full synchronization across all my computers (Mac OS X, GNU/Linux) and mobile devices (Android or iPhone). There are many uses for this program, so I recommend taking a look.

Pablo.

Also in Rome… Evolvable Agents and Population Structures

In addition to our improvements to the Sandpile operator, we also presented at the LION conference a study on the performance of different population structures for our P2P optimization model. The paper is entitled «Analysing the Performance of Different Population Structures for an Agent-based Evolutionary Algorithm» and will soon be available as an LNCS publication.

You can find the abstract and the presentation below.

The Evolvable Agent model is a Peer-to-Peer Evolutionary Algorithm which focuses on distributed optimisation over Peer-to-Peer infrastructures. The main idea of the model is that every agent (i.e. individual) is designated as a peer (i.e. network node) and adopts a decentralised population structure defined by the underlying Peer-to-Peer protocol newscast. In that context, this work aims to compare the performance of the approach when using two additional population structures other than newscast: a ring and a Watts-Strogatz topology.
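
For readers unfamiliar with newscast, the gossiping idea can be sketched roughly as follows (a simplified illustration based on the general description of the protocol, not our actual implementation): each peer keeps a small cache of (neighbour, timestamp) entries and periodically merges it with the cache of a random acquaintance, both keeping only the freshest entries. The cache size and peer names below are arbitrary assumptions.

```python
CACHE_SIZE = 4   # illustrative value only

def gossip(cache_a, cache_b, a, b, clock):
    # Each peer contributes a fresh entry about itself, both caches are merged,
    # and only the CACHE_SIZE most recent entries are kept (newscast-style exchange).
    merged = dict(cache_a)
    merged.update(cache_b)
    merged[a] = clock
    merged[b] = clock
    freshest = dict(sorted(merged.items(), key=lambda kv: kv[1], reverse=True)[:CACHE_SIZE])
    return freshest, freshest

# Toy example: two peers exchange their views at logical time 10
view_a = {"peer2": 3, "peer3": 7}
view_b = {"peer4": 9, "peer5": 1}
view_a, view_b = gossip(view_a, view_b, "peer1", "peer6", clock=10)
print(view_a)
```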

Workshops at PPSN XI

Last week, in conjunction with PPSN XI, the Self* 2010 and PARCO 2010 workshops were held in Kraków (Poland), where we presented some of our most recent work. You can find the respective presentations below.

  • A Self-Organized Critically Online Adjustment of Genetic Algorithms’ Mutation Rate

This paper describes an alternative mutation control scheme for Genetic Algorithms (GAs) inspired by the Self-Organized Criticality (SOC) theory. The strategy, which mimics a SOC system known as the sand pile, is able to generate mutation rates that, unlike those generated by other methods of adaptive parameter control, oscillate between very low values and cataclysmic mutations. In order to attain the desired behaviour, the sandpile is not just attached to a GA; it is also modified so that its behaviour reflects the stage of the search, i.e., the fitness distribution of the population. Due to its characteristics, the sandpile mutation arises as a promising candidate for an efficient, yet simple and context-independent, approach to dynamic optimization. An experimental study confirms this assumption: a GA with sandpile mutation outperforms a recently proposed SOC-based GA for dynamic optimization. Furthermore, the proposed method does not increase traditional GAs' parameter set.
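
For some intuition about where the sand pile behaviour comes from, the following minimal Bak-Tang-Wiesenfeld-style sandpile (an illustrative sketch only, not the sandpile mutation operator itself) records avalanche sizes; all parameter values are assumptions for the example.

```python
import random

N, THRESHOLD = 10, 4   # illustrative 2-D sandpile parameters

def drop_grain(grid):
    # Add one grain at a random cell, then topple overloaded cells until the
    # grid is stable; the number of topplings is the avalanche size.
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    avalanche, unstable = 0, [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        avalanche += 1
        if grid[i][j] >= THRESHOLD:
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:   # grains at the border fall off
                grid[ni][nj] += 1
                unstable.append((ni, nj))
    return avalanche

grid = [[0] * N for _ in range(N)]
sizes = [drop_grain(grid) for _ in range(20000)]
# Most avalanches are tiny, a few are huge: this heavy-tailed behaviour is what
# lets a sandpile-driven scheme alternate low mutation rates with rare cataclysmic ones.
print(max(sizes), sizes.count(0))
```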

  • Influence of the Population Structure on the Performance of an Agent-based Evolutionary Algorithm

The Evolvable Agent model is a Peer-to-Peer Evolutionary Algorithm which focuses on distributed optimization over Peer-to-Peer infrastructures. The key idea of the model is that every agent-individual is designated as a peer and adopts a decentralised population structure defined by the Peer-to-Peer protocol newscast. In that context, this work aims to compare the performance of the approach when using two additional population structures other than newscast: a ring and a Watts-Strogatz topology.

Thesis on Peer-to-Peer Evolutionary Computation

Last May 27th, I defended my thesis, entitled «Peer-to-Peer Evolutionary Computation: A Study of Viability». It analyzes the viability of the Peer-to-Peer Evolutionary Computation concept, using to that aim a P2P system as the substrate platform for a parallel implementation of a spatially structured EA. The dynamics of the P2P platform are extensively described, as well as their interactions with the parallel EA, which demonstrates good scalability and resilience to the degradation of the system.

I hope you enjoy the read (if you are crazy enough to go ahead) as much as I enjoyed the writing.

New book in press

Parallel and Distributed Computational Intelligence

This January, Parallel and Distributed Computational Intelligence, a book edited by F. Fernandez de Vega and E. Cantú-Paz, has been published in Springer's Studies in Computational Intelligence series. It presents some of the most recent advances in distributed Evolutionary Algorithms, EDAs and cellular automata using novel platforms such as Volunteer Computing in Desktop Grids and P2P, Graphics Processing Units or GRIDs. Within this context, some examples of applications to realistic problems are provided, for instance Bioinformatics Data Mining or Protein Structure Prediction.

We have participated with the book chapter «Evolvable Agents: A Framework for Peer-to-Peer Evolutionary Algorithms» which has the following abstract:

Distributed evolutionary computation programs often need increasingly large amounts of computational power when tackling large instances of hard optimization problems, and Peer-to-Peer (P2P) systems could be an option for building the large virtual supercomputer on which they could run. Even though distributed Evolutionary Algorithms (EAs) do take advantage of parallel execution by simultaneously promoting diversity and reducing runtime, there are still many challenges in the parallelization of EAs on P2P systems. In this chapter we present a survey of the state of the art in P2P EAs and our solutions to the main P2P issues, such as decentralization, massive scalability and fault tolerance.

For those who decide to buy the book, I really hope you enjoy the reading!

[IJHPSA] Resilience to Churn of a Peer-to-Peer Evolutionary Algorithm

In this paper we analyse the resilience of a Peer-to-Peer (P2P) Evolutionary Algorithm (EA) subject to the following dynamics: computing nodes acting as peers leave the system independently of each other, causing a collective effect known as churn. Since the P2P EA has been designed to tackle large instances of computationally expensive problems, we assess its behaviour under these conditions by performing a scalability analysis in five different scenarios, using the Massively Multimodal Deceptive Problem as a benchmark. In all cases, the P2P EA reaches the success criterion without a penalty on the runtime. We show that the key to the algorithm's resilience is to ensure enough peers at the beginning of the experiment; even if some of them leave, those that remain contain enough information to guarantee reliable convergence.