We cordially invite you to attend the following two presentations on Spatially Structured Metaheuristics. This mini-workshop will be held at 11.30 a.m. in the CITIC-UGR building (June 26th, 2014).
Spatially Structured Metaheuristics: Principles and Practical Applications
by Juan Luis Jiménez Laredo (University of Luxembourg)
Many metaheuristics are population-based. Although conventions establish different names, individuals in evolutionary algorithms, ants in ant colony optimization and particles in particle swarm optimization are all sides of the same coin: they are the atomic elements of the population (a.k.a. building blocks). In this context, spatially structured metaheuristics investigate how to improve the performance of metaheuristics by confining these elements to neighborhoods. This talk presents the working principles of spatially structured metaheuristics and practical applications that enhance diversity, scalability and robustness.
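As a rough illustration of the "confinement in neighborhoods" idea, here is a sketch of a generic cellular evolutionary algorithm; this is our own minimal example, not code from the talk, and all names (grid size, operators) are illustrative assumptions:

```python
import random

SIZE = 8  # an 8x8 toroidal grid, one individual per cell

def neighbors(x, y):
    """Von Neumann neighborhood (up, down, left, right) on a torus."""
    return [((x - 1) % SIZE, y), ((x + 1) % SIZE, y),
            (x, (y - 1) % SIZE), (x, (y + 1) % SIZE)]

def step(grid, fitness, crossover, mutate):
    """One synchronous generation: each cell mates only within its neighborhood."""
    new_grid = {}
    for (x, y), individual in grid.items():
        # Mate selection is confined to the four spatial neighbors
        mate = max((grid[n] for n in neighbors(x, y)), key=fitness)
        child = mutate(crossover(individual, mate))
        # Local elitist replacement keeps per-cell fitness non-decreasing
        new_grid[(x, y)] = child if fitness(child) >= fitness(individual) else individual
    return new_grid
```

Because mating is confined to the grid, good genetic material spreads in waves across neighborhoods rather than instantly through the whole population, which is what helps preserve diversity.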
Spatially Structured Metaheuristics: Dynamic and Self-organized Topologies
by Carlos M. Fernandes (University of Lisbon)
Population-based metaheuristics are computational search or optimization methods that use a population of candidate solutions to a problem. These solutions are able to communicate, interact and/or evolve. Two strategies for structuring the population are possible. In panmictic populations, every individual is allowed to interact with every other individual. In non-panmictic metaheuristics, also called spatially structured population-based metaheuristics, interaction is restricted to a pre-defined or evolving structure (network). Traditional spatially structured metaheuristics are built on pre-defined static networks of acquaintances over which individuals can interact. However, alternative strategies can overcome some of the difficulties and limitations of static networks (extra design and tuning effort, ad hoc decision policies, rigid connectivity, and lack of feedback from the problem structure and search process). This talk discusses dynamic topologies for spatially structured metaheuristics and describes a new model for structuring populations into partially connected and self-organized networks. Recent applications of the model to Evolutionary Algorithms and Particle Swarm Optimization are presented and discussed.
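The panmictic/spatially structured distinction boils down to which acquaintances an individual may interact with. A minimal sketch of the two extremes (our own illustration with made-up names, not code from the talk):

```python
def panmictic_neighbors(i, pop_size):
    """Panmictic population: individual i may interact with everyone else."""
    return [j for j in range(pop_size) if j != i]

def ring_neighbors(i, pop_size, radius=1):
    """Static ring: interaction restricted to the closest individuals."""
    return [(i + d) % pop_size for d in range(-radius, radius + 1) if d != 0]
```

In a panmictic population of size n every individual has n-1 acquaintances, while on a ring of radius 1 it has only two; dynamic topologies sit between these extremes and rewire the acquaintance lists as the search proceeds.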
Planning to cook up a time-consuming optimization problem? Have you considered letting a crowd of volunteers help you in this endeavor? In a volunteer-based system, volunteers provide you with free ingredients (CPU cycles, memory, Internet connection, ...) to be seasoned with only a pinch of peer-to-peer or desktop-grid technology.
If you are looking for a delicious recipe for a volunteer-based evolutionary algorithm, you can find ours in this paper published in Genetic Programming and Evolvable Machines (pre-print version available here).
Title: “Designing robust volunteer-based evolutionary algorithms”
Abstract: This paper tackles the design of scalable and fault-tolerant evolutionary algorithms computed on volunteer platforms. These platforms aggregate computational resources from contributors all around the world. Given that resources may join the system only for a limited period of time, the challenge of a volunteer-based evolutionary algorithm is to take advantage of a large amount of computational power that is, in turn, volatile. The paper first analyzes the speed of convergence of massively parallel evolutionary algorithms. Then, it provides some guidance on how to design efficient policies to overcome the algorithmic loss of quality when the system undergoes high rates of transient failures, i.e. computers fail only for a limited period of time and then become available again. In order to provide empirical evidence, experiments were conducted on two well-known problems which require large population sizes to be solved, the first based on a genetic algorithm and the second on genetic programming. Results show that, in general, evolutionary algorithms undergo a graceful degradation under the stress of losing computing nodes. Additionally, newly available nodes can also contribute to improving the search process. Despite losing up to 90% of the initial computing resources, volunteer-based evolutionary algorithms can find the same solutions in a failure-prone run as in a failure-free one.
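A minimal churn model in the spirit of the paper's transient-failure setting (this is our own sketch, not the authors' code; `availability` and the trivial hill-climbing step are illustrative assumptions):

```python
import random

def run(num_nodes, generations, availability, improve, rng):
    """Spread work over volunteer nodes; each node is online with the
    given probability at every generation. Offline nodes keep their
    individuals and simply resume when they rejoin, so the run degrades
    gracefully instead of failing."""
    nodes = [0] * num_nodes  # per-node best fitness, all start at 0
    for _ in range(generations):
        for i in range(num_nodes):
            if rng.random() < availability:  # node i is online this round
                nodes[i] = improve(nodes[i])
    return max(nodes)  # best solution found across the volunteer pool
```

Running this with availability 0.1 versus 1.0 shows the qualitative point of the abstract: with one tenth of the resources the search still progresses, just more slowly.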
Energy efficiency is one of the most important issues in today's computing and communication systems. Given the importance of power and energy consumption in these systems, various techniques have been investigated and developed. However, the topic calls for new solutions able to improve performance and cope with increasing workload demand. Moreover, these solutions should preserve or even increase their efficiency as the number of resources grows.
The aim of this workshop is to create a thriving atmosphere in which to exchange new ideas, propose practical solutions and discuss challenges in scalable solutions for energy efficiency in large-scale parallel and distributed systems (cluster, grid, cloud), information technology and communication.
SCALSOL 2012 will be organized as part of the 12th IEEE International Conference on Scalable Computing and Communications (SCALCOM 2012) and is technically sponsored by the IEEE TCSC Technical Area in Green Computing.
The workshop seeks original contributions in all relevant areas, including but not limited to the following topics:
- Scalability of energy-efficient solutions on large-scale parallel and distributed systems, information technologies and communication
- Green design scalability in the CLOUDS (cloud computing and distributed systems)
- Green high density computing and ambient (ubiquitous) computing
- Virtualization techniques for energy-efficiency in large scale distributed systems
- Algorithm design for energy-efficient scalable solutions
- Energy-aware scalable resource allocation and load balancing
- Energy-aware scalable scheduling
- Energy-efficient scalable solutions for network services and operations
- Energy-efficient scalable solutions for internet-based computing and applications
- Network models and simulation modules/tools for energy-efficient solutions
- Hardware solutions for energy-efficient large-scale networked systems
Check the SCALSOL site for updates (note: the tentative paper-submission deadline may be extended; please check the site for the latest information).
General Chair: Johnatan E. Pecero
Program Chair: Dzmitry Kliazovich
Proceedings Chair: Juan Luis Jimenez Laredo
Publicity Chair: Ezendu Ariwa
Local Organizing Committee: Cesar O. Diaz, Mateusz Guzek
Tomorrow we will present the work “Validating a Peer-to-Peer Evolutionary Algorithm” at Evo* 2012, held in Malaga, Spain. You can find the abstract and presentation of the work below.
This paper proposes a simple experiment for validating a Peer-to-Peer Evolutionary Algorithm on a real computing infrastructure in order to verify that results meet those obtained by simulation. The validation method consists of conducting a well-characterized experiment on a large computer cluster, using up to as many processors as the population size estimated by the simulator. We argue that the validation stage is usually missing in the design of large-scale distributed meta-heuristics, given the difficulty of harnessing a large number of computing resources; as a consequence, most approaches in the literature study model viability through simulation-driven experimentation alone. However, simulations assume idealistic conditions that can influence the algorithmic performance and bias results when the algorithm is run on a real platform. Therefore, we aim at validating simulations by running a real version of the algorithm. Results show that the algorithmic performance closely matches the predicted one, while times to solution can be drastically reduced compared with the estimate for a sequential run.
The deadline for submission has changed to 5 March 2012.
Papers should be submitted through the Natural Computing system by selecting this special issue (SI: Informal Environments) as “Article Type”.
Informal computing includes ways of creating computing systems which are not fixed or bound to an organization, such as:
- Parasitic or stealth computing: using computing resources without explicit authorization from the user, for instance by visiting a web page.
- Volunteer computing: the user submits resources to a pool explicitly, by running a program or visiting a web site.
- Freeriding computing: using computing resources which are free or available, to a certain extent, in the network; for instance, using Google Apps or resources such as Wolfram-Alpha. Similar to parasitic computing, except that the provider of those resources knows but, to some extent, does not care.
- Ubiquitous computing: using computing power available in user devices such as mobile phones or other appliances.
Using these (and similar) kinds of computing presents its own challenges, since neither the topology nor the availability of a particular node is known; computing nodes will have different performance and capabilities (and so will the connections to them), so evolutionary computing paradigms will have to be adapted to take full advantage of the system without losing the essence of evolutionary algorithms.
Topics of interest include but are not limited to:
- Complex systems issues in parasitic/volunteer computing
- Emerging computing environments, free or low-price: cloud computing, NoSQL, REST and other web services
- Performance evaluation and measurement (speed-ups, scalability, workload, …)
- Adaptation of algorithms to dynamic, ad-hoc environments
- Evolutionary computation and other bioinspired algorithms in P2P, Map/Reduce and other dynamic environments.
- Bioinspired algorithms applied to those types of environments.
- Implementation issues
- Open source implementations
Both theoretical and applied works related to the topics are sought, as well as those that present a framework that is based on an informal computing environment.
JJ Merelo, University of Granada
Maribel García Arenas, University of Granada
Juan Luis Jiménez Laredo, University of Luxembourg
Francisco Fernández de Vega, University of Extremadura
David Corne, Heriot-Watt University
Today, April 28th, we presented the paper “A Peer-to-Peer Approach to Genetic Programming” at EuroGP 2011, held in Torino, Italy, within the Evo* conference series. You can find the abstract and the presentation of the work below.
This paper proposes a fine-grained parallelization of the Genetic Programming paradigm (GP) using the Evolvable Agent model (EvAg). The algorithm is decentralized in order to take full advantage of a massively parallel Peer-to-Peer infrastructure. In this context, GP is particularly demanding due to its high computational power requirements. To assess the viability of the approach, the EvAg model has been empirically analyzed in a simulated Peer-to-Peer environment where experiments were conducted on two well-known GP problems. Results show that the spatially structured nature of the algorithm is able to yield good-quality solutions. Additionally, parallelization improves times to solution by several orders of magnitude.
This Friday, March 11th, we will discuss the paper Evolutionary Computing and Autonomic Computing: Shared Problems, Shared Solutions? by Prof. A.E. Eiben in our Friday Paper Seminar. Quite a position paper about Self-* properties in Evolutionary Algorithms and the other way around: Evolutionary Algorithms in Self-* Computing.
You are welcome to attend the chat in the Sala de Juntas at the ETSIIT.
In addition to our improvements to the sandpile operator, we also presented at the LION conference a study on the performance of different population structures for our P2P optimization model. The paper is entitled “Analysing the Performance of Different Population Structures for an Agent-based Evolutionary Algorithm” and will soon be available as an LNCS publication.
You can find the abstract and the presentation below.
The Evolvable Agent model is a Peer-to-Peer Evolutionary Algorithm which focuses on distributed optimisation over Peer-to-Peer infrastructures. The main idea of the model is that every agent (i.e. individual) is designated as a peer (i.e. network node) and adopts a decentralised population structure defined by the underlying Peer-to-Peer protocol newscast. In that context, this work compares the performance of the approach under two additional population structures besides newscast: a ring and a Watts-Strogatz topology.
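For the curious, two of the three structures compared in the paper can be built in a few lines; newscast requires a running gossip protocol and is omitted. This is our own sketch following the standard Watts-Strogatz rewiring procedure, with parameter names of our choosing:

```python
import random

def ring_lattice(n, k):
    """Ring where each node links to its k nearest neighbors on each side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d != 0}
            for i in range(n)}

def watts_strogatz(n, k, p, rng=random):
    """Rewire each clockwise edge of the ring with probability p."""
    adj = ring_lattice(n, k)
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < p:
                # Pick a new endpoint that avoids self-loops and duplicates
                choices = [t for t in range(n) if t != i and t not in adj[i]]
                if choices:
                    new_j = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new_j); adj[new_j].add(i)
    return adj
```

With p = 0 the Watts-Strogatz construction degenerates to the plain ring; small values of p already produce the short path lengths that make such "small-world" structures attractive as population topologies.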
In the last Friday paper seminar we discussed the paper:
which was presented at PPSN XI, which we attended, as mentioned earlier.
The authors present a nice piece of work on swarm robotics in which they try to evolve robot controllers using a fixed-size population of autonomous robots. Evolution takes place in a decentralized fashion where no information on a possibly changing environment is provided. In that context, evolution is challenged to react to changes on-line and self-adapt to the environment without the global knowledge of the problem that a fitness function would provide. That way, “fitness” is implicit in the environment, and the success criterion of a given strategy is defined as follows: a strategy is successful if it manages to spread over the population.
To that end, the authors propose mEDEA (minimal Environment-driven Distributed Evolutionary Adaptation), an intuitive algorithm which tackles the problem and tries to evolve controllers following a simple but elegant rule: robot controllers that maximize the number of matings while avoiding running out of energy will succeed in spreading their genomes.
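A highly simplified reading of that rule, as we understand it (our sketch, not the authors' code): each robot broadcasts its genome to the robots it meets during its lifetime, then replaces its own genome with a mutated copy of one it received; a robot that met nobody has nothing to adopt and goes inactive, so prolific genomes spread implicitly.

```python
import random

def medea_step(robots, meetings, mutate, rng=random):
    """One lifetime. robots maps id -> genome; meetings[i] lists the
    robots that robot i encountered while its genome was in control."""
    received = {i: [robots[j] for j in meetings.get(i, [])] for i in robots}
    next_gen = {}
    for i, pool in received.items():
        if pool:  # at least one mating: adopt a mutated received genome
            next_gen[i] = mutate(rng.choice(pool))
        # no mating: the robot stays empty/inactive this generation
    return next_gen
```

Note that no explicit fitness value appears anywhere; selection pressure comes entirely from how often a genome's carrier manages to meet other robots.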
Last week, in conjunction with PPSN XI, the Self* 2010 and PARCO 2010 workshops were held in Kraków (Poland), where we presented some of our most recent work. You can find the respective presentations below.
- A Self-Organized Critically Online Adjustment of Genetic Algorithms’ Mutation Rate
This paper describes an alternative mutation control scheme for Genetic Algorithms (GAs) inspired by the Self-Organized Criticality (SOC) theory. The strategy, which mimics a SOC system known as the sand pile, is able to generate mutation rates that, unlike those generated by other methods of adaptive parameter control, oscillate between very low values and cataclysmic mutations. In order to attain the desired behaviour, the sandpile is not just attached to a GA; it is also modified so that its conduct reflects the stage of the search, i.e., the fitness distribution of the population. Due to its characteristics, the sandpile mutation arises as a promising candidate for an efficient yet simple and context-independent approach to dynamic optimization. An experimental study confirms this assumption: a GA with sandpile mutation outperforms a recently proposed SOC-based GA for dynamic optimization. Furthermore, the proposed method does not enlarge the traditional GA's parameter set.
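The sand pile underlying the operator is the classic Bak-Tang-Wiesenfeld model. The sketch below is in the spirit of the paper's operator but is our own illustration, not its exact implementation (the paper additionally couples the pile to the population's fitness distribution): grains are dropped on a small grid, cells above a critical height topple onto their neighbors, and the resulting avalanche size, which follows a heavy-tailed distribution, is mapped to a mutation rate. This yields mostly small rates punctuated by occasional cataclysmic ones.

```python
def drop_grain(grid, x, y, critical=4):
    """Drop one grain at (x, y); return the avalanche size (# topplings)."""
    h = len(grid)
    grid[x][y] += 1
    avalanche, unstable = 0, [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < critical:
            continue
        grid[i][j] -= critical  # topple: shed 4 grains to the neighbors
        avalanche += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < h:  # grains fall off the edges
                grid[ni][nj] += 1
                unstable.append((ni, nj))
    return avalanche

def mutation_rate(avalanche, cells, base=0.001):
    """Map an avalanche size to a per-generation rate in [base, 1]."""
    return min(1.0, base + avalanche / cells)
```

Most drops cause no toppling at all (rate stays near `base`), while a rare system-wide avalanche pushes the rate toward 1, i.e., a cataclysmic mutation.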
- Influence of the Population Structure on the Performance of an Agent-based Evolutionary Algorithm
The Evolvable Agent model is a Peer-to-Peer Evolutionary Algorithm which focuses on distributed optimization over Peer-to-Peer infrastructures. The key idea of the model is that every agent (i.e., individual) is designated as a peer and adopts a decentralised population structure defined by the Peer-to-Peer protocol newscast. In that context, this work compares the performance of the approach under two additional population structures besides newscast: a ring and a Watts-Strogatz topology.