Call for papers: Natural Computing Special Issue on Distributed Evolutionary Computation in Informal Environments

Introduction

Informal computing includes ways of creating computing systems which are not fixed or bound to an organization, such as:

  • Parasitic or stealth computing: using a user's computing resources without their explicit authorization, for instance when they visit a web page.
  • Volunteer computing: the user explicitly contributes resources to a pool, by running a program or visiting a web site.
  • Freeriding computing: using computing resources that are, to some extent, freely available on the network; for instance, Google Apps or services such as Wolfram Alpha. It is similar to parasitic computing, except that the provider of those resources is aware of the use but, up to a point, does not mind.
  • Ubiquitous computing: using computing power available in user devices such as mobile phones or other appliances.

Using these (and similar) kinds of computing presents its own challenges: neither the topology nor the availability of any particular node is known in advance, and nodes (as well as the connections to them) differ in performance and capabilities. Evolutionary computing paradigms will therefore have to be adapted to take full advantage of such systems without losing the essence of the evolutionary algorithm.
Thus, the main topics of the special issue include (but are not limited to):

  • Performance prediction and analysis
  • New computing paradigms
  • Practical applications
  • Implementations

Call for papers

Papers should be related to evolutionary computation and other bioinspired metaheuristics, such as ant colony optimization and particle swarm optimization. Topics of interest include:

  • Complex systems issues in parasitic/volunteer computing
  • Emerging computing environments, free or low-cost: cloud computing, NoSQL, REST and other web services
  • Performance evaluation and measurement (speedups, scalability, workload…)
  • Adaptation of algorithms to dynamic, ad-hoc environments
  • Evolutionary computation and other bioinspired algorithms in P2P, Map/Reduce and other dynamic environments
  • Bioinspired algorithms applied to those types of environments
  • Implementation issues
  • Open source implementations

Both theoretical and applied work related to the topics of the special issue is sought, as well as papers presenting frameworks based on an informal computing environment.

Deadline

Update: the deadline for submission is February 15th. Papers will be submitted through the Natural Computing system, selecting this special issue (SI: Informal Environments) as "Article Type". Authors from the IWDECIE workshop are especially invited, and their reviews will be taken into consideration.

Editors

JJ Merelo, University of Granada
Maribel García Arenas, University of Granada
Juan Luis Jiménez Laredo, University of Luxembourg
Francisco Fernández de Vega, University of Extremadura
David Corne, Heriot-Watt University

Using free cloud storage services for distributed evolutionary algorithms at GECCO 2011

This week GECCO was held in Dublin. I attended from Thursday the 14th to Saturday the 16th and, although I could not attend the whole conference, I had time to get a feel for the general atmosphere.
At this conference I presented the paper titled Using free cloud storage services for distributed evolutionary algorithms. The presentation can be found on SlideShare.

In this paper you can find how we used Dropbox to build a multicomputer that evolves a set of cooperating islands running a parallel evolutionary algorithm on two classic evolutionary computation problems (MMDP and 4-TRAP). Up to the number of computers on which we tested it, the system is scalable, since individual evaluation times decrease as computing nodes are added to the multicomputer.
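As a rough sketch of the idea behind this Dropbox-based multicomputer (not the code used in the paper; the folder name, file layout and identifiers are assumptions), each island can publish its emigrants as a file in a Dropbox-synced directory and read the files published by the other islands:

```python
import glob
import json
import os

SHARED_DIR = "~/Dropbox/islands"   # hypothetical Dropbox-synced folder
ISLAND_ID = "island-0"             # hypothetical identifier of this island

def send_migrants(migrants):
    """Write this island's best individuals where other islands can read them."""
    path = os.path.expanduser(os.path.join(SHARED_DIR, f"{ISLAND_ID}.json"))
    with open(path, "w") as f:
        json.dump(migrants, f)     # Dropbox syncs the file to the other nodes

def receive_migrants():
    """Collect the individuals published by the other islands."""
    immigrants = []
    pattern = os.path.expanduser(os.path.join(SHARED_DIR, "*.json"))
    for path in glob.glob(pattern):
        if os.path.basename(path) != f"{ISLAND_ID}.json":
            with open(path) as f:
                immigrants.extend(json.load(f))
    return immigrants
```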

After the presentation some attendees asked a few questions that I can clarify here in case you have the same doubts. The first was whether all the traffic generated on the network shared by the nodes of the multicomputer goes to the Dropbox server before being distributed. The answer is clearly "no": in view of the results of this paper and of the one presented in New Orleans at CEC (Cloud-based Evolutionary Algorithms: An algorithmic study), Dropbox distributes the data first to the computers where the folder is synchronized, and only after that to the Dropbox server.

The second question was related to the evolution scheme of the islands, which is specified in detail in the paper. In general terms, each island evolves a population of individuals with a generational scheme and a classical evolutionary algorithm, using a uniform crossover operator and a bit-flip mutation operator for binary encoding.
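For readers who do not have the paper at hand, here is a minimal illustration of those two operators, assuming plain binary lists and an illustrative mutation rate (both are assumptions, not values taken from the paper):

```python
import random

def uniform_crossover(parent_a, parent_b):
    """Each gene is taken from either parent with equal probability."""
    return [a if random.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def bit_flip_mutation(individual, rate=0.01):
    """Flip each bit independently with probability `rate` (illustrative value)."""
    return [1 - gene if random.random() < rate else gene for gene in individual]
```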

Visit to the company Ciudad 2020

Company description:

Ciudad 2020 (http://www.linkedin.com/companies/ciudad-2020 and http://www.blobject.es) is an information technology and services company based in Córdoba whose goal is to organize information about the physical environment so that all kinds of organizations can make optimal decisions based on the analysis of the flow and behaviour of people.

The company works on the Internet of Things, sociometry and reality mining. It specializes in market research, data mining, reality mining and artificial intelligence, and has experience in developing innovative projects and using cutting-edge technologies.

One of the company's projects, City Analytics (http://www.cityanalytics.net), applies the principles and methodological perspective of reality mining: it detects the Bluetooth devices that pass in front of a business, using an antenna installed on a PC and the corresponding software. The monitored data are sent to the City Analytics servers, processed and analysed, so that users only have to log in to access reports on the foot traffic in front of their business. This makes it easier for shops, as well as urban planners, architects or software engineers, among others, to make decisions based on the flow of people passing along a street.

Topics addressed:

The main objectives/results of the visit were the following:

• Direct contact with the company's researchers during two working days has been of great value to the research group, since it has meant progress in the lines of work started so far.

• We were shown the product (pictured below) used to detect the Bluetooth devices that pass by a given point of a business or street, collecting the data that will later be used in the studies.

the gadget used to collect the data

• Progress was made on the preparation of a research project for the R&D&i project call of the Corporación Tecnológica de Andalucía (CTA), titled "Matriz de Subidas y Bajadas en Autobuses de Línea" (Mabalis), which will be submitted to the July 2011 call (summary of the call at http://bit.ly/q0W2kE).

Course of the visit:

The working sessions took place on July 7 and 8, 2011. The visit was funded by a grant from the University of Granada Research Transfer Support Programme (application V/11/001 for company visits).

funders of the visit

New journal paper in press

C.M. Fernandes, J.J. Merelo, A.C. Rosa, A comparative study on the performance of dissortative mating and immigrants-based strategies for evolutionary dynamic optimization, Information Sciences, in press.

Abstract – Traditional Genetic Algorithm (GA) mating schemes select individuals for crossover independently of their genotypic or phenotypic similarities. In nature, this behavior is known as random mating. However, non-random protocols, in which individuals mate according to their kinship or likeness, are more common in natural species. Previous studies indicate that, when applied to GAs, dissortative mating, a type of non-random mating in which individuals are chosen according to their similarities, may improve their performance (in both speed and reliability). Dissortative mating maintains genetic diversity at a higher level during the run, a fact that is frequently pointed to as a possible cause of the ability of dissortative GAs to escape local optima. Dynamic optimization demands special attention when designing and tuning a GA, since diversity plays an even more crucial role there than it does when tackling static problems. This paper investigates the behavior of the Adaptive Dissortative Mating GA (ADMGA) on dynamic problems and compares it to GAs based on random immigrants. ADMGA selects parents according to their Hamming distance, via a self-adjustable threshold value. By keeping population diversity during the run, the method provides an effective means to deal with dynamic problems. Tests conducted with dynamic trap functions and dynamic versions of the Royal Road and knapsack problems indicate that ADMGA is able to outperform other GAs on a wide range of tests, being particularly effective when the frequency of changes is low. Specifically, ADMGA outperforms two state-of-the-art algorithms on many dynamic scenarios. In addition, and unlike preceding dissortative mating GAs and other evolutionary techniques for dynamic optimization, ADMGA self-regulates the intensity of the mating restrictions and does not increase the set of parameters of the GA, thus being easier to tune.
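As a rough illustration of the dissortative mating idea described in the abstract (a simplified sketch, not the authors' exact ADMGA procedure; in particular, the fixed threshold below stands in for ADMGA's self-adjusting one):

```python
import random

def hamming(a, b):
    """Number of positions at which two binary individuals differ."""
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, threshold):
    """Pick a first parent at random, then a second parent whose Hamming
    distance to the first is at least `threshold`; if no such mate exists,
    fall back to a random one. (ADMGA itself adjusts the threshold
    adaptively during the run rather than keeping it fixed.)"""
    first = random.choice(population)
    candidates = [ind for ind in population if hamming(ind, first) >= threshold]
    second = random.choice(candidates) if candidates else random.choice(population)
    return first, second
```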

IWANN2011: GPU Computation in Bioinspired Algorithms. A review

Bioinspired methods usually require a large amount of computational resources. For this reason, parallelization is an interesting alternative to reduce execution time while still providing accurate results. In this sense, there has recently been growing interest in developing parallel algorithms using graphics processing units (GPUs), also referred to as GPU computing. Advances in the video gaming industry have led to the production of low-cost, high-performance GPUs that possess more memory bandwidth and computational capability than central processing units (CPUs). As GPUs are available in personal computers and are easy to use and manage through several GPU programming languages (CUDA, OpenCL, etc.), graphics engines are being widely adopted in scientific computing applications, particularly in the fields of computational biology and bioinformatics. This paper reviews the use of GPUs to solve scientific problems, giving an overview of current software systems.

DECIE2011: Distributed Evolutionary Computation using REST

This paper analyses distributed evolutionary computation based on the Representational State Transfer (REST) protocol, which overlays a farming model on evolutionary computation. An approach to distributed evolutionary optimisation of multilayer perceptrons (MLPs) using REST and the Perl language is presented. In these experiments, a master-slave evolutionary algorithm (EA) has been implemented, in which slave processes evaluate the costly fitness function (training an MLP to solve a classification problem). The results show that the parallel version of the programs obtains similar or better solutions in much less time than the sequential version, achieving a good speedup.
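A minimal sketch of the master-slave scheme over REST, written here with Python's standard library rather than the Perl used in the paper; the port, the JSON encoding and the placeholder fitness function are assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def costly_fitness(individual):
    """Placeholder for the expensive evaluation (e.g. training an MLP)."""
    return sum(individual)

class SlaveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The master POSTs one individual encoded as JSON
        # (this sketch does not check the request path).
        length = int(self.headers["Content-Length"])
        individual = json.loads(self.rfile.read(length))
        body = json.dumps({"fitness": costly_fitness(individual)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each slave node runs one of these servers; the master distributes
    # individuals among them and collects the returned fitness values.
    HTTPServer(("", 8000), SlaveHandler).serve_forever()
```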

DECIE2011: SOAP vs REST: Comparing a master-slave GA implementation

In this paper, a high-level comparison of SOAP (Simple Object Access Protocol) and REST (Representational State Transfer), the two main approaches for interfacing with web services, is made. The two approaches differ and present advantages and disadvantages: SOAP is conceptually more difficult (it has a steeper learning curve) and more "heavy-weight" than REST, while REST lacks the standards support that SOAP offers for aspects such as security. In order to test their efficiency (in time), two experiments have been performed using both technologies: a client-server model implementation and a master-slave based genetic algorithm (GA). The results obtained show clear differences in time between the SOAP and REST implementations. Although both techniques are suitable for developing parallel systems, SOAP is heavier than REST, mainly due to the verbosity of SOAP communications (XML increases the time taken to parse the messages).
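As a toy illustration of that verbosity difference (the messages below are made up for illustration, not the actual payloads used in the paper's experiments):

```python
import json

# A hypothetical request asking a worker to evaluate one individual.
soap_message = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    "<soap:Body><evaluate><individual>10110101</individual></evaluate>"
    "</soap:Body></soap:Envelope>"
)
rest_message = json.dumps({"individual": "10110101"})

# The SOAP envelope carries the same information in several times more bytes,
# and it also has to be parsed as XML on both ends.
print(len(soap_message), len(rest_message))
```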