JJ presents the paper «A Neural Network System for Transformation of Regional Cuisine Style» at our weekly meeting.
StarCraft takes center stage at the weekly meeting of 30 Oct 2017
At the last meeting of the Geneura group, Victor presented to the rest of the attendees the paper Predicting the Winner in Two Player StarCraft Games, published at the CoSECiVi’15 conference by Professor Antonio A. Sanchez-Ruiz.
The link to the presentation is https://vrivas.github.io/explicando-sanchez-ruiz-2015/output/index.html
Evolutionary Algorithms in Heterogeneous Nodes
Today I gave a brief talk on several papers about the use of heterogeneous computers for distributed EAs:
- Several guidelines on how to measure the performance of distributed metaheuristics are presented in «Evaluation of Parallel Metaheuristics» by Alba et al.
- A demonstration of how heterogeneous clusters can achieve superlinear speedup in EAs is presented in Heterogeneous Computing and Parallel Genetic Algorithms (also by Alba et al.).
- Finally, a new way to distribute different metaheuristics over different kinds of nodes, following structures inspired by molecules found in nature, is described in HydroC: A Hybrid Parallel Search Model for Heterogeneous Platforms.
All these ideas (and new ones) are being applied in our Service Oriented Architecture for Evolutionary Algorithms; we hope to show interesting results soon!
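To make the kind of measurements discussed in «Evaluation of Parallel Metaheuristics» concrete, here is a minimal Python sketch of the usual speedup and efficiency figures; the run times and node counts are made-up placeholders, not results from any of the papers above. Superlinear speedup simply means the computed speedup exceeds the number of nodes.

```python
# Minimal sketch of the usual performance measures for parallel/distributed
# metaheuristics (speedup and efficiency). All timings below are made-up
# placeholders, not results from the papers discussed above.

def speedup(t_serial, t_parallel):
    """Classic speedup: serial run time over parallel run time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, nodes):
    """Efficiency: speedup normalized by the number of nodes (1.0 = linear)."""
    return speedup(t_serial, t_parallel) / nodes

if __name__ == "__main__":
    t_serial = 120.0                              # hypothetical serial time (s)
    parallel_times = {2: 55.0, 4: 26.0, 8: 14.5}  # hypothetical times per node count
    for nodes, t_par in parallel_times.items():
        s = speedup(t_serial, t_par)
        e = efficiency(t_serial, t_par, nodes)
        tag = "superlinear" if s > nodes else "linear or sublinear"
        print(f"{nodes} nodes: speedup = {s:.2f}, efficiency = {e:.2f} ({tag})")
```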
Here is the presentation (in Spanish).
Using free cloud storage services for distributed evolutionary algorithms at GECCO 2011
This week GECCO took place in Dublin. I attended from Thursday the 14th to Saturday the 16th, and although I could not be at the whole conference, I had time to get a feel for the general atmosphere.
At this conference I presented the paper titled Using free cloud storage services for distributed evolutionary algorithms. You can find the presentation on SlideShare.
In this paper you can read how we used Dropbox to build a multi-computer that evolves a set of cooperating islands running a Parallel Evolutionary Algorithm on two classic Evolutionary Computation problems (MMDP and 4-TRAP). Up to the number of computers we tested, the system is scalable, since the evaluation time per individual decreases as we add computing nodes to the multi-computer.
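The paper describes the actual mechanism; just to illustrate the general idea, here is a loose Python sketch of islands exchanging migrants through a folder kept in sync by Dropbox. The folder path, file naming and JSON format are assumptions of mine, not the implementation used in the paper.

```python
import glob
import json
import os

# Loose sketch of file-based migration through a folder synchronized by
# Dropbox (or a similar service). Path, naming scheme and format are
# illustrative assumptions, not the setup described in the paper.
SHARED_DIR = os.path.expanduser("~/Dropbox/ea-islands")   # hypothetical path

def publish_migrants(island_id, generation, individuals):
    """Write this island's emigrants so the other islands can pick them up."""
    os.makedirs(SHARED_DIR, exist_ok=True)
    path = os.path.join(SHARED_DIR, f"island-{island_id}-gen-{generation}.json")
    with open(path, "w") as f:
        json.dump(individuals, f)

def collect_migrants(island_id):
    """Read the immigrants published by every other island."""
    migrants = []
    for path in glob.glob(os.path.join(SHARED_DIR, "island-*.json")):
        if os.path.basename(path).startswith(f"island-{island_id}-"):
            continue  # skip this island's own files
        with open(path) as f:
            migrants.extend(json.load(f))
    return migrants
```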
After the presentation, some attendees asked questions that I can clarify here in case you have the same doubts. The first was whether all the traffic generated on the network shared by the nodes of the multi-computer goes up to the Dropbox server before being distributed. The answer is clearly «no». In view of the results of this paper and of the one presented in New Orleans at CEC (Cloud-based Evolutionary Algorithms: An algorithmic study), Dropbox distributes the data first to the computers where the folder is synchronized, and only afterwards to the Dropbox server.
The second question was about the evolution scheme of the islands, which is specified in detail in the paper. In general terms, each island evolves a population of individuals with a generational scheme and a classic evolutionary algorithm with a uniform crossover operator and a flip-flop (bit-flip) mutation operator for binary encoding.
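To make that evolution scheme concrete, here is a minimal, self-contained sketch of a generational GA with uniform crossover and bit-flip mutation over binary strings. The fitness function (OneMax), the tournament selection and all parameter values are arbitrary choices of mine, not the settings used in the paper.

```python
import random

# Minimal generational GA with uniform crossover and bit-flip mutation for
# binary strings. Fitness, selection and parameter values are illustrative
# choices, not the configuration used in the paper.

def onemax(bits):                     # placeholder fitness: count the ones
    return sum(bits)

def tournament(pop, k=2):
    return max(random.sample(pop, k), key=onemax)

def uniform_crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def bit_flip(bits, rate):
    return [1 - b if random.random() < rate else b for b in bits]

def evolve(pop_size=32, length=64, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Full generational replacement: build a brand-new population.
        pop = [bit_flip(uniform_crossover(tournament(pop), tournament(pop)), 1 / length)
               for _ in range(pop_size)]
    return max(pop, key=onemax)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", onemax(best))
```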
Dynamic Control in Evolutionary Algorithms
Last Friday, in our weekly meeting, the paper by Di Tollo et al., “From Adaptive to More Dynamic Control in Evolutionary Algorithms”, was presented and discussed. This work is centered on adapting the application rates of different types of crossover. A performance function is defined that takes into account the quality of the solutions and the diversity generated by each crossover. By varying a user-defined parameter (theta), the relative importance of each factor can be regulated in order to set the desired compromise between quality and diversity (which suggests the idea of applying a multi-objective approach here). Then, after credit assignment (for each crossover), an operator is selected by a Probability Matching (PM) or Multi-Armed Bandit (MAB) strategy.
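As a reminder of how Probability Matching works in general (this is not the authors' exact implementation), here is a small sketch; the reward values, the adaptation rate alpha and the minimum probability p_min are placeholders of mine.

```python
import random

# Generic Probability Matching (PM) for adaptive operator selection. In the
# paper, the reward (credit) of each crossover combines solution quality and
# diversity weighted by theta; here the reward is just an external number.

class ProbabilityMatching:
    def __init__(self, n_ops, alpha=0.3, p_min=0.05):
        self.quality = [0.0] * n_ops   # running quality estimate per operator
        self.alpha = alpha             # adaptation (forgetting) rate
        self.p_min = p_min             # minimum selection probability
        self.n = n_ops

    def probabilities(self):
        total = sum(self.quality)
        if total == 0:
            return [1.0 / self.n] * self.n
        return [self.p_min + (1 - self.n * self.p_min) * q / total
                for q in self.quality]

    def select(self):
        return random.choices(range(self.n), weights=self.probabilities())[0]

    def update(self, op, reward):
        # Exponential recency-weighted estimate of the operator's quality.
        self.quality[op] += self.alpha * (reward - self.quality[op])
```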
For testing the proposed scheme, the authors define a framework with 20 different crossover operators whose main characteristics are known (i.e., whether they favor intensification/exploitation or diversification/exploration). The system is applied to SAT problems. Several conclusions are drawn from these simple experiments. First, the type of SAT problem greatly influences the behavior of the system, as do the criteria used to compute the performance of the operators and the selection strategy (PM or MAB). That is, setting theta to a hypothetical compromise value between intensification and diversification leads to a variety of different behaviors and not necessarily to the expected compromise.
The second part of the experiments focuses on the dynamic variation of theta. The authors conclude that the variation strategy influences the behavior of the algorithm and the progress of the system. They also conclude that it is indeed possible to favor diversity-oriented crossover or quality-oriented crossover by tuning theta; that is, it is possible to control the desired features by changing the theta value during the search.
Talk about parameter setting in Evolutionary Algorithms
Next Friday I’ll give a talk about parameter control and parameter tuning (focusing on the latter) in EAs. I’ve read several papers by Professor A. E. Eiben on this interesting area, and I will explain the differences among several kinds of parameters, different classifications of settings, and the state of the art, with examples. You are all invited.
Date: Friday 15th. 12:00 pm.
Place: Sala de Juntas, ETS. Ingenierías en Informática y Telecomunicación. Universidad de Granada.
Pablo.
Things I’ve learned reading just ONE paper
One of the most important parts of the research job is reading papers. In the early stages of a new research line you are going to dig into, the first month of reading may seem fruitless, because you may feel you are not doing anything: no new experiments or anything of the sort. But reading a paper by one of my friends (and fellow member of GeNeura) made me realize how much you can learn from just one paper. As an example, I am going to write down all the concepts I’ve learned reading his paper «EvAg: a scalable peer-to-peer evolutionary algorithm» (you can download it from here).
It is a paper about P2P populations in which each individual of a GA lives in a different node of the network (fine-grained), and it explains how information is exchanged among the nodes. Many experiments are performed, but I am going to concentrate on some concepts rather than on the results, because Juanlu explained those in previous posts of this blog.
Panmictic population: each member of the population has the same probability of reproducing with any other. Seen as a graph, it corresponds to a fully connected graph.
Selectorecombinative GA: a mutation-free GA. It is used in population studies, because the only source of diversity is the initial population.
Trap functions: scalable and decomposable functions (divided into Building Blocks, BBs). Varying the size k of each block also varies the difficulty of the problem. Fitness depends on the number of ones in the block; for example, with k=4 the fitness for 0, 1, 2, 3 and 4 ones is 3, 2, 1, 0 and 4, respectively. They can be deceptive (one basin of attraction leads to a sub-optimum, the other to the global optimum). A small code sketch of this function follows the list of metrics below.
Watts-Strogatz network: a small-world graph where nodes form clusters, like the real Internet. It is very easy to create a WS network from a ring topology (just rewire each edge to a different node with probability p, so some of the original connections are replaced). In these networks the selection pressure is equivalent to that of panmictic populations, but they are more scalable since there are fewer edges.
Takeover time: the time it takes for the best individual to take over the entire population.
Metrics for experiments:
- Success Rate: number of times the algorithm finds the optimum.
- Average Evaluations to find Solution (AES)
- Best fitness convergence
- Genotypic entropy: diversity of the population, based on the genotypic distance to the optimum genome (using Hamming distance, for example). The Wilcoxon test can be used to compare results.
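To make a couple of the items above more tangible, here is a small Python sketch of a k-trap block fitness and a genotypic-entropy-style diversity measure based on Hamming distance to the optimum. The block size k=4, the all-ones optimum and the entropy estimator are simplifying assumptions of mine, not the exact definitions used in the paper.

```python
import math
from collections import Counter

# Sketch of a k-trap block fitness and a genotypic diversity measure based on
# the Hamming distance to the optimum. k, the optimum and the entropy
# estimator are simplifying assumptions, not the paper's exact definitions.

def trap(block, k=4):
    """k-trap: f = k - 1 - u for u < k ones, f = k for the all-ones block."""
    u = sum(block)
    return k if u == k else k - 1 - u

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def genotypic_entropy(population, optimum):
    """Shannon entropy of the distribution of Hamming distances to the optimum."""
    counts = Counter(hamming(ind, optimum) for ind in population)
    n = len(population)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

if __name__ == "__main__":
    print(trap([0, 0, 0, 0]), trap([1, 1, 1, 0]), trap([1, 1, 1, 1]))  # 3 0 4
    pop = [[1, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 0, 0]]
    print(genotypic_entropy(pop, [1, 1, 1, 1]))
```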
As you can see, no information about the results of this paper is presented in this post; I’ve just written about new things I didn’t know.
Tip: I am using Evernote to take notes on every paper I read. Evernote is like Dropbox, but for notes. I jot down shopping lists, projects, to-do lists… and I have full synchronization across all my computers (Mac OS X, GNU/Linux) and mobile devices (Android or iPhone). This program has many uses, so I recommend taking a look at it.
Pablo.
Paper Seminar: Evolutionary Computing and Autonomic Computing: Shared Problems, Shared Solutions?
This Friday, March 11th, we will discuss the paper Evolutionary Computing and Autonomic Computing: Shared Problems, Shared Solutions? by Prof. A.E. Eiben in our Friday Paper Seminar. It is quite a position paper about Self-* properties in Evolutionary Algorithms and the other way around: Evolutionary Algorithms in Self-* Computing.
You are welcome to attend the chat in the Sala de Juntas at the ETSIIT.
Passive Walking
Continuing with robots, in the last talk we presented another interesting paper, this time written by Tad McGeer:
The paper is about the role of knees in passive dynamic walking bipeds. It shows that, given just a slope, gravitational energy is enough to keep a pair of legs walking downhill: with no other motor input, the machine settles into a natural gait. This is evidence that morphological computation is implicit in natural locomotion systems. In this sense, the paper has inspired new controller systems in which the CoT (Cost of Transport) can be optimized to bring it close to the human value (around 0.2, versus 3.27 for Honda's Asimo robot). Furthermore, the prosthetics field could also benefit from these new controllers, and maybe evolutionary computation has new roles and goals to accomplish here, but that is another research issue.
Environment-driven evolution in robot swarms
In the last Friday paper seminar we were discussing the paper:
- Environment-Driven Embodied Evolution in a Population of Autonomous Agents by Nicolas Bredeche and Jean-Marc Montanier
which was presented at PPSN XI, a conference we attended, as we mentioned.
The authors present a nice piece of work on swarm robotics in which they try to evolve robot controllers using a fixed-size population of autonomous robots. Evolution takes place in a decentralized fashion, with no information provided about a possibly changing environment. In that context, evolution is challenged to react to changes on-line and self-adapt to the environment without the global knowledge of the problem that a fitness function would provide. In this way, «fitness» is implicit in the environment, and the success criterion of a given strategy is defined as follows: a strategy is successful if it manages to spread over the population.
To that end, the authors propose mEDEA (minimal Environment-driven Distributed Evolutionary Adaptation), an intuitive algorithm that tackles the problem and tries to evolve controllers following a simple but elegant rule: those robot controllers that maximize the number of matings while avoiding running out of energy will succeed in spreading their genomes.
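To give a flavour of how such an environment-driven scheme can look in code, here is a very rough, abstracted sketch of a single robot's lifecycle in the spirit of mEDEA; the genome representation, the (omitted) energy model and the Gaussian mutation are simplifications of mine, not the authors' implementation.

```python
import random

# Abstracted per-robot lifecycle in the spirit of mEDEA: run the active
# genome, broadcast it to robots within range, and at the end of a lifetime
# replace it with a randomly chosen (and mutated) imported genome. Genome
# encoding, energy handling and mutation are simplifications of mine.

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

class Robot:
    def __init__(self, genome):
        self.genome = genome      # parameters of the active controller
        self.imported = []        # genomes received from nearby robots
        self.active = True

    def step(self, neighbours):
        if not self.active:
            return
        # ... move according to self.genome, harvest or consume energy ...
        for other in neighbours:  # local broadcast while within range
            other.imported.append(list(self.genome))

    def end_of_lifetime(self):
        if self.imported:
            # Selection is implicit: only genomes that spread get picked up.
            self.genome = mutate(random.choice(self.imported))
            self.active = True
        else:
            self.active = False   # no mating events: this genome dies out
        self.imported = []
```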