P2P technology for computing tasks does not always mean P2P computing

What would you think of a Bugatti Veyron limited to a maximum speed of 20 km/h? Hmm… yes! Give me my money back!

… well, that’s roughly my feeling when I read a paper on P2P computing that restricts its scalability analysis to just a few nodes. Of course, the analogy is not completely fair, since performing a truly massively distributed (and decentralized) experiment presents challenges that, in most cases, remain beyond the current state of the art. That happens, for instance, in P2P environments applied to optimization, and more precisely to Evolutionary Computation.

Usually, you can find two different approaches for testing P2P optimization: either using simulations or using a «grid-like style» in real environments. Each presents its own advantages and drawbacks:

  • Using a real P2P environment (we performed such a study using an 8×2 cluster). The advantage here is that the design has to face real restrictions, such as communication and evaluation times, message-passing limits, or fault tolerance. Nevertheless, setting up a real, proper environment in which to test a model is extremely difficult.

When you face a real environment, you find that:

  • Large numbers of resources are hard to acquire
  • Scalability is hard to study. With only a few nodes, no real P2P computing is going on, since no conclusions about large-scale behaviour can be drawn. On the other hand, once you have a few dozen peers, other questions such as fault tolerance arise.

In last Friday’s paper seminar, our team discussed the following paper, which uses the «grid-like style» for testing:

in which the authors propose a hybrid model combining islands with cellular EAs: every peer holds an island, and every island a cellular EA. As previously noted for grid-like testing, the scalability analysis is quite limited (up to 10 peers); moreover, the algorithm yields its best performance in the five-peer scenario, which points to really poor scalability of the model. Nevertheless, P-CAGE does outperform canonical EAs, making it a valid solution based on P2P technology; it is just that such a solution is not really scalable and therefore cannot truly be understood as P2P computing.
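To make the islands-of-cellular-EAs structure concrete, here is a toy Python sketch of the general idea: each peer would hold a small toroidal grid where every cell mates with its best neighbour, and islands periodically exchange their best individuals in a ring. Everything here (grid size, the OneMax objective, the migration policy, parameter values) is an illustrative assumption, not P-CAGE’s actual design or code.

```python
import random

GRID = 4          # each island holds a GRID x GRID cellular EA
N_ISLANDS = 5
GENOME_LEN = 16

def fitness(ind):
    # toy objective (OneMax: count of ones), standing in for the real problem
    return sum(ind)

def random_ind():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def neighbours(i, j):
    # von Neumann neighbourhood on a toroidal grid, as in cellular EAs
    return [((i - 1) % GRID, j), ((i + 1) % GRID, j),
            (i, (j - 1) % GRID), (i, (j + 1) % GRID)]

def step_island(grid):
    new = [[None] * GRID for _ in range(GRID)]
    for i in range(GRID):
        for j in range(GRID):
            # mate each cell with its best neighbour (local selection)
            mate = max((grid[x][y] for x, y in neighbours(i, j)), key=fitness)
            cut = random.randrange(GENOME_LEN)
            child = grid[i][j][:cut] + mate[cut:]
            if random.random() < 1 / GENOME_LEN:   # bit-flip mutation
                k = random.randrange(GENOME_LEN)
                child[k] ^= 1
            # elitist replacement: keep the child only if it is no worse
            new[i][j] = child if fitness(child) >= fitness(grid[i][j]) else grid[i][j]
    return new

def migrate(islands):
    # ring migration: the best of island k replaces a random cell of island k+1
    bests = [max((c for row in isl for c in row), key=fitness) for isl in islands]
    for k, isl in enumerate(islands):
        i, j = random.randrange(GRID), random.randrange(GRID)
        isl[i][j] = bests[(k - 1) % len(islands)][:]

islands = [[[random_ind() for _ in range(GRID)] for _ in range(GRID)]
           for _ in range(N_ISLANDS)]
for gen in range(30):
    islands = [step_island(isl) for isl in islands]
    if gen % 5 == 4:
        migrate(islands)

best = max((c for isl in islands for row in isl for c in row), key=fitness)
print(fitness(best))
```

In a real P2P deployment each island would of course run on its own peer, with migration happening over the network; the sketch only shows the two-level structure, not the distribution.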

To conclude, I do think that, at this stage of research, simulations can benefit the understanding of complex interactions in P2P EAs, preventing situations such as the one shown above. Afterwards, there will be time to validate the models in real environments, letting theory meet practice.

Pervasive evolutionary algorithms on mobile devices

Last week I received the acceptance notification for our paper at the DCAI conference in Salamanca. This time we are going to present a distributed algorithm framework for mobile devices using Bluetooth. OK, today I am feeling quite lazy (today and every other day, actually), so I think the best idea is to copy the abstract.

Abstract. This work presents a Java framework which makes it easy to implement connectivity applications via Bluetooth. Nowadays it is difficult to program Bluetooth devices, so a high-level Application Programming Interface (API) is needed to simplify the creation of applications on the Java ME and Java SE platforms, the most widespread ones. As a solution we present the development of a distributed computing environment using a layered, client-server, event-based architecture with asynchronous communication. In addition, we solve two well-known evolutionary computation problems (the Traveling Salesman Problem and the Wave Function Problem) as examples of use.

The most interesting thing is that we used real mobile phones to run the experiments, with all the associated problems. It is difficult to find a compatible mobile phone with a Bluetooth stack that works properly. It is not even easy to get two phones from the same manufacturer, but of different models, to communicate! But we finally managed to run the experiment, as you can see in the photo below.

Two Nokias executing our distributed algorithm. Photo by DraXus.

You can download the draft from here.

Immigrants do all the work

«Genetic Algorithms with Memory- and Elitism-based Immigrants in Dynamic Environments» was the paper selected for presentation and discussion in last Friday’s seminar. The article, authored by Shengxiang Yang, was published in the last Fall edition of Evolutionary Computation and addresses evolutionary optimization in dynamic environments.

Yang proposes two Genetic Algorithms (GAs) for dynamic optimization based on Grefenstette’s classical Random Immigrants GA (RIGA). RIGA tackles (or tries to tackle) changing optima by inserting, in each and every generation, a certain number of randomly generated individuals that replace the worst individuals in the population (or randomly selected individuals, in another version). This way, genetic novelty is constantly introduced and the population is expected to retain enough diversity to react to changes in the environment. However, RIGA suffers from a major “weakness”: the raw building-blocks introduced by the randomly generated individuals are quickly removed from the population because their fitness is usually below average. RIGA is frequently chosen as a peer-algorithm for comparison purposes in evolutionary dynamic optimization studies, but, due to this “weak spot”, one may question whether a Standard GA is not better suited to assess the efficiency of a new method (in fact, studies currently being developed in our lab reinforce this hypothesis). In order to improve RIGA’s performance, several alternative RIGA-based methods have been proposed in the past few years.
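The random-immigrants step described above can be sketched in a few lines of Python. This is a minimal illustration, not Grefenstette’s original setup: the OneMax objective and all parameter values are assumptions chosen just to make the snippet self-contained.

```python
import random

POP_SIZE, GENOME_LEN, N_IMMIGRANTS = 20, 12, 3

def fitness(ind):
    # toy stand-in objective (OneMax); any dynamic fitness would go here
    return sum(ind)

def random_ind():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def insert_random_immigrants(pop):
    # RIGA step: replace the worst individuals with fresh random ones,
    # injecting genetic novelty in every generation
    pop.sort(key=fitness, reverse=True)
    pop[-N_IMMIGRANTS:] = [random_ind() for _ in range(N_IMMIGRANTS)]
    return pop

pop = [random_ind() for _ in range(POP_SIZE)]
pop = insert_random_immigrants(pop)
```

The weakness discussed above is visible here: the fresh random individuals land at the bottom of the ranking, so ordinary fitness-proportional selection tends to discard them in the very next generation.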

The two GAs described in Yang’s paper try to overcome the problem with random immigrants by inserting in the population mutated copies of the elite — Elitism-based Immigrants Genetic Algorithm (EIGA) — or mutated copies of the chromosomes kept in a memory — Memory-based Immigrants Genetic Algorithm (MIGA). Memory-based approaches for dynamic optimization use a memory to store good solutions, which are retrieved later, periodically, or when the environment changes. Memory GAs are known to improve on traditional GAs’ performance when the dynamics of change are cyclic, that is, when the fitness function returns to previous “shapes” from time to time; on the other hand, memory schemes are not as effective when the changes are not periodic. Therefore, and as expected, MIGA outperforms other GAs when the changes are cyclic. EIGA is better when the changes in the environment are not severe. This behaviour is explained by the fact that introducing mutated copies of the best individual provides the GA with means to tackle small changes: the algorithm maintains a kind of sub-population around the optimal solution, and small shifts in the environment are easily tracked by those near-optimal individuals.
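The elitism-based variant changes only where the immigrants come from. A hedged sketch, using the same toy objective and made-up parameter values as the RIGA snippet (this is not Yang’s code; MIGA would be identical except that the base chromosome is retrieved from the memory rather than from the current population):

```python
import random

POP_SIZE, GENOME_LEN, N_IMMIGRANTS, P_MUT = 20, 12, 3, 0.1

def fitness(ind):
    # toy stand-in objective (OneMax)
    return sum(ind)

def mutated_copy(ind):
    # clone the chromosome and flip each bit with probability P_MUT
    return [b ^ 1 if random.random() < P_MUT else b for b in ind]

def insert_elite_immigrants(pop):
    # EIGA step: immigrants are mutated copies of the current elite,
    # forming a small cloud of near-optimal individuals around it
    elite = max(pop, key=fitness)
    pop.sort(key=fitness, reverse=True)
    pop[-N_IMMIGRANTS:] = [mutated_copy(elite) for _ in range(N_IMMIGRANTS)]
    return pop

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
pop = insert_elite_immigrants(pop)
```

Because the immigrants start close to the elite, they survive selection far more often than random immigrants do, which is exactly why EIGA copes well with small environmental shifts.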

Summarizing, the study shows that MIGA and EIGA are able to outperform other GAs under the conditions of the test set. However, one question remains unanswered: what happens when the parameter values change? For instance, diversity-maintenance schemes for dynamic optimization deal with non-stationary environments by keeping diversity at a higher level, which means the optimal mutation probability of these algorithms may differ from that of Standard GAs. Shouldn’t a proper comparison between the algorithms consider a range of possible mutation probabilities (Yang’s studies used the traditional pm = 1/l, where l is the chromosome length)? And what about population size? Isn’t population size the major bottleneck for GAs’ performance in stationary environments? Is it possible that varying the population size of the GA for dynamic optimization leads the research to different conclusions? Studies currently under way will try to answer some of these questions.

Genetic algorithm with rough set theory

Last Friday we discussed the paper «The generic genetic algorithm incorporates with rough-set theory – An application of the web services composition» by Liang and Huang. This is the standard mixture paper: a kind of algorithm + another soft computing technique + a real-world application = a complete paper. And I am not kidding: I think that combining several techniques and, more importantly, using them in real applications should be a basis of research.

Rough set theory provides a way to derive a set of decision rules that can be selected for any problem with functional requirements, for example in the extensive area of web service composition. We can feed this information to a genetic algorithm so that it composes services using those decision rules, keeping constraint-violating solutions out of the initial population. The authors also use non-functional requirements, such as QoS, cost, or availability, in the fitness function. They conclude that the use of rough sets in a GA can speed up convergence (but it is necessary to keep some unfeasible solutions during the process, just in case).
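As a rough illustration of the seeding idea (not the authors’ method or encoding), decision rules could be represented as fixed gene positions and used to build the initial population so that every individual already satisfies one rule. The rules, the binary encoding, and all values below are entirely hypothetical.

```python
import random

GENOME_LEN = 8

# hypothetical decision rules, as rough-set reduction might produce them:
# each rule fixes some gene positions (service choices) to required values
rules = [{0: 1, 3: 0}, {1: 1, 5: 1}]

def seeded_ind(rule):
    # random chromosome that already satisfies one decision rule,
    # so the initial population avoids constraint-violating solutions
    ind = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    for pos, val in rule.items():
        ind[pos] = val
    return ind

def satisfies(ind, rule):
    return all(ind[pos] == val for pos, val in rule.items())

pop = [seeded_ind(random.choice(rules)) for _ in range(10)]
print(sum(any(satisfies(ind, r) for r in rules) for ind in pop))
# prints 10 (every individual satisfies at least one rule)
```

From there, a standard GA would take over, with the non-functional requirements (QoS, cost, availability) folded into the fitness function as the paper describes.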

It is an easy-to-read paper, so you would probably rather read it than my summary ;) Moreover, they present some interesting ideas in the web service composition area.