
Detection and prediction of people and vehicle flows

As part of the CIMAS 21 conference, which will be held in Granada, I will give a presentation on the possibilities of our WiFi and Bluetooth frame detection system, which we have already talked about several times.

The presentation will focus on the more analytical aspects of the platform, looking at the possibilities it may offer for a tourist destination with a sports emphasis.




Early prediction of the outcome of StarCraft games

As a result of Antonio Álvarez Caballero's master's thesis, tomorrow we'll be presenting a poster at the IJCCI 2017 conference on the early prediction of StarCraft games.
The basic idea behind this line of research is to try and find a model of the game so that we can do fast fitness evaluation of strategies without playing the whole game, which can take up to 60 minutes. That way, we can optimize those strategies in an evolutionary algorithm and find the best ones.
In our usual open science style, paper and data are available in a repository.
Our conclusions say that we might be able to pull that off using the k-nearest neighbours algorithm. But we might have to investigate a bit further if we really want to find a model that gives us some insight into what makes a strategy a winner.
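To make the idea concrete, here is a minimal sketch of that kind of early predictor, using scikit-learn's k-nearest neighbours classifier. The features and data are synthetic stand-ins, not the ones from the paper.

```python
# Minimal sketch of early-outcome prediction with k-nearest neighbours.
# The feature set and data layout are illustrative only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Stand-in for early-game features (e.g., resources gathered, units built,
# buildings standing at minute 5) and the eventual winner of each game.
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)  # 1 if player A eventually won

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10)
print(f"Cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```

If the early-game features carry enough signal, a predictor like this can stand in for playing the full game when evaluating the fitness of a strategy.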


Dark clouds allow early prediction of heavy rain in Funchal, near where IJCCI is taking place

Self-organized criticality in software repositories, poster presented at ECAL 2017


The European Conference on Artificial Life, or ECAL, is not one of our usual suspects. Although we have attended from time to time, and even organized it back in 95 (yep, that is a real web page from 1995, minus the slate gray background), it is a conference I quite enjoy, together with other artificial-life-related conferences. Artificial life was quite the buzzword in the 90s, but nowadays, with all the deep learning and AI stuff, it has gone out of fashion; the last time I attended, ten years ago, it seemed more crowded. Be that as it may, I have presented a tutorial and a poster about our work on looking for critical states in software repositories. This is the poster itself, and there is a link to the open access proceedings, although, as you know, all our papers are online and you can obtain that one (and a slew of other ones) from our repository.
This is a line of research we have been working on for a year now, starting from an initial paper where we examined a single repository, the one for the Moose Perl module. We are looking for patterns that allow us to say whether repositories are in a critical state or not. Since they are completely artificial systems, engineering artefacts, looking for self-organized criticality might seem like a lost cause. On the other hand, it really clicks with our own experience when writing a paper, or anything really: you write in long stretches, and then you do small sessions where you change a line or two.
This paper, which looks at all kinds of open source projects, from Docker to vue.js, checks three different signatures: long-distance correlations, scale-free behavior of the size of changes, and pink noise in the spectral density of the time series of changes. And we do find them, almost everywhere. Most big repos, those with more than a few hundred commits, show them, independently of their language or origin (hobbyist or company).
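For the curious, this is roughly how those three signatures can be checked on a series of per-commit change sizes. The series below is synthetic and the estimators deliberately simple; this is not the exact procedure of the paper.

```python
# Sketch: checking the three SOC signatures on a series of per-commit
# change sizes (lines changed per commit, e.g. from `git log --shortstat`).
import numpy as np

changes = np.random.pareto(1.5, 2000) + 1  # stand-in for lines changed per commit

# 1. Long-distance correlations: autocorrelation at increasing lags.
def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print([round(autocorr(changes, lag), 3) for lag in (1, 10, 100)])

# 2. Scale-free behavior: change sizes should follow a power law, i.e.
#    roughly a straight line in a log-log rank/size plot.
ranks = np.arange(1, len(changes) + 1)
sizes = np.sort(changes)[::-1]
slope = np.polyfit(np.log(ranks), np.log(sizes), 1)[0]
print(f"log-log rank/size slope: {slope:.2f}")

# 3. Pink noise: spectral density S(f) ~ 1/f^alpha with alpha close to 1.
freqs = np.fft.rfftfreq(len(changes))[1:]
power = np.abs(np.fft.rfft(changes - changes.mean()))[1:] ** 2
alpha = -np.polyfit(np.log(freqs), np.log(power), 1)[0]
print(f"spectral exponent alpha: {alpha:.2f}")
```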
There is still a lot of work ahead. What are the main mechanisms for this self-organization? Are there any exceptions? That will have to wait until the next conference.


Asynchronous, heterogeneous, pool-based evolutionary algorithms at GECCO 2017

Fresh back from GECCO 2017, which is probably the main event for evolutionary algorithms and other metaheuristics. Together with the conference proper, there are workshops and tutorials. Last year we achieved a full score, with papers, posters and tutorials; unfortunately, not this year.
We’re happy, though, with the two papers that were accepted at the EvoSoft workshop, which we usually attend, and the BBOB benchmarking workshop. Both used the same thing: EvospaceJS, Mario’s framework for working with tuple-space, pool-based evolutionary algorithms. The idea of this pool is to decouple the algorithms from the population. And as soon as you do that, a world of possibility opens, like using different clients on the same pool. In the EvoSoft paper, evospace-js: asynchronous pool-based execution of heterogeneous metaheuristics, we presented the general framework and a proof of concept which combined PSO and evolutionary algorithms, with very interesting results. Here’s the rather laconic presentation, which is one more reason to check out the paper.
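As an illustration of the pool idea (and only that: this is not the EvospaceJS API, just a single-process sketch), here is how two heterogeneous workers, one GA-flavored and one PSO-flavored, can operate on the same shared population.

```python
# Minimal sketch of a population pool decoupled from its workers.
# Heterogeneous clients take samples from a shared pool and put
# improved individuals back; names and sizes are illustrative.
import random

POOL = [[random.random() for _ in range(10)] for _ in range(50)]

def fitness(ind):
    return -sum(x * x for x in ind)  # sphere function, to be maximized

def ga_worker(pool, samples=10):
    """Mutate the best of a random sample; replace the sample's worst."""
    sample = random.sample(range(len(pool)), samples)
    best = max(sample, key=lambda i: fitness(pool[i]))
    child = [x + random.gauss(0, 0.1) for x in pool[best]]
    worst = min(sample, key=lambda i: fitness(pool[i]))
    if fitness(child) > fitness(pool[worst]):
        pool[worst] = child

def pso_worker(pool, samples=10):
    """Move a randomly sampled individual towards the sample's best."""
    sample = random.sample(range(len(pool)), samples)
    best = pool[max(sample, key=lambda i: fitness(pool[i]))]
    i = random.choice(sample)
    pool[i] = [x + 0.5 * (b - x) for x, b in zip(pool[i], best)]

# Workers of different kinds interleave asynchronously on the same pool.
for _ in range(1000):
    random.choice([ga_worker, pso_worker])(POOL)

print(max(fitness(ind) for ind in POOL))
```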
The second paper, Benchmarking a pool-based execution with GA and PSO workers on the BBOB noiseless testbed, did what its title says: put that heterogeneous pool of GA and PSO workers to the test on the BBOB noiseless benchmark suite.
All in all, EvospaceJS and NodIO, the two frameworks we work with, offer a nice platform for experimentation with different kinds of algorithms, one that can be easily transported to the cloud and adapted to volunteer computing environments. Whatever the case, the pool itself has interesting dynamics that influence the working of the evolutionary algorithms. We will surely continue tapping this source of interesting insights on evolutionary models.


Entropy is the best predictor of volunteer computing system performance

In volunteer computing systems, users get to decide when, and how much, their own computers are going to work on a particular problem. We have been working for some time on using volunteer computing for evolutionary algorithms, and our efforts have focused on having a scalable back end and on characterizing how users behave. A priori, one would think that the more users, the better. However, since these systems are asynchronous and have heterogeneous capabilities, it might happen that new users do not really contribute anything to the overall effort.
In this paper, presented at the EvoStar conference this week, we took a different approach to analyzing performance by using compression entropy, computed over the number of contributions per minute. The bigger the compression, the more uniform the contributions; the lower the compression, the more the contributions change all the time. After some preliminary reports published in FigShare, we found a clear trend: increasing entropy makes the algorithm end much faster. This contradicts our initial guess, and also opens new avenues for the design of volunteer evolutionary computing systems, and probably other systems whose performance depends on diversity, such as evolutionary algorithms.
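In case you want to play with the idea, here is a minimal sketch of a compression-based entropy measure over a contributions-per-minute series; the traces are synthetic, and the exact measure used in the paper may differ.

```python
# Sketch of compression entropy over a contributions-per-minute series.
import random
import zlib

def compression_entropy(series):
    """Compressed-to-raw size ratio: higher means a less regular series."""
    raw = bytes(min(int(v), 255) for v in series)
    return len(zlib.compress(raw, 9)) / len(raw)

uniform = [10] * 600                                     # same rate every minute
irregular = [random.randint(0, 50) for _ in range(600)]  # changing all the time

print(compression_entropy(uniform))    # low ratio: very compressible
print(compression_entropy(irregular))  # high ratio: barely compressible
```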
Check out the poster and also the presentation given at the conference. You will miss, however, the origami tulips we gave out to visitors of the poster.
In our research group we support open science; that is why you can find everything, from data to processing scripts to the sources of this paper, in the GitHub repository.


Benchmarking evolutionary algorithms

People tend to think that there is a single obvious choice of language for implementing evolutionary algorithms: whatever language they are most familiar with or, by default, Java or C++. So after receiving several carps from reviewers who didn’t like our use of non-conventional languages like JavaScript or Perl, we decided to test a pile of languages on simple evolutionary operations, mutation and crossover, and also on a very common benchmark, OneMax.
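To give an idea of what such a micro-benchmark looks like, here is a minimal sketch in Python (one possible language among the pile): it times bit-flip mutation together with OneMax evaluation on bitstrings of growing length. Sizes and repetition counts are illustrative.

```python
# Minimal sketch of the language micro-benchmark described above:
# time the basic evolutionary operations on random bitstrings.
import random
import time

def onemax(chromosome):
    """OneMax fitness: number of ones in the bitstring."""
    return sum(chromosome)

def mutate(chromosome):
    """Flip one random bit in place."""
    i = random.randrange(len(chromosome))
    chromosome[i] ^= 1
    return chromosome

for length in (16, 64, 256, 1024):
    individual = [random.randint(0, 1) for _ in range(length)]
    start = time.perf_counter()
    for _ in range(100_000):
        onemax(mutate(individual))
    elapsed = time.perf_counter() - start
    print(f"length {length:5d}: {elapsed:.3f}s for 100k mutate+evaluate")
```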
Our poster was accepted at the ECTA conference and we recently presented it in Porto. Have a look at the paper and poster source, https://github.com/geneura-papers/2016-ea-languages-PPSN/releases/tag/v1.0ECTA, which uses Knitr, and check out the poster.


Evolutionary (and other) algorithms in the cloud

The cloud is where you run your applications, but it is also how you will design your algorithms from now on. Evolutionary algorithms are especially suited for this, and that is why I have given tutorials on how to adapt evolutionary algorithms to the cloud at PPSN and, lately, when one of the keynotes fell through, an updated and abridged version at ECTA 16.
In these tutorials I give an introduction to what the cloud is and what it means: basically, creating applications as loosely connected, polyglot, multi-vendor sets of different programs. This will spawn a slew of new algorithms, starting with the pool-based evolutionary algorithms we have been working on for so long.
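As a taste of what that decoupling looks like in the cloud, here is a minimal sketch of a population pool exposed as a web service, so that loosely coupled clients written in any language can take and return individuals over HTTP. This illustrates the architecture only; it is not NodIO's or EvospaceJS's actual API.

```python
# Minimal sketch of a pool exposed as a web service: polyglot clients
# GET a random individual, improve it however they like, and POST it back.
from flask import Flask, jsonify, request
import random

app = Flask(__name__)
pool = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]

@app.route("/random", methods=["GET"])
def get_random():
    """Hand a random individual to whatever client asks for one."""
    i = random.randrange(len(pool))
    return jsonify(index=i, individual=pool[i])

@app.route("/put/<int:i>", methods=["POST"])
def put_individual(i):
    """Accept a (presumably improved) individual back into the pool."""
    pool[i] = request.get_json()["individual"]
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(port=5000)
```

Since the interface is plain HTTP and JSON, the clients can be JavaScript in a browser, Perl on a server, or anything else, which is exactly the kind of loosely connected, polyglot setup the tutorials describe.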