Applying genetic algorithms to a fuzzy controller for adaptive traffic management

The article “A hybrid Fuzzy Genetic Algorithm for an adaptive traffic signal system” has recently been accepted by the open-access journal Advances in Fuzzy Systems. Several of the GeNeura researchers took part in it.

The abstract is:

This paper presents a hybrid algorithm that combines a Fuzzy Logic Controller (FLC) and Genetic Algorithms (GAs), and its application to a traffic signal system. FLCs have been widely used in many applications in diverse areas, such as control systems, pattern recognition, signal processing, and forecasting. They are, essentially, rule-based systems in which the rules and fuzzy membership functions are generally derived from verbally formulated rules that overlap through the parameter space, and they have a great influence on the performance of the system. On the other hand, the Genetic Algorithm is a metaheuristic that provides a robust search in complex spaces. In this work, it has been used to adapt the decision rules of the FLC that defines an intelligent traffic signal system, obtaining a higher performance than a classical FLC-based control. The simulation results yielded by the hybrid algorithm show an improvement of up to 34% in performance with respect to a standard Conventional Traffic Signal Controller (CTC), and of up to 31% with respect to a traditional Fuzzy Logic Controller (FLC).
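As a rough illustration of the general idea (not the paper's actual encoding or simulator, both of which are hypothetical here), a GA individual can simply be the list of consequents of the fuzzy rule base, with fitness coming from a traffic simulation:

```python
# Illustrative sketch only: each individual encodes the consequent of every fuzzy
# rule of a traffic-signal controller; fitness comes from a hypothetical simulator
# passed in as `simulate_traffic`, assumed to return the average vehicle delay.
import random

N_RULES = 9                      # e.g. a 3x3 grid of (queue length, waiting time) antecedents
CONSEQUENTS = [0, 1, 2]          # hypothetical green-time extensions: short / medium / long

def random_individual():
    return [random.choice(CONSEQUENTS) for _ in range(N_RULES)]

def fitness(individual, simulate_traffic):
    return -simulate_traffic(individual)          # lower delay -> higher fitness

def evolve(simulate_traffic, pop_size=30, generations=100, p_mut=0.1):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, simulate_traffic), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_RULES)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:           # point mutation on one consequent
                child[random.randrange(N_RULES)] = random.choice(CONSEQUENTS)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, simulate_traffic))
```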

We hope you like it (and that you cite it, of course :D).


Classifier Systems

Classifier systems are a fusion of evolutionary algorithms, reinforcement learning and supervised learning. They are known as Learning Classifier Systems. Last Friday I took advantage of the group meeting to present a brief historical review and to give details on perhaps the most important algorithm introduced in this field, Wilson's eXtended Classifier System, or XCS.

Basically, the algorithm uses genetic evolution and learning to search for a set of rules that model the solution to a problem in which a reward exists. Each rule consists of a condition and an action. The rule population represents, for any given condition, which action is the best. This is achieved by associating with the input space a prediction of the best future reward obtainable for each possible action.

Then, given a state that represents the environment, the rules whose condition matches are retrieved, and from those the action offering the best predicted future reward is chosen.
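A minimal sketch of this matching and action-selection step might look as follows (a toy simplification, not Wilson's full XCS, which uses a fitness-weighted prediction array and an explore/exploit policy):

```python
# Toy sketch of the match-set / action-selection step: a rule ("classifier") has a
# ternary condition over a bit string, an action and a reward prediction.
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str     # e.g. "1#0", where '#' matches either bit (generalization)
    action: int
    prediction: float  # estimated future reward for taking `action` in matching states

def matches(condition, state):
    return all(c == '#' or c == s for c, s in zip(condition, state))

def select_action(population, state):
    match_set = [cl for cl in population if matches(cl.condition, state)]
    # For each possible action keep the best predicted reward, then pick the action
    # with the highest prediction (pure exploitation, for simplicity).
    best = {}
    for cl in match_set:
        best[cl.action] = max(best.get(cl.action, float('-inf')), cl.prediction)
    return max(best, key=best.get) if best else None

rules = [Classifier("1#0", action=0, prediction=300.0),
         Classifier("11#", action=1, prediction=550.0)]
print(select_action(rules, "110"))   # -> 1
```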

The task is not easy: formal reinforcement learning algorithms require a priori deterministic knowledge of the possible inputs and of the transitions resulting from the actions, leaving little or no room for search and for applying generalization.

XCS solves this problem by introducing some adjustments to the genetic component. The general idea is basically to distribute the resources (rules) so that they cover the whole space with the greatest possible accuracy and generality. Since this is not something that can be summarized in a few lines, here is the presentation:

Unreal Expert Bots at IWANN 2013

Last week IWANN 2013 was held in Tenerife, an international conference mainly devoted to research within the scope of neural networks. There, Antonio Fernández Leiva, Raúl Lara and I organized the Special Session on Artificial Intelligence and Games.

There were five works in the session, one of them being “Designing and Evolving an Unreal Tournament 2004 Expert Bot”.

It describes the design and improvement, through off-line evolution (i.e. not during the game), of an autonomous agent (or bot) for playing the game Unreal Tournament 2004. The bot was created by means of a finite state machine that models the expert behaviour of a human player in 1 vs 1 deathmatch mode, following the rules of the international competition.

Then, the bot was improved by means of a Genetic Algorithm, yielding an agent that is, in turn, a very hard opponent for a medium-level human player and that can (easily) beat the default bots in the game, even at the maximum difficulty level.
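To give a rough idea of the approach (this is an illustrative toy, not the actual code from the repository linked below), the bot's behaviour can be thought of as a small finite state machine whose decision thresholds are exactly the kind of parameters a GA can tune off-line:

```python
# Toy finite state machine for a 1-vs-1 bot. The thresholds in PARAMS are
# hypothetical GA-tunable parameters, not those of the real Expert Bot.
PARAMS = {"retreat_health": 40, "attack_min_ammo": 5}

def next_state(state, health, ammo, enemy_visible, params=PARAMS):
    if health < params["retreat_health"]:
        return "RETREAT"             # go and pick up health packs
    if enemy_visible and ammo >= params["attack_min_ammo"]:
        return "ATTACK"
    if not enemy_visible:
        return "EXPLORE"             # search the map for the opponent / items
    return state                     # otherwise keep the current behaviour

state = "EXPLORE"
for health, ammo, seen in [(100, 10, False), (100, 10, True), (30, 2, True)]:
    state = next_state(state, health, ammo, seen)
    print(state)                     # EXPLORE, ATTACK, RETREAT
```

The GA would then evolve the values in PARAMS (and similar thresholds) by playing matches and using the outcome as fitness.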

The presentation can be seen at:

Moreover, you can watch one example of the evolution in the following video:

Finally, the source code of both the Unreal Expert Bot and the Genetic Bot is available at https://github.com/franaisa/ExpertAgent

Enjoy them. ;)

[GECCO’09] Improving Genetic Algorithms Performance via Deterministic Population Shrinkage

This year the Genetic and Evolutionary Computation Conference (GECCO) took place in Montréal (Québec, Canada), where we presented our latest work, in collaboration with the Laboratoire de Vision et Systèmes Numériques de l'Université Laval in Quebec City:

Despite the intuition that the same population size is not needed throughout the run of an Evolutionary Algorithm (EA), most EAs use a fixed population size.
This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs). It consists of decreasing the population size during a GA run following a predetermined schedule, configured by a speed and a severity parameter. The method uses as initial population size an estimation of the minimum size needed to supply enough building blocks, obtained with a fixed-size selectorecombinative GA converging within some confidence interval toward good solutions for a particular problem. Following this methodology, a scalability analysis is conducted on deceptive, quasi-deceptive, and non-deceptive trap functions in order to assess whether the SVPS-GA improves performance compared to a fixed-size GA under different problem instances and difficulty levels. Results show several speed-severity combinations for which the SVPS-GA preserves solution quality while improving performance, by reducing the number of evaluations needed for success.
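As a purely illustrative sketch (the exact schedule used in the paper is not reproduced here), a speed/severity-controlled shrinking schedule could look like this:

```python
# Illustrative variable population sizing schedule: the population shrinks from an
# initial size n0 toward n0*(1 - severity); `speed` controls how early the reduction
# happens. This is an assumed shape, not the formula from the paper.
def population_size(gen, max_gen, n0, speed=2.0, severity=0.5):
    """Population size at generation `gen`.

    severity: fraction of the initial population removed by the end of the run.
    speed:    > 1 shrinks early, < 1 shrinks late, 1 shrinks linearly.
    """
    progress = (gen / max_gen) ** (1.0 / speed)   # fraction of shrinkage applied so far
    n_final = int(n0 * (1.0 - severity))
    return max(n_final, int(round(n0 - (n0 - n_final) * progress)))

# Example: a 100-generation run starting from 400 individuals.
for g in (0, 25, 50, 100):
    print(g, population_size(g, 100, 400))
```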

Some other conferences held in September

I returned from Brussels a couple of days ago, where I went to present KANTS: Artificial Ant System for Classification (the model was already described here) at the 6th International Conference on Ant Colony Optimization and Swarm Intelligence. ANTS 2008 is similar to PPSN, with most of the papers being presented at poster sessions (only a few are chosen for oral presentation). This procedure works well when the poster sessions are not just a minor event of the conference, thrown into a distant room of the hotel/university where nobody even bothers to go, or scheduled at the end of a long day. The ANTS sessions were well organized and every poster had an assigned space. My presentation was scheduled for the last day of the conference, when most people had already packed for their trip back home, but the session nonetheless went well, with lots of people wandering around the room, clearly interested in the works. KANTS got the attention of part of the audience, and I think they were quite impressed by the simplicity (and efficiency) of the idea. The inevitable question arose (are you planning to test KANTS on a real-world problem?), and this time we can answer yes: we are not only planning to do it, we are already working on it (we will report on those experiments later). In the same session, another swarm-clustering algorithm was presented. I saw the poster, and the clustering results appear to be quite good (but the system does not perform classification). I haven't read the paper (as a matter of fact, it is published as an extended abstract), but I could tell that the algorithm is a little bit complex, simulating the behavior of three different species: ants, birds and spiders.

A week before, I was in Barcelona at the 8th International Conference on Hybrid Intelligent Systems (HIS 2008), presenting the paper Tracking Extrema in Dynamic Fitness Functions with Dissortative Mating Genetic Algorithms. It is quite a different work, more related to my thesis' subject, bio-inspired computation in dynamic environments. It describes the experiments performed with an adaptive dissortative mating GA (ADMGA) on Dynamic Optimization Problems. Dissortative mating appears frequently in nature and refers to mating between dissimilar individuals occurring more often than would be expected by chance. It maintains genetic diversity at a higher level, thus strengthening the exploration stage of the algorithm. Dynamic fitness functions are more sensitive to genetic diversity than static ones, so dissortative mating is a good candidate for dealing with that kind of problem. The paper mainly describes the experiments performed with trap functions and shows that ADMGA may improve on standard GAs in some dynamic optimization scenarios. Robustness is also addressed, and the results show that ADMGA maintains a more stable performance over a wide range of dynamic scenarios. HIS is mainly dedicated to hybrid models and real-world applications, so ADMGA was somewhat “lost” among the other works. But good news came after the conference, and this line of research will probably make its way through other media.
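For readers unfamiliar with the mechanism, here is a simplified sketch of dissortative mate selection on binary genomes (not the full ADMGA, which adapts a dissimilarity threshold during the run):

```python
# Simplified dissortative mating: a first parent is chosen and its mate is the most
# genetically distant candidate from a small sample, which keeps diversity high --
# useful for tracking optima in dynamic fitness functions.
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, tournament=5):
    first = random.choice(population)
    candidates = random.sample(population, tournament)
    mate = max(candidates, key=lambda ind: hamming(first, ind))
    return first, mate

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
p1, p2 = dissortative_pair(pop)
print(hamming(p1, p2))
```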

P.S. In Brussels, avoid the Hotel Continental, near Midi Station, unless you need inspiration for another insect-based heuristic.

See ya in PPSN

Today is the last, final and definitive day for submitting papers to the PPSN (Parallel Problem Solving from Nature) conference, and I think we've beaten some record here, since our research group (the extended GeNeura, with visiting researchers, partners in other research groups, and so on) is submitting 10 papers. Taking into account the usual PPSN acceptance rate, we'll probably only manage to get 3 accepted, so anything more than that will really be a success.
There are several evolutionary computation conferences out there, some of them good, some not as good, but for us PPSN is the absolute best. This would be its pseudo-20th anniversary (actually, the 10th edition and 18th anniversary), and for us it is our personal 12th anniversary, since the first time we had something accepted at that conference was in 1996, and we haven't missed a single one since (we even organized one of them). So even if this year we're attending all the EC conferences there are (CEC, GECCO -yes, GECCO-, Evostar), we really can't miss this one.
So if you are submitting to PPSN (more than 220 submissions, for the time being), good luck, and see you there!