EAs + Monte Carlo Tree Search for the win (*BEST PAPER of EvoAPPS 2020 conference*)

Yes! We did it! :D

Yesterday we presented our latest paper in the scope of AI in Videogames, entitled «Testing hybrid computational intelligence algorithms for general game playing», written by two colleagues from the University of Málaga (A. E. Vázquez-Núñez and A. J. Fernández-Leiva), together with P. García-Sánchez (@fergunet) and me (A. M. Mora).

It was focused on the domain of General Game Playing, where the aim is to create an autonomous agent able to adapt itself to play several different videogames (not previously known to it). We used the framework and followed the rules of the General Video Game AI Competition.

Thus, it is an approach to implementing a kind of human-like behaviour, similar to how we play a new game the first time we face it.

The manuscript was selected as the Best Paper of the EvoAPPS 2020 conference (part of EVO* 2020), held online for the first time due to the COVID-19 situation.

The abstract of the work is:

General Videogame Playing is one of the hottest topics in the research field of AI in videogames. It aims at the implementation of algorithms or autonomous agents able to play a set of unknown games efficiently, receiving just the set of rules to play in real time. Thus, this work presents the implementation of eight approaches based on the main techniques applied in the literature to face this problem, including two different hybrid implementations combining Monte Carlo Tree Search and Genetic Algorithms. They have been created within the General Video Game Artificial Intelligence (GVGAI) Competition platform. Then, the algorithms have been tested on a set of 20 games from that competition, analyzing their performance. According to the obtained results, we can conclude that the proposed hybrid approaches are the best ones, and they would be a very competitive entry for the competition.
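For readers curious about what such a hybrid can look like in practice, here is a minimal Python sketch, purely illustrative and not the GVGAI agents from the paper: a small genetic algorithm evolves short action plans and scores each plan with a Monte Carlo rollout on a toy forward model. The toy game, names, and parameter values are all assumptions made up for the example.

```python
# Illustrative sketch (not the paper's code): a GA that evolves short action
# sequences and scores them with Monte Carlo rollouts on a toy forward model.
import random

ACTIONS = [-1, 0, 1]          # toy action set: move left, stay, move right
PLAN_LEN = 8                  # length of the evolved action sequence
ROLLOUT_LEN = 12              # extra random steps appended to each evaluation
POP_SIZE, GENERATIONS = 20, 30
GOAL = 10                     # toy objective: reach position 10

def step(state, action):
    """Toy forward model: the state is just a position on a line."""
    return state + action

def evaluate(plan, state=0):
    """Apply the evolved plan, then finish with a random rollout (MC part)."""
    for a in plan:
        state = step(state, a)
    for _ in range(ROLLOUT_LEN):
        state = step(state, random.choice(ACTIONS))
    return -abs(GOAL - state)   # higher is better (closer to the goal)

def evolve_plan():
    pop = [[random.choice(ACTIONS) for _ in range(PLAN_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(pop, key=evaluate, reverse=True)
        elite = scored[: POP_SIZE // 2]
        children = []
        for _ in range(POP_SIZE - len(elite)):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, PLAN_LEN)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                    # mutation
                child[random.randrange(PLAN_LEN)] = random.choice(ACTIONS)
            children.append(child)
        pop = elite + children
    return max(pop, key=evaluate)

print("best plan:", evolve_plan())
```

In a real GVGAI agent the toy forward model would be replaced by the framework's state advancing function, and the evaluation would use the game's score and win/lose signals.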

And here is the presentation:

Enjoy it!

And, as usual, cite us. ;D

A new lap in the race to improve our evolutionary fuzzy-controllers for TORCS

Last week we presented at the IEEE Conference on Games 2019, held in London (UK), our new paper titled «Beating uncertainty in racing bot evolution through enhanced exploration and pole position selection».

The abstract of the work is:

One of the main problems in the design through optimization of car racing bots is the inherent noise in the optimization process: besides the fact that the fitness is a heuristic based on what we think are the keys to success, and as such just a surrogate for the ultimate objective (winning races), the fitness itself is uncertain due to the stochastic behavior of racing conditions and of the rest of the (simulated) racers. The fuzzy-based genetic controller for the car racing simulator TORCS that we have defined in previous works is based on two fuzzy subcontrollers, one to decide the wheel steering angle and another to set the car's target speed at the next simulation tick. They are both optimized by means of an Evolutionary Algorithm, which considers an already tested fitness function focused on the maximization of the average speed during the race and the minimization of the car damage. The noisy environment calls for keeping diversity high during evolution, which is why we have added a Blend Crossover (BLX-alpha) operator, which is, besides, able to exploit current results at the same time as it explores. Additionally, we try to address uncertainty in selection by introducing a novel parent selection policy based on races, where the individuals are grouped and compete against others in several races, so only the first-ranked will remain in the population as parents. Several experiments have been conducted, testing the value of the different controllers. The results show that the combination of a dynamic BLX-alpha crossover operator plus the pole position selection policy clearly beats the rest of the approaches. Moreover, in the comparison of this controller with one of the participants of the prestigious international Simulated Car Racing Championship, our autonomous driver obtains much better results than the opponent.
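For reference, the BLX-alpha crossover mentioned in the abstract works roughly as sketched below. This is a generic textbook version in Python; the dynamic variant, the actual gene bounds, and the parameter values used in the paper are not reproduced here.

```python
import random

def blx_alpha(parent1, parent2, alpha=0.5, bounds=(0.0, 1.0)):
    """BLX-alpha crossover: each child gene is drawn uniformly from the
    parents' interval expanded by alpha on both sides."""
    child = []
    lo_b, hi_b = bounds
    for x, y in zip(parent1, parent2):
        lo, hi = min(x, y), max(x, y)
        span = hi - lo
        gene = random.uniform(lo - alpha * span, hi + alpha * span)
        child.append(min(max(gene, lo_b), hi_b))   # clamp to the gene bounds
    return child

# Example: recombining two real-coded parameter vectors (values are made up)
p1 = [0.2, 0.7, 0.5]
p2 = [0.4, 0.3, 0.9]
print(blx_alpha(p1, p2, alpha=0.3))
```

The expansion by alpha is what lets the operator explore outside the segment defined by the parents while still exploiting the region between them.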

The presentation can be seen below:

As usual, enjoy it and…cite us! :D

Improved Genetic Fuzzy Drivers presented at CIG 2018

Last week I presented at IEEE CIG 2018 (held in Maastricht, The Netherlands) the next step in our research on autonomous drivers for car racing simulators such as TORCS, titled «The Evolutionary Race: Improving the Process of Evaluating Car Controllers in Racing Simulators».

As commented before by @jjmerelo and later by @fergunet, together with Mohammed Salem (University of Mascara) we designed a driver AI in which two fuzzy subcontrollers were hybridized with a Genetic Algorithm.

In this work we present a better evaluation approach for the GA, combining three methods: heuristic track selection, improved fitness functions, and race-based selection of the best individual.

The abstract of the work is:

Simulated car races have long been used as an environment where car controlling algorithms can be tested; they are an interesting testbed for all kinds of algorithms, including metaheuristics such as evolutionary algorithms. However, the challenge for evolutionary algorithms is to design a reliable and effective evaluation process for the individuals that eventually translates into good solutions to the car racing problem: finding a controller that is able to win on a wide range of tracks and against a good number of opponents. Evaluating individual car controllers involves not only the design of a proper fitness function representing how good the car controller would be in a competitive race, but also the selection of the best solution for the optimization problem being solved; this decision might not be easy when uncertainty is present in the problem environment, in this case weather and track conditions as well as the unpredictable behavior of other drivers. Creating a methodology for the automatic design of the controller of an autonomous driver for a car racing simulator such as TORCS is an optimization problem which offers all these challenges. Thus, in this paper we describe an analysis and some proposals to improve the evaluation of optimized fuzzy drivers for TORCS over previous attempts. The work builds on preliminary results obtained in previous papers as a baseline and aims to obtain a more competitive autonomous driver via a redesign of the fitness evaluation procedure; to this end, two different fitness functions are studied in several experiments, along with a novel race-based approach for the selection of the best individual in the evolution.
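To give an idea of what a race-based choice of the best individual can look like, here is a hedged Python sketch: simulate_race is a hypothetical stand-in for a real TORCS race, the controller fields and track names are invented, and the scoring scheme is only illustrative, not the paper's actual procedure.

```python
import random

def simulate_race(controllers, track):
    """Hypothetical stand-in for a TORCS race: returns the controllers
    ordered by finishing position (here, just a noisy ranking by 'skill')."""
    return sorted(controllers,
                  key=lambda c: c["skill"] + random.gauss(0, 0.1),
                  reverse=True)

def race_based_best(controllers, tracks, points=(10, 8, 6, 5, 4, 3, 2, 1)):
    """Pick the best individual by accumulating championship-style points
    over several races instead of trusting a single noisy fitness value."""
    scores = {c["name"]: 0 for c in controllers}
    for track in tracks:
        for pos, ctrl in enumerate(simulate_race(controllers, track)):
            scores[ctrl["name"]] += points[pos] if pos < len(points) else 0
    return max(controllers, key=lambda c: scores[c["name"]])

pop = [{"name": f"ctrl{i}", "skill": random.random()} for i in range(8)]
print(race_based_best(pop, tracks=["track-a", "track-b", "track-c"]))
```

Accumulating positions over several races is one way to soften the effect of a single lucky or unlucky evaluation when the environment is noisy.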

And the presentation is:

You can check our paper in the proceedings of the conference.

Enjoy it!

(And cite us as usual :D)

Applying genetic algorithms to a fuzzy controller for adaptive traffic management

The paper «A hybrid Fuzzy Genetic Algorithm for an adaptive traffic signal system» has recently been accepted in the open-access journal Advances in Fuzzy Systems. Several GeNeura researchers took part in it.

The abstract is:

This paper presents a hybrid algorithm that combines a Fuzzy Logic Controller (FLC) and Genetic Algorithms (GAs), and its application to a traffic signal system. FLCs have been widely used in many applications in diverse areas, such as control systems, pattern recognition, signal processing, and forecasting. They are, essentially, rule-based systems, in which the definition of the rules and fuzzy membership functions is generally based on verbally formulated rules that overlap through the parameter space; these definitions have a great influence over the performance of the system. On the other hand, the Genetic Algorithm is a metaheuristic that provides a robust search in complex spaces. In this work, it has been used to adapt the decision rules of the FLCs that define an intelligent traffic signal system, obtaining a higher performance than a classical FLC-based control. The simulation results yielded by the hybrid algorithm show an improvement of up to 34% in performance with respect to a standard Conventional Traffic Signal Controller (CTC), and of up to 31% in the comparison with a traditional fuzzy logic controller (FLC).
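As a rough illustration of the hybridization (not the paper's actual rule base, membership functions, or traffic simulator), the sketch below encodes the consequents of a small fuzzy rule table as a real-valued chromosome and lets a toy GA tune them against a made-up delay surrogate. Every name and number here is an assumption for the example.

```python
import random

# Toy encoding: each gene is the green-time extension (in seconds) that the
# fuzzy rule for one (queue, arrival) combination recommends.
QUEUE_LEVELS, ARRIVAL_LEVELS = 3, 3          # e.g. low / medium / high
N_RULES = QUEUE_LEVELS * ARRIVAL_LEVELS

def random_chromosome():
    return [random.uniform(0, 20) for _ in range(N_RULES)]

def toy_delay(chromosome):
    """Made-up surrogate for a traffic simulation: penalize extensions that
    deviate from an (invented) ideal proportional to demand."""
    delay = 0.0
    for q in range(QUEUE_LEVELS):
        for a in range(ARRIVAL_LEVELS):
            ideal = 5.0 * q + 3.0 * a
            delay += abs(chromosome[q * ARRIVAL_LEVELS + a] - ideal)
    return delay

def evolve(pop_size=30, generations=50):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_delay)                       # lower delay is better
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            p1, p2 = random.sample(parents, 2)
            child = [(g1 + g2) / 2 for g1, g2 in zip(p1, p2)]   # arithmetic crossover
            child[random.randrange(N_RULES)] += random.gauss(0, 1.0)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=toy_delay)

print(evolve())
```

In the real system the surrogate would be replaced by a traffic simulation measuring vehicle delay under the candidate rule set.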

We hope you like it (and cite it, of course :D).

Classifier Systems

Classifier systems are a fusion of evolutionary algorithms, reinforcement learning, and supervised learning. They are known as Learning Classifier Systems. Last Friday I took advantage of the group meeting to present a brief historical review and give details about perhaps the most important algorithm introduced in this field, Wilson's eXtended Classifier System, or XCS.

Basically, the algorithm uses genetic evolution and learning to search for a set of rules that model the solution to a problem in which there is a reward. Rules are composed of a condition and an action. The population of rules represents, for any given condition, which action will be the best. This is achieved by associating with the input space a prediction of the best future reward obtained for each possible action.

Then, given a state representing the environment, the rules whose condition matches are retrieved, and from them the action offering the best predicted future reward is taken.
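To make this matching-and-prediction step concrete, here is a minimal Python sketch of how an XCS-style prediction array can be built from a rule population. It is a tiny fragment, not a full XCS implementation, and the example rules and their numbers are made up.

```python
# Conditions use the classic ternary alphabet where '#' matches either bit.

def matches(condition, state):
    return all(c == '#' or c == s for c, s in zip(condition, state))

def prediction_array(population, state):
    """Fitness-weighted average prediction per action, over the match set."""
    sums, weights = {}, {}
    for rule in population:
        if matches(rule["condition"], state):
            a = rule["action"]
            sums[a] = sums.get(a, 0.0) + rule["prediction"] * rule["fitness"]
            weights[a] = weights.get(a, 0.0) + rule["fitness"]
    return {a: sums[a] / weights[a] for a in sums}

population = [
    {"condition": "1#0", "action": 0, "prediction": 30.0, "fitness": 0.8},
    {"condition": "1##", "action": 1, "prediction": 90.0, "fitness": 0.9},
    {"condition": "0##", "action": 1, "prediction": 10.0, "fitness": 0.5},
]

state = "100"
pa = prediction_array(population, state)
print(pa, "-> best action:", max(pa, key=pa.get))
```

The full algorithm also updates predictions, errors, and fitness after each reward, and runs the genetic algorithm on the action sets, which is not shown here.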

The task is not easy: formal reinforcement learning algorithms need a priori deterministic knowledge of the possible inputs and of the transitions resulting from the actions, leaving little or no room for searching for and applying generalization.

With XCS this problem is solved by introducing some adjustments to the genetic component. The general idea is basically to distribute the resources (rules) so that they represent the whole space with as much precision and generalization as possible. Since this is not something that can be summarized in a few lines, here is the presentation:

Unreal Expert Bots at IWANN 2013

Last week IWANN 2013, an international conference mainly devoted to research within the neural networks scope, was held in Tenerife. There, Antonio Fernández Leiva, Raúl Lara and I organized the Special Session on Artificial Intelligence and Games.

There were five works in the session, one of them being «Designing and Evolving an Unreal Tournament 2004 Expert Bot».

It describes the design and improvement, through off-line (i.e., not during the game) evolution, of an autonomous agent (or bot) for playing the game Unreal Tournament 2004. The bot was created by means of a finite state machine which models the expert behaviour of a human player in 1 vs 1 deathmatch mode, following the rules of the international competition.

Then, the bot was improved by means of a Genetic Algorithm, yielding an agent that is, in turn, a very hard opponent for a medium-level human player and which can (easily) beat the default bots in the game, even at the maximum difficulty level.
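As a flavour of what such a state machine can look like, here is a toy Python sketch; the states, sensors, and thresholds are invented for the example and are not the actual bot's. Numeric parameters like flee_health and attack_range are exactly the kind of values a Genetic Algorithm could later evolve, using the outcome of played matches as fitness.

```python
# Toy sketch of a deathmatch bot state machine whose numeric thresholds could
# be tuned by a GA; states and thresholds are illustrative only.

class ExpertBot:
    def __init__(self, flee_health=35, attack_range=800):
        self.flee_health = flee_health      # evolvable parameter (assumed)
        self.attack_range = attack_range    # evolvable parameter (assumed)
        self.state = "explore"

    def update(self, health, enemy_distance, enemy_visible):
        if health < self.flee_health:
            self.state = "retreat_for_items"
        elif enemy_visible and enemy_distance < self.attack_range:
            self.state = "attack"
        elif enemy_visible:
            self.state = "approach"
        else:
            self.state = "explore"
        return self.state

bot = ExpertBot()
print(bot.update(health=20, enemy_distance=500, enemy_visible=True))   # retreat_for_items
print(bot.update(health=80, enemy_distance=500, enemy_visible=True))   # attack
```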

The presentation can be seen at:

Moreover, you can watch one example of the evolution in the following video:

Finally, the source code of both the Unreal Expert bot and the Genetic bot is available at https://github.com/franaisa/ExpertAgent

Enjoy them. ;)

[GECCO’09] Improving Genetic Algorithms Performance via Deterministic Population Shrinkage

This year the Genetic and Evolutionary Computation Conference (GECCO) took place in Montréal (Québec, Canada), where we presented our latest work in collaboration with the Laboratoire de Vision et Systèmes Numériques of Université Laval in Quebec City:

Despite the intuition that the same population size is not needed throughout the run of an Evolutionary Algorithm (EA), most EAs use a fixed population size. This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs). It consists of decreasing the population size during a GA run following a predetermined schedule, configured by a speed and a severity parameter. The method uses as its initial population size an estimation of the minimum size needed to supply enough building blocks for a fixed-size selectorecombinative GA to converge, within some confidence interval, toward good solutions for a particular problem. Following this methodology, a scalability analysis is conducted on deceptive, quasi-deceptive, and non-deceptive trap functions in order to assess whether the SVPS-GA improves performance compared to a fixed-size GA under different problem instances and difficulty levels. Results show several combinations of speed and severity where the SVPS-GA preserves solution quality while improving performance, by reducing the number of evaluations needed for success.
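The shrinkage schedule can be pictured with a small sketch like the one below. Note that the exact formula, and the way the speed and severity parameters enter it, is an assumption made for illustration and not necessarily the schedule used in the paper.

```python
def svps_schedule(p0, generations, speed=2.0, severity=0.5):
    """Illustrative shrinkage schedule (not necessarily the paper's formula):
    the population decays from p0 down to p0*(1 - severity), with 'speed'
    controlling how early in the run the reduction happens."""
    sizes = []
    for t in range(generations + 1):
        progress = (t / generations) ** speed
        sizes.append(max(2, round(p0 * (1.0 - severity * progress))))
    return sizes

# Example: start with 400 individuals and shrink over 10 generations
print(svps_schedule(p0=400, generations=10, speed=2.0, severity=0.75))
```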

Some other congresses held in September

I returned from Brussels a couple of days ago, where I went to present KANTS: Artificial Ant System for Classification (the model was already described here) at the 6th International Conference on Ant Colony Optimization and Swarm Intelligence. ANTS 2008 is similar to PPSN, with most of the papers being presented in poster sessions (only a few are chosen for oral presentation). This procedure works well when the poster sessions are not just a minor event of the congress, thrown into a distant room in the hotel/university where nobody even bothers to go, or scheduled at the end of a long day. The ANTS sessions were well organized and every poster had an assigned space. My presentation was scheduled for the last day of the congress, when most people had already packed for their trip back home, but nonetheless the session went well, with lots of people wandering around the room, clearly interested in the works. KANTS got the attention of part of the audience, and I think they were quite impressed by the simplicity (and efficiency) of the idea. The inevitable question arose (are you planning to test KANTS on a real-world problem?) and this time we can answer yes: we are not only planning to do it, we are already working on it (later we will report on those experiments). In the same session, another swarm-based clustering algorithm was presented. I saw the poster, and the results on clustering appear to be quite good (but the system does not perform classification). I haven't read the paper (as a matter of fact, it is published as an extended abstract), but I could tell that the algorithm is a little bit complex, simulating the behavior of three different species: ants, birds, and spiders.

A week before, I was in Barcelona at the 8th International Conference on Hybrid Intelligent Systems (HIS 2008), presenting the paper Tracking Extrema in Dynamic Fitness Functions with Dissortative Mating Genetic Algorithms. It is quite a different work, and more related to the subject of my thesis, bio-inspired computation in dynamic environments. It describes the experiments performed with an adaptive dissortative mating GA (ADMGA) on Dynamic Optimization Problems. Dissortative mating appears frequently in nature and refers to mating between dissimilar individuals occurring more often than expected by chance. It maintains genetic diversity at a higher level, thus strengthening the exploration stage of the algorithm. Dynamic fitness functions are more sensitive to genetic diversity than static ones, so dissortative mating is a good candidate to deal with that kind of problem. The paper mainly describes the experiments performed with trap functions and shows that ADMGA may improve on standard GAs in some dynamic optimization scenarios. Robustness is also addressed, and the results show that ADMGA maintains a more stable performance over a wide range of dynamic scenarios. The HIS congress is mainly dedicated to hybrid models and real-world applications, so ADMGA was somewhat “lost” among other works. But good news came after the congress, and this line of research will probably make its way through other media.
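The core of dissortative mating is easy to sketch: only accept a pair of parents if they are dissimilar enough. The Python snippet below shows that idea for binary genomes with a fixed Hamming-distance threshold; ADMGA's adaptive threshold update and the rest of the algorithm are not reproduced, and all values are illustrative.

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, threshold, tries=20):
    """Pick a mating pair only if the parents are dissimilar enough
    (Hamming distance at or above the threshold)."""
    for _ in range(tries):
        p1, p2 = random.sample(population, 2)
        if hamming(p1, p2) >= threshold:
            return p1, p2
    return tuple(random.sample(population, 2))   # fall back if no dissimilar pair found

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
print(dissortative_pair(pop, threshold=8))
```

Restricting mating in this way keeps the offspring's building blocks coming from genetically distant parents, which is what sustains diversity as the fitness landscape changes.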

P.S. In Brussels, avoid Hotel Continental, near Midi Station, unless you need inspiration for another insect-based heuristic.

See ya in PPSN

Today is the last, final, and definitive day for submitting papers to the PPSN (Parallel Problem Solving from Nature) conference, and I think we've beaten some record here, since our research group (the extended GeNeura, with visiting researchers, other partners in research groups, and so on) is submitting 10 papers. Taking into account the usual PPSN acceptance rate, we'll probably only manage to get 3 accepted, so anything more than that will really be a success.
There are several evolutionary computation conferences out there, some of them good, some not as good, but for us PPSN is the absolute best. This would be its pseudo-20th anniversary (actually, the 10th edition and 18th anniversary), and for us it is our personal 12th anniversary, since the first time we had something accepted in that conference was in 1996, and we haven't missed a single one since (we even organized one of them). So even if this year we're attending all the EC conferences there are (CEC, GECCO -yes, GECCO-, Evostar), we really can't miss this one.
So if you are submitting to PPSN (more than 220 submissions, for the time being), good luck, and see you there!