Improved Genetic Fuzzy Drivers presented at CIG 2018

Last week I presented at IEEE CIG 2018 (held in Maastricht, The Netherlands) the next step in our research on autonomous drivers for car racing simulators such as TORCS, a paper titled “The Evolutionary Race: Improving the Process of Evaluating Car Controllers in Racing Simulators“.

As commented before by @jjmerelo and later by @fergunet, together with Mohammed Salem (University of Mascara) we designed a driver AI in which two Fuzzy Subcontrollers are hybridized with a Genetic Algorithm.

In this work we present a better evaluation approach for the GA, combining three methods: heuristic track selection, improved fitness functions, and race-based selection of the best individual.

The abstract of the work is:

Simulated car races have been used for a long time as an environment where car controlling algorithms can be tested; they are an interesting testbed for all kinds of algorithms, including metaheuristics such as evolutionary algorithms. However, the challenge in the evolutionary algorithms is to design a reliable and effective evaluation process for the individuals that eventually translates into good solutions to the car racing problem: finding a controller that is able to win in a wide range of tracks and with a good quantity of opponents. Evaluating individual car controllers involves not only the design of a proper fitness function representing how good the car controller would be in a competitive race, but also the selection of the best solution for the optimization problem being solved; this decision might not be easy when uncertainty is present in the problem environment; in this case, weather and track conditions as well as unpredictable behavior of other drivers. Creating a methodology for the automatic design of the controller of an autonomous driver for a car racing simulator such as TORCS is an optimization problem which offers all these challenges. Thus, in this paper we describe an analysis and some proposals to improve the evaluation of optimized fuzzy drivers for TORCS over previous attempts to do so. It builds on preliminary results obtained in previous papers as a baseline and aims to obtain a more competitive autonomous driver via redesign of the fitness evaluation procedure; to this end, two different fitness functions are studied in several experiments, along with a novel race-based approach for the selection of the best individual in the evolution.
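
To make the race-based selection of the best individual more concrete, here is a minimal Python sketch of the idea. It assumes a hypothetical `run_race` helper that races a set of controllers on a track and returns them ordered by finishing position; this illustrates the concept only, not the exact procedure used in the paper.

```python
# Hedged sketch of race-based "selection of the best": instead of trusting the raw GA
# fitness alone, the fittest candidates race each other and the overall winner is kept.
# `run_race` and `Individual` are hypothetical placeholders, not the paper's actual API.
from typing import Callable, List, Sequence


def select_best_by_race(
    population: Sequence["Individual"],
    tracks: Sequence[str],
    run_race: Callable[[Sequence["Individual"], str], List["Individual"]],
    top_n: int = 5,
) -> "Individual":
    """Pick the champion by racing the top-N fittest individuals against each other."""
    # Pre-select candidates with the highest GA fitness.
    candidates = sorted(population, key=lambda ind: ind.fitness, reverse=True)[:top_n]
    points = {ind: 0 for ind in candidates}

    for track in tracks:
        ranking = run_race(candidates, track)       # best finisher first
        for position, ind in enumerate(ranking):
            points[ind] += len(ranking) - position  # more points for better positions

    # The individual that accumulates the most points across races is declared the best.
    return max(candidates, key=lambda ind: points[ind])
```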

And the presentation is:

You can check our paper in the proceedings of the conference.

Enjoy it!

(And cite us as usual :D)


Our TORCS driving controller presented at EvoGAMES 2017

Last week, @jjmerelo presented at EvoGAMES 2017 (inside Evo* 2017) our work titled “Driving in TORCS using modular fuzzy controllers”.

This paper presents a novel car racing controller for TORCS (The Open Racing Car Simulator), based on the combination of two fuzzy subcontrollers: one for setting the speed and one for controlling the steering angle. The obtained results are quite promising, as the controller is competitive even against very tough TORCS teams.

The abstract of the paper is:

When driving a car it is essential to take into account all possible factors; even more so when, like in the TORCS simulated race game, the objective is not only to avoid collisions, but also to win the race within a limited budget. In this paper, we present the design of an autonomous driver for a racing car in a simulated race. Unlike previous controllers, which only used fuzzy logic approaches for either acceleration or steering, the proposed driver simultaneously uses two fuzzy controllers for steering and for computing the target speed of the car at every moment of the race. They use the track border sensors as inputs and, for enhanced safety, also take into account the relative position of the other competitors. The proposed fuzzy driver is evaluated in practice and timed races, giving good results across a wide variety of racing tracks, mainly those with many turning points.
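
As a rough illustration of the two-subcontroller layout described in the abstract (one fuzzy module for the target speed, one for the steering angle), here is a minimal Python sketch. The membership functions, thresholds and consequent values are invented for the example, and the sensor names only loosely follow TORCS conventions; the actual controllers in the paper are defined differently.

```python
# Illustrative sketch of two independent fuzzy-style subcontrollers (speed and steering).
# All membership breakpoints and consequent values below are made-up examples.

def falling(x: float, a: float, b: float) -> float:
    """Shoulder membership: 1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rising(x: float, a: float, b: float) -> float:
    """Shoulder membership: 0 below a, 1 above b, linear in between."""
    return 1.0 - falling(x, a, b)

def target_speed(front_dist: float) -> float:
    """Speed subcontroller: the more free track ahead, the higher the target speed (km/h)."""
    near = falling(front_dist, 20.0, 80.0)      # border close ahead -> slow down
    far = rising(front_dist, 60.0, 150.0)       # long straight ahead -> speed up
    return (near * 60.0 + far * 250.0) / max(near + far, 1e-9)

def steering(left_dist: float, right_dist: float) -> float:
    """Steering subcontroller: steer towards the side with more free space, in [-1, 1]."""
    diff = right_dist - left_dist               # > 0 means more room on the right
    go_left = falling(diff, -40.0, 0.0)
    go_right = rising(diff, 0.0, 40.0)
    straight = 1.0 - min(1.0, go_left + go_right)
    total = go_left + straight + go_right
    # Illustrative convention: negative output steers left, positive steers right.
    return (go_left * -0.5 + straight * 0.0 + go_right * 0.5) / max(total, 1e-9)
```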

There was an interactive presentation at the conference, together with a poster:

The paper is available online from: https://link.springer.com/chapter/10.1007/978-3-319-55849-3_24

Enjoy (and cite) it! :D

 

Ms. PacMan in IEEE Transactions on CI and AI in Games

Our fans and followers must be happy! ;D

They can now access the excellent work by Federico Liberatore in the IEEE ToCIAIG journal.

This is the leading journal on Artificial Intelligence in games, with a very strict reviewing process, so we are very proud of this success. ;)

This is the next step in the research we started a year and a half ago on designing competitive Ghost Teams for catching Ms. Pac-Man.

The abstract is:

In the last year, thanks to the Ms. Pac-Man vs Ghosts competition, the game of Ms. Pac-Man has gained increasing attention from academics in the field of Computational Intelligence. In this work, we contribute to this research stream by presenting a simple Genetic Algorithm with Lexicographic Ranking (GALR) for the optimization of Flocking Strategy-based ghost controllers. Flocking Strategies are a paradigm for intelligent agents characterized by showing emergent behavior and for having very little computational and memory requirements, making them well suited for commercial applications and mobile devices. In particular, we study empirically the effect of optimizing homogeneous and heterogeneous teams. The computational analysis shows that the Flocking Strategy-based controllers generated by the proposed GALR outperform the ghost controllers included in the competition framework and some of those presented in the literature.
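
Since the abstract mentions a Genetic Algorithm with Lexicographic Ranking (GALR), here is a small hedged sketch of what lexicographic ranking of individuals can look like in general. The objective tuples and the "lower is better" convention are assumptions made for the example, not the exact objectives or formulation used by the authors.

```python
# Generic illustration of lexicographic ranking: individuals are compared objective by
# objective, in order of priority, instead of via a single aggregated scalar fitness.
from functools import cmp_to_key

def lexicographic_cmp(a, b, tolerance=1e-6):
    """Compare two fitness tuples objective by objective (lower is better for each)."""
    for obj_a, obj_b in zip(a, b):
        if abs(obj_a - obj_b) > tolerance:    # only decide on a clearly better objective
            return -1 if obj_a < obj_b else 1
    return 0                                   # tied on all objectives

def rank_population(population, evaluate):
    """Sort individuals by their fitness tuples, best (lexicographically smallest) first."""
    scored = [(evaluate(ind), ind) for ind in population]
    scored.sort(key=cmp_to_key(lambda x, y: lexicographic_cmp(x[0], y[0])))
    return [ind for _, ind in scored]

# Usage: evaluate() could return e.g. (pills eaten by Ms. Pac-Man, time to capture her),
# both hypothetical objectives to be minimized, with the first taking priority.
```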

The paper can be found here: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7093170&tag=1

Enjoy it!

(And cite us) :D

Application of Genetic Programming to generate bots for the RTS Planet Wars, presented at CoSECiVi 2014

This work was published as part of the First Congress of the Spanish Society for Video Game Sciences (CoSECiVi), held in conjunction with Gamelab 2014 in Barcelona.

There we presented the paper titled “Designing Competitive Bots for a Real Time Strategy Game using Genetic Programming”, whose abstract is:

The design of the Artificial Intelligence (AI) engine for an autonomous agent (bot) in a game is always a difficult task mainly done by an expert human player, who has to transform his/her knowledge into a behavioural engine. This paper presents an approach for conducting this task by means of Genetic Programming (GP) application. This algorithm is applied to design decision trees to be used as bot’s AI in 1 vs 1 battles inside the RTS game Planet Wars. Using this method it is possible to create rule-based systems defining decisions and actions, in an automatic way, completely different from a human designer doing them from scratch. These rules will be optimised along the algorithm run, considering the bot’s performance during evaluation matches. As GP can generate and evolve behavioural rules not taken into account by an expert, the obtained bots could perform better than human-defined ones. Due to the difficulties when applying Computational Intelligence techniques in the videogames scope, such as noise factor in the evaluation functions, three different fitness approaches have been implemented and tested in this work. Two of them try to minimize this factor by considering additional dynamic information about the evaluation matches, rather than just the final result (the winner), as the other function does.
In order to prove them, the best obtained agents have been compared with a previous bot, created by an expert player (from scratch) and then optimised by means of Genetic Algorithms. The experiments show that the three used fitness functions generate bots that outperform the optimized human-defined one, being the area-based fitness function the one that produces better results.
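
To give an idea of the kind of representation the abstract describes (decision trees evolved with GP and used as the bot's rule-based AI), here is a small illustrative sketch. The condition and action names, and the game-state fields, are hypothetical placeholders rather than the primitives actually used in the paper.

```python
# Hedged sketch of how a GP-evolved decision tree could drive a Planet Wars bot.
# Conditions and actions (e.g. "enemy_nearby", "attack_weakest") are invented labels.
import random

ACTIONS = ["attack_weakest", "reinforce_base", "expand_to_neutral"]
CONDITIONS = {
    "enemy_nearby": lambda state: state["closest_enemy_dist"] < 10,
    "outnumbered": lambda state: state["own_ships"] < state["enemy_ships"],
}

def random_tree(depth=3):
    """Grow a random (condition, if_true, if_false) tree; leaves are action names."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(ACTIONS)
    cond = random.choice(list(CONDITIONS))
    return (cond, random_tree(depth - 1), random_tree(depth - 1))

def decide(tree, state):
    """Walk the tree for the current game state and return the chosen action."""
    if isinstance(tree, str):
        return tree
    cond, if_true, if_false = tree
    return decide(if_true if CONDITIONS[cond](state) else if_false, state)

# Example: evaluate one random individual on a toy game state.
bot = random_tree()
print(decide(bot, {"closest_enemy_dist": 5, "own_ships": 30, "enemy_ships": 50}))
```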

The presentation of the paper can be seen here:

The paper can be found at: http://gaia.fdi.ucm.es/sites/cosecivi14/es/papers/24.pdf

We hope you like it.

And that you cite us. :D

Evolving Evil: Optimizing Flocking Strategies through Genetic Algorithms for the Ghost Team in the Game of Ms. Pac-Man

by Federico Liberatore, Antonio Mora, Pedro Castillo, Juan Julián Merelo in EvoGAMES
Flocking strategies are sets of behavior rules for the interaction of agents that make it possible to devise controllers of reduced complexity that generate emergent behavior. In this paper, we present an application of genetic algorithms and flocking strategies to control the Ghost Team in the game Ms. Pac-Man. In particular, we define flocking strategies for the Ghost Team and optimize them, by means of a genetic algorithm, for robustness with respect to the stochastic elements of the game and for effectiveness against different possible opponents.
The performance of the proposed methodology is tested and compared with that of other standard controllers belonging to the framework of the Ms. Pac-Man versus Ghosts Competition. The results show that flocking strategies are capable of modelling complex behaviors and produce effective and challenging agents.
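
As an illustration of the flocking idea (simple steering rules whose parameters a GA can tune), here is a hedged sketch. The force set and weight values are invented for the example, and it works on a continuous plane rather than the game's maze, so it is only a conceptual analogue of the strategies evolved in the paper.

```python
# Hedged sketch of a flocking-style ghost rule: each ghost combines a few simple steering
# forces whose weights could be the genes tuned by the GA.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec:
    x: float
    y: float
    def __add__(self, o): return Vec(self.x + o.x, self.y + o.y)
    def __sub__(self, o): return Vec(self.x - o.x, self.y - o.y)
    def scale(self, k): return Vec(self.x * k, self.y * k)

def ghost_move(ghost: Vec, pacman: Vec, other_ghosts: list, weights: dict) -> Vec:
    """Combine a chase force and a separation force into one steering vector."""
    chase = (pacman - ghost).scale(weights["chase"])      # pursue Ms. Pac-Man
    separation = Vec(0.0, 0.0)
    for other in other_ghosts:                            # avoid clumping with teammates
        separation = separation + (ghost - other)
    separation = separation.scale(weights["separate"])
    return chase + separation

# A chromosome could simply encode these weights, e.g. decoded as:
weights = {"chase": 0.8, "separate": 0.3}
print(ghost_move(Vec(0, 0), Vec(10, 5), [Vec(1, 1)], weights))
```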

The presentation is:

 

You can also see a brief demo here (we are the ghosts :D):

Enjoy it!

(And cite us, of course :D)

Hackathon in Videogames at EVO* 2014


Hi all,

Finally, the EVOHackathon will be held at the Oficina de Software Libre on Tuesday, 22 April (one day before the EvoGAMES conference).

There are five projects confirmed right now, namely:

  • Creating autonomous agents for the Super Mario Bros. game
  • Creating an AI engine for the game Wetland (Greyman Studios)
  • Creating bots for 1 vs 1 combats in the RTS Planet Wars
  • Procedural generation of stages for a new game (Greyman Studios)
  • Progamer: a code visualization tool based on Super Mario Bros. levels

As you can see, two of them were proposed and will be directed by a video game company.

We invite you to join us. It is free! :D

Classifier Systems

Classifier systems are a fusion of evolutionary algorithms, reinforcement learning, and supervised learning; they are known as Learning Classifier Systems. Last Friday I took advantage of the group meeting to present a brief historical review and to give some details about perhaps the most important algorithm introduced in this field, Wilson's eXtended Classifier System (XCS).

Basically, the algorithm uses genetic evolution and learning to search for a set of rules that model the solution to a problem in which a reward exists. Each rule consists of a condition and an action. The rule population represents, for any given condition, which action is the best. This is achieved by associating with the input space a prediction of the best future reward obtainable for each possible action.

Then, given a state representing the environment, the rules whose condition matches it are selected, and among them the action offering the best predicted future reward is taken.
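
As a quick illustration of that matching and action-selection step, here is a minimal Python sketch of an XCS-style prediction array. The classifier structure is heavily simplified (no accuracy update, numerosity, or GA bookkeeping); the ternary conditions with "#" as a don't-care symbol follow the usual XCS convention.

```python
# Minimal sketch of the XCS matching / action-selection step described above.
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str     # e.g. "1#0" matches inputs "100" and "110"
    action: int
    prediction: float  # estimated future reward (payoff) for taking `action`
    fitness: float     # accuracy-based fitness used to weight predictions

def matches(condition: str, state: str) -> bool:
    return all(c == "#" or c == s for c, s in zip(condition, state))

def choose_action(population, state):
    """Build the match set and the prediction array, then exploit the best action."""
    match_set = [cl for cl in population if matches(cl.condition, state)]
    prediction = {}
    for action in {cl.action for cl in match_set}:
        subset = [cl for cl in match_set if cl.action == action]
        # Fitness-weighted average of the predictions, as in XCS's prediction array.
        prediction[action] = (sum(cl.prediction * cl.fitness for cl in subset)
                              / sum(cl.fitness for cl in subset))
    return max(prediction, key=prediction.get)

pop = [Classifier("1#0", 0, 40.0, 0.6), Classifier("1##", 1, 90.0, 0.9)]
print(choose_action(pop, "110"))   # -> 1 (highest predicted payoff)
```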

The task is not easy: formal reinforcement learning algorithms need a priori deterministic knowledge of the possible inputs and of the transitions resulting from the actions, leaving little or no room for searching for and applying generalization.

XCS addresses this problem by introducing some adjustments to the genetic component. The general idea is basically to distribute the resources (rules) so that they represent the whole space with the greatest possible accuracy and generality. Since this is not something that can be summarized in a few lines, here is the presentation: