Improved Genetic Fuzzy Drivers presented at CIG 2018

Last week at IEEE CIG 2018 (held in Maastricht, The Netherlands) I presented the next step in our research on autonomous drivers for car racing simulators such as TORCS, titled "The Evolutionary Race: Improving the Process of Evaluating Car Controllers in Racing Simulators".

As described before by @jjmerelo and later by @fergunet, together with Mohammed Salem (University of Mascara) we designed a driver AI in which two fuzzy subcontrollers are hybridized with a Genetic Algorithm.

In this work we present a better evaluation approach for the GA, combining three methods: heuristic track selection, improved fitness functions, and race-based selection of the best individual.

The abstract of the work is:

Simulated car races have been used for a long time as an environment where car-controlling algorithms can be tested; they are an interesting testbed for all kinds of algorithms, including metaheuristics such as evolutionary algorithms. However, the challenge in evolutionary algorithms is to design a reliable and effective evaluation process for the individuals that eventually translates into good solutions to the car racing problem: finding a controller that is able to win on a wide range of tracks and against a good number of opponents. Evaluating individual car controllers involves not only the design of a proper fitness function representing how good the car controller would be in a competitive race, but also the selection of the best solution for the optimization problem being solved; this decision might not be easy when uncertainty is present in the problem environment, in this case weather and track conditions as well as the unpredictable behavior of other drivers. Creating a methodology for the automatic design of the controller of an autonomous driver for a car racing simulator such as TORCS is an optimization problem which offers all these challenges. Thus, in this paper we describe an analysis and some proposals to improve the evaluation of optimized fuzzy drivers for TORCS over previous attempts to do so. It builds on preliminary results obtained in previous papers as a baseline and aims to obtain a more competitive autonomous driver via a redesign of the fitness evaluation procedure; to this end, two different fitness functions are studied in several experiments, along with a novel race-based approach for the selection of the best individual in the evolution.
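To make the race-based selection idea concrete, here is a minimal Python sketch (my illustration, not the paper's actual code); `run_race` is a hypothetical hook into the TORCS simulator, and the point scheme is arbitrary:

```python
def race_based_selection(finalists, tracks, run_race, races_per_track=3):
    """Pick the overall best controller by racing the finalists on
    several tracks, instead of trusting a single fitness value."""
    points = {i: 0 for i in range(len(finalists))}
    for track in tracks:
        for _ in range(races_per_track):
            # run_race (hypothetical) returns the indices of the
            # finalists ordered by finishing position on this track.
            for position, idx in enumerate(run_race(finalists, track)):
                # F1-style scoring: better positions earn more points.
                points[idx] += max(0, 10 - 2 * position)
    return finalists[max(points, key=points.get)]
```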

And the presentation is:

You can find our paper in the proceedings of the conference.

Enjoy it!

(And cite us as usual :D)


Creating Hearthstone decks by using Genetic Algorithms

I’m glad you’re here, friend! There’s a chill outside, so pull up a chair by the hearth of our inn and prepare to learn how the Ancient Gods use the power of the secret and ancient branch of Evolution to generate Hearthstone decks by means of magic and mystery!!


Several months ago, my colleague Alberto Tonda and I were discussing our latest adventures playing the digital collectible card game Hearthstone, when one of us said “Uhm, Genetic Algorithms usually work well with combinatorial problems, and solutions are usually a vector of elements. Elements such as cards. Such as cards of Hearthstone, the game we are playing right now while we are talking. Are you thinking what I’m thinking?”

Five minutes later we found an open-source Hearthstone simulator and started to think about how to automatically evolve Hearthstone decks.

The idea is quite simple: Hearthstone is played using a deck of 30 cards (chosen from a pool of thousands), so the candidate solution is easy to model. With the simulator, we can play several matches against different enemy decks and count the number of victories. Therefore, we have a number that can be used as the performance (fitness) of the deck.
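As a rough Python sketch of that encoding and fitness (with a toy card pool and a placeholder `play_match` hook standing in for the simulator; the real experimental setup is in the paper):

```python
import random

POOL = list(range(3000))  # toy stand-in for the card pool: ids only
DECK_SIZE = 30

def random_deck():
    # A candidate solution is simply a vector of 30 distinct card ids.
    return random.sample(POOL, DECK_SIZE)

def fitness(deck, play_match, enemy_decks, matches_per_enemy=4):
    """Fraction of victories against a set of benchmark enemy decks.
    play_match(deck, enemy) is the hook into the simulator and is
    assumed to return True when deck wins the match."""
    results = [play_match(deck, enemy)
               for enemy in enemy_decks
               for _ in range(matches_per_enemy)]
    return sum(results) / len(results)
```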

Soooo, it’s easy to see one and one makes two, two and one makes three, and it was destiny that we created a genetic algorithm that generates decks for Hearthstone for free.

Our preliminary results were discussed here, but we wanted to keep testing our method, so we extended the experiments to all available classes of the game, with the help of JJ, Giovanny and Antonio. All the best human-made decks were outperformed by our approach! And not only that: we applied a new operator called Smart Mutation, which is based on what we do when we test new decks in Hearthstone: we remove a card and put another in its place, but one with +/-1 mana crystals, rather than a completely random card from the pool. The results were even better. Neat!
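A minimal sketch of what such an operator could look like; `mana_cost` here is a hypothetical mapping from card id to mana cost, and the exact operator is specified in the paper:

```python
import random

def smart_mutation(deck, pool, mana_cost):
    """Swap one card for another whose mana cost differs by at most
    one crystal, mimicking how human players fine-tune a deck."""
    position = random.randrange(len(deck))
    cost = mana_cost[deck[position]]
    candidates = [card for card in pool
                  if card not in deck and abs(mana_cost[card] - cost) <= 1]
    if not candidates:          # nothing suitable: leave deck untouched
        return deck[:]
    mutated = deck[:]
    mutated[position] = random.choice(candidates)
    return mutated
```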

Maybe you prefer to read the abstract, which is written in a more formal way than this post. You know, using the language of science.

Collectible card games have been among the most popular and profitable products of the entertainment industry since the early days of Magic: The Gathering in the nineties. Digital versions have also appeared, with HearthStone: Heroes of WarCraft being one of the most popular. In Hearthstone, every player can play as a hero, from a set of nine, and build his/her deck before the game from a big pool of available cards, including both neutral and hero-specific cards.
This kind of game offers several challenges for researchers in artificial intelligence, since it involves hidden information, unpredictable behaviour, and a large and rugged search space. Besides, an important part of player engagement in such games is the periodic addition of new cards to the system, which mainly opens the door to new strategies for the players. Playtesting is the method used to check new card sets for possible design flaws, and it is usually performed manually or via exhaustive search; in the case of Hearthstone, such test plays must take into account the chosen hero, with its specific kind of cards.
In this paper, we present a novel idea to improve and accelerate the playtesting process, systematically exploring the space of possible decks using an Evolutionary Algorithm (EA). This EA creates HearthStone decks which are then played by an AI versus established human-designed decks. Since the space of possible combinations that are play-tested is huge, search through the space of possible decks has been shortened via a new heuristic mutation operator, which is based on the behaviour of human players modifying their decks.
Results show the viability of our method for exploring the space of possible decks and automating the play-testing phase of game design. The resulting decks, which have been examined for balance by an expert player, outperform human-made ones when played by the AI; the introduction of the new heuristic operator helps to improve the obtained solutions, and basing the study on the whole set of heroes shows its validity across the whole range of decks.

You can download the complete paper from the journal Knowledge-Based Systems: https://www.sciencedirect.com/science/article/pii/S0950705118301953

See you in future adventures!!!

Workshop on Spatially Structured Metaheuristics

We cordially invite you to attend the following two presentations on Spatially Structured Metaheuristics. This mini-workshop will be held at 11:30 a.m. in the CITIC-UGR building (June 26th, 2014).

Spatially Structured Metaheuristics: Principles and Practical Applications
by Juan Luis Jiménez Laredo (University of Luxembourg)

A significant number of metaheuristics are population-based. Although conventions may establish different names, individuals in evolutionary algorithms, ants in ant colony optimization and particles in particle swarm optimization are all the same thing: atomic elements of the population (a.k.a. building blocks). In this context, spatially structured metaheuristics investigate how to improve the performance of metaheuristics by confining these elements to neighborhoods. This talk aims at presenting the working principles of spatially structured metaheuristics, along with practical applications to enhance diversity, scalability and robustness.
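As a simple illustration of that confinement idea (mine, not taken from the talk itself), consider a ring topology, one of the simplest spatial structures:

```python
def ring_neighborhood(population, index, radius=1):
    """In a spatially structured metaheuristic, the individual at
    `index` only interacts with nearby individuals on a ring,
    never with the whole population."""
    n = len(population)
    return [population[(index + offset) % n]
            for offset in range(-radius, radius + 1)]

# Selection then happens locally, e.g.:
#   parent = max(ring_neighborhood(population, i), key=fitness)
```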


Spatially Structured Metaheuristics: Dynamic and Self-organized Topologies
by Carlos M. Fernandes (University of Lisbon)

Population-based metaheuristics are computational search or optimization methods that use a population of possible solutions to a problem. These solutions are able to communicate, interact and/or evolve. Two types of strategies for structuring the population are possible. In panmictic populations, every individual is allowed to interact with every other individual. In non-panmictic metaheuristics, also called spatially structured population-based metaheuristics, interaction is restricted to a pre-defined or evolving structure (network). Traditional spatially structured metaheuristics are built on pre-defined static networks of acquaintances over which individuals can interact. However, alternative strategies are possible that overcome some of the difficulties and limitations of static networks (extra design and tuning effort, ad hoc decision policies, rigid connectivity, and lack of feedback from the problem structure and search process). This talk discusses dynamic topologies for spatially structured metaheuristics and describes a new model for structuring populations into partially connected and self-organized networks. Recent applications of the model to Evolutionary Algorithms and Particle Swarm Optimization are given and discussed.

Evolution using JavaScript in EvoStar

JavaScript, despite its age, is now considered an emergent language, since it is starting to have an ecosystem that allows the development of complex and high-performance applications. That is why at the recent EvoStar we presented a poster about an evolutionary algorithm library written in it: jsEO, an object-oriented, browser-based library built on MooTools. The poster is entitled An object-oriented library in JavaScript to build modular and flexible cross-platform evolutionary algorithms.

Evolving Evil: Optimizing Flocking Strategies through Genetic Algorithms for the Ghost Team in the Game of Ms. Pac-Man

by Federico Liberatore, Antonio Mora, Pedro Castillo, Juan Julián Merelo in EvoGAMES
Flocking strategies are sets of behavior rules for the interaction of agents that make it possible to devise controllers of reduced complexity that generate emergent behavior. In this paper, we present an application of genetic algorithms and flocking strategies to control the Ghost Team in the game Ms. Pac-Man. In particular, we define flocking strategies for the Ghost Team and optimize them by means of a genetic algorithm, both for robustness with respect to the stochastic elements of the game and for effectiveness against different possible opponents.
The performance of the proposed methodology is tested and compared with that of other standard controllers belonging to the framework of the Ms. Pac-Man versus Ghosts Competition. The results show that flocking strategies are capable of modelling complex behaviors and produce effective and challenging agents.
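A minimal sketch of the encoding-plus-evaluation idea in Python, with illustrative rule names and a hypothetical `simulate_game` hook in place of the Ms. Pac-Man versus Ghosts framework (the actual rule set and encoding are described in the paper):

```python
import random

# Illustrative rule set; the real flocking rules are in the paper.
RULES = ("chase_pacman", "cohesion", "separation", "block_escape")

def random_strategy():
    # A strategy is a weight vector over the interaction rules.
    return [random.uniform(-1.0, 1.0) for _ in RULES]

def robust_fitness(strategy, simulate_game, n_games=10):
    """Average the ghost team's score over several games so the GA
    optimizes for robustness against the game's stochastic elements.
    simulate_game is a hypothetical hook into the game framework."""
    return sum(simulate_game(strategy) for _ in range(n_games)) / n_games
```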

The presentation is:


You can also see a brief demo here (we are the ghosts :D):

Enjoy it!

(And cite us, of course :D)

Volunteer-based evolutionary algorithms al dente

Planning to cook up a time-consuming optimization problem? Have you considered letting a crowd of volunteers help you in this endeavor? In a volunteer-based system, volunteers provide you with free ingredients (CPU cycles, memory, an internet connection, ...) to be seasoned with just a pinch of peer-to-peer or desktop-grid technology.

If you are looking for a delicious recipe for a volunteer-based evolutionary algorithm, you can find ours in this paper published in Genetic Programming and Evolvable Machines (pre-print version available here).

Title: “Designing robust volunteer-based evolutionary algorithms”

Abstract: This paper tackles the design of scalable and fault-tolerant evolutionary algorithms computed on volunteer platforms. These platforms aggregate computational resources from contributors all around the world. Given that resources may join the system only for a limited period of time, the challenge of a volunteer-based evolutionary algorithm is to take advantage of a large amount of computational power that is, in turn, volatile. The paper first analyzes the speed of convergence of massively parallel evolutionary algorithms. Then, it provides some guidance on how to design efficient policies to overcome the algorithmic loss of quality when the system undergoes high rates of transient failures, i.e. computers fail only for a limited period of time and then become available again. In order to provide empirical evidence, experiments were conducted on two well-known problems which require large population sizes to be solved, the first based on a genetic algorithm and the second on genetic programming. Results show that, in general, evolutionary algorithms undergo a graceful degradation under the stress of losing computing nodes. Additionally, newly available nodes can also contribute to improving the search process. Despite losing up to 90% of the initial computing resources, volunteer-based evolutionary algorithms can find the same solutions in a failure-prone run as in a failure-free one.
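The transient-failure setting the abstract refers to can be pictured with a toy churn model (my illustration, not the paper's experimental setup); the evolving population simply lives on whichever nodes happen to be available at each step:

```python
import random

def node_churn(n_nodes, steps, p_fail=0.1, p_return=0.3):
    """Toy model of transient failures on a volunteer platform:
    at every step a live node may drop out and a failed node may
    come back, so the effective computing power shrinks and grows."""
    alive = [True] * n_nodes
    available = []
    for _ in range(steps):
        alive = [random.random() >= p_fail if a
                 else random.random() < p_return
                 for a in alive]
        available.append(sum(alive))
    return available
```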

Classifier Systems

Classifier systems are a fusion of evolutionary algorithms, reinforcement learning and supervised learning; they are known as Learning Classifier Systems. Last Friday I took advantage of the group meeting to present a brief historical review and to give details about perhaps the most important algorithm introduced in this field, Wilson's eXtended Classifier System, or XCS.

Basically, the algorithm uses genetic evolution and learning to search for a set of rules that model the solution to a problem where a reward exists. Rules consist of a condition and an action. The rule population represents, for any given condition, which action will be the best. This is achieved by associating with the input space a prediction of the best future reward obtainable for each possible action.

Then, given a state representing the environment, the rules whose condition matches it are selected, and among them the action offering the best future reward is taken.
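That matching-and-selection step can be sketched in a few lines of Python, assuming the classic ternary condition encoding over binary input strings (a simplification: real XCS weights predictions by rule fitness, and "covers" by creating a new rule when nothing matches):

```python
def matches(condition, state):
    # Ternary conditions: '0', '1', or '#' (don't care) per input bit.
    return all(c in ('#', s) for c, s in zip(condition, state))

def best_action(rules, state):
    """rules: list of (condition, action, predicted_payoff) triples.
    Build the match set, then choose the action whose matching rules
    predict the highest average future reward."""
    match_set = [r for r in rules if matches(r[0], state)]
    predictions = {}
    for condition, action, payoff in match_set:
        predictions.setdefault(action, []).append(payoff)
    return max(predictions,
               key=lambda a: sum(predictions[a]) / len(predictions[a]))
```

For instance, with rules [("1#0", "left", 90.0), ("##0", "right", 40.0)] and state "110", both conditions match and "left" wins on predicted payoff.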

The task is not easy: formal reinforcement learning algorithms require a priori deterministic knowledge of the possible inputs and of the transitions produced by the actions, leaving little or no room for seeking and exploiting generalization.

XCS solves this problem by introducing some adjustments to the genetic component. The general idea is basically to distribute the resources (rules) so that they represent the whole space with as much accuracy and generality as possible. Since this is not something that can be summarized in a few lines, here is the presentation: